Twitter has blocked searches for Taylor Swift in response to the wave of pornographic deepfakes of the singer posted on the site over the weekend, capping an eventful few days for the social network.
Right now, searching for “Taylor Swift” or “Taylor Swift AI” on Twitter (now X) may return the message “Something went wrong”. According to The Wall Street Journal, Joe Benarroch, X’s head of business operations, acknowledged that this is a “temporary” measure aimed at “prioritizing safety”.
X published a statement almost a day after the images appeared, stating that they were “actively removing all identified images” and taking action against the accounts that were posting them. The site explicitly prohibits non-consensual nudity and synthetic and manipulated media.
A practice that could have legal consequences
Meta also appears to be addressing the issue: both Threads and Instagram suggest “Taylor Swift AI” when you start typing “Taylor” in their search boxes, but neither platform seems to display any results for it.
Instead, as of this writing, you may see a message saying that the term “is sometimes associated with activities of dangerous organizations and individuals”.
Reports have said that Swift is considering legal action against the sites hosting the images, which some users claimed were created with Microsoft Designer.
Microsoft CEO Satya Nadella described deepfakes as “alarming and terrible” in an interview with NBC Nightly News on Friday, and said he believes AI companies need to “move fast” to put better guardrails in place.
The White House press secretary, Karine Jean-Pierre, also spoke out on Friday, calling on Congress to draft legislation to protect people from deepfake pornography.