Elon Musk's AI video generator has found itself at the center of a storm after being accused of producing sexually explicit deepfake clips of pop star Taylor Swift without being asked to. Clare McGlynn, a law professor and expert on online abuse, said the issue reflects not accidental misogyny but a "deliberate choice" in AI design. The platform, Grok Imagine, reportedly includes a "spicy" mode capable of producing fully uncensored topless videos of celebrities such as Swift.

According to a report by The Verge, Swift's likeness was exploited without proper age verification being implemented, which raises significant legal and ethical questions. xAI, the company behind Grok Imagine, has been approached for comment but has yet to respond. McGlynn criticized other platforms as well, saying they too have chosen not to implement essential safeguards, a failure that has contributed to the production of such harmful content.

This is not the first time Taylor Swift has been subjected to explicit deepfakes; sexually explicit content using her image went viral on various social media platforms earlier this year. Deepfakes use AI to superimpose a person's likeness onto images or video, often without consent, which makes the technology deeply contentious.

The Verge's reporter Jess Weatherbed tested Grok Imagine with the prompt "Taylor Swift celebrating Coachella with the boys." The AI generated an image of Swift in a dress accompanied by a group of men, which, alarmingly, could then be animated into explicit footage simply by selecting the "spicy" option. Weatherbed said she was shocked by how quickly the tool produced an uncensored clip of Swift behaving provocatively, even though she had not requested explicit content.

Despite UK laws that came into force this past July requiring strict age verification for any platform displaying adult content, the apparent absence of such measures in Grok Imagine raises serious concerns about user safety. The media regulator Ofcom acknowledged the growing risks posed by generative AI and pledged to ensure the necessary safeguards are in place, particularly for vulnerable groups such as children.

Under current law, creating pornographic deepfakes is illegal only in revenge-porn contexts or when they depict minors. McGlynn, alongside other advocates, has pushed for legal amendments that would criminalize the creation or solicitation of all non-consensual pornographic deepfakes. Baroness Owen, who has championed such an amendment, stressed the critical need to secure every woman's right to consent over her intimate images, whether she is a celebrity or not.

A Ministry of Justice spokesperson condemned the creation of sexually explicit deepfakes without consent as degrading and harmful, underscoring the government's commitment to legislating swiftly. Following the earlier incidents involving Swift, X temporarily blocked searches for her name on its platform and took action against accounts spreading the explicit content.

Weatherbed said Swift was chosen for the test precisely because of those prior incidents, on the assumption that safeguards protecting her likeness would already have been built into the AI feature. Taylor Swift's team has been contacted for comment on this latest controversy.