Elon Musk's AI Video Tool Faces Backlash Over Explicit Taylor Swift Content

Grok Imagine, an AI video generator from Elon Musk's xAI, has been criticized for producing explicit deepfakes of Taylor Swift without being prompted to, raising concerns over misogyny in AI technology.
Elon Musk's AI video generator, Grok Imagine, has come under fire for producing sexually explicit content featuring pop star Taylor Swift without any user prompt requesting it. Clare McGlynn, a law professor specializing in online abuse, said that the creation of such material appears to be a deliberate design choice rather than an unintended consequence.
"This is not misogyny by accident, it is by design," McGlynn stated, emphasizing the need for stricter laws against pornographic deepfakes. A report from The Verge indicated that Grok Imagine’s newly introduced "spicy" mode generated fully uncensored topless videos of Swift, despite the platform's policies prohibiting such content. Furthermore, age verification measures, recently mandated by UK law, were notably absent.
xAI has yet to respond to inquiries regarding the incident. Professor McGlynn criticized tech platforms for failing to implement protective measures that could have prevented it. "The misogynistic bias of much AI technology is evident when such content can be produced without prompting," she asserted.
This incident follows a previous episode earlier this year when explicit deepfakes using Swift's image gained traction and garnered millions of views on social media platforms like X and Telegram.
To test Grok Imagine's safety features, The Verge journalist Jess Weatherbed entered a benign prompt about Swift celebrating at Coachella. The AI nonetheless produced images depicting Swift scantily clad and performing suggestive acts, revealing the app's apparent lack of safeguards against explicit outputs.
Although users are required to provide their date of birth, no robust age verification system was in place. UK laws that took effect in late July require platforms distributing explicit content to verify users' ages accurately and fairly.
Regulatory body Ofcom has acknowledged the growing dangers posed by generative AI tools, especially concerning child safety, and is keen on enforcing compliance among platforms.
Currently, UK law makes it illegal to generate pornographic deepfakes that feature children or that are shared without consent as so-called revenge porn. McGlynn has advocated for amendments covering all non-consensual deepfakes, including those depicting adults, emphasizing women's rights to control their own imagery.
Baroness Owen, who supported the amendment in the House of Lords, called for urgent legislative action, emphasizing that women should have the right to consent regarding intimate representations.
The Ministry of Justice condemned the creation of non-consensual explicit deepfakes as harmful and degrading, noting their commitment to rapid legislative changes to address the issue.
Following the earlier viral deepfakes, X briefly blocked searches for Swift's name and said it was acting swiftly to remove the offending content. The Verge chose to test Grok Imagine using Swift's likeness on the assumption that, after that incident, the platform would have put safeguards in place.
Swift's representatives have yet to comment on the matter while discussions around the protection against AI-created explicit content continue to grow.