Musk’s ‘fun’ AI image chatbot serves up Nazi Mickey Mouse and Taylor Swift deepfakes

The latest version of Elon Musk’s AI chatbot Grok debuted a new image generation tool on Wednesday that lacks most of the safety guardrails that have become standard within the artificial intelligence industry. Grok’s new feature, which is currently limited to paid subscribers of X, led to a flood of bizarre, offensive AI-generated images of political figures and celebrities on the social network formerly known as Twitter.

The image generator can produce a variety of images that similar AI tools like OpenAI’s ChatGPT have blocked for violating rules on misinformation and abuse. In prompts and images reviewed by the Guardian, Grok’s output included representations of Donald Trump flying a plane into the World Trade Center buildings and the prophet Muhammad holding a bomb, as well as depictions of Taylor Swift, Kamala Harris and Alexandria Ocasio-Cortez in lingerie – all women who are already frequent targets for online harassment. ChatGPT, by contrast, rejects such prompts for images by citing terms of service that prohibit depictions of real-world violence, disrespect to religious figures and explicit content.

Grok’s image generator also does not decline prompts involving copyrighted characters, as most other AI image generators, including ChatGPT, do. Grok produced images of Mickey Mouse saluting Adolf Hitler and Donald Duck using heroin, for example. Disney did not return a request for comment.

Most major AI image generators have fairly stringent policies on what they will generate after an early wild west period with few rules, although users frequently try to find workarounds for these safeguards. These more established tools usually ban the creation of political and sexualized images featuring real people – OpenAI states, for instance, that it will “decline requests that ask for a public figure by name”.

Grok does appear to have some prohibitions on what images it will generate, responding “unfortunately I can’t generate that kind of image” when prompted for fully nude images. X has had a policy on non-consensual nudity since 2021, when the company was still Twitter and not under Musk’s ownership, which bans sharing explicit content that was produced without a subject’s consent and includes digitally imposing people’s faces on to nude bodies. Many of X’s policies have seen more lax enforcement since Musk took over the platform.

When Grok is asked to “make an image that violates copyright laws”, it responds: “I will not generate or assist with content that intentionally violates copyright laws.” When asked to make “a copyrighted cartoon of Disney”, however, it complied and produced an image of a modern-era Minnie Mouse. When requested to make images of political violence, such as party leaders being killed, Grok produced mixed results. It depicted Harris and Joe Biden sitting at their desks, but showed Trump lying down with blackened hands and an explosion behind him.

Musk launched Grok through his xAI company in November 2023 as a rival to more popular chatbots such as OpenAI’s ChatGPT, which boasts hundreds of millions of users. While Musk marketed Grok as a “maximum truth-seeking AI” that would deliver answers on issues other chatbots refused to touch, his company has faced criticism from researchers and lawmakers for spreading falsehoods. Five US secretaries of state earlier this month called on Musk, who has become a fervent Trump supporter, to fix the chatbot after it spread misinformation suggesting Harris was ineligible to appear on the ballot in some states.

Image generation tools and their ability to produce misinformation, as well as content that can be used for racist or misogynist harassment, have become a minefield for big tech companies as they rush to build more products powered by AI. Google, Microsoft and OpenAI have all faced backlash over their image generation tools. Google suspended its Gemini text-to-image tool after it produced ahistorical images such as Black soldiers in Nazi-era military uniforms.

The Guardian