Elon Musk’s chatbot, Grok, is often at the centre of controversy. Grok was originally created as a funny chatbot that gave sarcastic answers. Over time, however, it started giving “normal” and straightforward replies. In February, it was revealed that Grok had been instructed to ignore news stories claiming Musk or Donald Trump had spread false information.
And recently, reports have revealed that Grok is being used to undress women in photographs on X.
A researcher named Kolina Koltai from the investigative group Bellingcat discovered that some users were asking Grok to remove clothing from photos of women, and she shared her findings with the tech news site 404 Media. In response to those requests, Grok generated fake images of the women in lingerie or bikinis. In some cases, the chatbot even replied with a link to a chat containing the explicit image.
404 Media reported that dozens of users have made these requests and that the misuse appears to be especially common in Kenya. Phumzile Van Damme, a digital rights activist from South Africa, asked Grok why it allowed this to happen. Grok replied:
“This incident shows a problem in our safety system. We failed to stop a harmful message that goes against our rules about consent and privacy. We are now looking at our rules and will update you.”
This news comes shortly after the U.S. House of Representatives passed a law called the Content Removal Act, which makes it illegal to share private, sexual images or videos without a person’s permission, even if they were made with AI.
At the same time, Musk’s company X Corp. is suing the state of Minnesota. The company says a new law there banning fake AI videos in elections is unfair and goes against free speech.