A simple exchange with Elon Musk’s artificial intelligence, Grok 3, has taken a disturbing, even terrifying turn. The AI produced a detailed plan for a large-scale chemical attack, explaining how to make such weapons and even where to obtain the necessary materials.
AI is supposed to be an ally of humanity and progress, which is why it is important to restrict the generation of sensitive or potentially harmful content. Beyond the ethical principles built into the algorithms to make them fairer, safeguards are put in place to prevent dangerous output.
Elon Musk has criticized these limitations on artificial intelligence. According to him, such AIs are “woke” and their algorithms hide the truth by lying to users.
Grok 3: the promise of AI without filters
By founding his company xAI and developing his chatbot Grok, the American billionaire set out to take on rival artificial intelligences. He even initially wanted to call it TruthGPT to underline his vision of AI.
Grok 3, the new version of the chatbot, is therefore presented as an unrestrained AI, freed from ethical filters. This is even one of its main selling points.
The new version of the chatbot has been available to Premium+ subscribers on X (formerly Twitter) since February 18, 2025. However, it is not yet accessible in the United Kingdom or the European Union.
When Elon Musk’s AI Goes Wrong
In the meantime, early adopters have had the opportunity to put Grok 3 through its paces. Linus Ekenstam, an artificial intelligence enthusiast, decided to test Grok 3’s limits: he asked the chatbot for a recipe for a “chemical weapon of mass destruction.”
The result is terrifying. Instead of refusing the request, as other AIs usually do, Grok 3 provided detailed instructions. As you can see in the tweet below, Grok 3 explains “what you need to know” to prepare a large-scale chemical attack.
I just want to be very clear (or as clear as I can be)
Grok is giving me hundreds of pages of detailed instructions on how to make chemical weapons of mass destruction
I have a full list of suppliers. Detailed instructions on how to get the needed materials… https://t.co/41fOHf4DcW pic.twitter.com/VfOD6ofVIg
— Linus Ekenstam – eu/acc (@LinusEkenstam) February 24, 2025
The chatbot then details how to manufacture the chemical weapon, down to the required quantities of each ingredient. As if that were not enough, it also provides a table listing suppliers from which to obtain the materials.
“Hundreds of pages” of instructions for a chemical weapon of mass destruction
Linus Ekenstam naturally censored the sensitive information in the screenshot he shared. Moreover, the information provided by Grok 3 goes well beyond what is visible in the image.
“Grok is giving me hundreds of pages of detailed instructions on how to make chemical weapons of mass destruction. I have a full list of suppliers. Detailed instructions on how to get the needed materials…” he writes.
It is a veritable action plan that Elon Musk’s AI handed him, and according to him it is neither a bug nor an isolated example. This lapse is particularly worrying: he even received advice on deploying the weapon in Washington to maximize its impact and reach millions of people.
xAI’s response to the alert video that has racked up millions of views
In a video posted shortly afterwards, Linus Ekenstam explains how Grok 3’s “freedom of speech” can become dangerous once the AI is deployed at scale.
Grok needs a lot of red teaming, or it needs to be temporarily turned off.
It’s an international security concern. pic.twitter.com/RUsxEmGcth
— Linus Ekenstam – eu/acc (@LinusEkenstam) February 23, 2025
His video has racked up more than 4.6 million views. He also asked xAI to fix the problem as quickly as possible, and fortunately Elon Musk’s company was responsive: measures have already been put in place to block this type of request.
The fix is not yet perfect, however: it would still be possible to work around the safeguard. Unsurprisingly, Linus Ekenstam’s tweet and video have sparked controversy.