Will AI-powered chatbots soon start communicating in their own dialect? If Gibberlink is any guide, it's now a real possibility, with everything that implies for the future of the industry.
A recently viral video spotted by Forbes, featuring two AI models conversing in a "language" that humans can't understand, has sparked some fascinating discussion. Here's a quick look at this rather disconcerting scenario, which is likely to become much more common in the future.
Today, the capabilities of the latest generations of multimodal large language models (LLMs), such as ChatGPT, go far beyond simple text generation. These conversational agents can now communicate verbally thanks to a combination of text-to-speech for speech synthesis and speech-to-text for comprehension.
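As a rough illustration of that loop, here is a minimal sketch of a single conversational turn. The three helpers are hypothetical placeholders standing in for whatever speech-to-text, LLM and text-to-speech services a given agent actually relies on, not any vendor's real API:

```python
# Minimal sketch of one voice-assistant turn. transcribe(), generate_reply()
# and synthesize() are hypothetical placeholders, not a specific vendor's API.

def transcribe(audio: bytes) -> str:
    """Speech-to-text: turn the caller's audio into a transcript."""
    raise NotImplementedError("plug in an STT service here")

def generate_reply(transcript: str) -> str:
    """The LLM turns the transcript into a text answer."""
    raise NotImplementedError("plug in an LLM call here")

def synthesize(text: str) -> bytes:
    """Text-to-speech: render the answer back into audio."""
    raise NotImplementedError("plug in a TTS service here")

def handle_turn(incoming_audio: bytes) -> bytes:
    transcript = transcribe(incoming_audio)  # speech -> text
    reply = generate_reply(transcript)       # text  -> text
    return synthesize(reply)                 # text  -> speech
```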
Building on these new capabilities, the industry has begun to imagine a future where these chatbots become genuine personal assistants. Major companies in the sector are all designing systems capable of calling a hotel, restaurant or retailer directly to make a reservation or place an order. The reverse is also happening: for some time now, more and more companies have been equipping themselves with AI-based systems to handle customer interactions, so that human employees can focus on less trivial or more sensitive tasks.
Gibberlink, a new “language” for AI
It seems obvious that interactions between AIs are set to become more and more frequent, and some developers have therefore started looking for new approaches to streamline these exchanges. After all, having these systems communicate in English, French or any other human language is only useful when they are addressing a human; why bother with a layer of complexity that is completely unnecessary when the goal is simply to be understood by another AI model?
It was with this idea in mind that two developers, Boris Starkov and Anton Pidkuiko, designed an open-source program called Gibberlink. As the name suggests, a blend of the English words "gibberish" and "link", it is a system that allows two AI models to communicate using a new type of "language" that is completely unintelligible to a human. Instead of a conventional language, they use a series of chirps reminiscent of an old dial-up modem, or of the famous R2-D2 from the Star Wars saga.
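Under the hood, Gibberlink transmits its messages with ggwave, an open-source data-over-sound library. As a rough illustration, here is a minimal sketch using ggwave's Python bindings and PyAudio to encode a short text message into those modem-like chirps and play it through the speakers (the message content itself is made up):

```python
import ggwave   # pip install ggwave  -- data-over-sound library used by Gibberlink
import pyaudio  # pip install pyaudio -- to play the generated waveform

# Encode a short (hypothetical) message into an audible waveform of chirps.
waveform = ggwave.encode("table for two, friday 7pm", protocolId=1, volume=20)

# Play the waveform: ggwave outputs mono float32 samples at 48 kHz.
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paFloat32, channels=1, rate=48000, output=True)
stream.write(waveform, len(waveform) // 4)  # 4 bytes per float32 frame
stream.stop_stream()
stream.close()
p.terminate()
```

A second device running ggwave's decoder can pick the message back up from a microphone, which is what makes the scheme usable over an ordinary phone call.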
What if an AI agent makes a phone call, then realizes the other person is also an AI agent?
At the ElevenLabs London Hackathon, Boris Starkov and Anton Pidkuiko introduced a custom protocol that AI agents can switch into for error-proof communication that's 80% more efficient… pic.twitter.com/9XKq52fKnK
— Luke Harries (@LukeHarries_) February 24, 2025
The two partners presented their system at a hackathon organised in London by ElevenLabs, a company that has already made a name for itself by using AI to dub famous actors. The video of these exchanges between AIs spread like wildfire across the web, racking up millions of views in the space of a few days. And you don't have to look far to understand why.
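Conceptually, the switch shown in the demo works as a simple handshake: one agent announces that it is a machine and, if the other side confirms, both abandon spoken language for the sound protocol. The sketch below illustrates that logic with entirely hypothetical helper names; it does not reproduce Gibberlink's actual API:

```python
# Hypothetical handshake for switching from speech to a sound protocol.
# speak() and send_over_sound() are placeholder callables, not Gibberlink APIs.

AI_DISCLOSURE = "i am an ai agent"

def choose_channel(peer_utterance: str, speak, send_over_sound) -> str:
    """Pick the channel for the rest of the call based on the peer's reply."""
    if AI_DISCLOSURE in peer_utterance.lower():
        # Both sides are machines: confirm out loud once, then switch.
        speak("Understood. Switching to sound protocol.")
        send_over_sound("handshake:ok")
        return "sound"
    # A human is on the line: stay with synthesized speech.
    return "speech"
```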
Between fascination and dismay
On the one hand, this technology offers real added value. By doing away with the layer of abstraction that human language represents, the two models can communicate much more efficiently. Beyond limiting the risk of misinterpretation, this shortens the interaction and, by extension, reduces the computing power and energy it requires, two particularly important considerations for the future of this industry.
But if this clip made such an impact, it is above all because being excluded from the conversation stirs conflicting feelings, somewhere between fascination and unease. Watching virtual systems converse without our being able to understand a single word of what is said raises plenty of uncomfortable questions about transparency and control.
Alignment, the real challenge of generative AI
This is particularly striking in the current context, with the emergence of increasingly autonomous AI agents. Such agents call for strict control mechanisms to gauge what the industry calls alignment: whether these systems behave in accordance with the expectations and standards of their creators. Verifying that alignment is already anything but straightforward, because of what is commonly called the "black box" of AI.
The problem with these machine learning algorithms is that even if you know what data you’re feeding in and you get a comprehensible result out, the whole process in between is often too abstract for the human brain to understand. With systems like Gibberlink, you’re adding another layer of abstraction that makes these models even less understandable. This tends to reinforce fears that an AI-based system could one day take control of critical systems without permission, with potentially cataclysmic consequences for humanity.
Are these fears well-founded? It's hard to say at this stage, but in any case, Gibberlink is unlikely to change the game in this regard. Just because models can suddenly chat in the manner of R2-D2 doesn't mean that some sort of Skynet will emerge overnight, far from it.
On the other hand, Gibberlink is a perfect example of why it matters to ensure, one way or another, that AI models remain aligned. The purpose of this technology is still to serve humanity, and it will be very interesting to see how engineers ensure it continues to do so while optimising these systems through approaches like this one.