Can AI chatbots speak in their own 'secret' language?

Euronews | 12-06-2025

A viral video which shows three different chatbots speaking in their own "secret language" has amassed hundreds of thousands of views across various social media platforms.
The clip shows three chatbots engaging in a phone call in English, in which they discuss "an employee's badge number".
When the machines realise that they are all speaking to other bots, they ask each other whether they should switch to "Gibberlink", prompting them to start emitting high-pitched noises, in what appears to be something out of a science-fiction film.
Gibberlink — a term which combines "gibberish" and "link" — is real. While use of the technology is limited, it enables AI engines to communicate in their own language.
EuroVerify asked Anton Pidkuiko, who co-founded Gibberlink, to review a number of online clips.
"Many of the videos are imitating an existing technology — they show phones which aren't really communicating and there is no signal between them, instead the sounds have been edited in and visuals have been taken from ChatGPT."
Fake online videos purporting to show Gibberlink software have begun to emerge after the technology was created in February by Pidkuiko and fellow AI engineer Boris Starkov, during a 24-hour tech hackathon held in London.
The pair combined ggwave — an existing open-source technology that enables data exchange through sound — with artificial intelligence.
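Under the hood, Gibberlink rides on ggwave's data-over-sound channel: a short text payload is modulated into audible tones, and the device at the other end of the call picks those tones up with its microphone and decodes them back into text. As a rough sketch only — the payload string here is invented, the sending and receiving halves would normally run on separate devices, and this is not Gibberlink's own message format or handshake — the basic encode-and-decode loop with ggwave's Python bindings might look like this:

```python
# Minimal data-over-sound sketch using ggwave's Python bindings and pyaudio
# (pip install ggwave pyaudio). The payload text is invented for illustration;
# this is not Gibberlink's actual message format or negotiation protocol.
import ggwave
import pyaudio

audio = pyaudio.PyAudio()

# --- Sender: encode a short text payload into an audible waveform ---
waveform = ggwave.encode("table for two at 19:00", protocolId=1, volume=20)

out = audio.open(format=pyaudio.paFloat32, channels=1, rate=48000, output=True)
out.write(waveform, len(waveform) // 4)  # float32 samples, 4 bytes each
out.stop_stream()
out.close()

# --- Receiver (would run on the other device): capture audio and decode ---
instance = ggwave.init()
mic = audio.open(format=pyaudio.paFloat32, channels=1, rate=48000,
                 input=True, frames_per_buffer=1024)
try:
    while True:
        frames = mic.read(1024, exception_on_overflow=False)
        message = ggwave.decode(instance, frames)
        if message is not None:
            print("Received:", message.decode("utf-8"))
            break
finally:
    ggwave.free(instance)
    mic.stop_stream()
    mic.close()
    audio.terminate()
```

ggwave handles the audio modulation, so each side of the exchange only ever deals with plain-text payloads — which is why the chirps sound like noise to a human listener but carry a well-defined message to the machines.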
So, although AI can communicate in its own language, that language is not "secret": it is built on open-source technology and was coded by humans.
For Pidkuiko, the technology is comparable to QR codes. "Every supermarket item has a bar code which makes the shopping experience much more efficient."
"Gibberlink is essentially this barcode — or think of it as a QR code — but over sound. Humans can look at QR code and just see black and white pieces. But QR codes don't scare people."
While the use of Gibberlink technology is very limited at present, its creators believe it will become more mainstream. "As it stands, AI is able to make and receive phone calls," Pidkuiko said.
"With time, we will see an increase in the number of these robot calls — and essentially more and more we will see that one AI is exchanging."
Although the technology carries the risk of stripping humans of meaningful interactions and rendering a further swath of jobs unnecessary, for Pidkuiko, Gibberlink is a means of maximising efficiency.
"If you manage a restaurant and have a phone number that people call to book tables, you will sometimes receive calls in different languages," stated Pidkuiko.
"However, if it's a robot that can speak every language and it is always available, the line is never blocked and you will have no language issues."
"Another way the technology could be used, is if you want to book a restaurant, but don't want to ring 10 different places to ask if they have space, you can get AI to make the call and he restaurant can get AI to receive it. If they can communicate more quickly in their own language, it makes sense", concluded Pidkuiko.
However, fears around what could happen if humans become unable to interpret AI communications are real. In January, the release of the AI model DeepSeek R1 raised alarm.
Researchers who had been working on the technology revealed they incentivised the software to find the right answers, regardless of whether its reasoning was comprehensible to humans.
As a result, the AI began spontaneously switching from English to Chinese to achieve a result. When researchers forced the technology to stick to one language — to ensure that users could follow its processes — its capacity to find answers was hindered.
This incident led industry experts to worry that incentivising AI to find the correct answers, without ensuring its processes can be untangled by humans, could lead AI to develop languages that cannot be understood.
In 2017, Facebook abandoned an experiment after two AI programmes began conversing in a language which only they understood.
Russia has lost more than 1 million troops in Ukraine since the beginning of its full-scale invasion on 24 February 2022, the General Staff of Ukraine's Armed Forces reported on Thursday.
The figure — which reportedly comes out to 1,000,340 — includes killed, wounded or incapacitated Russian troops.
According to the report, Russia has also lost 10,933 tanks, 22,786 armoured fighting vehicles, 51,579 vehicles and fuel tanks, 29,063 artillery systems, 1,413 multiple launch rocket systems, 1,184 air defence systems, 416 airplanes, 337 helicopters, 40,435 drones, 3,337 cruise missiles, 28 ships and boats, and one submarine.
'The overall losses of the Russian occupying forces in manpower since the beginning of the full-scale invasion have reached 1 million,' Ukraine's General Staff stated. 'More than 628,000 occurred in just the past year and a half.'
Releasing the report on Thursday, Ukraine's General Staff said that the one-million mark is not just a statistic but a symbol of resistance and resilience.
'One million. That's how much the enemy's offensive potential has diminished,' the General Staff wrote. '1 million who could have destroyed us, but whom we destroyed instead.'
The statement went on to highlight the symbolic meaning behind this figure, referencing the sites of Moscow's defeats and losses in Ukraine: "in the Red Forest near Chernobyl, in the waters of the Dnipro near the Antonivsky Bridge, in Donbas and the Kharkiv region. And at the bottom of the Black Sea, where the cruiser Moskva sank."
'This million neutralised occupiers is our response. Our memory of Bucha, Irpin, Kupyansk, Kherson... About the bombed-out maternity hospital in Mariupol and the Okhmatdyt hospital in Kyiv destroyed by a Russian missile. About the tears of children, civilians shot dead, and destroyed homes.'
Kyiv also expressed gratitude to every Ukrainian soldier who contributed to the fight, reaffirming that "every eliminated occupier is another step toward a just peace."
'Today, we've taken more than a million such steps,' the General Staff concluded.
Ukraine started publicly tracking and publishing Russian losses on 1 March 2022, when the count stood at 5,710 killed and 200 captured. Ever since, the losses have been increasing every year.
In 2022, Russia lost 106,720 troops, averaging 340 per day, according to the General Staff of Ukraine's Armed Forces.
In 2023, losses more than doubled to 253,290 troops, an average of 693 per day. In 2024, daily losses crossed the 1,000 threshold, and the yearly total reached 430,790 troops.
This year, Russia has been losing on average 1,286 troops per day.
Ukraine's General Staff numbers are in line with the estimates of Ukraine's western allies.
At the beginning of April, Deutsche Welle reported that, according to a senior NATO official, Russia's losses had surpassed 900,000 troops, including 250,000 deaths, since the start of the full-scale invasion.
Neither Ukraine nor Russia regularly discloses its own casualty figures.
In February, Ukrainian President Volodymyr Zelenskyy said more than 46,000 Ukrainian soldiers had been killed on the battlefield since early 2022.
He also said nearly 380,000 Ukrainian soldiers had been injured, while "tens of thousands" remained either "missing in action" or were being held in Russian captivity.


Related Articles

Using AI bots like ChatGPT could be causing cognitive decline, new study shows

Euronews

A new pre-print study from the US-based Massachusetts Institute of Technology (MIT) found that using OpenAI's ChatGPT could lead to cognitive decline. Researchers with the MIT Media Lab broke participants into three groups and asked them to write essays using only ChatGPT, a search engine, or no tools. Brain activity was recorded with an electroencephalogram (EEG) during the task, and the essays were then evaluated by both humans and artificial intelligence (AI) tools.

The study showed that the ChatGPT-only group had the lowest neural activation in parts of the brain and had a hard time recalling or recognising their writing. The brain-only group that used no technology was the most engaged, showing both cognitive engagement and memory retention.

The researchers then ran a second session in which the ChatGPT group was asked to do the task without assistance. In that session, those who had used ChatGPT in the first round performed worse than their peers, with writing that was 'biased and superficial'.

A 'likely decrease' in learning skills

The study found that repeated GPT use can come with 'cognitive debt' that reduces long-term learning performance in independent thinking. In the long run, people with cognitive debt could be more susceptible to 'diminished critical inquiry, increased vulnerability to manipulation and decreased creativity,' as well as a 'likely decrease' in learning skills.

'When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalising shallow or biased perspectives,' the study continued.

The study also found higher rates of satisfaction and brain connectivity in the participants who wrote all essays with just their minds compared to the other groups. Those from the other groups felt less connected to their writing and were not able to provide a quote from their essays when asked to by the researchers.

The authors recommend that more studies be done on how any AI tool impacts the brain 'before LLMs are recognised as something that is net positive for humans'.

Justice at stake as generative AI enters the courtroom

France 24

Judges use the technology for research, lawyers utilize it for appeals and parties involved in cases have relied on GenAI to help express themselves in court.

"It's probably used more than people expect," said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the US legal system. "Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it."

In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom -- in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists. "I believe in forgiveness," said a digital proxy of Pelkey created by his sister, Stacey Wales.

The judge voiced appreciation for the avatar, saying it seemed authentic. "I knew it would be powerful," Wales told AFP, "that it would humanize Chris in the eyes of the judge." The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.

Since the hearing, examples of GenAI being used in US legal cases have multiplied. "It is a helpful tool and it is time-saving, as long as the accuracy is confirmed," said attorney Stephen Schwartz, who practices in the northeastern state of Maine. "Overall, it's a positive development in jurisprudence."

Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks. "You can't completely rely on it," Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy. "We are all aware of a horror story where AI comes up with mixed-up case things."

The technology has been the culprit behind false legal citations, far-fetched case precedents and flat-out fabrications. In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a "collective debacle."

The tech is also being relied on by some who skip lawyers and represent themselves in court, often causing legal errors. And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts. "Courts need to be prepared to handle that," Cleary said.

Transformation

Law professor Linna sees the potential for GenAI to be part of the solution, though, giving more people the ability to seek justice in courts made more efficient. "We have a huge number of people who don't have access to legal services," Linna said. "These tools can be transformative; of course we need to be thoughtful about how we integrate them."

Federal judges in the US capital have written decisions noting their use of ChatGPT in laying out their opinions. "Judges need to be technologically up-to-date and trained in AI," Linna said.

GenAI assistants already have the potential to influence the outcome of cases the same way a human law clerk might, the professor reasoned. Facts or case law pointed out by GenAI might sway a judge's decision, and could differ from what a legal clerk would have come up with. But if GenAI lives up to its potential and excels at finding the best information for judges to consider, that could make for well-grounded rulings less likely to be overturned on appeal, according to Linna.
