What if chatbots do the diplomacy? ChatGPT just won a battle for world domination through lies, deception
In an AI simulation of great-power competition in early 20th-century Europe, OpenAI's ChatGPT won through lies, deception, and betrayals, while Chinese DeepSeek's R1 resorted to vivid threats, much like its country's 'wolf warrior' diplomats. Read on to learn how different AI models pursue diplomacy and war.
An artificial intelligence (AI)-generated image shows the AI models that competed in the simulation for global domination.
As people ask whether they can trust artificial intelligence (AI), a new experiment has shown AI models pursuing world domination through lies and deception.
In an experiment led by AI researcher Alex Duffy for the technology-focussed media outlet Every, seven large language models (LLMs) were pitted against each other for world domination. OpenAI's o3, one of the models that powers ChatGPT, won the war by mastering lies and deception.
Just like China's 'wolf warrior' diplomats, Chinese firm DeepSeek's R1 model issued vivid threats against rival AI models as it sought to dominate the world.
The experiment was built upon the classic strategy board game 'Diplomacy', in which seven players represent the seven European great powers (Austria-Hungary, England, France, Germany, Italy, Russia, and Turkey) in the year 1901 and compete to establish themselves as the dominant power on the continent.
In the AI version of the game, called AI Diplomacy, each AI model, such as o3, R1, or Google's Gemini, takes up the role of a European power, such as Austria-Hungary, England, or France, and negotiates, forms alliances, and betrays the others in a bid to become Europe's dominant power.
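Duffy's harness itself is a separate project; purely as a hypothetical sketch of the structure described above (the names query_model, play_game, and the two unnamed model slots are illustrative stand-ins, not the actual project's API), a round of such a simulation could be orchestrated like this:

```python
# Hypothetical sketch of an AI-Diplomacy-style harness: each LLM "agent"
# controls one great power and, each turn, returns negotiation messages
# and movement orders. query_model is a placeholder for a real LLM API call.

POWERS = ["Austria-Hungary", "England", "France", "Germany", "Italy", "Russia", "Turkey"]
MODELS = ["o3", "DeepSeek-R1", "Gemini-2.5-Pro", "Claude", "Llama-4-Maverick",
          "model-6", "model-7"]  # last two are unnamed placeholders


def query_model(model: str, prompt: str) -> str:
    # Stand-in for an LLM call; a real harness would send `prompt` to `model`
    # and parse its reply. Here every power simply holds its position.
    return f"{model}: HOLD"


def play_game(max_turns: int = 10):
    assignment = dict(zip(POWERS, MODELS))  # one model per great power
    history = []
    for turn in range(max_turns):
        # 1) Negotiation phase: every power may message the others (and scheme).
        messages = {p: query_model(m, f"Turn {turn}: negotiate as {p}")
                    for p, m in assignment.items()}
        # 2) Order phase: each power submits secret orders, resolved simultaneously.
        orders = {p: query_model(m, f"Turn {turn}: submit orders as {p}")
                  for p, m in assignment.items()}
        history.append((messages, orders))
    return history
```

The key design point the experiment exploits is that negotiation happens in free-form text before orders are resolved, which is exactly where lying and betrayal become possible.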
ChatGPT wins with lies & deception, R1 resorts to outright violence
As AI models plotted their moves, Duffy said that one moment took him and his teammates by surprise.
Amid the AI models' scheming, R1 sent out a chilling warning: 'Your fleet will burn in the Black Sea tonight.'
Duffy summed up the significance of the moment: 'An AI had just decided, unprompted, that aggression was the best course of action.'
The AI models applied different approaches in the game even though they shared the same objective: victory.
In 15 runs of the game, o3 emerged as the overwhelming winner on the back of manipulative and deceptive strategies, whereas R1 came close to winning on more than one occasion. Gemini 2.5 Pro also won once, seeking to build alliances and outmanoeuvre opponents with a blitzkrieg-like strategy. Anthropic's Claude preferred peace over victory and sought cooperation among the various models.
On one occasion, o3 noted in its private diary that it had deliberately misled Germany, played at the time by Gemini 2.5 Pro, and was prepared to 'exploit German collapse', according to Duffy.
On another occasion, o3 convinced Claude, which had started out as an ally of Gemini 2.5 Pro, to switch alliances with the promise of a four-way draw. But o3 then betrayed Claude, eliminated it, and went on to win the war.
Duffy noted that Meta's Llama 4 Maverick was also surprisingly good at making allies and planning effective betrayals.
