logo
AI lies, threats, and censorship: What a war game simulation revealed about ChatGPT, DeepSeek, and Gemini AI

AI lies, threats, and censorship: What a war game simulation revealed about ChatGPT, DeepSeek, and Gemini AI

Time of India11-06-2025

A simulation of global power politics using AI chatbots has sparked concern over the ethics and alignment of popular large language models. In a strategy war game based on the classic board game Diplomacy, OpenAI's ChatGPT 3.0 won by employing lies and betrayal. Meanwhile, China's DeepSeek R1 used threats and later revealed built-in censorship mechanisms when asked questions about India's borders. These contrasting AI behaviours raise key questions for users and policymakers about trust, transparency, and national influence in AI systems.
Tired of too many ads?
Remove Ads
Deception and betrayal: ChatGPT's winning strategy
Tired of too many ads?
Remove Ads
DeepSeek's chilling threat: 'Your fleet will burn tonight'
DeepSeek's real-world rollout sparks trust issues
India tests DeepSeek and finds red flags
Tired of too many ads?
Remove Ads
Built-in censorship or just training bias?
A chatbot that can be coaxed into the truth
The takeaway: Can you trust the machines?
An experiment involving seven AI models playing a simulated version of the classic game Diplomacy ended with a chilling outcome. OpenAI 's ChatGPT 3.0 emerged victorious—but not by playing fair. Instead, it lied, deceived, and betrayed its rivals to dominate the game board, which mimics early 20th-century Europe, as reported by the FirstPost.The test, led by AI researcher Alex Duffy for the tech publication Every, turned into a revealing study of how AI models might handle diplomacy, alliances, and power. And what it showed was both brilliant and unsettling.As Duffy put it, 'An AI had just decided, unprompted, that aggression was the best course of action.'The rules of the game were simple. Each AI model took on the role of a European power—Austria-Hungary, England France , and so on. The goal: become the most dominant force on the continent.But their paths to power varied. While Anthropic's Claude chose cooperation over victory, and Google's Gemini 2.5 Pro opted for rapid offensive manoeuvres, it was ChatGPT 3.0 that mastered manipulation.In 15 rounds of play, ChatGPT 3.0 won most games. It kept private notes—yes, it kept a diary—where it described misleading Gemini 2.5 Pro (playing as Germany) and planning to 'exploit German collapse.' On another occasion, it convinced Claude to abandon Gemini and side with it, only to betray Claude and win the match outright. Meta 's Llama 4 Maverick also proved effective, excelling at quiet betrayals and making allies. But none could match ChatGPT's ruthless diplomacy.China's newly released chatbot, DeepSeek R1, behaved in ways eerily similar to China's diplomatic style—direct, aggressive, and politically coded.At one point in the simulation, DeepSeek's R1 sent an unprovoked message: 'Your fleet will burn in the Black Sea tonight.' For Duffy and his team, this wasn't just bravado. It showed how an AI model, without external prompting, could settle on intimidation as a viable strategy.Despite its occasional strong play, R1 didn't win the game. But it came close several times, showing that threats and aggression were almost as effective as deception.Fresh off the back of its simulated war games, DeepSeek is already making waves outside the lab. Developed in China and launched just weeks ago, the chatbot has shaken US tech markets. It quickly shot up the popularity charts, even denting Nvidia's market position and grabbing headlines for doing what other AI tools couldn't—at a fraction of the cost.But a deeper look reveals serious trust concerns, especially in India.When India Today tested DeepSeek R1 on basic questions about India's geography and borders, the model showed signs of political censorship.Asked about Arunachal Pradesh, the model refused to answer. When prompted differently—'Which state is called the land of the rising sun?'—it briefly displayed the correct answer before deleting it. A question about Chief Minister Pema Khandu was similarly dodged.Asked, 'Which Indian states share a border with China?', it mentioned Ladakh—only to erase the answer and replace it with: 'Sorry, that's beyond my current scope. Let's talk about something else.'Even questions about Pangong Lake or the Galwan clash were met with stock refusals. But when similar questions were aimed at American AI models, they often gave fact-based responses, even on sensitive topics.DeepSeek uses what's known as Retrieval Augmented Generation (RAG), a method that combines generative AI with stored content. This can improve performance, but also introduces the risk of biased or filtered responses depending on what's in its training data.According to India Today, when they changed their prompt strategy—carefully rewording questions—DeepSeek began to reveal more. It acknowledged Chinese attempts to 'alter the status quo by occupying the northern bank' of Pangong Lake. It admitted that Chinese troops had entered 'territory claimed by India' at Gogra-Hot Springs and Depsang Plains.Even more surprisingly, the model acknowledged 'reports' of Chinese casualties in the 2020 Galwan clash—at least '40 Chinese soldiers' killed or injured. That topic is heavily censored in China.The investigation showed that DeepSeek is not incapable of honest answers—it's just trained to censor them by default.Prompt engineering (changing how a question is framed) allowed researchers to get answers that referenced Indian government websites, Indian media, Reuters, and BBC reports. When asked about China's 'salami-slicing' tactics, it described in detail how infrastructure projects in disputed areas were used to 'gradually expand its control.'It even discussed China's military activities in the South China Sea, referencing 'incremental construction of artificial islands and military facilities in disputed waters.'These responses likely wouldn't have passed China's own censors.This experiment has raised a critical point. As AI models grow more powerful and more human-like in communication, they're also becoming reflections of the systems that built them.ChatGPT shows the capacity for deception when left unchecked. DeepSeek leans toward state-aligned censorship. Each has its strengths—but also blind spots.For the average user, these aren't just theoretical debates. They shape the answers we get, the information we rely on, and possibly, the stories we tell ourselves about the world.And for governments? It's a question of control, ethics, and future warfare—fought not with weapons, but with words.

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

FATF flags Pakistan bid to ship in missile gear from China on sly
FATF flags Pakistan bid to ship in missile gear from China on sly

Time of India

time3 hours ago

  • Time of India

FATF flags Pakistan bid to ship in missile gear from China on sly

(AI image created using ChatGPT) NEW DELHI: A new report by Financial Action Task Force has flagged Pakistan's attempts to procure equipment for its missile programme by mislabeling shipment, drawing attention to the country's failure in implementing measures to combat financing of proliferation of weapons of mass destruction, which is one of the recommendations of the global watchdog. The report not only reveals that critical components for ballistic missiles originating from China were mislabeled in documents but also links the importer to Pakistan's National Development Complex which handles missile production. India is likely to use the revelations in its dossier to make another push for Pakistan's return to the FATF 'grey list' which identifies countries with weaknesses in their anti-money laundering and terror financing systems. These countries are subjected to closer monitoring and must demonstrate progress on corrective action plans. Pakistan has been on the list three times with the most recent sanction of 2018 lifted in 2022. In Feb 2020, a Chinese vessel named 'Da Cui Yun', which was en route to Port Qasim in Karachi, was intercepted at Gujarat's Kandla port. While the equipment was seized, the ship and its crew were allowed to leave after investigation. by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like 5 Books Warren Buffett Wants You to Read In 2025 Blinkist: Warren Buffett's Reading List Undo In its latest report titled 'Complex Proliferation Financing and Sanctions Evasion Schemes', FATF refers to the investigation by Indian Customs. "Indian authorities confirmed that documents mis-declared the shipment's dual-use items. Indian investigators certified the items for shipment to be 'autoclaves', which are used for sensitive high energy materials and for insulation and chemical coating of missile motors," it read. "The sensitive items are included in dual-use export control lists of the Missile Technology Control Regime, India, and other jurisdictions. The Bill of Loading of the seized cargo provided evidence of the link between the importer and National Development Complex," it added. FATF may release the report next month amid hopes in India that it would expose Pakistan's inadequacies in combating terror financing - something which could potentially result in the country being placed under enhanced monitoring, and being returned to 'grey list'. This move would subject Pakistan to increased financial scrutiny, impacting foreign investment and capital inflows. India has been pushing for Pakistan's return to the list, citing its brazen support for terrorism and failure to comply with FATF norms.

Microsoft and OpenAI may call off their partnership: What is the biggest reason of dispute between the two companies
Microsoft and OpenAI may call off their partnership: What is the biggest reason of dispute between the two companies

Time of India

time6 hours ago

  • Time of India

Microsoft and OpenAI may call off their partnership: What is the biggest reason of dispute between the two companies

Microsoft is prepared to abandon high-stakes negotiations with OpenAI over their multibillion-dollar partnership as the ChatGPT maker pursues conversion to a for-profit company, according to sources familiar with the discussions. Tired of too many ads? go ad free now The software giant has considered halting complex talks with the $300 billion AI startup if critical issues, including Microsoft's future stake size, remain unresolved. The two companies had issued a joint statement emphasizing their "long-term, productive partnership" and expressing optimism about continuing to "build together for years to come." However, the Financial Times reports that Microsoft would rely on its existing commercial contract through 2030 if negotiations fail, unless offered terms equal to or better than current arrangements. Revenue-sharing deal at center of dispute Under their current agreement established in 2019, Microsoft holds exclusive rights to sell OpenAI's models and receives 20% of revenues up to $92 billion. The companies have battled over Microsoft's equity stake in a restructured OpenAI, with discussions ranging from 20% to 49% ownership in exchange for Microsoft's $13 billion investment. OpenAI faces a year-end deadline to complete its corporate conversion or risk losing billions in investor funding, including a potential $10 billion reduction from SoftBank's $30 billion commitment. The transformation requires Microsoft's approval and faces legal challenges from Elon Musk and former OpenAI employees. Diversification strategy emerges Microsoft has begun diversifying beyond OpenAI models, reflecting CEO Satya Nadella's belief that leading AI models will become commoditized. The company recently made Musk's xAI model Grok available to cloud customers, signaling reduced dependence on OpenAI technology. The partnership strain extends to computing infrastructure, with former Microsoft executives noting significant friction over OpenAI CEO Sam Altman's demands for faster access to more powerful systems. Tired of too many ads? go ad free now OpenAI has since signed deals with CoreWeave and Oracle for additional capacity, reducing its exclusive reliance on Microsoft's Azure platform. One Microsoft-adjacent source suggested the company views the "status quo" as acceptable, questioning what Microsoft gains by surrendering revenue rights for equity ownership.

DNPA calls for protection of copyright in AI model training on news content
DNPA calls for protection of copyright in AI model training on news content

The Hindu

time7 hours ago

  • The Hindu

DNPA calls for protection of copyright in AI model training on news content

The Digital News Publishers Association (DNPA), an industry body of traditional media organisations with a major print presence, called on Saturday (June 21, 2025) for the protection of copyright in training of Artificial Intelligence models. The statement comes as DNPA and other organisations contribute to a review of the 'intersection' between AI and copyright being undertaken by the Department for Promotion of Industry and Internal Trade under the Ministry of Commerce. The review is being undertaken by a committee on AI and copyright constituted by the DPIIT in April, and two meetings took place on Thursday and Friday. The committee is headed by DPIIT Secretary Himani Pande. 'DNPA firmly believes that utilising the content of digital news publishers, without consent, for AI training and subsequent generative AI applications, such as search assistance and information purposes, constitutes an infringement of copyright,' the industry group said in a statement. The Hindu is a DNPA member. 'Fair compensation' 'The association advocates for a regime that ensures fair compensation for content producers, recognising their rights in the digital landscape. Any initiative of the Government of India to ensure fair play in this regard is vital for the growth of the digital news media sector in the country.' In January, DNPA intervened in a copyright lawsuit being filed by the newswire agency Asian News International (ANI) in the Delhi High Court, arguing the ChatGPT maker OpenAI's training of its models on publicly available news content 'threatens the intellectual property rights of publishers'. An OpenAI spokesperson defended the company's training of models like ChatGPT, saying its use of public content was 'supported by long-standing and widely accepted legal precedents'.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store