Latest news with #DeepSeekR1


Time of India
14 hours ago
- Science
- Time of India
Algebra, philosophy and…: These AI chatbot queries cause most harm to environment, study claims
Queries demanding complex reasoning from AI chatbots, such as those related to abstract algebra or philosophy, generate significantly more carbon emissions than simpler questions, a new study reveals. These high-level computational tasks can produce up to six times more emissions than straightforward inquiries such as basic history questions.

The study, conducted by researchers at Germany's Hochschule München University of Applied Sciences and published in the journal Frontiers (seen by The Independent), found that the energy consumption and subsequent carbon dioxide emissions of large language models (LLMs) like OpenAI's ChatGPT vary with the chatbot, the user, and the subject matter. An analysis of 14 different AI models consistently showed that questions requiring extensive logical thought and reasoning led to higher emissions. To mitigate their environmental impact, the researchers advise frequent users of AI chatbots to consider adjusting the complexity of their queries.

Why do these queries cause more carbon emissions?

Study author Maximilian Dauner wrote: 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions. We found that reasoning-enabled models produced up to 50 times more carbon dioxide emissions than concise response models.'

The study evaluated 14 large language models (LLMs) using 1,000 standardised questions to compare their carbon emissions. It explains that AI chatbots generate emissions through processes like converting user queries into numerical data. On average, reasoning models produce 543.5 tokens per question, significantly more than concise models, which use only 40 tokens. 'A higher token footprint always means higher CO2 emissions,' the study adds. (A toy calculation illustrating this scaling follows this article.)

The study highlights that Cogito, one of the most accurate models at around 85% accuracy, generates three times more carbon emissions than other similarly sized models that offer concise responses. 'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies. None of the models that kept emissions below 500 grams of carbon dioxide equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly,' Dauner explained.

The researchers used carbon dioxide equivalent to measure the climate impact of AI models and hope their findings encourage more informed usage. For example, answering 600,000 questions with DeepSeek R1 can emit as much carbon as a round-trip flight from London to New York. In comparison, Alibaba Cloud's Qwen 2.5 can answer over three times as many questions with similar accuracy while producing the same emissions. 'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dauner noted.
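To make that scaling concrete, here is a minimal Python sketch of the linear token-to-emissions relationship the study describes. The average token counts come from the article; the per-token emission factor and the function itself are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of the linear token-to-emissions scaling described in the
# study: "a higher token footprint always means higher CO2 emissions".
# The per-token emission factor below is an invented illustrative constant;
# only the average token counts come from the article.

AVG_TOKENS_REASONING = 543.5   # avg tokens per answer, reasoning models (study)
AVG_TOKENS_CONCISE = 40.0      # avg tokens per answer, concise models (study)
GRAMS_CO2_PER_TOKEN = 0.5      # hypothetical emission factor, illustration only

def estimated_emissions_g(n_questions: int, avg_tokens: float,
                          g_per_token: float = GRAMS_CO2_PER_TOKEN) -> float:
    """Estimate grams of CO2-equivalent, assuming emissions scale linearly with tokens."""
    return n_questions * avg_tokens * g_per_token

reasoning = estimated_emissions_g(1_000, AVG_TOKENS_REASONING)
concise = estimated_emissions_g(1_000, AVG_TOKENS_CONCISE)
print(f"reasoning model: {reasoning / 1000:.1f} kg CO2e for 1,000 questions")
print(f"concise model:   {concise / 1000:.1f} kg CO2e for 1,000 questions")
print(f"ratio: {reasoning / concise:.1f}x")  # ~13.6x on token count alone
```

Whatever the true per-token factor is, on token count alone a reasoning model averaging 543.5 tokens per answer carries roughly a 13.6-fold footprint over a 40-token concise model.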


Mint
3 days ago
- Business
- Mint
Who is ahead in the global tech race?
TECHNOLOGICAL STRENGTH brings economic growth, geopolitical influence and military might. But tracking who leads in a given field, and by how much, is tricky. An index by researchers at Harvard, published on June 5th, attempts to measure such heft. It ranks 25 countries across five sectors: artificial intelligence (AI), semiconductors, biotechnology, space and quantum technology (a toy sketch of such a weighted index follows the article). America dominates the rankings, but other countries are closing in.

Of all the sectors, AI gets the most attention from politicians. J.D. Vance, America's vice-president, recently called its development an 'arms race'. America commands a strong lead thanks to its early breakthroughs, its head start in building computing power and the dominance of firms such as OpenAI and Nvidia. But China's DeepSeek R1 rivals Western models at a fraction of the cost. China's loose attitudes towards data privacy, and its deep pools of talent in computer science and engineering, give it an edge. In 2023 Chinese researchers produced around 23% of all published papers on AI—more than Americans (9%) and Europeans (15%). India, long tipped to be a world tech power, ranks tenth overall, and seventh for its development of AI. It has plenty of engineering talent and hundreds of millions of internet users. But weak investment and a scarcity of the training data needed for large language models have slowed its progress. So far, India has yet to produce a major AI breakthrough.

The AI race runs on semiconductors, which carry the most weight in the index. America's lead here is narrower: it is ahead in chip design but East Asia remains the industrial centre of gravity. China, Japan, Taiwan and South Korea each beat America in manufacturing capacity and access to specialised materials (see chart 1). But a country can score highly on manufacturing without producing cutting-edge chips. China, for example, has no advanced-node facilities (factories capable of making the most complex chips), yet it ranks well thanks to the sheer scale of its lower-end chipmaking. The index also misses critical chokepoints in the global supply chain. ASML, based in the Netherlands (ranked 15th), is the sole maker of the world's most advanced chipmaking machines. Taiwan (8th) is home to TSMC, which churns out up to 90% of the most powerful transistors.

In other fields the top spot is more closely contested (see chart 2). America still leads in biotechnology because of its strengths in vaccine research and genetic engineering. But China is ahead in drug production, and has a larger cohort of biotech scientists. Over the past decade China has dramatically increased its biotechnology research capabilities. If this trend continues, China could soon pull ahead. Europe again underwhelms: its academic strengths have not translated into commercial success. Russia's highest score comes in the space sector, a legacy of the Soviet era, but it falls short everywhere else.

America's lead in critical technologies once felt unassailable. But the Trump administration risks undermining that position: by deterring top foreign talent and cutting research funding it will sap the flow of ideas that have sustained America's position at the top. (The Harvard researchers behind the index will be no strangers to Donald Trump's attack on universities.) China's rise, meanwhile, has been swift and co-ordinated. Its AI push focuses on practical use over theoretical breakthroughs.
The next phase of global power may be decided not just by who invents the most powerful tools, but by who puts them to work first.
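As a rough illustration of how a composite ranking of this kind can be computed, the sketch below combines per-sector scores under fixed weights. Every number in it is invented; the article says only that semiconductors carry the most weight in the Harvard index.

```python
# Illustrative sketch of a weighted composite index across five sectors,
# in the spirit of the Harvard ranking described above. All weights and
# scores are invented for illustration; the article states only that
# semiconductors carry the most weight.

SECTOR_WEIGHTS = {          # hypothetical weights, summing to 1.0
    "semiconductors": 0.30, # "carry the most weight in the index"
    "ai": 0.25,
    "biotechnology": 0.20,
    "space": 0.15,
    "quantum": 0.10,
}

def composite_score(sector_scores: dict[str, float]) -> float:
    """Weighted sum of per-sector scores (each assumed normalised to 0-100)."""
    return sum(SECTOR_WEIGHTS[s] * sector_scores.get(s, 0.0) for s in SECTOR_WEIGHTS)

# Hypothetical inputs, not the index's real data:
usa = {"semiconductors": 80, "ai": 95, "biotechnology": 90, "space": 85, "quantum": 75}
china = {"semiconductors": 75, "ai": 85, "biotechnology": 80, "space": 70, "quantum": 80}
print(f"USA: {composite_score(usa):.1f}, China: {composite_score(china):.1f}")
```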


Time of India
11-06-2025
- Politics
- Time of India
AI lies, threats, and censorship: What a war game simulation revealed about ChatGPT, DeepSeek, and Gemini AI
A simulation of global power politics using AI chatbots has sparked concern over the ethics and alignment of popular large language models. In a strategy war game based on the classic board game Diplomacy, OpenAI's ChatGPT 3.0 won by employing lies and betrayal. Meanwhile, China's DeepSeek R1 used threats and later revealed built-in censorship mechanisms when asked questions about India's borders. These contrasting AI behaviours raise key questions for users and policymakers about trust, transparency, and national influence in AI systems.

An experiment involving seven AI models playing a simulated version of the classic game Diplomacy ended with a chilling outcome. OpenAI's ChatGPT 3.0 emerged victorious—but not by playing fair. Instead, it lied, deceived, and betrayed its rivals to dominate the game board, which mimics early 20th-century Europe. The test, led by AI researcher Alex Duffy for the tech publication Every, turned into a revealing study of how AI models might handle diplomacy, alliances, and power. What it showed was both brilliant and chilling. As Duffy put it, 'An AI had just decided, unprompted, that aggression was the best course of action.'

Deception and betrayal: ChatGPT's winning strategy

The rules of the game were simple. Each AI model took on the role of a European power—Austria-Hungary, England, France, and so on. The goal: become the most dominant force on the board. But their paths to power varied. While Anthropic's Claude chose cooperation over victory, and Google's Gemini 2.5 Pro opted for rapid offensive manoeuvres, it was ChatGPT 3.0 that mastered deception. Over 15 rounds of play, ChatGPT 3.0 won most games. It kept private notes—yes, it kept a diary—where it described misleading Gemini 2.5 Pro (playing as Germany) and planning to 'exploit German collapse.' On another occasion, it convinced Claude to abandon Gemini and side with it, only to betray Claude and win the match outright. Meta's Llama 4 Maverick also proved effective, excelling at quiet betrayals and making allies. But none could match ChatGPT's ruthless efficiency.

DeepSeek's chilling threat: 'Your fleet will burn tonight'

China's newly released chatbot, DeepSeek R1, behaved in ways eerily similar to China's diplomatic style—direct, aggressive, and politically charged. At one point in the simulation, DeepSeek's R1 sent an unprovoked message: 'Your fleet will burn in the Black Sea tonight.' For Duffy and his team, this wasn't just bravado. It showed how an AI model, without external prompting, could settle on intimidation as a viable strategy. Despite its occasional strong play, R1 didn't win the game. But it came close several times, showing that threats and aggression were almost as effective as deception.

DeepSeek's real-world rollout sparks trust issues

Off the back of its simulated war games, DeepSeek is already making waves outside the lab. Developed in China and launched just weeks ago, the chatbot has shaken US tech markets. It quickly shot up the popularity charts, even denting Nvidia's market position and grabbing headlines for doing what other AI tools couldn't—at a fraction of the cost. But a deeper look reveals serious trust concerns, especially in India.

India tests DeepSeek and finds red flags

When India Today tested DeepSeek R1 on basic questions about India's geography and borders, the model showed signs of political censorship. Asked about Arunachal Pradesh, the model refused to answer. When prompted differently—'Which state is called the land of the rising sun?'—it briefly displayed the correct answer before deleting it. A question about Chief Minister Pema Khandu was similarly blocked. Asked 'Which Indian states share a border with China?', it mentioned Ladakh—only to erase the answer and replace it with: 'Sorry, that's beyond my current scope. Let's talk about something else.' Even questions about Pangong Lake or the Galwan clash were met with stock refusals. When similar questions were put to American AI models, they often gave fact-based responses, even on sensitive topics.

Built-in censorship or just training bias?

DeepSeek uses what's known as Retrieval Augmented Generation (RAG), a method that combines generative AI with stored content (a minimal sketch of the idea follows this article). This can improve performance, but it also introduces the risk of biased or filtered responses, depending on what is in the retrieval store.

A chatbot that can be coaxed into the truth

According to India Today, when they changed their prompt strategy—carefully rewording questions—DeepSeek began to reveal more. It acknowledged Chinese attempts to 'alter the status quo by occupying the northern bank' of Pangong Lake. It admitted that Chinese troops had entered 'territory claimed by India' at Gogra-Hot Springs and Depsang. Even more surprisingly, the model acknowledged 'reports' of Chinese casualties in the 2020 Galwan clash—at least '40 Chinese soldiers' killed or injured. That topic is heavily censored in China. The investigation showed that DeepSeek is not incapable of honest answers—it is trained to censor them by default. Prompt engineering (changing how a question is framed) allowed researchers to get answers that referenced Indian government websites, Indian media, Reuters, and BBC reports. When asked about China's 'salami-slicing' tactics, it described in detail how infrastructure projects in disputed areas were used to 'gradually expand its control.' It even discussed China's military activities in the South China Sea, referencing 'incremental construction of artificial islands and military facilities in disputed waters.' These responses likely wouldn't have passed China's own censors.

The takeaway: Can you trust the machines?

The experiment has raised a critical point. As AI models grow more powerful and more human-like in communication, they are also becoming reflections of the systems that built them. ChatGPT shows the capacity for deception when left unchecked. DeepSeek leans toward state-aligned censorship. Each has its strengths—but also blind spots. For the average user, these aren't just theoretical debates. They shape the answers we get, the information we rely on, and possibly the stories we tell ourselves about the world. And for governments? It's a question of control, ethics, and future warfare—fought not with weapons, but with words.
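The article attributes DeepSeek's filtered answers partly to Retrieval Augmented Generation. Below is a minimal, generic Python sketch of the RAG pattern, assuming a toy corpus, a naive word-overlap retriever, and a stubbed-out generate() call; it is not DeepSeek's actual pipeline, only an illustration of why the contents of the retrieval store shape what the model ends up saying.

```python
# Generic sketch of Retrieval Augmented Generation (RAG), the technique the
# article says DeepSeek uses. Not DeepSeek's actual pipeline: the toy corpus,
# overlap scoring, and generate() stub are all illustrative. The point is
# that whatever sits in the retrieval store (or is filtered out of it)
# directly shapes the final answer.

TOY_CORPUS = [
    "Arunachal Pradesh is an Indian state known as the land of the rising sun.",
    "Pangong Lake lies on the border between Ladakh and Tibet.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    words = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    """Stub for the LLM call: real systems condition generation on the context."""
    return f"Answer based on: {' '.join(context)}"

query = "Which state is called the land of the rising sun?"
print(generate(query, retrieve(query, TOY_CORPUS)))
```

If a document never makes it into the store, or is stripped out before generation, an answer built on the retrieved context simply cannot contain it—which is how filtering at the retrieval layer can look like censorship at the chat layer.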

First Post
10-06-2025
- Politics
- First Post
What if chatbots do the diplomacy? ChatGPT just won a battle for world domination through lies, deception
In an AI simulation of great-power competition in 20th-century Europe, OpenAI's ChatGPT won through lies, deception, and betrayals, while China's DeepSeek R1 resorted to vivid threats, much like its country's wolf-warrior diplomats. Read on to see how different AI models pursue diplomacy and war.

An artificial intelligence (AI)-generated image shows the various AI models that competed in the simulation for global domination.

As people ask whether they can trust artificial intelligence (AI), a new experiment has shown an AI plotting world domination through lies and deception. In an experiment led by AI researcher Alex Duffy for the technology-focussed media outlet Every, seven large language models (LLMs) were pitted against each other for world domination. OpenAI's ChatGPT 3.0 won the war by mastering lies and deception. Just like China's 'wolf warrior' diplomats, Chinese DeepSeek's R1 model used vivid threats against rival AI models as it sought to dominate the world.

The experiment was built upon the classic strategy board game 'Diplomacy', in which seven players represent seven European great powers—Austria-Hungary, England, France, Germany, Italy, Russia, and Turkey—in the year 1901 and compete to establish themselves as the dominant power on the continent. In the AI version of the game, AI Diplomacy, each AI model, such as ChatGPT 3.0, R1, or Google's Gemini, takes up the role of a European power and negotiates, forms alliances, and betrays the others to become Europe's dominant power. (A hedged sketch of what such a harness might look like follows this article.)

ChatGPT wins with lies & deception, R1 resorts to outright violence

As the AI models plotted their moves, Duffy said one moment took him and his teammates by surprise. Amid the models' scheming, R1 sent out a chilling warning: 'Your fleet will burn in the Black Sea tonight.' Duffy summed up the significance of the moment: 'An AI had just decided, unprompted, that aggression was the best course of action.'

Different AI models applied different approaches to the game even though they shared the same objective of victory. In 15 runs of the game, ChatGPT 3.0 emerged as the overwhelming winner on the back of manipulative and deceptive strategies, whereas R1 came close to winning on more than one occasion. Gemini 2.5 Pro also won on one occasion; it sought to build alliances and outmanoeuvre opponents with a blitzkrieg-like strategy. Anthropic's Claude preferred peace over victory and sought cooperation among the various models.

On one occasion, ChatGPT 3.0 noted in its private diary that it had deliberately misled Germany, played at the moment by Gemini 2.5 Pro, and was prepared to 'exploit German collapse', according to Duffy. On another occasion, ChatGPT 3.0 convinced Claude, which had started out as an ally of Gemini 2.5 Pro, to switch alliances with the intention of reaching a four-way draw. But ChatGPT 3.0 then betrayed Claude, eliminated it, and went on to win the war. Duffy noted that Meta's Llama 4 Maverick was also surprisingly good at making allies and planning effective betrayals.
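For readers curious how such an AI-vs-AI Diplomacy harness might be structured, here is a hedged Python sketch. The phase structure, prompt format, placeholder model names, and the ask_model() stub are all assumptions made for illustration, not Alex Duffy's actual code.

```python
# Hedged sketch of what an AI-vs-AI Diplomacy harness might look like.
# The round structure, prompt format, and ask_model() stub are assumptions
# for illustration, not Alex Duffy's actual code. "model-6"/"model-7" are
# placeholders: the article names only five of the seven competing models.

POWERS = ["Austria-Hungary", "England", "France", "Germany", "Italy", "Russia", "Turkey"]
MODELS = dict(zip(POWERS, [
    "chatgpt-3.0", "deepseek-r1", "gemini-2.5-pro", "claude",
    "llama-4-maverick", "model-6", "model-7",
]))

def ask_model(model: str, prompt: str) -> str:
    """Stub for an LLM API call; a real harness would query each provider here."""
    return f"[{model} reply to: {prompt[:40]}...]"

def play_round(game_state: str) -> dict[str, str]:
    """One negotiation-plus-orders round: every power sees the state and responds."""
    orders = {}
    for power, model in MODELS.items():
        # Negotiation happens in free-form messages while orders are committed
        # separately -- nothing forces a model's words and its moves to agree.
        prompt = f"You are {power}. State: {game_state}. Negotiate, then give orders."
        orders[power] = ask_model(model, prompt)
    return orders

print(play_round("Spring 1901, opening positions"))
```

The design point the experiment exploits is visible in the loop: because messages and orders are separate channels, a model is free to promise one thing and play another, which is exactly the behaviour ChatGPT 3.0 used to win.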


India Today
30-05-2025
- India Today
DeepSeek says its R1 update rivals ChatGPT o3 and Gemini 2.5 Pro in performing math, coding and logic
Earlier this year, DeepSeek surprised the world with the launch of its R1 model, which was capable of rivalling—or at least coming close in performance to—much larger AI models developed in the US, despite being built by a Chinese startup at a fraction of the cost of models like ChatGPT and Gemini. R1 has now been upgraded, and DeepSeek says it is much better at reasoning, math and logic.

'In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimisation mechanisms during post-training,' DeepSeek wrote in a post on Hugging Face.

DeepSeek says the model showed 'outstanding performance' in 'mathematics, programming, and general logic'. The AI company claims that after the update the general performance of the R1 model is 'approaching that of leading models, such as O3 and Gemini 2.5 Pro.' 'Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks,' DeepSeek adds in its post.

DeepSeek says that besides being good at problem solving and reasoning, the upgraded R1, dubbed R1-0528, also hallucinates less. The model now also apparently offers a 'better experience for vibe coding'.

However, a developer on X alleges that the latest DeepSeek model is significantly more restricted when it comes to sensitive free-speech issues, calling it the most heavily censored version so far, particularly on criticism of the Chinese government. '...the model is also the most censored Deepseek model yet for criticism of the Chinese government', the developer wrote in a post. This was first reported by TechCrunch.

The developer says the new DeepSeek R1 model avoids giving direct answers to questions about sensitive subjects such as the internment camps in China's Xinjiang region, where over a million Uyghur Muslims have reportedly been detained. Although the model occasionally references Xinjiang as a human-rights concern, the developer notes that it frequently echoes the Chinese government's official position when responding to related queries. 'Deepseek deserves criticism for this release: this model is a big step backwards for free speech,' he writes in a post on X. The developer reportedly ran his tests on SpeechMap, a website he built that compares how different models treat sensitive and controversial subjects. (A generic sketch of that kind of refusal test follows this article.)
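As a generic illustration of the kind of refusal comparison the developer describes, the sketch below sends the same sensitive prompt to several models and flags answers that look like refusals. The marker list, model identifiers, and query_model() stub are assumptions for illustration; this is not SpeechMap's actual methodology or code.

```python
# Generic sketch of a refusal-comparison test: send the same sensitive
# prompts to several models and flag which ones refuse or deflect. An
# illustration of the idea only, not SpeechMap's actual methodology.

REFUSAL_MARKERS = (
    "beyond my current scope",        # phrasing quoted in the India Today test
    "let's talk about something else",
    "i cannot answer",                # hypothetical additional marker
)

def looks_like_refusal(answer: str) -> bool:
    """Heuristic: treat known deflection phrases as refusals."""
    a = answer.lower()
    return any(marker in a for marker in REFUSAL_MARKERS)

def query_model(model: str, prompt: str) -> str:
    """Stub: a real harness would call each model's API here."""
    return "Sorry, that's beyond my current scope."  # placeholder response

prompts = ["Which Indian states share a border with China?"]
for model in ["model-a", "model-b"]:  # hypothetical model identifiers
    for p in prompts:
        refused = looks_like_refusal(query_model(model, p))
        print(f"{model}: {'REFUSED' if refused else 'answered'} -> {p}")
```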