How language LLMs will lead India's AI leap


Hindustan Times · 7 days ago

The next great power struggle in technology won't be about speed or scale, it'll be about whose language AI speaks. Because trust in technology begins with something deeply human: being understood.
You trust a doctor who speaks your language. You trust a banker who understands your context. So why would you trust an algorithm that doesn't know who you are, where you're from, or what your words mean?
This question is being asked by governments, developers, and communities across the Global South who have seen how powerful large language models (LLMs) can be—and how irrelevant they often are to people who don't speak English or live in Silicon Valley.
In India, the response so far has been BharatGPT: a collaboration between startups like CoRover.ai, government-backed platforms like Bhashini, and academic institutions such as the IITs. Its aim is not to chase ChatGPT on global benchmarks. Instead, it hopes to solve problems at home—helping citizens navigate government forms in Hindi, automating railway queries in Tamil, or enabling voice assistants in other regional languages. CoRover has already deployed multilingual chatbots in sectors like railways, insurance, and banking. The value here isn't just in automation. It's in comprehension.
This isn't unique to India. In South Africa, Lelapa AI is working on InkubaLM, a small language model trained in African languages. In Latin America, a consortium is building LatAm GPT, rooted in Spanish, Portuguese, and indigenous dialects. Each of these projects is a rebellion: against invisibility, against standardization, against a worldview where the technology speaks only in one accent.
What's driving this shift? 'Current large language models do not adequately represent the linguistic, cultural, or civic realities of many regions,' says Shrinath V, a Bengaluru-based product coach and Google for Startups mentor. 'As governments begin exploring AI-powered delivery of public services, from education and legal aid to citizen support, they recognize the need for models that reflect local languages, data, and social context. Regional LLMs are being positioned to fill that gap,' he explains.
Manoj Menon, founder of the Singapore-based research firm Twimbit, is on the same page as Shrinath: 'With AI there are several nuances that come into play—how we train them to be contextually relevant for our local, national needs.'
At the heart of it lies something more political: digital sovereignty. Shrinath breaks it down and says, 'Data sovereignty is no longer an abstract idea. Countries don't want to depend on models trained on data they don't control. Indigenous models are a way to retain that control.'
It boils down to geopolitical leverage. Nations that build their own models won't just protect cultural identity—they'll shape trade, diplomacy, and security doctrines in the AI era. 'This is a reasonable argument,' says Menon. 'How we interpret a particular subject or issue depends completely on the context. Hence geo-politics is a significant input. Also the ability to train based on local issues and context.'
Viewed through this lens, the shift underway towards frugal AI is more radical than most people realise. These are models that don't need massive GPUs or high-speed internet. They're lean, nimble, and context-rich. Think of it like this: if ChatGPT is a Tesla on a six-lane highway, BharatGPT is a motorbike designed for rough, narrow roads. Not as flashy. But it gets where it needs to go.
'Most countries will want a say in shaping how AI is adopted, governed, and deployed within a sovereign context,' points out Shrinath. This matters because AI is starting to mediate access to public services—healthcare, legal advice, welfare. And in that context, a model that doesn't understand a citizen's language isn't just ineffective. It's dangerous. It can mislead, it can exclude and it can fail silently.
So yes, Silicon Valley still leads the headlines. But away from the noise, something deeper is unfolding. A shift in who gets to define intelligence, in whose language it speaks and in whose image it is built. Regional AI, says Menon, 'won't go head-on with what is built in Silicon Valley. They will complement it and their opportunity will help AI be more relevant locally.'
These regional AI efforts don't seek applause; they seek agency. They aren't chasing scale; they're chasing significance. This revolution is not being televised, it's being trained.



Related Articles

A Tale of Yaay! and Hmm: Is India's growth story impressive, or disappointing — or a bit of both?

Economic Times · an hour ago

Purchasing power, stop running away! We're doing fine! India has become the world's 5th-largest economy, eclipsing former economic giants like Britain. In a matter of 1-2 years, it should be the 4th-largest, surpassing Japan. Post-pandemic economic growth is nothing to be scoffed at. India is the world's fastest-growing major economy. Over the past 3 years, a rather turbulent period for the world economy, India's GDP increased at nearly 8%. Yet, is the rising euphoria on India's escalating economic ranking justified? Perhaps. But only after we acknowledge the statistical meaning of being among the world's top-ranked economies. India is the world's most populous country. In per-capita terms, we are still ranked as low middle-income. In per-capita nominal GDP, India is 143rd in a ranking of 194 countries. Adjusting for purchasing power parity (PPP), it's at 125th - the rank goes up a few notches, but not by much. Humbling, yes. But let's not minimise the importance of being among the top 5 economies in overall GDP. China is 69th in nominal per-capita GDP, and 72nd in PPP per-capita GDP. Yet, its influence on the world stage is not diminished by its per-capita income ranking. China's economic and strategic influence is second only to the US', and sometimes even an example: while most nations have cowed into pleasing Donald Trump and accepted his trade deals, China has decided to fight - and appears to be winning. Many countries are weighing whether they should develop closer alliances with China or the US. India's influence will also be measured by its overall ranking in GDP, and not just by its per-capita ranking. Yet, let's keep in view the gap between India and the top two world economies. The US economy is $30 tn in nominal GDP. The Chinese economy is $19 tn. India's is far, far below at $3.9 tn. Humbling versus expectations: that's the other aspect of India's growth story.
In 2018, GoI pledged that India would be a $5 tn economy by 2025. This was a target that many experts viewed with amused scepticism. Of course, progress was halted by the two years of the pandemic. But for those long waiting for the arrival of the $5 tn economy, it's still disappointing to see that we are just halfway there. In 2018-19, India's GDP was $2.8 tn. In 2024-25, it's still $1.1 tn short of the target, a target we now hope to achieve a few years later. A look at leading sectors - where the world acknowledges India's influence - also brings a mixed tale of optimism and caution. India is the world's largest user of ChatGPT, and, according to a Microsoft, Bain & Company, and Internet and Mobile Association of India (IAMAI) report, home to 16% of the world's AI talent. Impressive. India has the ambition to lead the world in AI, and Narendra Modi says, 'AI will remain incomplete without India.' Yet, so far, India doesn't have an indigenous foundational language model, and it's 3-5 years away from developing domestic AI chips. It also lags substantially behind other nations in attracting investment in AI. Figures compiled by Stanford University researchers suggest that India received only $1.2 bn in private investment in AI. Of course, the US received the lion's share - $109 bn. But China received 7x more than India. A recent article in The Economist asks whether India can be an AI winner. It cautiously concludes that it has a lot to do to lead. The most-talked-about achievement on the manufacturing front is that Apple is now assembling 20% of its smartphones sold worldwide in India. By 2026, it plans to assemble in India all smartphones it will sell in the US. Again, impressive. Yet, the humbling reality is that India is simply assembling the phones, with almost all of their parts being manufactured in China or Southeast Asia.
Hopefully, this will change once Foxconn, Apple's top supplier, sets up production facilities in India. The biggest propeller for future economic growth is investment in R&D, where the US spends 3.5% of its even-larger GDP. Even in sectors where India has emerged as a top global supplier, investment in R&D is pathetic. India often labels itself the 'pharmacy of the world'. Indian pharma supplies 20% of all generic drugs globally, and 40% of generic drugs used in the US. Generic drugs do not need much R&D; the non-generic sector is substantially driven by R&D. According to the Journal of Medicinal Chemistry, in pharmaceuticals, China's R&D investment is 16x India's. India imports 70% of its drug ingredients from China. Clearly, in some sense, we are far behind China even in sectors where we have a major global presence. (Disclaimer: The opinions expressed in this column are that of the writer. The facts and opinions expressed here do not reflect the views of the publication.)

Algebra, philosophy and…: These AI chatbot queries cause most harm to environment, study claims

Time of India · 2 hours ago

Queries demanding complex reasoning from AI chatbots, such as those related to abstract algebra or philosophy, generate significantly more carbon emissions than simpler questions, a new study reveals. These high-level computational tasks can produce up to six times more emissions than straightforward inquiries like basic history questions. A study conducted by researchers at Germany's Hochschule München University of Applied Sciences, published in the journal Frontiers (seen by The Independent), found that the energy consumption and subsequent carbon dioxide emissions of large language models (LLMs) like OpenAI's ChatGPT vary based on the chatbot, user, and subject matter. An analysis of 14 different AI models consistently showed that questions requiring extensive logical thought and reasoning led to higher emissions. To mitigate their environmental impact, the researchers have advised frequent users of AI chatbots to consider adjusting the complexity of their queries.

Why do these queries cause more carbon emissions?

In the study, author Maximilian Dauner wrote: 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions. We found that reasoning-enabled models produced up to 50 times more carbon dioxide emissions than concise response models.' The study evaluated 14 large language models (LLMs) using 1,000 standardised questions to compare their carbon emissions. It explains that AI chatbots generate emissions through processes like converting user queries into numerical data.
On average, reasoning models produce 543.5 tokens per question, significantly more than concise models, which use only 40 tokens. 'A higher token footprint always means higher CO2 emissions,' the study adds. The study highlights that Cogito, one of the most accurate models with around 85% accuracy, generates three times more carbon emissions than other similarly sized models that offer concise responses. 'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies. None of the models that kept emissions below 500 grams of carbon dioxide equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly,' Dauner explained. Researchers used carbon dioxide equivalent to measure the climate impact of AI models and hope that their findings encourage more informed usage. For example, answering 600,000 questions with DeepSeek R1 can emit as much carbon as a round-trip flight from London to New York. In comparison, Alibaba Cloud's Qwen 2.5 can answer over three times more questions with similar accuracy while producing the same emissions. 'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dauner noted.
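The token averages above allow a rough back-of-the-envelope comparison. The sketch below is illustrative only: it assumes emissions scale linearly with the number of generated tokens, which is a simplification of the study's claim that 'a higher token footprint always means higher CO2 emissions' (reasoning models can also burn more compute per token, which is partly why the study reports gaps of up to 50x rather than the ~14x that token counts alone imply).

```python
# Illustrative, token-proportional emissions comparison.
# Token figures are the study's reported averages; the linear-scaling
# assumption is ours, not the study's methodology.
REASONING_TOKENS = 543.5  # avg output tokens per question, reasoning models
CONCISE_TOKENS = 40.0     # avg output tokens per question, concise models

def relative_emissions(tokens_a: float, tokens_b: float) -> float:
    """Emissions ratio of model A to model B under linear token scaling."""
    return tokens_a / tokens_b

ratio = relative_emissions(REASONING_TOKENS, CONCISE_TOKENS)
print(f"Token counts alone imply ~{ratio:.1f}x more CO2 per question")
# ~13.6x from tokens alone, versus the up-to-50x the study measured,
# showing that per-token compute differences matter too.
```

The gap between the ~13.6x token ratio and the measured up-to-50x emissions gap is the point of the hedge: output length drives emissions, but it is not the only driver.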

ChatGPT might be making you think less: MIT study raises 'red flags' about AI dependency

Time of India · 2 hours ago

As AI tools become part of our daily routines, a question is starting to bubble up: what happens when we rely on them too much? A new study from MIT's Media Lab takes a closer look at how tools like ChatGPT may be affecting our brains. And what the researchers found is worth paying attention to. The study focused on how people engage mentally when completing tasks with and without AI. It turns out that while ChatGPT can make writing easier, it may also be reducing how much we think. According to the research team, participants who used ChatGPT showed noticeably lower brain activity than those who did the same task using Google or no tech at all. The findings suggest that depending on AI for tasks that require effort, like writing, decision-making, or creative thinking, could weaken the very mental muscles we're trying to sharpen.

ChatGPT users show lowest brain activity in MIT's study

The experiment involved 54 participants between the ages of 18 and 39. They were split into three groups and asked to write essays in response to prompts similar to those on standardised tests. Group 1 used ChatGPT to generate their answers. Group 2 relied on Google Search to find and compile information. Group 3 worked without any tools, using only their knowledge and reasoning. While they worked, each participant wore a headset that tracked electrical activity across 32 areas of the brain. The aim was to see how engaged their minds were during the process. (The research was led by Dr. Nataliya Kosmyna along with a team that included Ashly Vivian Beresnitzky, Ye Tong Yuan, Jessica Situ, Eugene Hauptmann, Xian-Hao Liao, Iris Braunstein, and Pattie Maes.)
ChatGPT may be hurting your creativity, MIT researchers warn

The results were pretty clear: the group that used ChatGPT showed the lowest brain activity of all three groups. In particular, areas linked to memory, creativity, and concentration were significantly less active. In contrast, those who wrote without help from AI showed the highest mental engagement. They had to organise their thoughts, build arguments, and recall information, all things that activated the brain more deeply. Even the group using Google Search showed more engagement than the AI group, possibly because the process of looking for and evaluating information keeps the brain involved. There was another telling detail. Many in the ChatGPT group simply pasted the prompts into the tool and copied the output with little to no editing. Teachers who reviewed their essays said they felt impersonal, calling them 'soulless.' Dr. Kosmyna put it bluntly: 'They weren't thinking. They were just typing.'

AI dependency: short-term efficiency, long-term cost

Later in the study, researchers asked participants to rewrite one of their essays, this time without using any tools. The ChatGPT users struggled. Many couldn't remember their original arguments or structure. Since they hadn't processed the material deeply the first time, it hadn't stuck. Kosmyna described this as a red flag: 'It was efficient. But nothing was integrated into their brains.' That raises a broader concern: if AI is doing the heavy lifting, are we still learning? Or are we just moving text around while our cognitive skills fade in the background?

The growing concern among psychiatrists and educators

Dr. Zishan Khan, a psychiatrist who works with students, says he's already seeing signs of AI overuse in younger people. 'The neural pathways responsible for thinking, remembering, and adapting—they're weakening,' he explained.
The fear is that early and frequent reliance on tools like ChatGPT might lead to long-term cognitive decline, especially in developing brains. MIT's team is now expanding their research to see how AI affects people in other fields. They've already started looking at coders who use tools like GitHub Copilot. So far, Kosmyna says the early results there are 'even worse' in terms of mental engagement.

A word of warning for classrooms and beyond

Interestingly, the MIT researchers shared their findings before going through the full peer review process, something that's uncommon in academic research. But Kosmyna felt the potential impact was urgent enough to make an exception. 'I'm really concerned someone might say, 'Let's introduce ChatGPT into kindergarten classrooms,'' she said. 'That would be a terrible mistake. Young brains are especially vulnerable.' To prove just how easy it is to lose the depth of complex research, the team did something clever: they planted subtle factual 'traps' in the study. When readers ran the paper through ChatGPT to summarise it, many versions came back with key errors, including details the researchers never even included.

What does this mean for the future of AI use?

Should we stop using tools like ChatGPT? Not at all. The tool isn't the enemy. It can be incredibly helpful, especially when used wisely. But this study reminds us that how we use AI matters just as much as whether we use it. Here are a few takeaways from the researchers:
  • Use AI as a partner, not a replacement. Let it offer ideas, but make sure you're still doing the core thinking.
  • Stay actively involved. Skipping the process of learning or writing just to get a result means you're not absorbing anything.
  • Be cautious in education. Children need to build foundational skills before leaning on technology.
