Latest news with #GnaniAI


Mint
4 days ago
- Business
- Mint
India's big AI test is here: Making sovereign language models work
Bengaluru/New Delhi: For years, the world's most powerful artificial intelligence (AI) models have spoken in English. Trained on sprawling datasets such as Wikipedia, Reddit, and Common Crawl, models like OpenAI's GPT-4, Google's Gemini 2.5, Meta's Llama, Microsoft's Bing AI, and Anthropic's Claude have mastered the dominant global internet dialect. But they all falter when faced with the linguistic diversity of countries like India. English-dominated AI models can hallucinate (fabricate facts), mistranslate key phrases, or miss the cultural context when prompted in Indian languages.

The concern is also about inclusion. With over 1.4 billion people and 22 official languages, alongside thousands of dialects, India can ill afford to be an afterthought in the AI revolution. The country is expected to have over 500 million non-English internet users by 2030. If AI models can't understand them, the digital divide will only widen.

To address this, the Indian government launched the $1.2 billion IndiaAI Mission in February 2024. One of its central goals: to fund and foster the development of sovereign local language models and small language models (SLMs)—AI systems that are built, trained, and deployed entirely within India, on Indian data. While large language models (LLMs), such as GPT-4, handle broad tasks, having been trained on copious amounts of data, SLMs are smaller and typically built for specific uses.

In January, the government opened a nationwide call for proposals to develop foundational AI models rooted in Indian languages and datasets. By April, more than 550 pitches had poured in from startups, researchers, and labs eager to build either SLMs or general-purpose LLMs. In April, the government selected Sarvam AI to lead the charge. The Bengaluru-based startup will develop the country's first foundational model trained on local language datasets: a massive 120-billion parameter open-source model to power new digital governance tools.

Parameters are the settings that control how an AI model learns from data before making predictions or decisions. For instance, in a language model like ChatGPT, parameters help decide which word comes next in a sentence based on the words before it.

On 30 May, the government announced three more model-development efforts—from Soket AI, Gnani AI and Gan AI. Soket AI, based in Gurugram, will build a 120-billion parameter multilingual model focused on sectors like defence, healthcare, and education; Gnani AI, based in Bengaluru, will develop a 14-billion parameter voice AI model for multilingual speech recognition and reasoning; Gan AI, also based in India's Silicon Valley, is working on a 70-billion parameter model aimed at advanced text-to-speech capabilities.

During the launch of the three additional models, Union minister for electronics and information technology Ashwini Vaishnaw stressed the importance of more people being able to access technology and get better opportunities. "That's the philosophy with which IndiaAI Mission was created," the minister said. A senior official from the ministry of electronics and information technology (MeitY), speaking on condition of anonymity, told Mint that a foundational sovereign language model can be expected within the next 12 months. "We will see many more sovereign models after a year or so, hosted on the government's AI marketplace platform," the official added.

Why it matters
Beyond the language gap, the global AI landscape is being shaped by rising concerns around sovereignty, data control, and geopolitical risk.
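To make the article's earlier explanation of parameters concrete, the toy sketch below shows next-word prediction as a weighted scoring step: the entries of the weight matrix and bias are the "parameters", and a real system like the 120-billion parameter models described above simply has billions of such entries learned from data. The vocabulary, weights and hidden vector here are invented purely for illustration.

```python
import numpy as np

# Toy next-token predictor. The "parameters" are just the entries of W and b.
# All values below are invented for illustration; they come from no real model.
vocab = ["नमस्ते", "भारत", "AI", "भाषा", "<end>"]
hidden_dim = 4

rng = np.random.default_rng(0)
W = rng.normal(size=(len(vocab), hidden_dim))  # learned weight matrix (parameters)
b = np.zeros(len(vocab))                       # learned bias vector (parameters)

h = rng.normal(size=hidden_dim)  # stand-in for the model's summary of the words so far

logits = W @ h + b                    # one score per vocabulary word
probs = np.exp(logits - logits.max())
probs /= probs.sum()                  # softmax turns scores into probabilities

print("predicted next token:", vocab[int(np.argmax(probs))])
print("parameter count in this toy:", W.size + b.size)
```

Training adjusts W and b so that the highest-probability word matches what actually comes next in the training text; scaling that same idea to billions of parameters is what the models discussed in this article do.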
As AI becomes the cornerstone of digital infrastructure, nations are racing to build their own models. For India, the move also aligns with its broader vision of 'Atmanirbhar Bharat' (self-reliant India). India now joins a fast-growing club of countries that have developed or are developing sovereign LLMs—China (Baidu), France (Mistral), Singapore (SEA-LION), UAE (Falcon), Saudi Arabia (Mulhem), and Thailand (ThaiLLM).

Even before Sarvam, India had seen an uptick in language-model building activity. BharatGPT (by CoRover), Project Indus (Tech Mahindra), Hanooman (by Seetha Mahalaxmi Healthcare and 3AI), Krutrim (Ola), and Sutra (by Two AI) are some examples. In October 2024, BharatGen, a government-backed project, released Param-1, a 2.9-billion parameter bilingual model, along with 19 Indian language speech models. Led by IIT Bombay, BharatGen's mission is to boost public service delivery and citizen engagement using AI in language, speech, and computer vision.

Imagine a farmer in eastern Uttar Pradesh calling a helpline and interacting with a chatbot that understands and replies fluently in Bhojpuri, while also generating a clear summary for a government officer to act on. Or an AI tutor generating regional-language lessons, quizzes, and spoken explanations for students in languages like Marathi, Tamil, Telugu, or Kannada. These efforts fit into India's broader digital stack, alongside Aadhaar (digital identity), UPI (unified payments interface), ULI (unified lending interface) and ONDC (the Open Network for Digital Commerce).

In a world where AI models are fast becoming a symbol of digital leadership, "a sovereign LLM is also about owning the narrative, the data, and the future of its digital economy," said Akshay Khanna, managing partner at Avasant, a consulting firm. "Sovereignty will be a key requirement in all nations, including India," says Mitesh Agarwal, Asia-Pacific managing director at Google Cloud. He points out that Google's Gemini 1.5 processes data entirely within its India data centers. "For sensitive projects, we also offer open-source AI models and sovereign cloud options," he added.

Showing the way
Founded in July 2023 by Vivek Raghavan and Pratyush Kumar, Sarvam has raised $41 million from private investors. While the IndiaAI Mission won't inject cash, it will take a minority equity stake in the startup. For now, Sarvam will receive computing power—over 4,000 Nvidia H100 graphics processing units (GPUs) for six months—to train its model. The aim is to build a multimodal foundation model (text, speech, images, video, code, etc.) capable of reasoning and conversation, optimized for voice interfaces, and fluent in Indian languages.

"When we do so, a universe of applications will unfold," Sarvam co-founder Raghavan said at the launch on 26 April. "For citizens, this means interacting with AI that feels familiar, not foreign. For enterprises, it means unlocking intelligence without sending data beyond borders."

Sarvam is developing three model variants—a large model for "advanced reasoning and generation", a smaller one for "real-time interactive applications", and "Sarvam-Edge for compact on-device tasks". It is partnering with AI4Bharat, a research lab at the Indian Institute of Technology (IIT)-Madras supported by Infosys co-founder Nandan Nilekani and his philanthropist wife Rohini, to build these models. Sarvam has already developed Sarvam 1, a two-billion parameter multilingual language model trained on four trillion tokens using Nvidia H100 GPUs.
The company claims its custom tokenizer (which breaks text into small units, like words or parts of words, so a language model can understand and process it) is up to four times more efficient than leading English-centric models when processing Indian languages, thereby reducing costs. Sarvam 1 supports 11 languages: Hindi, Bengali, Tamil, Telugu, Kannada, Malayalam, Marathi, Gujarati, Oriya, Punjabi, and English. It powers various generative AI (GenAI) agents and is also hosted on Hugging Face, enabling developers to build Indic-language apps. Hugging Face is a platform for sharing and hosting open-source AI models and datasets.

Gnani AI, meanwhile, is building voice-to-voice foundational LLMs that aim to produce near-instant autonomous voice conversations with very low latency. The models also aim to enable "emotion-aware conversations", which preserve intonation, stress and rhythm, said Ganesh Gopalan, co-founder and CEO of Gnani AI. "The model will enable realistic conversations in governance, healthcare and education," he added.

Wait and watch
Sovereign LLMs and SLMs are likely to find strong acceptance in public service delivery and citizen engagement services across the country, much as UPI did. However, enterprises will likely wait until the models show maturity, are secure enough, and hallucinate less. Current sovereign models, Sanchit Vir Gogia, founder of Greyhound Research, explained, "lack deployment maturity, robust safety mechanisms, and domain-specific accuracy." The Greyhound CIO Pulse 2025 survey found that 67% of enterprises exploring Indic LLMs report frequent failures in multilingual task execution, especially with mixed scripts (e.g., Devanagari + Latin), identifying regional slang, or recognizing emotional cues in customer queries.

Further, language in India is hyper-local. Hindi spoken in Varanasi differs significantly from Hindi in Patna—not just in accent, but in vocabulary and usage. A health insurance aggregator in Bengaluru faced real-world fallout when its LLM couldn't differentiate between 'dard' (pain) and 'peeda' (suffering), leading to claim errors. The company had to halt the rollout and invest in regionally tuned data, Gogia said. Moreover, there are limited safeguards against hallucinations. "Without deeper fine-tuning, cultural grounding, and linguistic quality assurance, these models are too brittle for nuanced conversations and too coarse for enterprise-scale adoption," Gogia added. "The ambition is clear—but execution still needs time and investment."

The missing millions
Building sovereign models without government or venture capital funding could also pose a big challenge, since developing a foundational model from scratch is an expensive affair. For instance, OpenAI's GPT was in the works for more than six years, cost upwards of $100 million, and used an estimated 30,000 GPUs. Chinese AI lab DeepSeek did build an open-source reasoning model for just $6 million, demonstrating that high-performing models could be developed at low cost. But critics point out that the reported $6 million figure would have excluded expenses for prior research and experiments on architectures, algorithms, and data. Effectively, this means that only a lab which has already invested hundreds of millions in foundational research and secured access to extensive computing clusters could train a model of DeepSeek's quality with a $6 million run.
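Sarvam's tokenizer-efficiency claim above can be sanity-checked with a few lines of code: count how many tokens each tokenizer needs for the same Indic sentence, since compute and API costs scale with token count. The sketch below assumes the Hugging Face `transformers` library; the checkpoint names (`sarvamai/sarvam-1` as the Indic-optimised tokenizer and `gpt2` as an English-centric baseline) are assumptions, so substitute whichever models you actually have access to.

```python
from transformers import AutoTokenizer

# Assumed checkpoint IDs; replace with tokenizers you have access to.
CANDIDATES = ["sarvamai/sarvam-1", "gpt2"]

sample = "किसान ने फसल बीमा योजना के बारे में जानकारी मांगी।"  # a Hindi sentence

for name in CANDIDATES:
    tokenizer = AutoTokenizer.from_pretrained(name)
    n_tokens = len(tokenizer.encode(sample))
    print(f"{name}: {n_tokens} tokens for {len(sample)} characters")
```

Fewer tokens per sentence means fewer billed units and fewer forward passes per request, which is where the claimed cost advantage for Indian-language workloads would come from.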
Ankush Sabharwal, founder and CEO of CoRover, says that its BharatGPT chatbot is a "very small sovereign model with 500-million parameters". He has plans to build a 70-billion parameter sovereign model. "But we will need about $6 million to build and deploy it," Sabharwal says.

Long way to go
A glance at the download numbers for the month of May from Hugging Face underlines the wide gap between some of India's local language models and similar-sized global offerings. For instance, Sarvam-1's 2-billion parameter model saw just 3,539 downloads during the month. Krutrim, a 12-billion parameter model from Ola-backed Krutrim SI Designs, fared similarly with only 1,451 downloads. Fractal AI's Fathom-R1, a 14-billion parameter model, showed the most promise with 9,582 downloads. In contrast, international models of comparable or slightly larger sizes saw exponential traction. Google's Gemma-2 (2-billion) logged 376,800 downloads during the same period, while Meta's Llama 3.2 (3-billion) surpassed 1.5 million. Chinese models, too, outpaced Indian counterparts. Alibaba's Qwen3 (8-billion) recorded over 1.1 million downloads, while a fine-tuned version of the same model—DeepSeek-R1-0528-Qwen3-8B—clocked nearly 94,500 downloads.

The numbers underline the need for a stronger business case for Indian startups. The senior government official quoted earlier in the story said that sovereign models must stand on their own feet. "The government has created a marketplace where developers can access and build apps on top of sovereign models. But the startups must be able to offer their services first to India, and then globally," he said.

"API revenue, government usage fees, and long-term planning are key," said Aakrit Vaish, former CEO of Haptik and mission lead for IndiaAI until March. API revenue is what a company earns by letting others use its software features via an application programming interface. For example, OpenAI charges businesses to access models like ChatGPT through its API for writing, coding, or image generation.

Nonetheless, API access alone won't cover costs or deliver value, Gogia of Greyhound Research said. "Sovereign LLM builders must focus on service-led revenue: co-creating solutions with large enterprises, developing industry-specific applications, and securing government-backed rollouts," he suggested. Indian buyers, he added, want control—over tuning, deployment, and results. "They'll pay for impact, not model access. This isn't LLM-as-a-Service; it's LLM-as-a-Stack." In short, capability alone won't cut it. To scale and endure, sovereign language models must be backed by viable business propositions and stable funding—from public and private sources alike.
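The "API revenue" model described above boils down to exposing a model behind a metered endpoint and billing per call or per token. The sketch below is a generic illustration using Python's `requests` library; the URL, key, model name and response fields are hypothetical and do not describe any specific provider's API.

```python
import requests

# Hypothetical metered endpoint; URL, key and fields are placeholders, not a real API.
API_URL = "https://api.example-sovereign-llm.in/v1/chat"
API_KEY = "YOUR_API_KEY"

payload = {
    "model": "sovereign-14b",  # made-up model name
    "messages": [{"role": "user", "content": "फसल बीमा का दावा कैसे करें?"}],
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
data = response.json()

# Providers typically return token usage so each request can be billed.
print(data["choices"][0]["message"]["content"])
print("billed tokens:", data.get("usage", {}).get("total_tokens"))
```

In this model the provider's revenue is metered usage multiplied by a per-token or per-call price, which is why Gogia argues that usage fees alone may not cover the cost of building and serving a sovereign model.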


Hindustan Times
5 days ago
- Business
- Hindustan Times
Access to GPUs is a national concern: Gnani AI CEO Ganesh Gopalan
The biggest issue artificial intelligence (AI) companies face today is access to compute and the cost of building large language models (LLMs), Ganesh Gopalan, CEO and founder of Gnani AI, said during an interaction with HT. Selected under the Ministry of Electronics and Information Technology's (MeitY) 'India AI Mission' to build a foundational model for India, Gnani AI's CEO said the government is yet to clarify how much AI compute will be subsidised or what other benefits the company can expect under the programme.

Access to graphics processing units (GPUs) is a national concern, Gopalan told HT over a call, noting that more advanced versions of GPUs are continually being released. 'Some of the newer GPUs have much more computational power, but if they are going to be available after some time, is it worth waiting for them,' he said.

Under the India AI Mission, the government has selected four startups — Gnani AI, Sarvam AI, Soket AI, and Gan AI — from a pool of 506 proposals to build home-grown foundational models. How will these foundational models differ from OpenAI's GPT or Meta's Llama? They will be built from the ground up using India-specific datasets, languages, and cultural contexts.

While there is still some ambiguity within startups around the exact support being offered, MeitY secretary Abhishek Singh said in an April podcast that, in addition to nearly fully subsidising compute, the government will also cover costs related to engineering, personnel, and data. According to India AI's compute portal, Sarvam AI has received 4,096 fully subsidised GPUs, valued at over ₹246 crore. In total, the government has allocated 4,423 GPUs under the scheme so far, with a cumulative subsidy amounting to ₹259.89 crore. The rest of the subsidies have been allocated to researchers, early-stage startups and government entities. The government has also conducted rounds of GPU empanelment, building up a capacity of over 34,000 GPUs, of which 14,000 have been made available online on its platform.

Gopalan, along with Ananth Nagaraj, started the company seven years ago. Fast forward to 2025, and Gnani AI has over 100 clients and is building 14-billion parameter voice-to-voice foundational AI models for India, aimed at enabling instant and natural conversations without human intervention. The company plans to initially launch a model supporting 14 languages, gradually expanding to cover 22 languages. The startup has a three-pronged aim: reduce latency, increase accuracy and infuse emotional intelligence into its LLM. 'The models aim to enable emotionally aware conversations, preserving intonation, stress and rhythm in the conversations. The model uses a fused architecture to reduce inherent latencies and errors that cascade through the pipeline,' explained Gopalan.

Gnani AI has built a substantial dataset of 14 million hours of annotated audio, with the team initially spending two to three years collecting and curating the first set of data. 'We collected a lot of data all across India when we started in 2016, often at 1/1000 the cost of what a large company would collect it at, because we were a hungry startup without money,' said Gopalan jokingly. 'The dialects change in India every few kilometers. We would find out which districts we haven't collected data from and we would go there and collect it.' When asked whether the company buys data, the CEO responded, 'Why should I buy data for $100 an hour when we know how to get it for ₹100 an hour?'
Gnani AI collects data through several methods: using open-source datasets, working with language experts for lesser-resourced languages, and building proprietary data through a wide range of gamified mobile apps that gather diverse speech data for training its models. 'We sometimes also requested people to send over their voice through WhatsApp. Some of those were also given free by people, when we told them, "look, we are building this (LLM) for your language",' said Gopalan. His team is also looking at AI Kosh, a government platform where government and private datasets have been made available.

While Sarvam AI said in April that it will come out with its foundational model in six months, Soket AI Labs has set a one-year delivery timeline. Gnani has set a timeline of six to eight months to deliver the first version of its model once it receives access to compute resources from the government. Other companies are also showing interest in building foundational models for India. Two AI CEO Pranav Mistry confirmed to HT that he has submitted his foundational model proposal to the India AI Mission.
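Gopalan's comment above about a 'fused architecture' reducing latency and cascading errors contrasts with the conventional cascaded voice pipeline, where speech recognition, an LLM and text-to-speech run one after another. The sketch below is purely schematic: the stage functions, outputs and timings are placeholders and do not describe Gnani AI's actual system.

```python
import time

# Placeholder stages for a cascaded voice pipeline; timings and outputs are invented.
def speech_to_text(audio):
    time.sleep(0.3)                      # ASR latency; dialect errors start here
    return "फसल बीमा का दावा कैसे करें"

def generate_reply(text):
    time.sleep(0.7)                      # LLM latency; inherits any ASR mistakes
    return "दावा करने के लिए अपने बैंक या नज़दीकी CSC केंद्र से संपर्क करें।"

def text_to_speech(text):
    time.sleep(0.3)                      # TTS latency; prosody is synthesised last
    return f"<audio: {text}>"

def cascaded_reply(audio):
    """Latencies add up stage by stage, and errors cascade through the pipeline."""
    return text_to_speech(generate_reply(speech_to_text(audio)))

def fused_reply(audio):
    """A voice-to-voice model maps speech to speech in one pass: a single latency
    budget, no intermediate transcript to corrupt, and intonation, stress and
    rhythm can be carried through end to end."""
    time.sleep(0.5)                      # placeholder for a single forward pass
    return "<audio: दावा करने के लिए अपने बैंक या नज़दीकी CSC केंद्र से संपर्क करें।>"

for fn in (cascaded_reply, fused_reply):
    start = time.time()
    fn("<audio: user question>")
    print(f"{fn.__name__}: {time.time() - start:.1f}s")
```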


Times of India
31-05-2025
- Business
- Times of India
India AI: 3 more startups to build indigenous foundation model; common compute capacity expanded
New Delhi: After Sarvam AI, India on Friday selected three more startups -- Soket AI, Gan AI and Gnani AI -- to build indigenous artificial intelligence foundation models. In line with its global AI ambitions, backed by a comprehensive plan that entails enhanced AI infrastructure and local language model development, India has also announced the availability of 16,000 more GPUs, which would take the compute facility available to startups and researchers here to 34,000 GPUs, with the support of industry partners. The expanded compute capacity on cloud will provide a common computational AI platform for training and inference, crucial to developing indigenous foundational models and AI solutions tailored to the Indian context.

IT Minister Ashwini Vaishnaw said significant progress has been made on the IndiaAI Mission, with a focus on democratisation of technology. The compute facility, supercharged with 34,000 GPUs, will enable India to develop its AI ecosystem in a big way, he said. Seven bidders have offered their commercials for various categories of AI compute units (GPUs). These include Cyfuture India, Ishan Infotech, Locuz Enterprise Solutions, Netmagic IT Services, Sify Digital Services, Vensysco Technologies, and Yotta Data Services.

At the same time, three more teams -- Soket AI, Gan AI and Gnani AI -- have been selected to build indigenous artificial intelligence foundation models. "Like Sarvam, these three teams also have a very big target ahead of them. Whichever sector they focus on, they must be among the top five in the world," Vaishnaw said. Put simply, foundation models in generative AI are large, pre-trained models that form the base for a variety of AI applications.

The Minister further said that 367 datasets have already been uploaded to AI Kosh. He also highlighted the IndiaAI Mission's role in driving reverse brain drain and creating a comprehensive ecosystem entailing foundational models, compute capacity, safety standards, and talent development initiatives. Vaishnaw emphasised that these efforts are aimed at building a complete and inclusive AI ecosystem in India. In April this year, Sarvam AI was selected to build India's first indigenous AI foundational model, marking a key milestone in the country's AI innovation ecosystem.

Soket AI will develop an open-source 120-billion parameter foundation model optimised for the country's linguistic diversity, targeting sectors such as defence, healthcare, and education. Gan AI will create a 70-billion parameter multilingual foundation model, targeting capabilities that surpass the current global leader. Gnani AI will build a 14-billion parameter voice AI foundation model delivering multilingual real-time speech processing with advanced reasoning capabilities.

Ganesh Gopalan, co-founder and CEO of Gnani AI, said in a statement, "We are honoured to be selected under the IndiaAI Mission to develop large language models that truly represent India's linguistic diversity. At Gnani AI, our mission has always been to make technology more inclusive and accessible". Gopalan further said the company is keen to "lead the way in developing voice-to-voice large language models for India and the world, because we believe transformative AI must speak the language of the people it serves".

Meanwhile, under the IndiaAI Applications Development Initiative, Vaishnaw also announced the winners of the IndiaAI I4C CyberGuard AI Hackathon, jointly organised with the Indian Cyber Crime Coordination Centre (I4C), Ministry of Home Affairs.
"The Hackathon resulted in the development of AI-based solutions to enhance the classification of cybercrime complaints and support the identification of emerging crime patterns, trends, and modus operandi on the National Cyber Crime Reporting Portal (NCRP). These models can interpret complex inputs such as handwritten FIRs, screenshots, and audio calls with improved speed and accuracy," an official release said. PTI