
Latest news with #GPT-4

LinkedIn Cofounder Reid Hoffman says people are underestimating impact of AI on jobs, rejects bloodbath fears

India Today

11 hours ago


LinkedIn Cofounder Reid Hoffman says people are underestimating impact of AI on jobs, rejects bloodbath fears

Many professionals are worried about AI taking over jobs, especially in white-collar roles. However, LinkedIn co-founder Reid Hoffman believes that the fear of AI, particularly the panic over mass job losses, is exaggerated. Hoffman believes that while AI will certainly bring significant transformation to the job sector, there will be no bloodbath for white-collar workers.

Hoffman made these comments in response to a statement by Anthropic CEO Dario Amodei during an interview with Fast Company. Amodei had told Axios that AI could lead to a dramatic overhaul of white-collar work. While Hoffman acknowledged that change is inevitable, he dismissed the idea that the rise of AI would spell catastrophe for workers. 'Dario is right that over, call it, a decade or three, it will bring a massive set of job transformations. And some of that transformation will involve replacement issues,' Hoffman said. He emphasised that the shift in jobs due to AI should not be confused with total job destruction. 'Just because a function's coming that has a replacement area on a certain set of tasks doesn't mean all of this job's going to get replaced.'

To support his views, Hoffman pointed to the example of the launch of spreadsheet software like Excel. He highlighted that although Excel impacted the nature of accounting work, it did not eliminate the need for accountants. Instead, the accounting profession evolved and even expanded in scope. 'Everyone was predicting that the accountant job would go away. And actually, in fact, the accountant job got broader, richer,' he said.

Hoffman maintains a clear view that, in future, AI will assist humans rather than replace them entirely. He envisions a world of 'person plus AI doing things' as the most likely scenario going forward. Therefore, AI-powered tools like GPT-4, Claude, and Microsoft Copilot should be used to enhance departments, not eliminate them. He warns that trying to completely substitute humans with AI would be a serious mistake.
'Could I just replace, for example, my accountants with GPT-4? The answer is absolutely not. That would be a disastrous mistake.'

The LinkedIn co-founder also pushed back against the notion that automation through AI would wipe out entire departments. 'Let's replace my marketing department or my sales department with GPT-4. Absolutely not,' he said, adding, 'that's nowhere close to a bloodbath.'

However, Hoffman is not denying the potential for job replacement altogether. He acknowledges that some roles are more vulnerable, especially those that have already been reduced to scripted, mechanical tasks. 'What jobs are most likely to be replaced? They're the ones where we're trying to program human beings to act like robots.' Yet, even in such cases, Hoffman believes AI will not take over everything. Much will depend on how companies choose to implement AI in their workflows.

Reid Hoffman Downplays AI Job Loss Fears, Urges Focus on Human-AI Collaboration

Hans India

11 hours ago


Reid Hoffman Downplays AI Job Loss Fears, Urges Focus on Human-AI Collaboration

LinkedIn co-founder Reid Hoffman has pushed back against growing anxiety over artificial intelligence (AI) and its impact on employment, especially among white-collar workers. In a recent conversation sparked by comments from Anthropic CEO Dario Amodei, Hoffman argued that while AI will indeed change the landscape of work, fears of an all-out "job bloodbath" are exaggerated.

Amodei had earlier warned of AI driving a significant overhaul of white-collar jobs, raising concerns about the replacement of human roles. Hoffman, however, offered a more balanced perspective. 'Dario is right that over, call it, a decade or three, it will bring a massive set of job transformations. And some of that transformation will involve replacement issues,' he admitted. But he quickly clarified that this shift doesn't equate to widespread unemployment. 'Just because a function's coming that has a replacement area on a certain set of tasks doesn't mean all of this job's going to get replaced.'

Hoffman pointed to historical parallels to support his view, citing the example of Microsoft Excel. When spreadsheet software was introduced, many feared it would render accountants obsolete. Instead, the field evolved. 'Everyone was predicting that the accountant job would go away. And actually, in fact, the accountant job got broader, richer,' he said.

According to Hoffman, the future of work lies in symbiosis between humans and machines. He imagines a workplace where employees are empowered, not displaced, by AI tools such as GPT-4, Claude, and Microsoft Copilot. These technologies, he insists, should be used to enhance productivity, not eliminate human effort. 'Could I just replace, for example, my accountants with GPT-4? The answer is absolutely not. That would be a disastrous mistake,' Hoffman warned.

Hoffman strongly cautioned against wholesale automation, particularly the idea of removing entire departments. 'Let's replace my marketing department or my sales department with GPT-4. Absolutely not,' he said. 'That's nowhere close to a bloodbath.' While Hoffman does acknowledge that some roles are at greater risk, especially those made up of repetitive or scripted tasks, he believes the potential for AI to replace such jobs has more to do with how businesses choose to deploy these technologies. 'What jobs are most likely to be replaced? They're the ones where we're trying to program human beings to act like robots,' he said.

In conclusion, Hoffman remains optimistic about AI's role in the job market. Instead of viewing AI as a threat, he believes it should be seen as a powerful partner. 'Person plus AI doing things' is the model he champions, one where human judgment, creativity, and adaptability remain essential. As the debate around AI and jobs continues, Hoffman's call for cautious optimism and thoughtful implementation serves as a timely reminder: transformation does not have to mean elimination.

The secret AI sauce behind Meta stock's 683% rise since the dark days of 2022

CNBC

2 days ago


The secret AI sauce behind Meta stock's 683% rise since the dark days of 2022

Picture scrolling through Facebook or Instagram and spotting an advertisement that feels tailor-made for you. You're seeing Meta Platforms' artificial intelligence tools in action, crafting ads for its partner businesses that specifically target and attract customers based on their interests. It has also become the secret sauce behind Meta's wildly successful ad unit, driving its stellar financial performance and stock gains.

But few could see Meta's path to AI dominance just a few years ago. In 2022, the social media giant hit a low as investors balked at CEO Mark Zuckerberg's costly metaverse project and Apple's privacy changes disrupted its ad business. Meta shares fell by more than 60% in 2022, at one point closing as low as $88.91 a share. Then came Zuckerberg's "Year of Efficiency" in 2023, aimed at reversing the tide with layoffs and a focus on profitability. Meta quickly rewarded its loyal investors with a nearly 200% stock jump that year, as its AI-enhanced ads revived revenue growth and the company's cost cuts jumpstarted earnings. And despite lingering concerns over the company investing too much and too quickly in the nascent technology, it was hard to argue with the results. Meta delivered revenue and earnings beats for all four quarters of fiscal 2024.

Jump to 2025 and Meta has fully repositioned itself as an AI-first company, one of the most celebrated in the field. The company's open-source Llama large language models are the cornerstone of its strategy, competing against the likes of OpenAI's GPT-4 and Google's Gemini. Zuckerberg's investments in data centers and hardware aim to secure the company's long-term edge. Building these large, powerful AI models is critical to having "control of our own destiny" in powering the various AI opportunities Meta is focused on, Zuckerberg has said, emphasizing improved advertising and user experiences. As of Wednesday's close, its stock is up roughly 683% since that November 2022 closing low.
Meta's AI strategy has not been without hurdles. The company delayed the release of Behemoth, its most advanced large language model, originally slated for April, until the fall or later due to performance issues. At the same time, Llama 4's release didn't generate much enthusiasm, fueling perceptions that Meta was falling behind in the AI race. The challenges compounded when 11 out of 14 AI researchers left the company amid intensified competition.

The biggest risk to Meta's stock is losing its lead on the cutting edge of AI, said Gil Luria, analyst at D.A. Davidson, who noted that the underwhelming Llama 4 opened the door to competitors, including OpenAI, Anthropic and some Chinese models. Indeed, Zuckerberg personally stepped in to recruit top talent to stay out front, including bringing back Robert Fergus, a former Google DeepMind researcher and previous Meta employee, to enhance the AI division. Earlier this month, Meta made another major move, investing $14.8 billion for a 49% stake in data-labeling company Scale AI and hiring its CEO to lead a new "superintelligence" research lab, joined by Scale AI staff. In a sign of its aggressiveness, Meta also offered employees at OpenAI bonuses of $100 million to leave the ChatGPT creator, OpenAI CEO Sam Altman said on a recent podcast.

Key growth ingredient

So what exactly has made Meta's AI so special? The company's newest Llama 4 is "multimodal," which means it can process and learn from multiple types of data including text, images or backgrounds, similar to Google's Gemini and OpenAI's GPT-4 multimodal models. It powers Meta's ad growth by enabling advertisers to create tailored ads quickly and cost-effectively, and boosts engagement and performance across Facebook and Instagram. "Performance improvements from being multimodal benefit all applications," said Matt Steiner, Meta's VP of monetization infrastructure, ranking & AI foundations, in a CNBC interview.
"Models trained on different data sources like images or videos or text all benefit from being trained on the same model," he added, noting this versatility drives better ad targeting and content creation. Steiner emphasized that this approach "helps us maintain our competitive edge in advertising by maximizing return on advertiser spend while controlling costs."

Meta trails only Google in digital advertising, capturing 23% of global ad revenue in 2024 compared to Google's 28%, according to eMarketer. Meta's ad revenue rose 22% last year, outpacing Google's 12% and the industry's 9%. This growth, driven by AI-powered targeting, underscores Meta's momentum. "The ability to deliver better ads allows them to sell those ads for more, which is why they're gaining share in the digital ad market," said Luria, noting that Meta has been adept at using traditional machine learning techniques. He said Meta is taking this technology to an advanced level, "automating it further — allowing advertisers to generate content and make even more well-informed decisions." This next wave, generative AI, "deepens Meta's competitive moat." He highlighted an example of how Reels is becoming more compelling to users. That's because Meta is "getting better at the AI algorithm that allows them to serve the next best short video to keep the consumer engaged."

The introduction of ads on WhatsApp, announced this week, also expands the reach of Meta's AI ad tools, creating a new place where it can generate high-margin revenue. Meta on Tuesday announced new generative AI tools for its Advantage+ platform, enabling advertisers to integrate brand elements into personalized ads and create animated videos from images with music and text overlays. The company is also testing a new feature, "Video Highlights," which uses AI summaries to help viewers digest video ads by skipping to the highlights of the video.
Looking ahead, Meta's generative AI advancements are enabling it to deliver a higher volume of personalized ads. The goal: to keep and strengthen its AI edge. As Meta's Steiner put it, the company's "compounding effects" of AI improvements fuel growth, which should help Meta make more gains in digital advertising. The company's financial and strategic commitments show Zuckerberg and Co. are in a league of their own in digital ads as Meta continues to take market share. We're confident in Meta's ability to keep innovating on its advertising tools using AI, keeping customers around and its business growing. We currently have a 2 rating on the stock with a price target of $750.

(Jim Cramer's Charitable Trust is long META. See here for a full list of the stocks.)

As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade. Jim waits 45 minutes after sending a trade alert before buying or selling a stock in his charitable trust's portfolio. If Jim has talked about a stock on CNBC TV, he waits 72 hours after issuing the trade alert before executing the trade. THE ABOVE INVESTING CLUB INFORMATION IS SUBJECT TO OUR TERMS AND CONDITIONS AND PRIVACY POLICY, TOGETHER WITH OUR DISCLAIMER. NO FIDUCIARY OBLIGATION OR DUTY EXISTS, OR IS CREATED, BY VIRTUE OF YOUR RECEIPT OF ANY INFORMATION PROVIDED IN CONNECTION WITH THE INVESTING CLUB. NO SPECIFIC OUTCOME OR PROFIT IS GUARANTEED.

India's big AI test is here: Making sovereign language models work

Mint

3 days ago


India's big AI test is here: Making sovereign language models work

Bengaluru/New Delhi: For years, the world's most powerful artificial intelligence (AI) models have spoken in English. Trained on sprawling datasets like Wikipedia, Reddit, and Common Crawl, models such as OpenAI's GPT-4, Google's Gemini 2.5, Meta's Llama, Microsoft's Bing AI, and Anthropic's Claude have mastered the dominant global internet dialect. But they all falter when faced with the linguistic diversity of countries like India. English-dominated AI models can hallucinate (fabricate facts), mistranslate key phrases, or miss the cultural context when prompted in Indian languages.

The concern is also over inclusion. With over 1.4 billion people and 22 official languages, alongside thousands of dialects, India can ill afford to be an afterthought in the AI revolution. The country is expected to total over 500 million non-English internet users by 2030. If AI models can't understand them, the digital divide will only widen.

To address this, the Indian government launched a $1.2 billion IndiaAI Mission in February 2024. One of its central goals: to fund and foster the development of sovereign local language models and small language models (SLMs), AI systems that are built, trained, and deployed entirely within India, on Indian data. While large language models (LLMs), such as GPT-4, handle broad tasks, having been trained on copious amounts of data, SLMs are smaller, typically built for specific uses.

In January, the government opened a nationwide call for proposals to develop foundational AI models rooted in Indian languages and datasets. By April, more than 550 pitches had poured in from startups, researchers, and labs eager to build either SLMs or general-purpose LLMs. In April, the government selected Sarvam AI to lead the charge. The Bengaluru-based startup will develop the country's first foundational model trained on local language datasets. It will build a massive 120-billion parameter open-source model to power new digital governance tools.
Parameters are settings that control how the AI model learns from data before making predictions or decisions. For instance, in a language model like ChatGPT, parameters help decide which word comes next in a sentence based on the words before it.

On 30 May, the government announced three more model-development efforts, from Soket AI, Gnani AI and Gan AI. Soket AI, based in Gurugram, will build a 120-billion multilingual model focused on sectors like defence, healthcare, and education; Gnani AI, based in Bengaluru, will develop a 14-billion voice AI model for multilingual speech recognition and reasoning; Gan AI, also based in India's Silicon Valley, is working on a 70-billion parameter model aimed at advanced text-to-speech capabilities.

During the launch of the three additional models, union minister for electronics and information technology, Ashwini Vaishnaw, stressed the importance of more people being able to access technology and get better opportunities. 'That's the philosophy with which IndiaAI Mission was created,' the minister said. A senior official from the ministry of electronics and information technology (MeitY), speaking on condition of anonymity, told Mint that a foundational sovereign language model can be expected within the next 12 months. 'We will see many more sovereign models after a year or so, hosted on the government's AI marketplace platform,' the official added.

Why it matters

Beyond the language gap, the global AI landscape is being shaped by rising concerns around sovereignty, data control, and geopolitical risk. As AI becomes the cornerstone of digital infrastructure, nations are racing to build their own models. The move also aligns with India's broader vision of 'Atmanirbhar Bharat' (self-reliant India). India now joins a fast-growing club of countries that have developed or are developing sovereign LLMs: China (Baidu), France (Mistral), Singapore (SEA-LION), UAE (Falcon), Saudi Arabia (Mulhem), and Thailand (ThaiLLM).
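The role of parameters described above can be made concrete with a toy sketch. The snippet below is purely illustrative, not how ChatGPT or any production model actually works: it "trains" a bigram predictor whose learned transition counts play the role of parameters, steering which word comes next.

```python
# Toy illustration only: a bigram predictor whose learned transition
# counts act as "parameters" that decide which word comes next.
from collections import defaultdict, Counter

corpus = "the model predicts the model output from the words before it".split()

# "Training": count word-to-word transitions; these counts are the parameters.
params = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    params[prev][nxt] += 1

def predict_next(word):
    """Return the next word with the highest learned weight, if any."""
    candidates = params.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "model" carries the highest weight after "the"
```

Real models replace these handful of counts with billions of continuous weights tuned by training, but the core idea is the same: stored numbers, not rules written by hand, determine the next-word choice.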
Even before Sarvam, India had seen an uptick in language model building activity. BharatGPT (by CoRover), Project Indus (Tech Mahindra), Hanooman (by Seetha Mahalaxmi Healthcare and 3AI), Krutrim (Ola), and Sutra (by Two AI) are some examples. In October 2024, BharatGen, a government-backed project, released Param-1, a 2.9-billion parameter bilingual model, along with 19 Indian language speech models. Led by IIT Bombay, BharatGen's mission is to boost public service delivery and citizen engagement using AI in language, speech, and computer vision.

Imagine a farmer in eastern Uttar Pradesh calling a helpline and interacting with a chatbot that understands and replies fluently in Bhojpuri, while also generating a clear summary for a government officer to act on. Or an AI tutor generating regional-language lessons, quizzes, and spoken explanations for students in languages like Marathi, Tamil, Telugu, or Kannada. These efforts fit into India's broader digital stack, alongside Aadhaar (digital identity), UPI (unified payments interface), ULI (unified lending interface) and ONDC (the Open Network for Digital Commerce).

In a world where AI models are fast becoming a symbol of digital leadership, 'a sovereign LLM is also about owning the narrative, the data, and the future of its digital economy', said Akshay Khanna, managing partner at Avasant, a consulting firm. 'Sovereignty will be a key requirement in all nations including India,' says Mitesh Agarwal, Asia-Pacific managing director at Google Cloud. He points out that Google's Gemini 1.5 processes data entirely within its India data centers. 'For sensitive projects, we also offer open-source AI models and sovereign cloud options,' he added.

Showing the way

Founded in July 2023 by Vivek Raghavan and Pratyush Kumar, Sarvam has raised $41 million from private investors. While the IndiaAI Mission won't inject cash, it will take a minority equity stake in the startup.
For now, Sarvam will receive computing power: over 4,000 Nvidia H100 graphics processing units (GPUs) for six months to train its model. The aim is to build a multimodal foundation model (text, speech, images, video, code, etc.) capable of reasoning and conversation, optimized for voice interfaces, and fluent in Indian languages. 'When we do so, a universe of applications will unfold,' Sarvam co-founder Raghavan said at the launch on 26 April. 'For citizens, this means interacting with AI that feels familiar, not foreign. For enterprises, it means unlocking intelligence without sending data beyond borders.'

Sarvam is developing three model variants: a large model for 'advanced reasoning and generation'; a smaller one for 'real-time interactive applications'; and 'Sarvam-Edge' for compact on-device tasks. It is partnering with AI4Bharat, a research lab at the Indian Institute of Technology (IIT)-Madras, supported by Infosys co-founder Nandan Nilekani and his philanthropist wife Rohini, to build these models.

Sarvam has already developed Sarvam 1, a two-billion parameter multilingual language model, trained on four trillion tokens using Nvidia H100 GPUs. The company claims its custom tokenizer (which breaks text into small units, like words or parts of words, so a language model can understand and process it) is up to four times more efficient than leading English-centric models when processing Indian languages, thereby reducing costs. Sarvam 1 supports 11 languages: Hindi, Bengali, Tamil, Telugu, Kannada, Malayalam, Marathi, Gujarati, Oriya, Punjabi, and English. It powers various generative AI (GenAI) agents and is also hosted on Hugging Face, enabling developers to build Indic-language apps. Hugging Face is a platform for sharing and hosting open-source AI models and datasets.

Gnani AI, meanwhile, is building voice-to-voice foundational LLMs that aim to produce near-instant autonomous voice conversations with very low latency.
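Sarvam's tokenizer itself isn't described here, but the efficiency claim is easy to illustrate. The sketch below is purely illustrative (real tokenizers use learned subword vocabularies, not these splits): it contrasts a character-level split, roughly what happens when an English-centric vocabulary falls back on unfamiliar Devanagari text, with a word-level split, and shows why token count drives cost; the per-token price is an assumed figure.

```python
# Illustrative only: why tokenizer efficiency matters for cost.
# English-centric tokenizers often shred Indic text into many tiny pieces;
# compare a character-level split with a word-level one on a Hindi sentence.
text = "भारत में एआई का भविष्य उज्ज्वल है"  # "The future of AI in India is bright"

char_tokens = [c for c in text if not c.isspace()]  # one token per character
word_tokens = text.split()                          # one token per word

price_per_token = 0.000002  # dollars; hypothetical billing rate

ratio = len(char_tokens) / len(word_tokens)
print(f"{len(char_tokens)} vs {len(word_tokens)} tokens ({ratio:.1f}x more)")
print(f"cost: ${len(char_tokens) * price_per_token:.6f} "
      f"vs ${len(word_tokens) * price_per_token:.6f}")
```

The same sentence costs several times more when it is split into several times more tokens, which is why a tokenizer tuned for Indian scripts translates directly into cheaper inference.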
The models also aim to enable 'emotion aware conversations', which preserve intonation, stress and rhythm in the conversations, said Ganesh Gopalan, co-founder and CEO of Gnani AI. 'The model will enable realistic conversations in governance, healthcare and education,' he added.

Wait and watch

Sovereign LLMs and SLMs are likely to find strong acceptance in public service delivery and citizen engagement services across the country, just as happened with UPI. However, enterprises will likely wait till the models show maturity, are secure enough, and hallucinate less. Current sovereign models, Sanchit Vir Gogia, founder of Greyhound Research, explained, 'lack deployment maturity, robust safety mechanisms, and domain-specific accuracy.' The Greyhound CIO Pulse 2025 survey found that 67% of enterprises exploring Indic LLMs report frequent failures in multilingual task execution, especially with mixed scripts (e.g., Devanagari + Latin), identifying regional slang, or recognizing emotional cues in customer queries.

Further, language in India is hyper-local. Hindi spoken in Varanasi differs significantly from Hindi in Patna, not just in accent, but in vocabulary and usage. A health insurance aggregator in Bengaluru faced real-world fallout when its LLM couldn't differentiate between 'dard' (pain) and 'peeda' (suffering), leading to claim errors. The company had to halt rollout and invest in regionally-tuned data, Gogia said. Moreover, there are limited safeguards against hallucinations. 'Without deeper fine-tuning, cultural grounding, and linguistic quality assurance, these models are too brittle for nuanced conversations and too coarse for enterprise-scale adoption,' Gogia added. 'The ambition is clear, but execution still needs time and investment.'

The missing millions

Building sovereign models without government or venture capital funding could also pose a big challenge, since developing a foundational model from scratch is an expensive affair.
For instance, OpenAI's GPT was in the works for more than six years, cost upwards of $100 million, and used an estimated 30,000 GPUs. Chinese AI lab DeepSeek did build an open-source reasoning model for just $6 million, demonstrating that high-performing models could be developed at low cost. But critics point out that the reported $6 million figure would have excluded expenses for prior research and experiments on architectures, algorithms, and data. Effectively, this means that only a lab which has already invested hundreds of millions in foundational research and secured access to extensive computing clusters could train a model of DeepSeek's quality with a $6 million run.

Ankush Sabharwal, founder and CEO of CoRover, says that its BharatGPT chatbot is a 'very small sovereign model with 500-million parameters'. He has plans to build a 70-billion parameter sovereign model. 'But we will need about $6 million to build and deploy it,' Sabharwal says.

Long way to go

A glance at the download numbers for the month of May from Hugging Face underlines the wide gap between some of India's local language models and similar-sized global offerings. For instance, Sarvam-1's 2-billion model saw just 3,539 downloads during the month. Krutrim, a 12-billion model from Ola-backed Krutrim SI Designs, fared similarly with only 1,451 downloads. Fractal AI's Fathom-R1 14-billion model showed the most promise with 9,582 downloads. In contrast, international models with comparable or slightly larger sizes saw exponential traction. Google's Gemma-2 (2-billion) logged 376,800 downloads during the same period, while Meta's Llama 3.2 (3-billion) surpassed 1.5 million. Chinese models, too, outpaced Indian counterparts. Alibaba's Qwen3 (8-billion) recorded over 1.1 million downloads, while a fine-tuned version of the same model, DeepSeek-R1-0528-Qwen3-8B, clocked nearly 94,500 downloads. The numbers underline the need for a stronger business case for Indian startups.
The senior government official quoted earlier in the story said that sovereign models must stand on their own feet. 'The government has created a marketplace where developers can access and build apps on top of sovereign models. But the startups must be able to offer their services first to India, and then globally,' he said.

'API revenue, government usage fees, and long-term planning are key,' said Aakrit Vaish, former CEO of Haptik and mission lead for IndiaAI until March. API revenue is what a company earns by letting others use its software features via an application programming interface. For example, OpenAI charges businesses to access models like ChatGPT through its API for writing, coding, or image generation.

Nonetheless, API access alone won't cover costs or deliver value, Gogia of Greyhound Research said. 'Sovereign LLM builders must focus on service-led revenue: co-creating solutions with large enterprises, developing industry-specific applications, and securing government-backed rollouts,' he suggested. Indian buyers, he added, want control over tuning, deployment, and results. 'They'll pay for impact, not model access. This isn't LLM-as-a-Service; it's LLM-as-a-Stack.' In short, capability alone won't cut it. To scale and endure, sovereign language models must be backed by viable business propositions and stable funding, from public and private sources alike.
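As a rough sketch of the metered API revenue model described above, a provider's income is simply tokens processed times a per-token price. Every number below is hypothetical, chosen for illustration rather than taken from any provider's actual pricing.

```python
# Back-of-the-envelope sketch of per-token API billing; all figures
# below are hypothetical, not any real provider's rates.
def monthly_api_revenue(requests_per_day, avg_tokens_per_request,
                        price_per_million_tokens, days=30):
    """Revenue from metered API usage over a month."""
    tokens = requests_per_day * avg_tokens_per_request * days
    return tokens / 1_000_000 * price_per_million_tokens

# e.g. 2 million requests a day, 1,500 tokens each, $0.50 per million tokens
print(f"${monthly_api_revenue(2_000_000, 1_500, 0.50):,.0f} per month")
```

The arithmetic shows why volume matters so much: at thin per-token margins, only services with sustained, large-scale usage (such as government rollouts) cover training and serving costs.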

Sam Altman reveals water cost of each ChatGPT query; it will surprise you

Time of India

12-06-2025


Sam Altman reveals water cost of each ChatGPT query; it will surprise you

In a surprising revelation, OpenAI CEO Sam Altman shared that a single ChatGPT query uses only a few drops of water. This comes at a time when the environmental cost of artificial intelligence is under growing scrutiny. In a blog post, Altman said each query consumes about 0.000085 gallons of water. That's roughly one-fifteenth of a teaspoon. AI models like ChatGPT run on massive server farms that must be cooled constantly. This makes water usage an important part of the conversation. Altman's claim aims to ease public concern, but some experts want more clarity and proof.

How water usage is connected to ChatGPT

AI runs on powerful computers stored in data centers that produce a lot of heat. To keep them from overheating, companies use cooling systems that often depend on water. As tech becomes more central to daily life, water use has joined energy and carbon emissions in the sustainability debate.

Sam Altman's water estimate and what it means

Altman said each ChatGPT query takes about 0.34 watt-hours of electricity and a few drops of water. That may sound small, but when you think about the millions of queries made each day, the total adds up. Critics point out that OpenAI has not explained how this number was calculated. That lack of detail has made some experts cautious.

Past concerns about AI's water use

A report from The Washington Post last year estimated that creating a 100-word email with GPT-4 could use more than a full bottle of water. These numbers were tied to the cooling needs of data centers, especially those in hot and dry places. Altman's latest statement appears to push back on that report as pressure grows on tech firms to be more accountable.
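Altman's figures are easy to sanity-check. The snippet below converts 0.000085 gallons to teaspoons and scales it up; the one-billion-queries-a-day volume is an assumption for illustration, not an OpenAI number.

```python
# Sanity-checking the water figure from Altman's blog post.
GALLONS_PER_QUERY = 0.000085
TSP_PER_GALLON = 768  # 1 US gallon = 768 US teaspoons

tsp_per_query = GALLONS_PER_QUERY * TSP_PER_GALLON
print(f"{tsp_per_query:.3f} tsp per query (~1/{1 / tsp_per_query:.0f} teaspoon)")

# At an assumed one billion queries a day, the drops add up:
daily_gallons = GALLONS_PER_QUERY * 1_000_000_000
print(f"{daily_gallons:,.0f} gallons per day")
```

The per-query conversion does come out to about one-fifteenth of a teaspoon, matching Altman's framing, while the aggregate line shows why critics still focus on total data-center consumption rather than per-query figures.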
Experts call for transparency

Many in the tech and environmental space say companies like OpenAI need to publish independent and verified data about their resource use. Altman's number sounds reassuring, but without knowing how the math was done or where the servers are located, it is hard to trust fully.

Can AI be sustainable?

As AI becomes a part of more industries and daily life, its long-term environmental cost matters more than ever. Altman believes the cost of intelligence will one day drop to the price of electricity alone. That could make AI both affordable and sustainable. But for now, even a few drops of water per query raise big questions.
