Latest news with #DemisHassabis


Deccan Herald
15 hours ago
- Deccan Herald
Google brings AI Mode for voice search on iPhone, Android mobiles
Earlier this year, in March, Google introduced an experimental Artificial Intelligence (AI) Mode in Search, but it had a limitation: it accepted only text requests. Now, the company has added a voice search option called 'Search Live'. Users can ask questions aloud, and the interaction can be as natural as people talking to each other. However, the generative AI (gen AI) voice will sound a little robotic, as this is a beta feature. The new feature is now available on the Google Search app for iPhone and Android phones in the US. It will be expanded to more regions, including India, in the coming months.

Here's how to use 'Search Live' on your phone: users just open the Google app, tap the new 'Live' icon, and verbally ask their question, like, 'What are some tips for improving the flowering growth in the rose plant?' The user will hear a helpful AI-generated audio response and can easily follow up with another question, like, "How many times should I re-soil the pot in a year?" The AI chatbot will also offer easy-to-access links right on the phone's screen so the user can dig deeper with content from the web. The chatbot also offers a transcription of the interaction, which can be stored in the phone's notepad for future reference.

In the coming months, Google plans to introduce even more Live capabilities to AI Mode, including a camera option where users can talk back and forth with the app, show it what they are seeing in real time, and retrieve related information on the spot.

In a related development, Google DeepMind, the creator of Gemini AI, is reportedly working on a new technology that can mimic a user's writing style and auto-reply to people via email, Demis Hassabis, the head of Google DeepMind, confirmed at SXSW London. In the next-generation Gmail, users can automate their responses.
Users need not even look at the email; Gmail will intuitively understand the context of the message and respond in an appropriate tone that matches the owner's writing style.

Yahoo
2 days ago
- Business
- Yahoo
Why Big Tech cannot agree on artificial general intelligence
On the front cover of their initial business plan for DeepMind, the AI lab they set up in 2010, Sir Demis Hassabis, Mustafa Suleyman and


Jordan News
5 days ago
- Business
- Jordan News
Is Google Changing the Face of the Internet Forever? - Jordan News
With the launch of a groundbreaking AI tool integrated into its search engine, Google has become the center of a global debate: is this the beginning of a smarter internet—or the end of the open web as we know it? For decades, the internet has operated on a mutual exchange: websites allow search engines free access to their content, and in return, search engines direct users to those sites—driving traffic, ad revenue, and commerce. Currently, about 68% of online activity begins with a search, and Google controls nearly 90% of all global searches, making it the gatekeeper of online discovery.

The Rise of AI-Driven Search

In recent years, Google has introduced subtle but profound changes, notably with features like AI Overviews, which summarize information directly in search results. Now, the company is rolling out an even more radical update: AI Mode, a chatbot-style interface that generates full answers to user queries—eliminating the need to click through to other websites. Though currently optional and limited to U.S. users, AI Mode is expected to become the default in the near future, replacing traditional blue links with machine-generated summaries.

Opportunity or Threat?

Supporters see this as an opportunity to modernize and streamline the internet. Google claims AI-powered search will offer more relevant and personalized results while continuing to support digital publishers. A company spokesperson stated: 'We're committed to connecting users with helpful content. Innovations like AI Mode unlock new pathways for discovering and creating knowledge.' But critics warn this could cripple the web's ecosystem. If users get all their answers directly from Google's AI, websites may suffer a massive decline in traffic—particularly those that rely on organic search for ad revenue or product sales. Some experts fear this shift could centralize control over content, reducing diversity of information and allowing algorithms to dictate what is seen or hidden.
The result? A less open, less vibrant internet.

The Machine Web is Here

Data from BrightEdge, a web analytics firm, reveals a 49% increase in impressions thanks to AI Overviews, but a 30% drop in click-through rates. Users are getting what they need without ever leaving the search page. This hints at the dawn of a "Machine Web"—a world where websites are no longer built for people, but for algorithms. In this future, robots summarize knowledge, and the user's role becomes increasingly passive. Demis Hassabis, head of Google DeepMind, recently remarked: 'Publishers may choose to deliver content directly to AI systems instead of humans. In just a few years, everything will change.'

Convenience at a Cost

On the surface, it all seems easy: answers appear instantly, decisions become effortless. But this convenience may erase the magic of the web—the joy of discovery, the thrill of unexpected rabbit holes, and the wonder of exploring human-made content that surprises and inspires. In a future ruled by intelligent machines, we must ask: will the internet still be a place for curiosity and connection? Or will it become a sterile stream of automated replies?


NDTV
6 days ago
- Science
- NDTV
UN Sounds Alarm On AI Nearing Human-Like Intelligence 'AGI', Urges Action
The United Nations has warned about human-level artificial intelligence (AI), popularly referred to as Artificial General Intelligence (AGI), and urged action as the new technology evolves rapidly. The United Nations Council of Presidents of the General Assembly (UNCPGA) released a report seeking global coordination to deal with the perils of AGI, which could become a reality in the coming years. The report highlighted that though AGI could "accelerate scientific discoveries related to public health" and transform many industries, its downside could not be ignored. "While AGI holds the potential to accelerate scientific discovery, advance public health, and help achieve the Sustainable Development Goals, it also poses unprecedented risks, including autonomous harmful actions and threats to global security," the report stated. "Unlike traditional AI, AGI could autonomously execute harmful actions beyond human oversight, resulting in irreversible impacts, threats from advanced weapon systems, and vulnerabilities in critical infrastructures. We must ensure these risks are mitigated if we want to reap the extraordinary benefits of AGI." The report highlighted that immediate and coordinated international action, supported by the UN, was essential to prevent AGI from becoming a menace. "Such actions should be initiated by a special UN General Assembly specifically on AGI to discuss the benefits and risks of AGI and potential establishment of a global AGI observatory, certification system for secure and trustworthy AGI, a UN Convention on AGI, and an international AGI agency."

DeepMind CEO warns

In February, Demis Hassabis, CEO of Google DeepMind, stated that AGI will start to emerge in the next five or 10 years. He also batted for a UN-like umbrella organisation to oversee AGI's development.
"I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international, research-focused, high-end collaboration on the frontiers of AGI development to try and make that as safe as possible," said Mr Hassabis. "You would also have to pair it with a kind of an institute like IAEA, to monitor unsafe projects and sort of deal with those. And finally, some kind of supervening body that involves many countries around the world that input how you want to use and deploy these systems. So a kind of UN umbrella, something that is fit for purpose for that, a technical UN," he added. As per a research paper by DeepMind, AGI could arrive as early as 2030 and "permanently destroy humanity".


Mint
7 days ago
- Business
- Mint
Why superintelligent AI isn't taking over anytime soon
A primary requirement for being a leader in AI these days is to be a herald of the impending arrival of our digital messiah: superintelligent AI. For Dario Amodei of Anthropic, Demis Hassabis of Google and Sam Altman of OpenAI, it isn't enough to claim that their AI is the best. All three have recently insisted that it's going to be so good, it will change the very fabric of society. Even Meta—whose chief AI scientist has been famously dismissive of this talk—wants in on the action. The company confirmed it is spending $14 billion to bring in a new leader for its AI efforts who can realize Mark Zuckerberg's dream of AI superintelligence—that is, an AI smarter than we are. "Humanity is close to building digital superintelligence," Altman declared in an essay this week, and this will lead to "whole classes of jobs going away" as well as "a new social contract." Both will be consequences of AI-powered chatbots taking over all our white-collar jobs, while AI-powered robots assume the physical ones. Before you get nervous about all the times you were rude to Alexa, know this: A growing cohort of researchers who build, study and use modern AI aren't buying all that talk. The title of a fresh paper from Apple says it all: "The Illusion of Thinking." In it, a half-dozen top researchers probed reasoning models—large language models that "think" about problems longer, across many steps—from the leading AI labs, including OpenAI, DeepSeek and Anthropic. They found little evidence that these are capable of reasoning anywhere close to the level their makers claim. Generative AI can be quite useful in specific applications, and a boon to worker productivity. OpenAI claims 500 million monthly active ChatGPT users—astonishingly far reach and fast growth for a service released just 2½ years ago.
But these critics argue there is a significant hazard in overestimating what it can do, and making business plans, policy decisions and investments based on pronouncements that seem increasingly disconnected from the products themselves. Apple's paper builds on previous work from many of the same engineers, as well as notable research from both academia and other big tech companies, including Salesforce. These experiments show that today's "reasoning" AIs—hailed as the next step toward autonomous AI agents and, ultimately, superhuman intelligence—are in some cases worse at solving problems than the plain-vanilla AI chatbots that preceded them. This work also shows that whether you're using an AI chatbot or a reasoning model, all systems fail utterly at more complex tasks. Apple's researchers found "fundamental limitations" in the models. When taking on tasks beyond a certain level of complexity, these AIs suffered "complete accuracy collapse." Similarly, engineers at Salesforce AI Research concluded that their results "underscore a significant gap between current LLM capabilities and real-world enterprise demands." Importantly, the problems these state-of-the-art AIs couldn't handle are logic puzzles that even a precocious child could solve, with a little instruction. What's more, when you give these AIs that same kind of instruction, they can't follow it. Apple's paper has set off a debate in tech's halls of power—Signal chats, Substack posts and X threads—pitting AI maximalists against skeptics. "People could say it's sour grapes, that Apple is just complaining because they don't have a cutting-edge model," says Josh Wolfe, co-founder of venture firm Lux Capital. "But I don't think it's a criticism so much as an empirical observation." The reasoning methods in OpenAI's models are "already laying the foundation for agents that can use tools, make decisions, and solve harder problems," says an OpenAI spokesman. "We're continuing to push those capabilities forward."
The debate over this research begins with the implication that today's AIs aren't thinking, but instead are creating a kind of spaghetti of simple rules to follow in every situation covered by their training data. Gary Marcus, a cognitive scientist who sold an AI startup to Uber in 2016, argued in an essay that Apple's paper, along with related work, exposes flaws in today's reasoning models, suggesting they're not the dawn of human-level ability but rather a dead end. "Part of the reason the Apple study landed so strongly is that Apple did it," he says. "And I think they did it at a moment in time when people have finally started to understand this for themselves." In areas other than coding and mathematics, the latest models aren't getting better at the rate that they once did. And the newest reasoning models actually hallucinate more than their predecessors. "The broad idea that reasoning and intelligence come with greater scale of models is probably false," says Jorge Ortiz, an associate professor of engineering at Rutgers, whose lab uses reasoning models and other cutting-edge AI to sense real-world environments. Today's models have inherent limitations that make them bad at following explicit instructions—the opposite of what you'd expect from a computer, he adds. It's as if the industry is creating engines of free association. They're skilled at confabulation, but we're asking them to take on the roles of consistent, rule-following engineers or accountants. That said, even those who are critical of today's AIs hasten to add that the march toward more-capable AI continues. Exposing current limitations could point the way to overcoming them, says Ortiz. For example, new training methods—giving step-by-step feedback on models' performance, adding more resources when they encounter harder problems—could help AI work through bigger problems, and make better use of conventional software.
From a business perspective, whether or not current systems can reason, they're going to generate value for users, says Wolfe. "Models keep getting better, and new approaches to AI are being developed all the time, so I wouldn't be surprised if these limitations are overcome in practice in the near future," says Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, who has studied the practical uses of AI. Meanwhile, the true believers are undeterred. Just a decade from now, Altman wrote in his essay, "maybe we will go from solving high-energy physics one year to beginning space colonization the next year." Those willing to "plug in" to AI with direct brain-computer interfaces will see their lives profoundly altered, he adds. This kind of rhetoric accelerates AI adoption in every corner of our society. AI is now being used by DOGE to restructure our government, leveraged by militaries to become more lethal, and entrusted with the education of our children, often with unknown consequences. Which means that one of the biggest dangers of AI is that we overestimate its abilities, trust it more than we should—even as it's shown itself to have antisocial tendencies such as "opportunistic blackmail"—and rely on it more than is wise. In so doing, we make ourselves vulnerable to its propensity to fail when it matters most. "Although you can use AI to generate a lot of ideas, they still require quite a bit of auditing," says Ortiz. "So for example, if you want to do your taxes, you'd want to stick with something more like TurboTax than ChatGPT."