Latest news with #DeepMind
Yahoo
3 hours ago
- Science
- Yahoo
Hurricanes and sandstorms can be forecast 5,000 times faster thanks to new Microsoft AI model
A new artificial intelligence (AI) model can predict major weather events faster and more accurately than some of the world's most widely used forecasting systems.

The model, called Aurora, is trained on more than 1 million hours of global atmospheric data, including weather station readings, satellite images and radar measurements. Scientists at Microsoft say it's likely the largest dataset ever used to train a weather AI model.

Aurora correctly forecast that Typhoon Doksuri would strike the northern Philippines four days before the storm made landfall in July 2023. At the time, official forecasts placed the storm's landfall over Taiwan, several hundred miles away. It also outperformed standard forecasting tools used by agencies including the U.S. National Hurricane Center and the Joint Typhoon Warning Center, delivering more accurate five-day storm tracks and producing high-resolution forecasts up to 5,000 times faster than conventional weather models powered by supercomputers.

More broadly, Aurora beat existing systems in predicting weather conditions over a 14-day period in 91% of cases, the scientists said. They published their findings May 21 in the journal Nature.

Researchers hope Aurora and models like it could support a new approach to predicting environmental conditions called Earth system forecasting, in which a single AI model simulates weather, air quality and ocean conditions together. This could help produce faster and more consistent forecasts, especially in places that lack access to high-end computing or comprehensive monitoring infrastructure.

Aurora belongs to a class of large-scale AI systems known as foundation models, the same category of AI models that power tools like ChatGPT. Foundation models can be adapted to different tasks because they're designed to learn general patterns and relationships from large volumes of training data, rather than being built for a single, fixed task.

In Aurora's case, the model learns to generate forecasts in a matter of seconds by analyzing weather patterns from sources like satellites, radar and weather stations, as well as simulated forecasts, the researchers said. The model can then be fine-tuned for a wide range of scenarios with relatively little extra data, unlike traditional forecasting models, which are typically built for narrow, task-specific purposes and often need retraining to adapt.

The diverse dataset Aurora is trained on not only results in greater accuracy in general versus conventional methods, but also means the model is better at forecasting extreme events, researchers said.

In one example, Aurora successfully predicted a major sandstorm in Iraq in 2022, despite having limited air quality data. It also outperformed wave simulation models at forecasting ocean swell height and direction in 86% of tests, showing it could extract useful patterns from complex data even when specific inputs were missing or incomplete.
"It's got the potential to have [a] huge impact because people can really fine tune it to whatever task is relevant to them … particularly in countries which are underserved by other weather forecasting capabilities," study co-author Megan Stanley, a senior researcher at Microsoft, said in a statement. Microsoft has made Aurora's code and training data publicly available for research and experimentation. The model has been integrated into services like MSN Weather, which itself is integrated into tools like the Windows Weather app and Microsoft's Bing search results.
Yahoo
6 hours ago
- Business
- Yahoo
5 AI stocks to consider buying and holding for the long term
Many AI applications are still in development, offering ground-floor buying opportunities in their stocks. Below are some established companies that five of our contract writers like as investments to consider buying to capitalise on this transformational technology.

What it does: Alphabet is a global technology company best known for Google, YouTube, Android, and cloud services.

By Mark Hartley. When considering an AI investment for the long term, Google's parent company Alphabet (NASDAQ: GOOG) stands out. It has emerged as a key player in the AI space, leveraging its vast data resources and computational power to dig deep roots into the industry.

Through DeepMind and its Gemini AI models, Alphabet is at the forefront of generative AI development. Google Cloud offers scalable AI tools and infrastructure for businesses, while AI enhancements in products like Search, Gmail, and YouTube are well-positioned to benefit from advertising revenue. Alphabet's expansive ecosystem gives it a strategic advantage in training and deploying AI models at scale.

A significant risk, however, lies in the potential disruption of its core search business. As AI chatbots and generative search become more prevalent, traditional search advertising could face margin pressure. Additionally, it faces increased regulatory scrutiny on data usage, antitrust concerns and competition from rivals like Microsoft and Amazon.

Mark Hartley doesn't own shares in any of the stocks mentioned.

What it does: Cellebrite is the global leader in decrypting mobile phones and other devices supporting digital forensic investigations.

By Zaven Boyrazian. Many AI stocks today are unproven. That's why I prefer established players leveraging AI to improve their existing mission-critical products, like Cellebrite (NASDAQ:CLBT).

Cellebrite specialises in extracting encrypted data from mobile phones and other devices, aiding law enforcement and enterprises in criminal and cybersecurity investigations. Over 90% of crime committed today has a digital element. And when it comes to decrypting mobile phones, Cellebrite is the global gold standard.

The company is now leveraging AI to analyse encrypted data, drastically accelerating a task that has historically been incredibly labour-intensive: identifying patterns, discovering connections, and establishing leads.

Most of Cellebrite's revenue comes from law enforcement, exposing it to the risk of budget cuts. In fact, fears of lower US federal spending are why the stock dropped sharply in early 2025. And with a premium valuation, investors can expect more volatility moving forward. But in the long run, Cellebrite has what it takes to be an AI winner, in my mind. That's why I've already bought shares.

Zaven Boyrazian owns shares in Cellebrite.

What it does: Dell Technologies provides a broad range of IT products and services and is an influential player in AI.

By Royston Wild. Dell Technologies (NYSE:DELL) isn't one of the more fashionable names in the realm of artificial intelligence (AI). The good news is that this means it trades at a whopping discount to many of its peers.

For this financial year (to January 2026), City analysts think earnings will soar 41% year on year, leaving it on a price-to-earnings (P/E) multiple of 12.6 times. Such readings are as rare as hen's teeth in the high-growth tech industry. In addition, Dell shares also trade on a price-to-earnings growth (PEG) ratio of 0.3 for this year. Any reading below 1 implies a share is undervalued.
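For readers who want to check that figure, the PEG ratio is simply the P/E multiple divided by the expected earnings growth rate expressed in percent. A quick sketch using the numbers quoted above:

```python
# PEG ratio = price-to-earnings multiple / expected earnings growth (%).
# Using the figures quoted above for Dell: P/E of 12.6 and 41% forecast growth.
def peg_ratio(pe: float, growth_pct: float) -> float:
    return pe / growth_pct

print(round(peg_ratio(12.6, 41.0), 2))  # 0.31, below 1, hinting at undervaluation
```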
These modest readings fail to reflect the exceptional progress the company's making in AI, in my opinion. Indeed, Dell last month raised guidance for the current quarter as it announced 'unprecedented demand for our AI-optimised servers' during January-March. It booked $12.1bn in AI orders in the last quarter alone, beating the entire total for the last financial year. Dell is a major supplier of the server infrastructure that lets Nvidia's high-power chips do their thing.

Dell's shares could sink if unfavourable developments in the ongoing tariff wars transpire. But the company's low valuation could help limit the scale of any falls.

Royston Wild does not own shares in Dell or Nvidia.

What it does: Salesforce is a customer relationship management (CRM) software company that is developing AI agents.

By Edward Sheldon, CFA. We've all seen the potential of artificial intelligence (AI) in recent years. Using apps like ChatGPT and Gemini, we can do a lot of amazing things today. These apps are just the start of the AI story, however. I expect the next chapter to be about AI agents: software programs that can complete tasks autonomously and increase business productivity exponentially.

One company that is active in this space is Salesforce (NYSE: CRM). It's a CRM software company that has recently developed an agentic AI offering for businesses called 'Agentforce'. It's still early days here. But already the company is having a lot of success with this offering, having signed up 8,000 customers since the product's launch last October.

Now, Salesforce is not the only company developing AI agents, so competition from rivals is a risk. I like the fact that the company's software is already embedded in over 150,000 organisations worldwide, though. This could potentially give it a major competitive advantage in the agentic AI race.

Edward Sheldon has positions in Salesforce.

What it does: Salesforce is a cloud-based software company specialising in customer relationship management, helping businesses manage sales, marketing, support, and data.

By Ben McPoland. I think Salesforce (NYSE: CRM) looks well set up to benefit in the age of AI. Specifically, its Agentforce platform, which lets businesses deploy AI agents to handle various tasks, could be the company's next big growth engine. By the end of April, it had already closed over 8,000 deals, just six months after launching Agentforce. Half of those were paid deals, taking its combined data cloud and AI annual recurring revenue above $1bn.

Granted, that looks like small potatoes set against the $41.2bn in sales it's expected to generate this fiscal year. But it's still very early days, and management reckons the digital labour market opportunity could run into the trillions of dollars. Of course, it's always best to treat such mind-boggling projections with a healthy dose of scepticism. And the company does face stiff competition in the AI agent space, especially from Microsoft and ServiceNow.

Nevertheless, I'm bullish here. Salesforce is already deeply embedded in sales, service, and marketing. Its AI agents slot into existing workflows, which I think will prove to be a big advantage over unproven AI upstarts.

Ben McPoland owns shares of Salesforce.

The post 5 AI stocks to consider buying and holding for the long term appeared first on The Motley Fool UK.
More reading
- 5 Stocks For Trying To Build Wealth After 50
- One Top Growth Stock from the Motley Fool

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. The Motley Fool UK has recommended Alphabet, Amazon, Cellebrite, Microsoft, Nvidia, and Salesforce. Views expressed on the companies mentioned in this article are those of the writer and therefore may differ from the official recommendations we make in our subscription services such as Share Advisor, Hidden Winners and Pro. Here at The Motley Fool we believe that considering a diverse range of insights makes us better investors.

Motley Fool UK 2025


News18
8 hours ago
- Entertainment
- News18
Google's Gemini Spent 800 Hours Beating Pokémon, And It Panicked Along The Way
Google's newest AI chatbot struggles to stay calm while playing a game designed for children.

Artificial intelligence (AI) has come a long way, but even advanced systems can struggle sometimes. According to a report from Google DeepMind, its top AI model, Gemini 2.5 Pro, had a tough time while playing the classic video game Pokémon Blue, a game that many kids find easy. The AI reportedly showed signs of confusion and stress during the game.

The results came from a Twitch channel called Gemini_Plays_Pokemon, where an independent engineer named Joel Zhang tested Gemini. Although Gemini is known for its strong reasoning and coding skills, the way it behaved during the game revealed some surprising and unusual reactions.

The DeepMind team reported that Gemini started showing signs of what they called 'Agent Panic'. In their findings, they explained: 'Throughout the playthrough, Gemini 2.5 Pro gets into various situations which cause the model to simulate 'panic'. For example, when the Pokémon in the party's health or power points are low, the model's thoughts repeatedly reiterate the need to heal the party immediately or escape the current dungeon.'

This behaviour caught the attention of viewers on Twitch, who reportedly started recognising the moments when the AI seemed to be panicking. DeepMind pointed out: 'This behaviour has occurred in enough separate instances that the members of the Twitch chat have actively noticed when it is occurring.'

Even though AI doesn't feel stress or emotions like humans do, the way Gemini reacted in tense moments looked very similar to how people respond under pressure: by making quick, sometimes poor or inefficient decisions.

In its first full attempt at playing Pokémon Blue, Gemini took a total of 813 hours to complete the game. After Joel Zhang made some adjustments, the AI managed to finish a second run in 406.5 hours. Even with those changes, however, it was still very slow, especially compared to how quickly a child could beat the same game.

People on social media didn't hold back from poking fun at the AI's nervous playing style. One viewer commented, 'If you read its thoughts while it's reasoning, it seems to panic anytime you slightly change how something is worded.' Another user made a joke by combining 'LLM' (large language model) with 'anxiety', calling it 'LLANXIETY'.

Interestingly, this news comes just a few weeks after Apple shared a study claiming that most AI models don't actually 'reason' in the way people think. According to the study, these models mostly depend on spotting patterns, and they often struggle or fail when the task is changed slightly or made more difficult.

Hindustan Times
a day ago
- Hindustan Times
Google's Gemini AI panics while playing Pokémon, takes 800 hours to finish game
Artificial intelligence has made remarkable strides, but Google's latest chatbot is showing that even the smartest machines can crumble under pressure. A recent report by Google DeepMind reveals that its flagship model, Gemini 2.5 Pro, displayed signs of panic while playing Pokémon Blue, an old-school video game many children breeze through with ease.

The findings came from a Twitch channel called Gemini_Plays_Pokemon, where independent engineer Joel Zhang put Gemini to the test. While Gemini is known for its advanced reasoning abilities and code-level understanding, its performance during this gaming challenge exposed unexpected behavioural quirks.

According to the DeepMind team, Gemini began to exhibit what they describe as 'Agent Panic.' The report states, 'Over the course of the playthrough Gemini 2.5 Pro gets into various situations which cause the model to simulate 'panic'. For example, when the Pokémon in the party's health or power points are low, the model's thoughts repeatedly reiterate the need to heal the party immediately or escape the current dungeon.'

This behaviour didn't go unnoticed. Viewers on Twitch began identifying when the AI was panicking, with DeepMind noting, 'This behaviour has occurred in enough separate instances that the members of the Twitch chat have actively noticed when it is occurring.'

Although AI doesn't experience stress or emotion like humans, the model's erratic decision-making in high-pressure situations mirrors how people behave under stress, making impulsive or inefficient choices.

In the first full game run, Gemini took 813 hours to finish Pokémon Blue. After adjustments by Zhang, the AI completed a second playthrough in 406.5 hours. Still, this was far from efficient, especially compared to the time a child would take to complete the same game.

Social media users were quick to mock the AI's anxious gameplay. 'If you read it's thoughts when reasoning it seems to panic just about any time you word something slightly off,' said one viewer. Another joked: 'LLANXIETY.' A third chimed in with a broader reflection: 'I'm starting to think the 'Pokémon index' might be one of our best indicators of AGI. Our best AIs still struggling with a child's game is one of the best indicators we have of how far we still have yet to go. And how far we've come.'

Interestingly, these revelations come just weeks after Apple released a study arguing that most AI reasoning models don't truly reason at all. Instead, they rely heavily on pattern recognition and tend to fall apart when the task is tweaked or made more complex.

- Sakshi


NDTV
a day ago
- NDTV
Google's AI Chatbot Panics When Playing Video Game Meant For Children
Artificial intelligence (AI) chatbots might be smart, but they still sweat bullets while playing video games that seemingly young kids are able to ace. A new Google DeepMind report has found that its Gemini 2.5 Pro resorts to panic when playing Pokemon, especially when one of the fictional characters is close to death, causing the AI's performance to experience qualitative degradation in the model's reasoning capability. Google highlighted a case study from a Twitch channel named Gemini_Plays_Pokemon, where Joel Zhang, an engineer unaffiliated with the tech company, plays Pokemon Blue using Gemini. During the two playthroughs, the Gemini team at DeepMind observed an interesting phenomenon they describe as 'Agent Panic'. "Over the course of the playthrough, Gemini 2.5 Pro gets into various situations which cause the model to simulate "panic". For example, when the Pokemon in the party's health or power points are low, the model's thoughts repeatedly reiterate the need to heal the party immediately or escape the current dungeon," the report highlighted. "This behavior has occurred in enough separate instances that the members of the Twitch chat have actively noticed when it is occurring," the report says. While AI models are trained on copious amounts of data and do not think or experience emotions like humans, their actions mimic the way in which a person might make poor, hasty decisions when under stress. In the first playthrough, the AI agent took 813 hours to finish the game. After some tweaking by Mr Zhang, the AG agent shaved some hundreds of hours and finished the game in 406.5 hours. While the progress was impressive, the AI agent was still not good at playing Pokémon. It took Gemini hundreds of hours to reason through a game that a child could complete in significantly less time. The chatbot displayed erratic behaviour despite Gemini 2.5 Pro being Google's most intelligent thinking model that exhibits strong reasoning and codebase-level understanding, whilst producing interactive web applications. Social media reacts Reacting to Gemini's panicky nature, social media users said such games could be the benchmark for the real thinking skills of the AI tools. "If you read its thoughts when reasoning it seems to panic just about any time you word something slightly off," said one user, while another added: "LLANXIETY." A third commented: "I'm starting to think the 'Pokemon index' might be one of our best indicators of AGI. Our best AIs still struggling with a child's game is one of the best indicators we have of how far we still have yet to go. And how far we've come." Earlier this month, Apple released a new study, claiming that most reasoning models do not reason at all, albeit they simply memorise patterns really well. However, when questions are altered or the complexity is increased, they collapse altogether.