
Weather Company CEO wants AI to help you know when to walk your dog
Speaking at Axios' AI+ Summit in New York on Wednesday, Weather Company CEO Rohit Agarwal said AI brings new opportunities for forecasting, but that keeping humans in the loop remains the company's "secret sauce."
Why it matters: AI is improving forecasting and has the potential to help combat climate change, boost public safety and offer hyperlocal forecasts.
Zoom in: Agarwal said AI has the potential to solve problems for businesses and enterprises whose ability to serve customers depends heavily on the weather.
For example, Agarwal floated the possibility of telling pet owners the specific day of the week, or time of day, when walks would be safest for their pets.
"Wouldn't it be fun if we actually deliver that message to you, knowing that you're likely to have a pet, that you're likely to choose in the morning or afternoon walk, and that you have the type of dog that actually can use a lot of exercise?" Agarwal told Axios' Ashley Gold.
But as companies race to adopt the technology, human expertise plays a crucial role.
Driving the news: Agarwal said the Weather Company's AI models are among its "superpowers," alongside the human element.
"We think that our secret sauce is also how we apply talented scientists and meteorologists to the formula to ensure that there is checks and balances against what those models are effectively communicating and computing so that we can ensure that we are delivering an accurate forecast for our customers," Agarwal said.
As the Trump administration takes a sledgehammer to the federal government, including at NOAA and the National Weather Service, Agarwal said the Weather Company leverages relationships with those agencies to deliver "world class forecast data."

Related Articles
OpenAI scrubs news of Jony Ive deal amid trademark dispute
OpenAI has removed news of its deal with Jony Ive's io from its website. The takedown comes amid a trademark dispute filed by iyO, an AI hardware startup. OpenAI said it doesn't agree with the complaint and is "reviewing our options."

Turns out "i" and "o" make for a popular combination of vowels in the tech industry. Sam Altman's OpenAI launched a very public partnership with io, the company owned by famed Apple designer Jony Ive, in May. The announcement included a splashy video and photos of the two of them looking like old friends.

On Sunday, however, OpenAI scrubbed any mention of that partnership from its website and social media. That's because iyO, a startup spun out of Google's moonshot factory, X, whose name sounds like "io," is suing OpenAI, io, Altman, and Ive for trademark infringement.

iyO's latest product, the iyO ONE, is an "ear-worn device that uses specialized microphones and bone-conducted sound to control audio-based applications with nothing more than the user's voice," according to the suit iyO filed on June 9. The partnership between OpenAI and io, meanwhile, is rumored to be working on a similarly screen-less, voice-activated AI device.

Under its deal with OpenAI, Ive's firm will lead creative direction and design at OpenAI, focusing on developing a new slate of consumer devices. When the deal was announced, neither party shared specific details about future products. However, Altman said the partnership would shape the "future of AI."

iyO approached OpenAI earlier this year about a potential collaboration and funding. OpenAI declined that offer, and says it is now fighting the trademark lawsuit. "We don't agree with the complaint and are reviewing our options," OpenAI told Business Insider.
ChatGPT Has Already Polluted the Internet So Badly That It's Hobbling Future AI Development
The rapid rise of ChatGPT — and the cavalcade of competitors' generative models that followed suit — has polluted the internet with so much useless slop that it's already kneecapping the development of future AI models.

As AI-generated data crowds out the human creations these models depend on amalgamating, it becomes inevitable that a greater share of what these so-called intelligences learn from and imitate is itself an ersatz AI creation. Repeat this process enough, and AI development begins to resemble a maximalist game of telephone: not only does the quality of the output diminish, resembling less and less what it was originally supposed to replace, but the participants actively become stupider. The industry calls this scenario AI "model collapse."

As a consequence, the finite amount of data predating ChatGPT's rise becomes extremely valuable. In a new feature, The Register likens this to the demand for "low-background steel," or steel produced before the detonation of the first nuclear bombs, starting in July 1945 with the US's Trinity test.

Just as the explosion of AI chatbots has irreversibly polluted the internet, the detonation of the atom bomb released radionuclides and other particulates that have seeped into virtually all steel produced thereafter, making modern metals unsuitable for use in some highly sensitive scientific and medical equipment. And so, what's old is new: a major source of low-background steel, even today, is WWI- and WWII-era battleships, including the huge naval fleet scuttled by German Admiral Ludwig von Reuter in 1919.

Maurice Chiodo, a research associate at the Centre for the Study of Existential Risk at the University of Cambridge, called the admiral's actions the "greatest contribution to nuclear medicine in the world." "That enabled us to have this almost infinite supply of low-background steel. If it weren't for that, we'd be kind of stuck," he told The Register. "So the analogy works here because you need something that happened before a certain date."

"But if you're collecting data before 2022 you're fairly confident that it has minimal, if any, contamination from generative AI," he added. "Everything before the date is 'safe, fine, clean,' everything after that is 'dirty.'"

In 2024, Chiodo co-authored a paper arguing that there needs to be a source of "clean" data not only to stave off model collapse, but to ensure fair competition between AI developers. Otherwise, the early pioneers of the tech, having ruined the internet for everyone else with their AIs' refuse, would enjoy a massive advantage as the only ones benefiting from a purer source of training data.

Whether model collapse, particularly as a result of contaminated data, is an imminent threat is a matter of some debate. But many researchers, including Chiodo, have been sounding the alarm for years. "Now, it's not clear to what extent model collapse will be a problem, but if it is a problem, and we've contaminated this data environment, cleaning is going to be prohibitively expensive, probably impossible," he told The Register.

One area where the issue has already reared its head is retrieval-augmented generation (RAG), a technique AI models use to supplement their dated training data with information pulled from the internet in real time. But this new data isn't guaranteed to be free of AI tampering, and some research has shown that it leads chatbots to produce far more "unsafe" responses.

The dilemma also reflects the broader debate around scaling, or improving AI models by adding more data and processing power. After OpenAI and other developers reported diminishing returns with their newest models in late 2024, some experts proclaimed that scaling had hit a "wall." And if that data is increasingly slop-laden, the wall becomes that much more impassable.

Chiodo speculates that stronger regulations, such as labeling AI content, could help "clean up" some of this pollution, but that would be difficult to enforce. In this regard, the AI industry, which has cried foul at any government interference, may be its own worst enemy.

"Currently we are in a first phase of regulation where we are shying away a bit from regulation because we think we have to be innovative," Rupprecht Podszun, professor of civil and competition law at Heinrich Heine University Düsseldorf, who co-authored the 2024 paper with Chiodo, told The Register. "And this is very typical for whatever innovation we come up with. So AI is the big thing, let it go and fine."

More on AI: Sam Altman Says "Significant Fraction" of Earth's Total Electricity Should Go to Running AI
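For context, the RAG pattern the article refers to is mechanically simple, which is exactly why retrieval contamination flows straight into a chatbot's answers. The sketch below is a generic illustration, assuming a hypothetical `search_web` helper and a `generate` call standing in for any LLM API; it is not tied to any particular vendor's implementation.

```python
def answer_with_rag(question, search_web, generate):
    """Minimal retrieval-augmented generation loop.

    search_web(query, k) is assumed to return a list of text snippets;
    generate(prompt) is assumed to call some LLM and return a string.
    Whatever the search step returns, including AI-generated slop,
    is pasted into the prompt verbatim, which is how contaminated web
    content ends up steering the model's answer.
    """
    snippets = search_web(question, k=5)
    context = "\n\n".join(snippets)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```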
OpenAI Concerned That Its AI Is About to Start Spitting Out Novel Bioweapons
OpenAI is bragging that its forthcoming models are so advanced, they may be capable of building brand-new bioweapons.

In a recent blog post, the company said that even as it builds more and more advanced models with "positive use cases like biomedical research and biodefense," it feels a duty to walk the tightrope between "enabling scientific advancement while maintaining the barrier to harmful information." That "harmful information" includes, apparently, the ability to "assist highly skilled actors in creating bioweapons."

"Physical access to labs and sensitive materials remains a barrier," the post reads — but "those barriers are not absolute."

In a statement to Axios, OpenAI safety head Johannes Heidecke clarified that although the company does not necessarily think its forthcoming AIs will be able to manufacture bioweapons on their own, they will be advanced enough to help amateurs do so. "We're not yet in the world where there's like novel, completely unknown creation of biothreats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with."

The OpenAI safety czar also admitted that while the company's models aren't quite there yet, it expects "some of the successors of our o3 (reasoning model) to hit that level."

"Our approach is focused on prevention," the blog post reads. "We don't think it's acceptable to wait and see whether a bio threat event occurs before deciding on a sufficient level of safeguards."

As Axios notes, there's some concern that the very same models that assist in biomedical breakthroughs may also be exploited by bad actors. To "prevent harm from materializing," as Heidecke put it, these forthcoming models need to be programmed to "near perfection" to both recognize and alert human monitors to any dangers. "This is not something where like 99 percent or even one in 100,000 performance is sufficient," he said.

Instead of heading off such dangerous capabilities at the pass, though, OpenAI seems to be doubling down on building these advanced models, albeit with ample safeguards. It's a noble enough effort, but it's easy to see how it could go all wrong. Placed in the hands of, say, an insurgent agency like the United States' Immigration and Customs Enforcement, it would be easy enough to use such models for harm. And if OpenAI is serious about so-called "biodefense" contracting with the US government, it's not hard to envision a next-generation smallpox blanket scenario.

More on OpenAI: Conspiracy Theorists Are Creating Special AIs to Agree With Their Bizarre Delusions