
OpenAI to buy Jony Ive's AI startup IO Products for $6.4 billion
CNBC's Pippa Stevens joins 'The Exchange' to discuss OpenAI's plan to buy Jony Ive's AI startup IO Products for $6.4 billion.

Related Articles
Yahoo
an hour ago
Why is AI hallucinating more frequently, and how can we stop it?
When you buy through links on our articles, Future and its syndication partners may earn a commission.

The more advanced artificial intelligence (AI) gets, the more it "hallucinates," providing incorrect and inaccurate information. Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested on OpenAI's PersonQA benchmark. That's more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, that accuracy appears to come at the cost of more frequent hallucinations.

This raises concerns over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science.

Related: Cutting-edge AI models from OpenAI and DeepSeek undergo 'complete collapse' when problems get too difficult, study reveals

The issue of hallucination highlights the need to carefully assess and supervise the information AI systems produce when using LLMs and reasoning models, experts say.

The crux of a reasoning model is that it can handle complex tasks by breaking them down into individual components and devising solutions to tackle each one. Rather than spitting out answers based on statistical probability alone, reasoning models come up with strategies to solve a problem, much as humans do. In order to develop creative, and potentially novel, solutions to problems, AI needs to hallucinate — otherwise it's limited by the rigid data its LLM has ingested.
"It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem."

"You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already previously been asked before. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan."

In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. It is similar, conceptually, to the way that humans dream or imagine scenarios when conjuring new ideas. However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight.

"This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said. "While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms. Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon."

And this problem looks to be exacerbated as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains.
This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny."

Kazerounian backed this viewpoint up. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said.

The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn here with how we still don't really know, comprehensively, how a human brain works. In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted a lack of understanding in how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote.

The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common."

Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue.
Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored in verifiable data.

"Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson said, noting this could be aided by training that shapes a model to prioritize accuracy, and by reinforcement training from human or AI evaluators to encourage an LLM to deliver more disciplined, grounded responses.

RELATED STORIES
— AI benchmarking platform is helping top companies rig their model performances, study claims
— AI can handle tasks twice as complex every few months. What does this exponential growth mean for how we use it?
— What is the Turing test? How the rise of generative AI may have broken the famous imitation game

"Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added. "While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable."

Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that the information LLMs produce will ultimately need to be treated with the "same skepticism we reserve for human counterparts."
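The retrieval-augmented generation approach Watson describes can be sketched in a few lines. The snippet below is a minimal illustration, not any production system: the keyword-overlap retriever, the `build_grounded_prompt` helper, and the sample documents are all invented for this example. A real deployment would use a vector store for retrieval and pass the resulting prompt to an actual LLM.

```python
def retrieve(query, documents, top_k=2):
    """Toy retriever: rank documents by keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query, documents):
    """Anchor the model's answer in retrieved sources instead of free recall."""
    sources = retrieve(query, documents)
    context = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(sources))
    return (
        "Answer using ONLY the sources below; reply 'not found' otherwise.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

# Sample corpus (facts taken from the article above).
docs = [
    "The o3 model hallucinated 33% of the time on the PersonQA benchmark.",
    "Overlanding is a form of self-reliant vehicle travel.",
    "The o4-mini model hallucinated 48% of the time on the PersonQA benchmark.",
]

prompt = build_grounded_prompt("How often did o3 hallucinate on the PersonQA benchmark?", docs)
print(prompt)
```

The key design point is the instruction to answer only from the retrieved context and to admit when the answer is "not found," which addresses both of Watson's suggestions at once: grounding in verifiable data and an explicit escape hatch for uncertainty.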

Miami Herald
2 hours ago
Tesla releases new details about its next big deal
Rumors have been swirling for weeks as Tesla nears the launch of its next big idea, robotaxi, in Austin, Texas. The robotaxi hype hasn't reached the fever pitch of the Cybertruck, Tesla's last big idea, but if Tesla gets this right, robotaxi has the chance to transform not just the company, but driving itself.

Related: Tesla robotaxi launch hits major speed bump

Tesla is admittedly slow-walking the rollout, with CEO Elon Musk telling CNBC, "It's prudent for us to start with a small number, confirm that things are going well, and then scale it up." Tesla says it will have just 10 robotaxis on the street at launch.

The company has already been testing its system, however. Earlier this year, Tesla said that its FSD system had driven a cumulative total of 3.6 billion miles, nearly triple the 1.3 billion cumulative miles it reported a year ago.

But the public may not trust the autonomous vehicles yet. "Consumers are skeptical of the full self-driving (FSD) technology that undergirds the robotaxi proposition, with 60% considering Tesla's full self-driving 'unsafe,' 77% unwilling to utilize full self-driving technology, and a substantial share (48%) believing full self-driving should be illegal," the May 2025 edition of the Electric Vehicle Intelligence Report (EVIR) said.

Self-driving Teslas have already been spotted on city streets with a human riding shotgun ahead of the program's official launch. And now Tesla is confirming that humans will be a fixture as it goes forward. Tesla won't be leaving passengers in their Austin robotaxis alone, as the company plans to have a "safety monitor" sitting in the front seat during drives.

Musk has claimed in the past that once the robotaxi program is up and running, Tesla owners would be able to earn passive income by allowing their Teslas to operate autonomously as taxis, without human intervention. However, the "safety monitor" isn't an abnormal safety feature for an autonomous vehicle.
Waymo tested its vehicles for six months with a driver and for six months without one in Austin before it launched its commercial service earlier this year, according to Electrek.

Related: Tesla takes drastic measures to keep robotaxi plans secret

A safety monitor is just one of the robotaxis' safety requirements. Riders must agree to a terms-of-service agreement, must have a debit or credit card on file, and can only request rides via the app between 6 a.m. and 12 a.m. within the geofenced area where the service is allowed to operate. That geofenced area limits where cars can travel and changes based on the time of day. Only invited users are allowed to download and use the Robotaxi app.

While the Cybertruck has had a lot of hype, it has been a massive flop for Tesla. A backlog of reservations helped push the Cybertruck out with a lot of momentum, but it can only be described as an epic failure in terms of sales. Tesla sold just 7,100 Cybertrucks in the first three months of the year, according to the Wall Street Journal, nearly half of the 13,000 it sold in the fourth quarter of 2024. Tesla sold fewer than 40,000 Cybertrucks in 2024, making Elon Musk's far-fetched prediction of over half a million annual sales look farcical. Electrek reports that Tesla is sitting on about 10,000 unsold Cybertrucks worth roughly $800 million.

More on Tesla:
- Tesla claims rival startup is built on stolen trade secrets
- 10,000 people join crazy Tesla class action lawsuit
- Tesla execs question Elon Musk over controversial X post

Mix that with the dismal quarter Tesla just reported, and Musk has a lot of work to do. It reported its worst quarter in years, with auto sales revenue dropping 20% amid falling demand in the U.S., Europe, and China. In the first quarter, deliveries fell 13% year over year to 336,681 vehicles from 386,810.

Musk has been promising the robotaxi since at least 2016. Now that it is finally ready to debut, the company needs Musk's latest big swing to be a home run.
Related: Tesla's robotaxi rollout is alarming the public, new report shows


CNBC
3 hours ago
36-year-old travels the world in a Toyota Tacoma: After 3 years on the road, this is her No. 1 takeaway
In 2015, Ashley Kaye's father died and she inherited her childhood home in Waterford, Wisconsin. At the time, she was 27 years old, working in corporate healthcare and transitioning to a consulting job, where she worked 80 to 100 hours a week. "I worked from home, so I just walked from my bedroom to my office to the kitchen and repeat," Kaye, now 36, tells CNBC Make It. "I was a zombie in those times."

While traveling, Kaye met someone on a scuba diving trip in Honduras who helped her realize what she wanted was to leave her career behind and travel full-time. "We just hit it off and chatted the whole time I was there. We spoke about the worst of the worst, the best of the best, and financials, too," Kaye says. "He told me he wished he had done it sooner because it's so much easier and cheaper than you think. That changed everything for me. I went home and worked more and more until I quit the next year."

Kaye spent the next three years traveling, through the COVID-19 pandemic. While on a trip to South Africa, she received unexpected news that her aunt was ill and she'd need to fly back home to Wisconsin. "That flight was probably the moment where not a single ounce of my being was like, 'Yay, I'm going home.' It was like, 'I don't want to be here. This isn't it for me,'" she says. "I love being on the islands. I love having the ocean near me. That took away the hesitation I had in previous years about selling the house."

While Kaye was back home caring for her aunt, she prepared her childhood home for sale and considered her next move. She thought a lot about trying van life and living and traveling with her dog. "Traveling by plane with a dog just sounded like a terrible idea," she says. "I do a lot of photography, so I knew I wanted something where I could reach tougher destinations." While waiting for the sale of her home to close, a couple reached out to Kaye on Instagram to ask about her time in South Africa.
They shared their experience overlanding in a Toyota truck with a camper in the truck bed. Overlanding is a form of self-reliant travel that involves adventuring to remote destinations, typically in a vehicle of some type. After doing a bit of her own research, Kaye was all-in and purchased a Toyota Tacoma truck for $42,934, according to documents reviewed by CNBC Make It.

Kaye picked up the truck in South Dakota and drove it back to Wisconsin to finish packing up her home when it officially sold in March 2023. Now that her new home was the truck, Kaye set off on her first adventure: a drive down to Baja California, Mexico. She stayed there for three months and planned out the renovations she would need to make the truck more livable.

"My life is kind of like 'the plan is there is no plan.' Most people plan this type of adventure for years. I didn't even have a truck when I accepted the offer on my house," she says. "It was very spur of the moment, so I needed to take a pause and figure things out."

While living in Mexico, Kaye found an American company that made the truck bed replacements that would provide external storage and make it easier for her to live and travel in the Toyota Tacoma. But the installation couldn't happen until September. In the meantime, Kaye learned as much as she could about the truck and the kind of camper she would need. She estimates that she has spent over $50,000 on the renovations. Costs included purchasing a camper, adding solar power, replacing the truck bed, upgrading the suspension, buying new tires, customizing a bumper, and installing an electric cooler.

When the truck was ready, Kaye decided to journey the Pan-American Highway, starting in Denver. The highway stretches from Prudhoe Bay, Alaska, to Ushuaia, Argentina. "It's really an incredible way to travel because you get to set your own pace, and if you find somewhere that's beautiful and peaceful you can stay as long as you want," Kaye says.
"But there's pros and cons to every mode of travel and a lot of red tape and logistics crossing borders. It can be exhausting, especially when you're alone. You have to find a balance that works for you, but overall, it's definitely one of the coolest adventures of my lifetime."

Since living and traveling in the truck full-time, Kaye has visited Mexico, every country in Central America, Colombia, Ecuador, Peru, Chile and parts of Argentina. In total, she's been to over 20 countries so far.

"I don't want to be a cliché and say it's a dream life because it's a lot of work and there are a lot of things that you need to take care of and maintain," she says. "But it's really incredible to be able to wake up and just look at the map and say, 'Should I go sleep inside this volcano or go to the jungle or go to the beach?' You have a lot of really beautiful options, so I can't really complain."

After all this time on the road, Kaye says the biggest lesson she's learned is that life is too short. "Ever since I started traveling, [I learned] life is just too short. You don't have to go and quit your career to travel the world, but whatever your dreams and goals are in life, just start now and everything else is just figuring out a goal," she says.

Kaye says when she was younger, it was her dad who taught her that she was capable of anything. "I grew up with my dad raising me and telling me every day, 'You can be anything you want when you grow up and you can do anything,'" she says. "He was 57 when he passed away, so he never even got to retire. His passing taught me how to live life because you never know how much time you have in life."