Apple Intelligence announcements at WWDC: Everything Apple revealed for iOS, macOS and more
Apple Intelligence hasn't landed in the way Apple likely hoped it would, but that's not stopping the company from continuing to iterate on its suite of AI tools. During its WWDC 2025 conference on Monday, Apple announced a collection of new features for Apple Intelligence, starting with upgrades to Genmoji and Image Playground that will arrive alongside iOS 26 and the company's other updated operating systems.
In Messages, you'll be able to use Image Playground to generate colorful backgrounds for your group chats. At the same time, Apple has added integration with ChatGPT to the tool, meaning it can produce images in entirely new styles. As before, if you decide to use ChatGPT directly through your iPhone in this way, your information will only be shared with OpenAI if you provide permission.
Separately, Genmoji will allow users to combine two emoji from the Unicode library to create new characters. For example, you might merge the sloth and light bulb emoji if you want to poke fun at yourself for being slow to understand a joke.
Across Messages, FaceTime and the Phone app, Apple is bringing live translation to the mix. In Messages, the company's on-device AI models will translate a message into your recipient's preferred language as you type. When they respond, each message will be instantly translated back into your language. In FaceTime, you'll see live translated captions as the person you're chatting with speaks, and over a phone call, Apple Intelligence will generate a spoken translation.
Visual Intelligence is also in line for an upgrade. In addition to working with your iPhone's camera, the tool can now scan what's on your screen. Like Genmoji, Visual Intelligence will benefit from deeper integration with ChatGPT, allowing you to ask the chatbot questions about what you see. Alternatively, you can search Google, Etsy and other supported apps to find images or products that might be a visual match. And if the tool detects that you're looking at an event, iOS 26 will suggest adding it to your calendar. Nifty, that. To access Visual Intelligence, all you need to do is press the same buttons you would to take a screenshot on your iPhone.
As expected, Apple is also making it possible for developers to use its on-device foundation model in their own apps. "With the Foundation Models framework, app developers will be able to build on Apple Intelligence to bring users new experiences that are intelligent, available when they're offline, and that protect their privacy, using AI inference that is free of cost," the company said in its press release. Apple suggests an educational app like Kahoot! might use the on-device model to generate personalized quizzes for users. According to the company, the framework supports Swift, Apple's own programming language, and accessing the model can take as few as three lines of code.
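For context, the three-line integration Apple is describing looks roughly like the sketch below. The LanguageModelSession type and its respond(to:) method match what Apple previewed for the Foundation Models framework, but treat the exact identifiers as provisional until you check the shipping SDK.

    import FoundationModels

    // Ask Apple's on-device foundation model for a response; inference runs locally.
    // Requires an async context (e.g. inside a Task or an async function).
    // API names as previewed at WWDC; confirm against the released documentation.
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Write three quiz questions about the water cycle.")
    print(response.content)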
An upgraded Shortcuts app for both iOS and macOS is also on the way, with support for actions powered by Apple Intelligence. You'll be able to tap into either the company's on-device model or its Private Cloud Compute model to generate responses as part of whatever shortcut you want carried out. Apple suggests students might use this feature to create a shortcut that compares an audio transcript of a class lecture against notes they wrote on their own. Here again, users can turn to ChatGPT if they want.
There are many other smaller enhancements enabled by upgrades Apple has made to its AI suite. Most notably, Apple Wallet will automatically summarize tracking details merchants and delivery carriers send to you so you can find them in one place.
A year since its debut at WWDC 2024, it's safe to say Apple Intelligence has failed to meet expectations. The smarter, more personal Siri that was the highlight of last year's presentation has yet to materialize. In fact, the company delayed the upgraded digital assistant in March, only saying at the time that it would arrive sometime in the coming year. Other parts of the suite may have shipped on time, but often didn't show the company's usual level of polish. For instance, notification summaries were quite buggy at launch, and Apple ended up reworking the messages to make it clearer they were generated by Apple Intelligence. With today's announcements, Apple still has a long way to go before it catches up to competitors like Google, but at least the company kept the focus on practical features.

Related Articles


Tom's Guide
Windows parental controls are crashing Chrome — here's the workaround
Windows 11's Family Safety feature is supposed to block certain websites from children, but apparently it's also been causing issues with Google's Chrome browser, a (vastly more popular) competitor to Microsoft's own Edge.

The problem first surfaced on Windows on June 3, per The Verge, when several users noticed they couldn't open Chrome or that the browser would crash randomly. Restarting their computer or reinstalling Chrome didn't fix the issue, and other browsers like Firefox and Opera appeared unaffected. On Monday, a Google spokesperson posted in the company's community forum that it had investigated these reports and found the issues were linked to Microsoft's new Windows Family Safety feature. This optional feature is primarily used by parents and schools to manage children's screen time, filter their web browsing and monitor their online activity.

Curiously, the bug has been going on for weeks now, and Microsoft still hasn't issued a patch. "We've not heard anything from Microsoft about a fix being rolled out," wrote a Chromium engineer in a bug tracking thread on June 10. "They have provided guidance to users who contact them about how to get Chrome working again, but I wouldn't think that would have a large effect."

While this issue could be an innocent bug, Microsoft has a history of placing annoying hurdles between Edge and Chrome to entice users to stick with its browser. So anytime a technical snafu makes Chrome run worse on Windows PCs, Microsoft understandably gets some serious side eye.

Thankfully, there seem to be two fairly simple ways to get around this bug while we wait for Microsoft to issue a fix. The most straightforward is to turn off the "Filter Inappropriate Websites" setting: head to the Family Safety mobile app or the Family Safety web portal, select a user's account and disable "Filter inappropriate websites" under the Edge tab. However, that removes the guardrails on Chrome and lets your child access any website, including the ones you were trying to block in the first place. If you want to keep the guardrails on and still use Chrome, some users reported that renaming your Chrome folder (to something like Chrome1, for example) got the browser working again even with the Family Safety feature enabled.
Yahoo
Why is AI hallucinating more frequently, and how can we stop it?
The more advanced artificial intelligence (AI) gets, the more it "hallucinates" and provides incorrect or inaccurate information.

Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested by OpenAI's PersonQA benchmark. That's more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, it appears to come at the cost of more inaccurate hallucinations.

This raises a concern over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science.

The issue of hallucination highlights the need to carefully assess and supervise the information AI systems produce when using LLMs and reasoning models, experts say.

The crux of a reasoning model is that it can handle complex tasks by essentially breaking them down into individual components and coming up with solutions to tackle them. Rather than spitting out answers based purely on statistical probability, reasoning models come up with strategies to solve a problem, much like how humans think. In order to develop creative, and potentially novel, solutions to problems, AI needs to hallucinate — otherwise it's limited by the rigid data its LLM ingests.

"It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem."

"You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already been asked before. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan."

In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. It is similar, conceptually, to the way humans dream or imagine scenarios when conjuring new ideas. However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight.

"This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said. "While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms. Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon."

And this problem looks set to be exacerbated as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains. This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny."

Kazerounian backed this viewpoint up. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said.

The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn here with how we still don't really know, comprehensively, how a human brain works. In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted a lack of understanding of how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote.

The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common."

Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue. Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored by verifiable data.

"Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson said, noting this could be aided by training that shapes a model to prioritize accuracy, and by reinforcement training from human or AI evaluators to encourage an LLM to deliver more disciplined, grounded responses.

"Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added. "While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable."

Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that the information LLMs produce will ultimately need to be treated with the "same skepticism we reserve for human counterparts."
Yahoo
Adaptive Power in iOS 26 Could Save the iPhone 17 Air From This Major Pitfall
There's one feature Apple unveiled during WWDC on Monday that didn't get the attention I think it deserves: Adaptive Power. This AI-powered feature can help your iPhone battery last longer by lowering your display's brightness and making "small performance adjustments" like "allowing some activities to take a little longer," according to Apple. It'll also turn on Low Power Mode automatically when your battery drops to 20% to limit background activities and further extend battery life.

Adaptive Power can come in clutch no matter what phone you have (as long as it can run iOS 26), but where it really has the potential to be a game-changer is with the rumored iPhone 17 Air. Apple's thinner iPhone is expected to debut in the fall, though the company has yet to confirm reports about its imminent arrival. A skinny iPhone would join the ranks of other slim phones like Samsung's Galaxy S25 Edge and the Oppo Find N5, which both came out earlier this year. And on Monday, hot on the heels of WWDC, Samsung also shared a teaser about its upcoming Galaxy Z foldable series, calling it "the thinnest, lightest and most advanced foldable yet."

Thin phones can come across as gimmicky (who asked for them, really?), but they're undoubtedly having a moment as companies look for new ways to lure your dollars. After using devices like the Galaxy S25 Edge and Oppo Find N5, I can attest that holding a slim, lightweight phone is quite refreshing, and I'm eager to see what Apple has in store.

But there's also a major downside to building a phone with such a slim profile, as I experienced recently with the S25 Edge: Battery life takes a hit. A thinner phone means a smaller battery, which means shorter battery life. The S25 Edge, for instance, definitely needs a recharge at the end of the day -- there's no battery to spare. If Apple can find a way to make the iPhone 17 Air last beyond that bare minimum, that could really help its slim offering stand out.

Adaptive Power may be the superpower Apple needs to appeal to anyone who won't sacrifice battery life for a thinner phone. But whether this feature truly is a breakthrough is up in the air -- along with the reality of the iPhone 17 Air itself.