Apple is bringing polls to Messages in iOS 26


TechCrunch · 09-06-2025

Apple announced at WWDC 2025 on Monday that it's bringing polls to Messages in its upcoming iOS 26 update. The feature has been highly requested by users and has long been available in services like WhatsApp and Telegram.
The feature will allow users to vote on questions directly within group chats. For example, you could create a poll to decide where your next girls' brunch will be held, or which novel you and your book club are going to read next.
Apple shared that Apple Intelligence will be able to suggest polls based on the context of your conversations. For example, if someone messages, 'What should we eat?', Apple Intelligence will suggest starting a poll.
While this new feature is in no way groundbreaking, it's nice to see Apple catch up to other chat services and give users a better way to plan and decide things right within their group chats.


Related Articles

Semler Scientific Investors Cheered by New Hire, Lofty Bitcoin Acquisition Goals
Yahoo · an hour ago

Semler Scientific (SMLR) has hired Joe Burnett to the newly created position of director of Bitcoin strategy. Alongside the hire, the company — which currently holds 4,449 bitcoin worth about $462 million — announced a goal of owning at least 10,000 bitcoin by the end of this year, 42,000 by year-end 2026 and 105,000 by year-end 2027.

"We are excited to have Joe join our Bitcoin strategy team and help drive our three-year plan to own 105,000 Bitcoins," said company Chairman Eric Semler in a press release. "Joe is an analytical thought leader on Bitcoin and Bitcoin treasury companies. His expertise will be instrumental as we pursue our Bitcoin treasury strategy and aim to deliver long-term value to our stockholders."

"For over seven years, [Joe] has publicly been making the case for Bitcoin as the world's most advanced form of monetary technology," the release continued. "He previously served as director of market research at Unchained, a Bitcoin-focused financial services company."

Investors, for now, are applauding the news, sending SMLR higher by 14% on Friday even as bitcoin has dipped back below $104,000 and most BTC-related stocks are trading in the red. Prior to today, though, it's been a rough ride for SMLR, which remains lower by 33% year-to-date and more than 50% off its 2025 high above $80.

The sharp share price decline has left the company's market capitalization at or below the value of the bitcoin on its balance sheet — thus taking off the table the ability to accretively raise money for more BTC purchases through common share sales. The hiring of Burnett and the lofty BTC acquisition goals suggest Semler is likely to get creative with its capital-raising plans, perhaps — in similar fashion to Michael Saylor's Strategy — turning to the preferred share market.

Why You Should Never Click Old Discord Invite Links
Yahoo · an hour ago

If you've received an invite link to Discord but never used it to join that specific server, don't click through it weeks or months later. As Bleeping Computer reports, hackers have repurposed Discord invite links that have expired or been deleted to deliver malware, including infostealers and keyloggers.

How Discord links are spreading malware

The malware campaign, identified by Check Point Research, capitalizes on a flaw in how Discord handles invite links, which can be temporary or permanent or, for paid servers with Level 3 Boost status, customized. URLs to join regular Discord servers are randomly generated and unlikely to ever repeat, but vanity links—as well as expired temporary invite links and deleted permanent invite links—can be claimed and reused. Discord also allows invite codes with uppercase letters to be recycled in vanity links with lowercase letters while the original is still active. This means that hackers can redirect users to malicious servers via links originating from legitimate Discord communities. These links are being shared on social media and official community websites.

When a user clicks the stolen link, they land on a Discord server that looks authentic and prompts them to verify their identity to unlock access. The verification link launches a ClickFix web page, which indicates that a (fake) CAPTCHA has failed to load and directs the user to "verify" by manually running a Windows command. This executes a PowerShell script, which downloads and installs the malware. The payload itself may include malicious programs—like AsyncRAT, Skuld Stealer, and ChromeKatz—that allow keylogging, webcam or microphone access, and infostealing to harvest browser credentials, cookies, passwords, Discord tokens, and/or crypto wallet data.

According to Check Point's analysis, the malware has numerous features that allow it to evade detection by antivirus tools. The report also notes that while Discord took action to mitigate this specific campaign, the risk of similar bots or alternative delivery methods still exists.

How to avoid malicious Discord links

First and foremost, be wary of old Discord invite links, especially those posted on social media or forums weeks or months back. (Temporary invite URLs on Discord can be set to expire within 30 minutes or up to a default of seven days.) Don't click links from users you don't know and trust, and request a new invite rather than relying on an old one.

You should also use caution when engaging with verification requests, especially those that prompt you to copy and run manual commands on your device. ClickFix attacks via fake CAPTCHA requests abound, and any verification that tells you to execute a Run command is not legit. If you run a Discord server, use permanent invite links, which are harder to steal and repurpose than temporary or custom URLs.
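As a practical aside, you can check where an invite code currently resolves before trusting an old link. Below is a minimal Python sketch, assuming Discord's documented public "Get Invite" endpoint (GET /api/v10/invites/{code}) and the third-party requests package; a 404 means the code no longer resolves, while a successful response shows which server the code currently leads to, which, per Check Point's findings, may not be the community that originally shared it. Treat this as a sanity check, not a safety guarantee.

```python
import sys

import requests  # third-party: pip install requests

# Assumption: Discord's documented public "Get Invite" endpoint,
# GET https://discord.com/api/v10/invites/{invite.code}
INVITE_URL = "https://discord.com/api/v10/invites/{}"


def inspect_invite(code: str) -> None:
    """Report where a Discord invite code currently resolves, if anywhere."""
    resp = requests.get(INVITE_URL.format(code), timeout=10)
    if resp.status_code == 404:
        # Expired or deleted -- and, per the campaign described above,
        # a code like this could later be re-registered by someone else.
        print(f"Invite '{code}' does not currently resolve to any server.")
        return
    resp.raise_for_status()
    guild = resp.json().get("guild") or {}
    print(f"Invite '{code}' currently resolves to: "
          f"{guild.get('name', 'unknown server')} (guild id {guild.get('id', '?')})")
    print("Confirm this is the community you expect before joining.")


if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: python check_invite.py <invite-code>")
    inspect_invite(sys.argv[1])
```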

Why is AI hallucinating more frequently, and how can we stop it?

Yahoo · an hour ago

The more advanced artificial intelligence (AI) gets, the more it "hallucinates" and provides incorrect and inaccurate information.

Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested by OpenAI's PersonQA benchmark. That's more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, it appears to come at the cost of more inaccurate hallucinations.

This raises a concern over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science.

Related: Cutting-edge AI models from OpenAI and DeepSeek undergo 'complete collapse' when problems get too difficult, study reveals

The issue of hallucination highlights the need to carefully assess and supervise the information AI systems produce when using LLMs and reasoning models, experts say.

The crux of a reasoning model is that it can handle complex tasks by essentially breaking them down into individual components and coming up with solutions to tackle them. Rather than seeking to kick out answers based on statistical probability, reasoning models come up with strategies to solve a problem, much like how humans think. In order to develop creative, and potentially novel, solutions to problems, AI needs to hallucinate — otherwise it's limited by the rigid data its LLM ingests.

"It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem."

"You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already been asked before. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan."

In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. It is similar, conceptually, to the way that humans dream or imagine scenarios when conjuring new ideas.

However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight. "This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said. "While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms. Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon."

And this problem looks to be exacerbated as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains. This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny."

Kazerounian backed this viewpoint up. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said.

The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn here with how we still don't really know, comprehensively, how a human brain works. In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted a lack of understanding in how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote.

The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common."

Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue. Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored by verifiable data.

"Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson said, noting that this could be aided by training that shapes a model to prioritize accuracy, and by reinforcement training from human or AI evaluators to encourage an LLM to deliver more disciplined, grounded responses.

RELATED STORIES

—AI benchmarking platform is helping top companies rig their model performances, study claims

—AI can handle tasks twice as complex every few months. What does this exponential growth mean for how we use it?

—What is the Turing test? How the rise of generative AI may have broken the famous imitation game

"Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added. "While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable."

Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that ultimately the information that LLMs produce will need to be treated with the "same skepticism we reserve for human counterparts."
