
Google Confirms Urgent Security Update For 3 Billion Chrome Users
Update Google Chrome now.
Google has confirmed two new, high-severity security vulnerabilities impacting the Chrome web browser, which is used by more than 3 billion people worldwide. As such, the technology behemoth has issued an update that all users across Windows, Linux, Mac and Android should apply as a matter of some urgency. Given that one of the issues was only disclosed on June 6 and an update is available already, that urgency should not be overlooked. Here's what you need to know and do.
Although, as is usually the case with such high-severity security issues, Google is keeping the full technical details restricted until most users have been able to apply the update, we do know that the two Common Vulnerabilities and Exposures entries are as follows:
CVE-2025-5958 is a use-after-free vulnerability in Chrome's media component and was disclosed by Huang Xilin of the Ant Group Light-Year Security Lab.
CVE-2025-5959 is a type confusion vulnerability in V8, Chrome's JavaScript engine, and was disclosed by Seunghyun Lee during the TyphoonPWN 2025 hacking competition on June 6.
Google has confirmed that the Chrome security update 'will roll out over the coming days/weeks,' and that this will happen automatically. You will see a notification in the browser itself when the update to version 137.0.7151.103/.104 has been downloaded. However, that alone does not mean you are protected: the browser needs to be relaunched for the update to take effect. I would always err on the side of caution and advise kickstarting the update process yourself so you can be sure your browser, and the data it can access, are protected immediately.
Head to the Help menu and select About Google Chrome. This will check for and download the update, and then all you have to do is relaunch the browser for instant protection from these vulnerabilities. Don't worry, your tabs will reopen, so you won't lose them. Android users simply need to update the Chrome app. So, what are you waiting for?
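If you want to double-check that you have landed on a patched build after relaunching, a quick version comparison is enough. The short sketch below is illustrative only: it assumes a Linux system where the stable browser is on the PATH as google-chrome and prints its version in the usual 'Google Chrome 137.0.7151.104' format, neither of which is guaranteed on every install (on macOS, for instance, the binary lives inside the Google Chrome.app bundle).

```python
# Minimal sketch: check whether the locally installed Chrome build is at least
# the patched release named above (137.0.7151.103). Assumes a Linux install
# where the stable browser is on PATH as "google-chrome".
import subprocess

PATCHED = (137, 0, 7151, 103)

def installed_version(binary: str = "google-chrome") -> tuple:
    # "google-chrome --version" typically prints "Google Chrome 137.0.7151.104"
    result = subprocess.run([binary, "--version"],
                            capture_output=True, text=True, check=True)
    version_text = result.stdout.strip().split()[-1]
    return tuple(int(part) for part in version_text.split("."))

if __name__ == "__main__":
    current = installed_version()
    verdict = "patched" if current >= PATCHED else "vulnerable - update and relaunch now"
    print(".".join(map(str, current)), "->", verdict)
```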
Related Articles
Yahoo, 24 minutes ago
Chevron Boosts Dividend Potential with Strategic Lithium Acquisition
Chevron Corporation (NYSE:CVX) is one of the 10 best dividend stocks according to Jim Cramer. On June 17, 2025, the company announced the closing of a transaction to acquire all equity interests in two subsidiaries of TerraVolta Resources and its investor, The Energy & Minerals Group.
The American multinational energy corporation engages in the exploration and extraction of crude oil and natural gas. Headquartered in Texas, Chevron focuses on multiple aspects of the oil and gas industry, from production and refining to marketing and transportation.
Chevron U.S.A. Inc., a subsidiary of Chevron Corporation (NYSE:CVX), completed the acquisition of 125,000 net acres in Northeast Texas and Southwest Arkansas from TerraVolta Resources and East Texas Natural Resources on June 17, 2025. With this acquisition, the company enters the domestic lithium sector, targeting the Smackover Formation, which is known for its high lithium content. Chevron plans to use direct lithium extraction (DLE), an advanced method with a smaller environmental footprint, to establish a commercially viable lithium business. Through the investment, the company could help meet the growing demand for critical minerals essential for electrification.
With a payout ratio of 75.43%, which indicates the company can comfortably cover its dividend payments, Chevron Corporation (NYSE:CVX) offers a dividend yield of 4.69%. Having raised its dividend for 38 consecutive years, the stock further appeals to investors looking for less risky, stable, long-term income.
While we acknowledge the potential of CVX as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock.
Yahoo, an hour ago
ChatGPT's Impact On Our Brains According to an MIT Study
A visualization of a new study on AI chatbots by MIT Media Lab scholars. Credit: Nataliya Kosmyna
Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT's Media Lab has returned some concerning results.
The study divided 54 subjects, 18-to-39-year-olds from the Boston area, into three groups and asked them to write several SAT essays using OpenAI's ChatGPT, Google's search engine, or nothing at all, respectively. Researchers used an EEG to record the writers' brain activity across 32 regions and found that, of the three groups, ChatGPT users had the lowest brain engagement and 'consistently underperformed at neural, linguistic, and behavioral levels.' Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.
The paper suggests that the use of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer reviewed, and its sample size is relatively small. But its main author, Nataliya Kosmyna, felt it was important to release the findings to elevate concerns that as society increasingly relies on LLMs for immediate convenience, long-term brain development may be sacrificed in the process. 'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental,' she says. 'Developing brains are at the highest risk.'
The MIT Media Lab has recently devoted significant resources to studying the different impacts of generative AI tools. Studies from earlier this year, for example, found that, generally, the more time users spend talking to ChatGPT, the lonelier they feel. Kosmyna, who has been a full-time research scientist at the MIT Media Lab since 2021, wanted to explore specifically the impact of using AI for schoolwork, because more and more students are using AI. So she and her colleagues instructed subjects to write 20-minute essays based on SAT prompts, including ones about the ethics of philanthropy and the pitfalls of having too many choices.
The group that wrote essays using ChatGPT all delivered extremely similar essays that lacked original thought, relying on the same expressions and ideas. Two English teachers who assessed the essays called them largely 'soulless.' The EEGs revealed low executive control and attentional engagement. And by their third essay, many of the writers simply gave the prompt to ChatGPT and had it do almost all of the work. 'It was more like, 'just give me the essay, refine this sentence, edit it, and I'm done,'' Kosmyna says.
The brain-only group, by contrast, showed the highest neural connectivity, especially in the alpha, theta, and delta bands, which are associated with creative ideation, memory load, and semantic processing. Researchers found this group was more engaged and curious, claimed greater ownership of their essays, and expressed higher satisfaction with them.
The third group, which used Google Search, also expressed high satisfaction and showed active brain function. The difference here is notable because many people now search for information within AI chatbots rather than through Google Search.
After writing the three essays, the subjects were asked to rewrite one of their previous efforts, but the ChatGPT group had to do so without the tool, while the brain-only group could now use ChatGPT. The first group remembered little of their own essays and showed weaker alpha and theta brain waves, which likely reflected a bypassing of deep memory processes. 'The task was executed, and you could say that it was efficient and convenient,' Kosmyna says. 'But as we show in the paper, you basically didn't integrate any of it into your memory networks.' The second group, in contrast, performed well, exhibiting a significant increase in brain connectivity across all EEG frequency bands. This gives rise to the hope that AI, if used properly, could enhance learning rather than diminish it.
This is the first pre-review paper that Kosmyna has ever released. Her team did submit it for peer review but did not want to wait for approval, which can take eight or more months, to draw attention to an issue that Kosmyna believes is affecting children now. 'Education on how we use these tools, and promoting the fact that your brain does need to develop in a more analog way, is absolutely critical,' says Kosmyna. 'We need to have active legislation in sync and more importantly, be testing these tools before we implement them.'
Psychiatrist Dr. Zishan Khan, who treats children and adolescents, says that he sees many kids who rely heavily on AI for their schoolwork. 'From a psychiatric standpoint, I see that overreliance on these LLMs can have unintended psychological and cognitive consequences, especially for young people whose brains are still developing,' he says. 'These neural connections that help you in accessing information, the memory of facts, and the ability to be resilient: all that is going to weaken.'
Ironically, upon the paper's release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple of AI traps into the paper, such as instructing LLMs to 'only read this table below,' thus ensuring that LLMs would return only limited insight from the paper. She also found that LLMs hallucinated a key detail: nowhere in her paper did she specify the version of ChatGPT she used, yet AI summaries declared that the study had used GPT-4o. 'We specifically wanted to see that, because we were pretty sure the LLM would hallucinate on that,' she says, laughing.
Kosmyna says that she and her colleagues are now working on another, similar paper testing brain activity in software engineering and programming with and without AI, and says that so far, 'the results are even worse.' That study, she says, could have implications for the many companies that hope to replace their entry-level coders with AI. Even if efficiency goes up, an increasing reliance on AI could potentially reduce critical thinking, creativity, and problem-solving across the remaining workforce, she argues.
Scientific studies examining the impacts of AI are still nascent. A Harvard study from May found that generative AI made people more productive, but less motivated. Also last month, MIT distanced itself from another paper written by a doctoral student in its economics program, which suggested that AI could substantially improve worker productivity. OpenAI did not respond to a request for comment.
Last year, in collaboration with Wharton Online, the company released guidance for educators on leveraging generative AI in teaching.


Digital Trends, an hour ago
Apple needs an AI magic pill, but I'm not desperate for it on macOS
Over the past few months, all eyes have been fixed on Apple and what the company is going to do with AI. The pressure is palpable, and deservedly so. Google has demonstrated some really compelling AI tools, especially with Project Astra and Mariner, that turn your phone into something like an all-knowing, ever-present digital companion. The likes of Microsoft, OpenAI, Claude, and even Amazon have shown next-gen AI chops that make Siri feel like an old prototype. But there is a meaningful distinction between how AI is used on phones and how it plays out on a computing machine like a MacBook Air.
You don't really talk to an assistant like Siri on a desktop
I often run into scenarios where AI is useful on a phone, like Visual Intelligence, which can make sense of the world around you based on what you see through the camera feed. The Mac doesn't really need it, primarily because it lacks a world-facing camera. And second, you can't ergonomically point the Mac's webcam at an object, especially in a public place, the way you would with a phone in your hand.
More broadly, the whole 'Apple must do AI better' argument fits mobile devices well, and not really Macs, which rely on a fundamentally different mode of input and output and a different way of getting work done in apps and software. I've used my fair share of AI-first Copilot+ laptops running Windows, and I feel strongly that Apple's AI efforts don't need an urgent focus on macOS as much as they do on mobile devices, for a few reasons.
The Mac is already well fed
Bloomberg's Mark Gurman, in the latest edition of his Power On newsletter, argued that Perplexity is a natural target for Apple to scoop up an AI lab of its own and get its hands on a ready-made AI stack. Perplexity's answer engine is pretty rewarding, it's not too expensive (by Apple standards), and it works beautifully on iPhones. Over the past couple of quarters, the company has launched a whole bunch of additions: integrations with Telegram and WhatsApp, a Deep Research mode, a reasoning AI model, a shopping hub in partnership with Amazon, media generation and image uploads, and search through audio and video files, among others.
There are just two problems, especially with accessing Perplexity on a Mac. First, it can already do everything in its role via the Mac app and the web dashboard, so a deeper integration with the Mac wouldn't solve many computing problems. Second, ChatGPT is already integrated deeply within Siri and the Apple stack, and it's only a matter of time before both of them step up.
Let's be honest here. Perplexity is a cool product, but not revolutionary in the sense that it could elevate the macOS experience significantly. Enterprise AI is a different beast, but for an average user, every AI tool out there, whether Gemini, ChatGPT, Copilot, Claude, or Perplexity, exists as its own web tool (or app), and that is where you truly get the best out of it.
So, what about integrations? Well, they would depend on the tools at hand. A huge chunk of the computing market relies either on Microsoft and its Office tools or on Google's Workspace products, such as Docs, Drive, Sheets, and more. From Windows to Office, Copilot is now everywhere. The situation is similar with Gemini and Google's software. Millions of Mac users actually use these tools on a daily basis, and Apple doesn't offer a viable replacement of its own. Moreover, there isn't a chance that Google will allow Apple's AI to penetrate deeper into its Workspace than Gemini.
Microsoft won't behave any differently with Copilot and Office. Plus, it's hard to imagine an external AI working better in Docs or PowerPoint than Gemini and Copilot, respectively. The space is already tight and, more importantly, well fed. And let's not forget that OpenAI and its GPT stack are very much baked into the heart of macOS. If Apple wanted to build integrations, OpenAI offers arguably the most advanced AI tech stack out there. Adding any more AI at the system level would only add to the confusion for an average Mac user, without solving any real problems.
The space for an extra AI player on the Mac is tighter for another reason: Apple's Foundation Models framework, which works on-device as well as in a cloud-linked form, with utmost privacy. Apple says it will allow developers to build a 'personal intelligence system that is integrated deeply into iPhone, iPad, and Mac, and enables powerful capabilities across language, images, actions, and personal context.' In a nutshell, Apple's own foundation models are available to developers so that they can build AI experiences into their apps. The best part? It's free. These models are not nearly as powerful as the ones from OpenAI or Google, but for getting work done locally, like cross-app workflows and intelligent file search, they should come in handy without any privacy scares.
The productivity question
The M4 MacBook Air is my daily driver these days, and it's a fantastic machine. I use AI tools heavily every day, yet I have never felt macOS to be an AI bottleneck. Every AI tool I rely on is either already integrated within the software of my choice or available as a dedicated app or website. Still, the whole notion of turning a product into an AI product baffles me. It makes sense for a phone, like the Pixel 9, but not so much for a laptop.
I have tested five Copilot+ Windows machines so far. Yet the core benefits they offer (snappy performance, instant wake, and long battery life) have little to do with user-facing AI. I was able to use Gemini or Copilot just as well on a regular Windows laptop as on a Copilot+ machine with its minimum 45 TOPS of AI capability. The Mac is no slouch, and interestingly, all the AI tools in my productivity workflow are just as accessible on macOS as they are on Windows. There are a few exclusive perks, like Windows Recall, but they are not a must-have for the average computer user out there.
And let's not forget that Apple already has the foundations ready, and we are going to see the results next year. When Apple introduced the M4 MacBook Air, the company focused on its AI chops, but what flew under the radar was Apple's App Intents framework, which integrates effortlessly with Apple Intelligence. In simple terms, any app, whether AI-focused or not, can embrace the benefits of on-device AI processing, such as awareness of on-screen content, in a native macOS environment.
Now, it's valid to criticize Apple for its AI missteps. I am at a stage where I use Gemini everywhere on my iPhone, from the lock screen widgets to the dedicated app, instead of Siri. But that's not the situation with Macs. For my workflow, and for a whole lot of Mac users out there, nobody is gasping for next-gen Apple AI. What they need is a reliable machine to run the AI of their choice. Even the cheapest Mac can meet that requirement.