This AI web browser can perform certain tasks for you

The Star | 13-06-2025

Dia is currently only available to a very limited number of testers. — The Browser Company
The Browser Company has launched a new web browser in beta for a limited number of users. It is designed around integrated artificial intelligence that can understand context and act on your behalf, like a true online personal assistant.
Named Dia, this web browser is currently only available to existing users of Arc, The Browser Company's previous alternative browser, whose development has now been halted. For the time being, Dia is only compatible with macOS and only runs on Macs equipped with M1 chips or later, so this beta version is not yet intended for a wide audience. It nonetheless looks very promising.
Dia stands out for its deep integration of artificial intelligence into the user experience. This beta version is already highly ambitious: it includes a full-fledged intelligent assistant, custom-developed by The Browser Company, that can analyze entire web pages and provide contextualized advice and answers based on your browsing history and the tabs you have open. The idea is to let you handle everything available online and in the browser with simple natural-language commands.
Dia can also automatically perform certain tasks for you, based on the intent detected in your queries: writing an email, comparing items across different shopping sites, answering specific questions about one of them, summarizing the content of a video or article opened in a tab, and so on.
Dia thus reimagines the browser not as a mere window onto web pages, but as an intelligent assistant designed to handle everyday tasks without leaving the browser interface, and therefore without necessarily turning to services like ChatGPT. The icing on the cake: The Browser Company promises that all personal data analyzed or used by its AI is stored locally and never shared.
ChatGPT developer OpenAI is also reportedly working on a web browser with advanced, intelligent features. – AFP Relaxnews

Related Articles

BBC threatens legal action against AI startup Perplexity over content scraping, FT reports
The Sun | 12 hours ago

THE BBC has threatened legal action against Perplexity, accusing the AI startup of training its 'default AI model' using BBC content, the Financial Times reported on Friday, making the British broadcaster the latest news organisation to accuse the AI firm of content scraping.

The BBC may seek an injunction unless Perplexity stops scraping its content, deletes existing copies used to train its AI systems, and submits 'a proposal for financial compensation' for the alleged misuse of its intellectual property, the FT said, citing a letter sent to Perplexity CEO Aravind Srinivas. The broadcaster confirmed the FT report in a statement to Reuters.

Perplexity has faced accusations from media organisations, including Forbes and Wired, of plagiarising their content, but has since launched a revenue-sharing programme to address publisher concerns. Last October, the New York Times sent it a 'cease and desist' notice, demanding the firm stop using the newspaper's content for generative AI purposes.

Since the introduction of ChatGPT, publishers have raised alarms about chatbots that comb the internet to find information and create paragraph summaries for users. The BBC said that parts of its content had been reproduced verbatim by Perplexity and that links to the BBC website had appeared in search results, according to the FT report.

Perplexity called the BBC's claims 'manipulative and opportunistic' in a statement to Reuters, adding that the broadcaster had 'a fundamental misunderstanding of technology, the internet and intellectual property law.'

Perplexity provides information by searching the internet, similar to ChatGPT and Google's Gemini, and is backed by Amazon founder Jeff Bezos, AI giant Nvidia and Japan's SoftBank Group. The startup is in advanced talks to raise $500 million in a funding round that would value it at $14 billion, the Wall Street Journal reported last month.

Relying on AI could be weakening the way we think, researchers warn
Sinar Daily | 15 hours ago

ARTIFICIAL intelligence is progressively transforming how we write, research and communicate. But according to MIT's latest study, this digital shortcut might come at a steep price: our brainpower.

A new study by researchers at the Massachusetts Institute of Technology (MIT) has raised red flags over the long-term cognitive effects of using AI chatbots like ChatGPT, suggesting that outsourcing our thinking to machines may be dulling our minds, reducing critical thinking and increasing our 'cognitive debt.'

The researchers found that participants who used ChatGPT to write essays exhibited significantly lower brain activity, weaker memory recall and poorer performance in critical thinking tasks than those who completed the same assignments using only their own thoughts or traditional search engines. 'Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone,' the research paper noted.

The MIT study

The study involved 54 participants divided into three groups: one used ChatGPT, another relied on search engines, and the last used only their own brainpower to write four essays. Using electroencephalogram (EEG) scans, the researchers measured brain activity during and after the writing tasks.

The results were stark. 'EEG revealed significant differences in brain connectivity. Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM (Large Language Model) users displayed the weakest connectivity,' the researchers reported.

Those who used AI chatbots displayed reduced 'theta' brainwaves, which are associated with learning and memory formation. Researchers described this as 'offloading human thinking and planning', indicating that the brain was doing less work because it was leaning on the AI.

Interestingly, when later asked to quote or discuss the content of their essays without AI help, 83 per cent of the chatbot users failed to provide a single correct quote, compared with just 10 per cent among the search engine and brain-only groups. In the context of the study, this suggests they either did not engage deeply with the content or simply did not remember it.

'Frequent AI tool users often bypass deeper engagement with material, leading to "skill atrophy" in tasks like brainstorming and problem-solving,' lead researcher Dr Nataliya Kosmyna warned.

The chatbot-written essays were also found to be homogeneous, with repetitive themes and language, suggesting that while AI might produce polished results, it lacks diversity of thought and originality.

Are our minds getting lazy?

The MIT findings echo earlier warnings about the dangers of 'cognitive offloading', a term for relying on external tools to think for us. A February 2025 study by Microsoft and Carnegie Mellon University found that workers who heavily relied on AI tools reported lower levels of critical thinking and reduced confidence in their own reasoning abilities.

The researchers warned that overuse of AI could cause our 'cognitive muscles to atrophy': essentially, if we don't use our brains, we lose them. This trend is raising concerns about serious consequences for education and workforce development. Looking ahead, the MIT team cautioned that relying too much on AI could diminish creativity, increase vulnerability to manipulation, and weaken long-term memory and language skills.

The dawn of a new era?

With AI chatbots becoming increasingly common in classrooms and as homework helpers, educators face a difficult balancing act. While these tools can and do support learning, overreliance on artificial intelligence risks undermining the very skills schools aim to develop. Teachers have voiced concerns that students are using AI to cheat or shortcut their assignments, and the MIT study provides hard evidence that such practices don't just break rules; they may actually hinder intellectual development.

The primary takeaway is not that AI is inherently bad, but that how we use it matters greatly. The study reinforces the importance of engaging actively with information rather than blindly outsourcing thinking to machines. As the researchers put it: 'AI-assisted tools should be integrated carefully, ensuring that human cognition remains at the centre of learning and decision-making.'
