Latest news with #LLM


Express Tribune
27 minutes ago
- Science
- Express Tribune
MIT AI study: Using tools like ChatGPT may be making you dumber
A new study from the Massachusetts Institute of Technology (MIT) suggests that frequent use of generative artificial intelligence (GenAI) tools, such as large language models (LLMs) like ChatGPT, may suppress cognitive engagement and memory retention. In the experiment, published by MIT, researchers monitored the brain activity of participants as they wrote essays using different resources: one group relied on LLMs, another used internet search engines, and a third worked without any digital tools.

The results revealed a consistent pattern — participants who used GenAI tools displayed significantly reduced neural connectivity and recall compared to those who relied on their own cognitive abilities. Brain scans taken during the experiment showed that LLM users exhibited weaker connections between brain regions associated with critical thinking and memory. While their essays scored well in both human and AI evaluations — often praised for their coherence and alignment with the given prompt — the writing was also described as formulaic and less original.

Notably, those who used LLMs struggled to quote from or recall their own writing in subsequent sessions. Their brain activity reportedly "reset" to a novice state regarding the essay topics, a finding that strongly contrasts with participants in the "brain-only" group, who retained stronger memory and demonstrated deeper cognitive engagement throughout.

Participants who used search engines showed intermediate neural activity. Though their writing lacked variety and often reflected similar phrasing, they exhibited better memory retention than the LLM group, suggesting that the process of searching and evaluating sources provided more mental stimulation.

In a later phase of the experiment, the groups were shuffled. Participants who had initially used GenAI tools showed improved neural connectivity when writing without digital aids — an encouraging sign that cognitive function could rebound when AI dependence is reduced. The findings could carry important implications for education and the workplace.

"BREAKING: MIT just completed the first brain scan study of ChatGPT users & the results are terrifying. Turns out, AI isn't making us more productive. It's making us cognitively bankrupt. Here's what 4 months of data revealed: (hint: we've been measuring productivity all wrong)" — Alex Vacca (@itsalexvacca), June 18, 2025

With GenAI tools increasingly integrated into school assignments and professional tasks, concerns about cognitive atrophy are rising. Some students now generate entire essays with tools like ChatGPT, while educators rely on similar software to grade and detect AI-generated work. The study suggests that such widespread use of digital assistance — even when indirect — may hinder mental development and reduce long-term memory retention.

As schools and organisations continue to navigate the integration of AI tools, the MIT research underscores the importance of balancing convenience with cognitive engagement. Researchers suggest that while GenAI can be a useful aid, overreliance could have unintended consequences for human memory and creativity.


Sinar Daily
5 hours ago
- Science
- Sinar Daily
Relying on AI could be weakening the way we think, researchers warn
ARTIFICIAL intelligence is progressively transforming how we write, research, and communicate in this new age of technological renaissance. But according to MIT's latest study, this digital shortcut might come at a steep price: our brainpower.

A new study by researchers at the Massachusetts Institute of Technology (MIT) has raised red flags over the long-term cognitive effects of using AI chatbots like ChatGPT, suggesting that outsourcing our thinking to machines may be dulling our minds, reducing critical thinking, and increasing our 'cognitive debt.' Researchers at MIT found that participants who used ChatGPT to write essays exhibited significantly lower brain activity, weaker memory recall, and poorer performance in critical thinking tasks than those who completed the same assignments using only their own thoughts or traditional search engines. 'Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone,' the research paper elaborated.

The MIT Study

The study involved 54 participants, who were divided into three groups: one used ChatGPT, another relied on search engines, and the last used only their brainpower to write four essays. Using electroencephalogram (EEG) scans, the researchers measured brain activity during and after the writing tasks. The results were stark. 'EEG revealed significant differences in brain connectivity. Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM (Large Language Model) users displayed the weakest connectivity,' the researchers reported.

Those who used AI chatbots displayed reduced 'theta' brainwaves, which are associated with learning and memory formation. Researchers described this as 'offloading human thinking and planning,' indicating that the brain was doing less work because it was leaning on the AI.

Interestingly, when later asked to quote or discuss the content of their essays without AI help, 83 per cent of the chatbot users failed to provide a single correct quote, compared to just 10 per cent among the search engine and brain-only groups. In the context of the study, this likely suggests they either didn't engage deeply with the content or simply didn't remember it. 'Frequent AI tool users often bypass deeper engagement with material, leading to "skill atrophy" in tasks like brainstorming and problem-solving,' lead researcher Dr Nataliya Kosmyna warned.

The chatbot-written essays were also found to be homogenous, with repetitive themes and language, suggesting that while AI might produce polished results, it lacks diversity of thought and originality.

Are our minds getting lazy?

The MIT findings echo earlier warnings about the dangers of 'cognitive offloading' — a term used when people rely on external tools to think for them. An earlier February 2025 study by Microsoft and Carnegie Mellon University found that workers who heavily relied on AI tools reported lower levels of critical thinking and reduced confidence in their own reasoning abilities.
The researchers warned that overuse of AI could cause our 'cognitive muscles to atrophy' — essentially, if we don't use our brains, we lose them. The trend is raising concerns about serious consequences for education and workforce development. Moving forward, the MIT team cautioned that relying too much on AI could diminish creativity, increase vulnerability to manipulation, and weaken long-term memory and language skills.

The dawn of a new era?

With AI chatbots becoming increasingly common in classrooms and homework help, educators are facing a difficult balancing act. While these tools can support learning, overreliance on artificial intelligence risks undermining the very skills schools aim to develop. Teachers have been voicing concerns that students are using AI to cheat or shortcut their assignments. The MIT study provides hard evidence that such practices don't just break rules — they may actually hinder intellectual development.

The primary takeaway is not that AI is inherently bad, but that how we use it matters greatly. The study thus reinforces the importance of engaging actively with information, rather than blindly outsourcing thinking to machines. As the researchers put it: 'AI-assisted tools should be integrated carefully, ensuring that human cognition remains at the centre of learning and decision-making.'


Forbes
11 hours ago
- Business
- Forbes
Answer Engine Optimization (AEO): What Brands Need To Know
Our research found that traffic from ChatGPT-style experiences converts up to 9x better than traditional search. Why? Because LLMs behave more like trusted advisors than search engines. This shift is already transforming how consumers discover and buy — and if your brand isn't showing up in these conversations, you're invisible. In this article, I'll break down what Answer Engine Optimization (AEO) means and how brands can get LLMs to recognize them.

Answer Engine Optimization is the practice of structuring content so that large language models (LLMs) like ChatGPT can understand, reference, and recommend your brand in response to user questions.

To get picked up by an LLM, you need to understand how these models learn from content. LLMs are trained to complete sentences, such as 'Life is like a box of chocolates.' During training, the machine masks a word at random and then tries to predict it. To show up in an LLM's response, your content needs to become part of the LLM's training data. Here are a few tips for businesses:

You can't just dump your product catalog onto the web and hope LLMs use it. The model will scrape it, but it won't use it. Marketing copy won't cut it. LLMs learn through natural dialogue — not taglines. Brands need to shift from static, keyword-based content to dynamic, conversational material. Think less like a brochure, and more like a smart rep answering real customer questions. This is where SEO breaks down — it was built around isolated keywords. LLMs require context.

LLMs skip over what they already know. If your content says 'The earth is round,' it won't register — the model already has that data. You need to find something in your data that is new or less widely known about your brand, product, or category. The most valuable content is the stuff the model hasn't seen yet — helpful, real, and grounded in authentic conversation.

Some things don't change. Just like in the SEO world, credibility still matters. High-quality content that gets linked, quoted, and validated across sources builds authority. Spam doesn't work. If your brand voice isn't trusted — or doesn't exist — LLMs won't echo it.

Every time a new tech trend takes off, Silicon Valley races to build tools around it. AEO is no exception. The latest wave includes dashboards designed to track your brand's presence across ChatGPT, Perplexity, and other platforms. A few examples are Profound, Daydream, and Goodie; all track brand mentions across AI platforms.

But here's the problem: LLMs don't behave like search engines. They remember. This was not the case in the search era. Google, for example, did not remember your searches. When I worked on Google Health, this was a common complaint from doctors: Google would always return the same results, even if you had already clicked those links before. Every new session was a reset. There was no context.

That's no longer true. Ask ChatGPT what it knows about you — you'll see. These models build context. They recall prior interactions. And that memory shapes future recommendations. That, however, means that monitoring LLM answers in isolation misses the point. As the LLM's memory evolves, so do its outputs.
To fully understand how your brand is being represented, you'd have to know the personalized memory of every single user — an impossible task for any dashboard.

So what's the smarter approach? Just watch your traffic. Look at what's actually coming in from ChatGPT, Gemini, or Perplexity. It's cheaper, more reliable — and it shows you what really matters. (A minimal sketch of this approach appears at the end of this article.)

Measurement, though, is only half the battle. To influence the LLMs' training data, you need new brand content. The old SEO playbook does not work anymore. Your brand has unique knowledge. It has a vision. Don't hide it behind generic product listings. Let's say someone searches for a 'retirement watch.' Don't just list five SKUs. Explain what makes a great retirement watch. Legacy? Legibility? Sentimental value? Engage the customer in an authentic conversation. That's the kind of context LLMs are trained to pick up.

In short: show the real conversations you're already having. Look at your site search queries, your sales team scripts, your support chats. That's gold. LLMs thrive on the kind of content that sounds like a helpful human. Here's how to approach it: some brands already have this content out in the open — in community forums, Reddit threads, or customer discussions. They'll naturally surface in LLM results. Others have great content buried in customer service logs or internal tools. That needs to come out. Structure it. Publish it. Make it discoverable. Many tools can help you here: Google's Vertex AI, Meta's Llama, or fine-tuned, industry-specific approaches like r2decide, a company I am involved with.

AEO is just the beginning. Two even bigger shifts are on the horizon — and both will deeply impact how brands show up in the age of AI.

LLMs will soon integrate advertising directly into their answers. Google, Perplexity, and OpenAI have all confirmed this. When exactly? Probably by early 2025 — if not sooner. But don't expect just ads or sponsored results. These models will deliver recommendations — in the end, you are paying for ChatGPT's service, so the dynamic is changing. To do that, new supply-side bidding platforms will emerge — ones that can feed LLMs with conversational ad snippets tailored to the user's prompt. The focus won't be on 'selling,' but on helping. That means brands will need their own brand-side LLM — a layer that can speak for the company inside these conversations and provide the right product at the right moment.

The next wave is even bigger: agents that manage full transactions inside the LLM interface. OpenAI has already adopted the Model Context Protocol (MCP) — a layer that allows ChatGPT to do more than chat. It can check stock, answer personalized questions, even schedule deliveries. Sam Altman has said the goal is to create personal AI companions that can act — not just inform. For brands, that means building an agent layer of your own — a system that can plug into these conversations, respond with tailored info, and complete the customer journey without sending the user back to your website.

To stay relevant, brands need their own discovery layer: content that speaks the language of LLMs — conversational, helpful, and ready to be recommended. This isn't theory. The shift is already underway. If you want to dive deeper, ping me on LinkedIn.
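As referenced above, here's a minimal Python sketch of the traffic-watching idea: scan a standard combined-format access log and tally visits whose Referer header points at well-known LLM domains. The log path, the regex, and the referrer list are all assumptions to adapt to your own stack; treat this as a sketch, not a definitive implementation.

```python
import re
from collections import Counter

# Hypothetical log path; assumes a combined-log-format access log (the kind
# nginx/Apache write by default). Adjust to your own infrastructure.
LOG_PATH = "access.log"

# Referrer domains commonly associated with LLM products. This list is an
# assumption; verify it against the referrers you actually observe.
LLM_REFERRERS = ("chatgpt.com", "chat.openai.com", "perplexity.ai", "gemini.google.com")

# Matches the tail of a combined-log line: "GET /path HTTP/1.1" 200 1234 "referer"
LINE_RE = re.compile(r'"(?:GET|POST) (?P<path>\S+)[^"]*" \d{3} \S+ "(?P<referer>[^"]*)"')

visits_by_source = Counter()  # visits per LLM referrer domain
landing_pages = Counter()     # which pages that traffic lands on

with open(LOG_PATH, encoding="utf-8") as f:
    for line in f:
        m = LINE_RE.search(line)
        if not m:
            continue
        source = next((d for d in LLM_REFERRERS if d in m.group("referer")), None)
        if source:
            visits_by_source[source] += 1
            landing_pages[m.group("path")] += 1

print("Visits by LLM referrer:", dict(visits_by_source))
print("Top landing pages from LLM traffic:", landing_pages.most_common(5))
```

In practice you'd feed the same breakdown into whatever analytics tool you already use; the point is that the referrer in your own logs, not a dashboard's guess, is the ground truth.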


Forbes
11 hours ago
- Business
- Forbes
Study Shows LLM Conversion Rate Is 9x Better — AEO Is Coming
Some predict that by 2028, more people will discover products and information through large language models (LLMs) like ChatGPT and Gemini than through traditional search engines. But based on research I conducted with Cornell Master's students, that shift is happening much faster. LLM-driven traffic is already starting to outperform traditional search — not in volume, but in value. Traffic from LLMs converts at nearly 9x higher rates than traditional search. This is the biggest disruption to search since the dawn of the internet. If you're a brand or publisher, now is the time to adapt your SEO playbook. Oh, there is no 'S' — it's now called Answer Engine Optimization (AEO).

Back in January, I predicted that traditional search was on its way out. Just six months later, the shift is already visible.

In my UX research, I classify shoppers into three categories. It's easy to see how all of their needs can now be met through a conversation with LLMs like ChatGPT, Claude, Gemini, or Perplexity. Say you're looking for an isotonic drink powder. Instead of scanning blogs, watching videos, or scrolling endlessly, you now ask ChatGPT — and it responds with direct recommendations. Ask about ketogenic-friendly options, and it will go even further — offering details on ingredients, comparisons, and alternatives.

This isn't search — it's advice. And when users follow those links or act on suggestions, they convert at dramatically higher rates compared to normal search traffic. In my studies, LLM-generated traffic behaves more like a personal recommendation than a keyword query. But here's the catch: if your brand isn't listed, you're invisible. The customer won't even consider you.

Good numbers are hard to come by. LLM traffic, like what comes from ChatGPT, doesn't always leave a clean trail — users might just copy and paste a product name and head to Amazon or another site. To get better data, we created a ChatGPT-style experience inside the site search of several e-commerce stores. In A/B tests, we compared regular keyword search with an AI-guided, conversational search experience. The difference was stunning: almost 9x higher conversion. Yes, nine times. (The sketch below shows the arithmetic behind a lift figure like that.)

But it's not just conversion that's changing — the way people search is evolving, too. In the past, users typed one or two words like 'camera.' Now, when they're shown more natural and detailed responses, they respond in kind. We're seeing queries like: 'What's a compact camera for wildlife photography that fits in a carry-on?' Semrush backs this up with broader data.

In our interviews, shoppers said they felt more 'understood' and 'better about their purchase.' It didn't feel like a search engine. It felt like getting advice from a knowledgeable friend.
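A minimal sketch of that arithmetic, using hypothetical placeholder counts rather than the study's actual data:

```python
# Two-arm A/B comparison of conversion rates. All counts below are
# hypothetical placeholders, NOT figures from the study.
def conversion_rate(conversions: int, visits: int) -> float:
    """Fraction of visits that ended in a purchase."""
    return conversions / visits

keyword_rate = conversion_rate(conversions=40, visits=10_000)   # classic keyword site search
ai_rate = conversion_rate(conversions=355, visits=10_000)       # conversational, AI-guided search

lift = ai_rate / keyword_rate
print(f"keyword: {keyword_rate:.2%}  ai-guided: {ai_rate:.2%}  lift: {lift:.1f}x")
# keyword: 0.40%  ai-guided: 3.55%  lift: 8.9x  (the shape of an "almost 9x" result)
```

A real analysis would add significance testing and segment by traffic source, but the lift itself is just this ratio.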
If you scale that behavior to external LLM traffic — not just on-site — the value of that traffic already rivals what you get from SEO. For brands, this means it's time to rethink how you show up in these conversations. That's what AEO — Answer Engine Optimization — is all about. Brands need to act. If you're not being cited by LLMs, you're becoming increasingly invisible. To get picked up by an LLM, you need to understand how these models learn from content.

Masking in ML Training

LLMs are pattern-completion engines. I often use the example of 'Life is like a box of ___' in my online certificate from Cornell. Correct: the answer is 'chocolates.' Machines learn the right answer through trial and error. This approach is called masking. (A minimal code sketch of the idea follows at the end of this article.) To show up in an LLM's response, your content needs to become part of its masked training data.

LLMs look for authoritative, helpful, and authentic content. Since they predict the next word in a conversation with a user, they favor content written in a conversational or Q&A format. For brands, a new playbook is emerging: AEO. I have outlined what brands need to know.

AEO is just the beginning. Two even bigger shifts are on the horizon — and both will deeply impact how brands show up in the age of AI: paid ads in LLMs, and the Model Context Protocol with agents that act on behalf of the LLM. The future is already underway. Ping me on LinkedIn if you want to continue the conversation.
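As promised above, here is a minimal sketch of the masking idea: hide one word in a sentence and ask a model to predict it. The sentences and helper function are invented for illustration; production LLMs train on subword tokens at vastly larger scale, and many chat models learn by next-token prediction rather than random masking.

```python
import random

# Toy illustration of masking: hide one word, ask the model to predict it.
# This only shows the shape of the (input, target) training pairs.
def mask_example(sentence: str, rng: random.Random) -> tuple[str, str]:
    words = sentence.split()
    i = rng.randrange(len(words))   # pick a random position to hide
    target = words[i]
    words[i] = "[MASK]"
    return " ".join(words), target

rng = random.Random(0)
corpus = [
    "Life is like a box of chocolates",
    "Our isotonic powder dissolves fully in cold water",  # invented brand-style sentence
]
for sentence in corpus:
    masked, target = mask_example(sentence, rng)
    print(f"input:  {masked}")
    print(f"target: {target}\n")
```

The practical point for brands: conversational, information-dense copy gives the model informative words to predict in exactly these slots, which is one reason Q&A-style content tends to get picked up.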


Time of India
18 hours ago
- Time of India
Up to 80% of answers not assessed in taxation law papers, find final-year students; Mumbai University says it was 'human error'
Mumbai: Many final-year law students from Mumbai University were stunned to find that up to 80% of their answers had not been assessed in the photocopies of their answer sheets. Mumbai University, in a statement, admitted that it was a human error and said action has been initiated.

When many students scored in single digits in their Law of Taxation paper in the three-year LLB programme, they applied for photocopies and found that many of the questions were marked as 'Not Attempted'. A student said he was shocked to get only 10 marks in the Law of Taxation paper when he got the results on June 9. He cleared all other subjects. "I am a commerce student and was confident about clearing the taxation paper. When I sought the photocopy of my answer sheet, it showed that only 16 marks' worth of the 75-mark question paper was assessed. The remaining questions were not touched by the examiner," said the student.

When he approached the university, he found many more students had come with the same grievance. "In our group alone, we have 96 affected students from across law colleges," he said, adding that many are waiting to appear for the Bar Council exams, seek admission to LLM programmes, or get placed. He further said that they did not get any satisfactory response from the examination office. The students are now seeking corrective action at the earliest.

The director of the board of examinations said the number of evaluators for law papers is very low. "After reviewing the answer sheets, it was found that only a few questions were assessed. This mistake was due to a human error; the concerned examiners have been informed and action has been taken," said the official.