
Latest news with #cognitiveDebt

This is your brain on ChatGPT
Yahoo · Science · 13 hours ago

Sizzle. Sizzle. That's the sound of your neurons frying over the heat of a thousand GPUs as your generative AI tool of choice cheerfully churns through your workload. As it turns out, offloading all of that cognitive effort to a robot while you look on in luxury is turning your brain into a couch potato.

That's what a recently published (and yet to be peer-reviewed) paper from some of MIT's brightest minds suggests, anyway. The study examines the "neural and behavioral consequences" of using LLMs (Large Language Models) like ChatGPT for, in this instance, essay writing. The findings raise serious questions about how long-term use of AI might affect learning, thinking, and memory. More worryingly, we recently witnessed it play out in real life.

The study, titled Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, involved 54 participants split into three groups:

  • LLM group: instructed to complete assignments using only ChatGPT, and no other websites or tools.
  • Search engine group: allowed to use any website except LLMs; even AI-enhanced answers were forbidden.
  • Brain-only group: relying only on their own knowledge.

Across three sessions, these groups were tasked with writing an essay on one of three rotating topics. An example question for the topic of "Art" was: "Do works of art have the power to change people's lives?" Participants then had 20 minutes to answer the question for their chosen topic in essay form, all while wearing an Enobio headset to collect EEG signals from their brains. In a fourth session, the LLM and Brain-only groups were swapped to measure any lasting impact of the prior sessions.

The results? Across the first three tests, Brain-only writers showed the most active, widespread brain engagement during the task, while LLM-assisted writers showed the lowest levels of brain activity across the board (although they routinely completed the task fastest). Search engine-assisted users generally fell somewhere in between the two.

In short, Brain-only writers were actively engaging with the assignment, producing more creative and unique writing while actually learning. They were able to quote their essays afterwards and felt strong ownership of their work. By contrast, LLM users engaged less with each session, came to rely on ChatGPT more uncritically as the study went on, and felt less ownership of the results. Their work was judged to be less unique, and participants often failed to accurately quote from their own essays, suggesting reduced long-term memory formation.

Researchers referred to this phenomenon as "metacognitive laziness": not just a great name for a prog-rock band, but also a perfect label for the hazy distance between autopilot and Copilot, where participants disengage and let the AI do the thinking for them.

But it was the fourth session that yielded the most worrying results. According to the study, when the LLM and Brain-only groups traded places, the group that had previously relied on AI failed to bounce back to the levels seen before LLM use. To put it simply, sustained use of AI tools like ChatGPT to "help" with tasks that require critical thinking, creativity, and cognitive engagement may erode our natural ability to access those processes in the future. But we didn't need a 206-page study to tell us that.
On June 10, an outage lasting over 10 hours cut ChatGPT users off from their AI assistant, and it provoked a disturbing trend of people openly admitting, sans any hint of awareness, that without access to OpenAI's chatbot they'd suddenly forgotten how to work, write, or function. This study may have used EEG caps and grading algorithms to prove it, but most of us may already be living its findings.

When faced with an easy or a hard path, many of us would assume that only a particularly smooth-brained individual would willingly take the more difficult, obtuse route. However, as this study claims, the so-called easy path may be quietly sanding down our frontal lobes in a lasting way, at least when it comes to our use of AI.

That's especially frightening when you think of students, who are adopting these tools en masse, with OpenAI itself pushing for a wider embrace of ChatGPT in education as part of its mission to build "an AI-Ready Workforce." A 2023 study found that a third of U.S. college students surveyed had used ChatGPT for schoolwork during the 2022/23 academic year. In 2024, a survey from the Digital Education Council claimed that 86% of students across 16 countries use artificial intelligence in their studies to some degree.

AI's big sell is productivity: the promise that we can get more done, faster. And yes, MIT researchers have previously concluded that AI tools can boost worker productivity by up to 15%, but the long-term impact suggests codependency over competency. And that sounds a lot like regression. At least for the one in front of the computer. Sizzle. Sizzle.

Ten Artificial Integrity Gaps To Guard Against With Machines, Intelligent Or Not
Forbes · Business · 30-05-2025

We need technology because it offers solutions capable of reducing suffering, mitigating intolerable risks, and improving lives. But no technology should ever be paid for at the price of a cognitive debt that would cost us the sovereignty of thought and, with it, sever our connection to who we are. Evaluating the artificial integrity of digital technologies, and even more so when they include AI, is a responsibility inherent to any so-called digital transformation. This evaluation should make it possible to identify functional artificial integrity gaps and to define preventive, corrective, and mitigation measures to address their impacts.

1. Functional Misappropriation: The use of a technology for purposes or in roles not intended by its designer and/or the organization using it, in cases where the software's intended logic and the internal governance mechanisms are rendered ineffective or inoperative, creating functional and relational gaps. Example: a chatbot designed to answer questions about the company's HR policies is used as a substitute for the human hierarchy, handling conflict resolution or task assignment.

2. Functional Loophole: The absence of necessary steps or features because they were never developed and are therefore not present in the system's operational logic, creating a "functional void" (analogous to a legal loophole) with respect to the user's intended use. Example: a content generation technology (such as generative AI) that does not allow direct export of the content into a usable format (Word, PDF, CMS) with the expected quality, thus limiting or blocking its operational use.

3. Functional Safeguards: The absence of guardrails, human validation steps, or informational alerts while the system executes an action with potentially irreversible effects that may not align with the user's intent. Example: a marketing technology automatically sends emails to a contact list without any mechanism to block the sending, request user confirmation, or generate an alert when a critical condition, such as validating the correct recipient list, is missing.

4. Functional Alienation: The creation of automatic behaviors or conditioned responses, akin to Pavlovian reflexes, that diminish or eliminate the user's capacity for reflection and judgment, leading to a gradual erosion of their decision-making sovereignty and, consequently, their free will. Example: the systematic acceptance of cookies, or the blind validation of system alerts by cognitively fatigued users.

5. Functional Ideology: An emotional dependency on the technology that leads to the weakening or suppression of critical thinking and fosters the mental construction of an ideology that fuels narratives of relativization, rationalization, or collective denial regarding its proper functioning, or lack thereof. Example: justifying shortcomings or errors inherent to the technology's operation with arguments like "It's not the tool's fault" or "The tool can't guess what the user forgets."

6. Functional Cultural Coherence: A contradiction or conflict between the logical framework imposed or influenced by the technology and the behavioral values or principles promoted by the organizational culture. Example: a digital workflow that leads to the creation of validation and control teams overseeing the work of others, within an organization that promotes and values team empowerment.

7. Functional Transparency: The absence or inaccessibility of transparency and explainability regarding the decision-making mechanisms or algorithmic logic of a technology, particularly in cases where it may anticipate, override, or go beyond the user's original intent. Example: a candidate pre-selection technology that manages trade-offs and conflicts between user-defined selection criteria (e.g., experience, education, soft skills) without making the weighting or exclusion rules explicitly visible, editable, or verifiable by the user.

8. Functional Addiction: The presence of features based on gamification, instant gratification, or micro-reward systems specifically designed to hack the user's motivation circuits, activating neurological reward mechanisms (dopamine, serotonin, norepinephrine, etc.) to trigger repetitive, compulsive, and addictive behaviors. These mechanisms can lead to emotional decompensation (as a form of compensatory refuge) and self-reinforcing cycles (withdrawal-like phenomena). Example: notifications, likes, infinite scroll algorithms, visual or sound bonuses, and levels reached through point systems, badges, ranks, or scores, used to sustain user engagement in an exponential and lasting way.

9. Functional Property: The appropriation, repurposing, or processing of personal or intellectual data by a technology, regardless of its public accessibility, without the informed, explicit, and meaningful consent of its owner or creator, including but not limited to: personal data, creative works (text, images, voice, video, etc.), behavioral data (clicks, preferences, locations, etc.), and knowledge artifacts (academic, journalistic, open-source content, etc.). Example: an AI model trained on images, texts, or voices of individuals found online, thereby monetizing someone's identity, knowledge, or creative works without prior authorization, and without any explicit opt-in mechanisms, licensing, or transparent attribution.

10. Functional Bias: The failure of a technology to detect, mitigate, or prevent biased outputs or discriminatory patterns, whether in its design, training data, decision logic, or deployment context, resulting in unjust treatment, exclusion, or systemic distortion of individuals or groups. Example: a facial recognition system that performs significantly worse on individuals with darker skin tones due to imbalanced training data, without functional bias safeguards or accountability protocols.

Because they form a system with us, these 10 functional artificial integrity gaps must be analyzed through a systemic approach, ranging from the nano level (biological, neurological), to the micro level (individual, behavioral), to the macro level (organizational, institutional), and up to the meta level (cultural, ideological). The cost of artificial integrity deficits in systems, whether or not they involve AI, directly burdens the organization's capital: human (skills, engagement, mental health), cultural (values, internal coherence), decision-making (sovereignty, accountability), reputational (stakeholder trust), technological (actual value of technologies), and of course, financial (inefficiency costs, underperformance of investments, maintenance overruns, corrective expenditures, legal disputes, lost opportunities, and value destruction).
This cost results in sustained value destruction, driven by intolerable risks and an uncontrolled increase in the cost of capital invested to generate returns (ROIC), turning these technological investments into a structural handicap for the company's profitability, and consequently, for its long-term viability. A company does not choose a responsible digital transformation for the sake of society, in opposition to or in ambivalence with its own objectives. It chooses it for itself, because its long-term performance depends on it, and because it helps strengthen the living fabric of the society that sustains it and upon which it relies to grow. That is why we cannot be satisfied with designing machines that are just artificially intelligent. We must also ensure that they exhibit artificial integrity by design.
