Latest news with #ChatGPT-4o

What happens when you use ChatGPT to write an essay? See what new study found.

Indianapolis Star

19 hours ago

  • Science
  • Indianapolis Star

What happens when you use ChatGPT to write an essay? See what new study found.

Artificial intelligence chatbots may be able to write a quick essay, but a new study from MIT found that their use comes at a cognitive cost. A study published by the Massachusetts Institute of Technology Media Lab analyzed the cognitive function of 54 people writing an essay with only the assistance of OpenAI's ChatGPT, only online browsers, or no outside tools at all. Largely, the study found that those who relied solely on ChatGPT to write their essays had lower levels of brain activity and produced less original writing.

"As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with (large language model) integration in educational and informational contexts," the study states. "While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking and intellectual independence demands a very careful consideration and continued research."

Here's a deeper look at the study and how it was conducted.

A team of MIT researchers, led by MIT Media Lab research scientist Nataliya Kosmyna, studied 54 participants between the ages of 18 and 39. Participants were recruited from MIT, Wellesley College, Harvard, Tufts University and Northeastern University. The participants were randomly split into three groups of 18 people each. The first was a large language model group, in which participants used only OpenAI's ChatGPT-4o to write their essays. The second group was limited to using only search engines for their research, and the third was prohibited from using any tools; participants in that group could rely only on their own minds to write their essays.

Each participant had 20 minutes to write an essay from one of three prompts taken from SAT tests, the study states. Three different options were provided to each group, totaling nine unique prompts. An example of a prompt available to participants using ChatGPT was about loyalty: "Many people believe that loyalty whether to an individual, an organization, or a nation means unconditional and unquestioning support no matter what. To these people, the withdrawal of support is by definition a betrayal of loyalty. But doesn't true loyalty sometimes require us to be critical of those we are loyal to? If we see that they are doing something that we believe is wrong, doesn't true loyalty require us to speak up, even if we must be critical? Does true loyalty require unconditional support?"

As the participants wrote their essays, they wore a Neuroelectrics Enobio 32 headset, which allowed researchers to collect EEG (electroencephalogram) signals, the brain's electrical activity. Following the sessions, 18 participants returned for a fourth study session: participants who had previously used ChatGPT to write their essays were required to use no tools, and participants who had previously used no tools used ChatGPT, the study states.

In addition to analyzing brain activity, the researchers looked at the essays themselves. First and foremost, the essays of participants who used no tools (neither ChatGPT nor search engines) had wider variability in topics, words and sentence structure, the study states. On the other hand, essays written with the help of ChatGPT were more homogenous.
All of the essays were "judged" by two English teachers and two AI judges trained by the researchers. The English teachers were not provided background information about the study but were able to identify essays written by AI. "These, often lengthy essays included standard ideas, reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious. We, as English teachers, perceived these essays as 'soulless,' in a way, as many sentences were empty with regard to content and essays lacked personal nuances," a statement from the teachers, included in the study, reads. As for the AI judges, a judge trained by the researchers to evaluate essays like the human teachers scored most of the essays a four or above on a scale of five.

When it came to brain activity, researchers were presented with "robust" evidence that participants who used no writing tools displayed the "strongest, widest-ranging" brain activity, while those who used ChatGPT displayed the weakest. Specifically, the ChatGPT group displayed 55% reduced brain activity, the study states. And though the participants who used only search engines had less overall brain activity than those who used no tools, they showed a higher level of eye activity than those who used ChatGPT, even though both groups were working on a digital screen.

Further research on the long-term impacts of artificial intelligence chatbots on cognitive activity is needed, the study states. As for this particular study, researchers noted that a larger number of participants from a wider geographical area would be necessary for a more successful study. Writing outside of a traditional educational environment could also provide more insight into how AI works in more generalized tasks.
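The article reports the 55% figure without describing how the EEG recordings were quantified. Purely as an illustration, and not the MIT team's actual analysis pipeline, a minimal sketch of comparing average EEG band power between two groups of recordings might look like the following; the sampling rate, channel count, frequency band and synthetic data are all assumptions.

```python
# Hypothetical sketch: comparing mean alpha-band (8-12 Hz) EEG power between
# two groups of writers. This is NOT the MIT study's method; sampling rate,
# channel layout and the synthetic recordings below are invented for illustration.
import numpy as np
from scipy.signal import welch

FS = 500  # assumed sampling rate in Hz

def alpha_band_power(eeg: np.ndarray, fs: int = FS) -> float:
    """Mean power in the 8-12 Hz band, averaged over channels.

    eeg: array of shape (n_channels, n_samples).
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    band = (freqs >= 8) & (freqs <= 12)
    return float(psd[:, band].mean())

def group_mean_power(recordings: list[np.ndarray]) -> float:
    """Average alpha-band power across all participants in a group."""
    return float(np.mean([alpha_band_power(r) for r in recordings]))

# Synthetic stand-ins for 18 one-minute, 32-channel recordings per group.
rng = np.random.default_rng(0)
brain_only_group = [rng.standard_normal((32, FS * 60)) for _ in range(18)]
chatgpt_group = [rng.standard_normal((32, FS * 60)) * 0.7 for _ in range(18)]

p_brain = group_mean_power(brain_only_group)
p_llm = group_mean_power(chatgpt_group)
print(f"Relative reduction in band power: {100 * (1 - p_llm / p_brain):.1f}%")
```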

What happens when you use ChatGPT to write an essay? See what new study found.

USA Today

a day ago

  • Science
  • USA Today

What happens when you use ChatGPT to write an essay? See what new study found.

Artificial intelligence chatbots may be able to write a quick essay, but a new study from MIT found that their use comes at a cognitive cost. A study published by the Massachusetts Institute of Technology Media Lab analyzed the cognitive function of 54 people writing an essay with only the assistance of OpenAI's ChatGPT, only online browsers, or no outside tools at all. Largely, the study found that those who relied solely on ChatGPT to write their essays had lower levels of brain activity and produced less original writing.

"As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with (large language model) integration in educational and informational contexts," the study states. "While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking and intellectual independence demands a very careful consideration and continued research."

Here's a deeper look at the study and how it was conducted.

How was the study conducted?

A team of MIT researchers, led by MIT Media Lab research scientist Nataliya Kosmyna, studied 54 participants between the ages of 18 and 39. Participants were recruited from MIT, Wellesley College, Harvard, Tufts University and Northeastern University. The participants were randomly split into three groups of 18 people each. The first was a large language model group, in which participants used only OpenAI's ChatGPT-4o to write their essays. The second group was limited to using only search engines for their research, and the third was prohibited from using any tools; participants in that group could rely only on their own minds to write their essays.

Each participant had 20 minutes to write an essay from one of three prompts taken from SAT tests, the study states. Three different options were provided to each group, totaling nine unique prompts. An example of a prompt available to participants using ChatGPT was about loyalty: "Many people believe that loyalty whether to an individual, an organization, or a nation means unconditional and unquestioning support no matter what. To these people, the withdrawal of support is by definition a betrayal of loyalty. But doesn't true loyalty sometimes require us to be critical of those we are loyal to? If we see that they are doing something that we believe is wrong, doesn't true loyalty require us to speak up, even if we must be critical? Does true loyalty require unconditional support?"

As the participants wrote their essays, they wore a Neuroelectrics Enobio 32 headset, which allowed researchers to collect EEG (electroencephalogram) signals, the brain's electrical activity. Following the sessions, 18 participants returned for a fourth study session: participants who had previously used ChatGPT to write their essays were required to use no tools, and participants who had previously used no tools used ChatGPT, the study states.

Quality of essays: What did the study find?

In addition to analyzing brain activity, the researchers looked at the essays themselves. First and foremost, the essays of participants who used no tools (neither ChatGPT nor search engines) had wider variability in topics, words and sentence structure, the study states.
On the other hand, essays written with the help of ChatGPT were more homogenous. All of the essays were "judged" by two English teachers and two AI judges trained by the researchers. The English teachers were not provided background information about the study but were able to identify essays written by AI. "These, often lengthy essays included standard ideas, reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious. We, as English teachers, perceived these essays as 'soulless,' in a way, as many sentences were empty with regard to content and essays lacked personal nuances," a statement from the teachers, included in the study, reads. As for the AI judges, a judge trained by the researchers to evaluate essays like the human teachers scored most of the essays a four or above on a scale of five.

Brain activity: What did the study find?

When it came to brain activity, researchers were presented with "robust" evidence that participants who used no writing tools displayed the "strongest, widest-ranging" brain activity, while those who used ChatGPT displayed the weakest. Specifically, the ChatGPT group displayed 55% reduced brain activity, the study states. And though the participants who used only search engines had less overall brain activity than those who used no tools, they showed a higher level of eye activity than those who used ChatGPT, even though both groups were working on a digital screen.

What's next for future studies?

Further research on the long-term impacts of artificial intelligence chatbots on cognitive activity is needed, the study states. As for this particular study, researchers noted that a larger number of participants from a wider geographical area would be necessary for a more successful study. Writing outside of a traditional educational environment could also provide more insight into how AI works in more generalized tasks.

Greta Cross is a national trending reporter at USA TODAY. Story idea? Email her at gcross@

Barbie just got an AI glow-up — here's what Mattel and OpenAI are building next

Tom's Guide

12-06-2025

  • Business
  • Tom's Guide

Barbie just got an AI glow-up — here's what Mattel and OpenAI are building next

Mattel, the company behind beloved toys like Barbie, is partnering with OpenAI to bring AI to the toy aisle before the end of the year. In a new collaboration announced this week, Mattel confirmed it will use OpenAI's generative models, including ChatGPT Enterprise, to create 'smart play experiences' designed for kids. While the companies haven't revealed exactly what they're building, both physical toys and digital companions are on the table.

This marks a major shift for one of the world's most iconic toy makers, and a sign of how quickly AI is moving from screens, keyboards and work apps into childhood playrooms. Mattel says it's committed to building age-appropriate, safe and privacy-conscious AI features. Think: dolls that can carry on dynamic conversations, Hot Wheels sets with real-time coaching or Uno games that react to your strategy. While those examples are speculative, the company confirmed that its first product will launch by the end of 2025. Before the holidays? That's anyone's guess.

'Mattel gets access to advanced AI tools to enable productivity, creativity, and transformation at scale,' said Brad Lightcap, COO at OpenAI. The two companies also say the tools will help ideate and design new toys internally, with ChatGPT Enterprise already in use across product development and storytelling teams. But the real spotlight is on what kids and families will be able to experience firsthand.

Mattel is no stranger to reinvention. After the explosive success of the 'Barbie' movie and the wave of users turning themselves into action figures with ChatGPT-4o image generation, the brand's growing portfolio of digital content means it is now doubling down on future-forward tech. The OpenAI partnership gives it a competitive edge as more toy makers explore interactive, AI-enabled play.

As reported by Bloomberg, Josh Silverman, Mattel's chief franchise officer, says the company hasn't licensed its IP to OpenAI. This means the brands remain tightly controlled, but he hinted that a range of physical and digital offerings is in development. Expect more news ahead of the holiday season regarding the rollout of this much-anticipated toy. And if Mattel delivers on its promise, we could be entering a new era of AI toys, where Barbie is not just a doll but perhaps one that talks and listens, too.

I let smart glasses read my emotions and watch what I eat — and now I can't unsee the future

Tom's Guide

12-06-2025

  • Health
  • Tom's Guide

I let smart glasses read my emotions and watch what I eat — and now I can't unsee the future

I believe the next great fitness wearable will not be a smartwatch or smart ring; it will be glasses. I saw this for myself when trying a prototype of the new eyewear Emteq Labs is keen to launch next year. Sporting sensors all around the rims, it can detect the subtlest of changes in your facial expressions (even those you aren't consciously aware of making). With this data, paired with AI, it can become a personalized life coach for your fitness, your diet and even your emotional health. I put this to the test while talking to Emteq CEO Steen Strand, to see what the glasses can truly bring to the table for the average user and what the future holds.

At the core of Emteq's glasses are a series of nine sensors that can identify facial movements to a near-microscopic degree. They're dotted across the bottom of the lenses in these prototypes and paired with AI to deliver a personalized set of specs that can sense you. Of course, there are plenty of fascinating use cases for these, such as using your face to interact with a computer or adding more true-to-life emotion to your in-game character. But the one that jumped out at me is health: not just physical health but emotional health.

Currently, health tracking via consumer tech is limited to your fitness routines: filling in Apple Watch rings and checking your sleep. These are all well and good, but as I've learned in my journey of losing 20 pounds, good nutrition is just as important. And while there are apps like MyFitnessPal that can deliver effective nutritional information, none come as easy to use or as rich in actionable detail as Emteq's prototype setup.

Using ChatGPT-4o, the on-board camera takes a snap of what you're eating and breaks it down into total calories and detailed macros. On top of that, it will even give you a chewing score… yep, you read that correctly. Digestive issues and impacts on metabolic health can creep up if you chew too fast, so it's important to take your time. The sensors on your glasses can track biting and chewing speeds to ensure you don't become too much of a food hoover.

'We can use AI to give you custom personalized guidance — some of that actually in real-time,' Steen added. 'We have high fidelity information about how you're eating and what you're eating, and are already using haptic feedback for in-the-moment notifications.' And with the glasses' ability to track activity too, recognizing different exercises such as walking, running and even star jumps, this can all come together with the AI infusion to give you a far better understanding of your fitness levels.

Then there's the emotion-sensing piece of the puzzle. Up until this point, it's all been very surface level: prompts to fill in a journal, heart rate tracking to detect stress, or deep breathing exercises. All nice-to-haves, but beyond the big issue that people can simply lie to their phones, nothing has really gone deeper. Beyond accurately assessing eating behaviors throughout the day, other data points, such as mood detection and posture analysis, can be used to assess emotional context.
While I was able to fake a smile, the upper section of my face and forehead gave me away in the moment. And when you tap into the ever-growing popularity of people using ChatGPT for emotional support and therapy, you're surely going to get a more personalized, more frank conversation when that data is added in there too.

'We believe that understanding emotions is a force multiplier for AI, in terms of it being effective for you in the context of wearing glasses all day,' Steen commented. 'If you want AI to be really effective for you, it's critical that it understands how you're feeling in real-time, in response to different things that are happening around you.'

It sounds creepy on paper, and it kind of is when you think about it. But it's certainly a gateway into real emotional honesty that you may not get by rationalizing with yourself in a journal app and possibly glossing over any cracks in your mental health when filling out that survey for the day.

Now this may all seem fascinating (I think it is too), but I'm not ignorant of the key questions that come with strapping a bunch of sensors to your face: questions of privacy around a device collecting so much data, and whether we really want to be judged for our chewing. Privacy is a question that hangs over any device that collects this much information, and the latter question gets asked with every big step forward like this one. But the end result is something far more advanced than a smart ring, and much more proactive.

Here at Augmented World Expo (AWE), I found a breadcrumb trail of technologies that could lead to the smart glasses of the future that everyone will wear. Emteq is probably the biggest crumb of them all, because while AI is definitely the key to unlocking XR, personalizing it is the real challenge. Sensors and real-time data collection like this, used to guide you toward a better life, are the clearest step towards tackling that challenge.
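The article doesn't describe how Emteq's prototype actually queries the model for its meal breakdown. Purely as an illustration of the idea, a minimal sketch of asking a vision-capable model to estimate calories and macros from a meal photo might look like the following; the prompt, file path and output handling are assumptions, not Emteq's implementation.

```python
# Hypothetical sketch only: not Emteq's code. Assumes the OpenAI Python SDK
# (openai >= 1.0) and a vision-capable model; the prompt and output format
# are invented for illustration.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def estimate_macros(image_path: str) -> str:
    """Ask a vision model for a rough calorie and macro estimate of a meal photo."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Estimate total calories, protein, carbs and fat "
                             "for the meal in this photo. Answer in one short line."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }
        ],
    )
    return response.choices[0].message.content

# Example usage (the file name is a placeholder):
# print(estimate_macros("lunch.jpg"))
```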

ChatGPT leads enterprise AI, but model diversity is surging

Techday NZ

10-06-2025

  • Business
  • Techday NZ

ChatGPT leads enterprise AI, but model diversity is surging

New Relic has published its first AI Unwrapped: 2025 AI Impact Report, presenting data from 85,000 businesses on enterprise-level adoption and usage trends in artificial intelligence models.

ChatGPT's leading role

The report reveals that developers are overwhelmingly favouring OpenAI's ChatGPT for general-purpose AI tasks. According to the findings, more than 86% of all large language model (LLM) tokens processed by New Relic customers involved ChatGPT models.

Nic Benders, Chief Technical Strategist at New Relic, stated, "AI is rapidly moving from innovation labs and pilot programmes into the core of business operations. The data from our 2025 AI Impact Report shows that while ChatGPT is the undisputed dominant model, developers are also moving at the 'speed of AI,' and rapidly testing the waters with the latest models as soon as they come out. In tandem, we're seeing robust growth of our AI monitoring solution. This underscores that as AI is ingrained in their businesses, our customers are realising they need to ensure model reliability, accuracy, compliance, and cost efficiency."

The report highlights that enterprises have been quick to adopt OpenAI's latest releases. ChatGPT-4o and ChatGPT-4o mini emerged as the primary models in use, with developers making near-immediate transitions between versions as new capabilities and improvements are launched. Notably, there has been an observed pattern of rapid migration from ChatGPT-3.5 Turbo to ChatGPT-4.1 mini since April, indicating a strong developer focus on performance improvements and features, often taking precedence over operational cost savings.

Broadening model experimentation

The findings also suggest a trend toward greater experimentation, with developers trying a wider array of AI models across applications. While OpenAI remains dominant, Meta's Llama ranked second in terms of LLM tokens processed among New Relic customers. There was a 92% increase in the number of unique models used within AI applications in the first quarter of 2025, underlining growing interest in open-source, specialised and task-specific solutions. This diversification, although occurring at a smaller scale compared to OpenAI models, points to a potentially evolving AI ecosystem.

Growth in AI monitoring

As the diversity of model adoption increases, the need for robust AI monitoring solutions has also grown. Enterprises continue to implement unified platforms to monitor and manage AI systems, with New Relic reporting a sustained 30% quarter-over-quarter growth in the use of its AI Monitoring solution since its introduction last year. This growth reflects a drive among businesses to address concerns such as reliability, accuracy, compliance and cost as AI systems become more embedded in day-to-day operations.

Programming language trends

The report notes that Python has solidified its status as the preferred programming language for AI applications, recording nearly 45% growth in adoption since the previous quarter. follows closely behind Python in terms of both volume of requests and adoption rates. Java, meanwhile, has experienced a significant 34% increase in use for AI applications, suggesting a rise in production-grade, Java-based LLM solutions within large enterprises.

Research methodology details

The AI Unwrapped: 2025 AI Impact Report's conclusions are drawn from aggregated and de-identified usage statistics from active New Relic customers.
The data covers activity from April 2024 to April 2025, offering a representative view of current AI deployment and experimentation trends across a substantial commercial user base.
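The headline 86% figure is a share of all LLM tokens processed across New Relic's customer base. As a trivial illustration of that arithmetic only (the token counts and model-to-vendor mapping below are made up, not New Relic's data or tooling), computing per-vendor token share from aggregated counts might look like this:

```python
# Illustrative only: invented token counts, not New Relic's data or tooling.
from collections import defaultdict

# Hypothetical aggregated token counts per model.
token_counts = {
    "gpt-4o": 61_000_000_000,
    "gpt-4o-mini": 25_000_000_000,
    "llama-3": 9_000_000_000,
    "other": 5_000_000_000,
}

# Hypothetical mapping of models to vendors.
families = {"gpt-4o": "openai", "gpt-4o-mini": "openai", "llama-3": "meta"}

def share_by_family(counts: dict[str, int], fams: dict[str, str]) -> dict[str, float]:
    """Fold per-model token counts into vendor families and return each family's share."""
    totals = defaultdict(int)
    for model, n in counts.items():
        totals[fams.get(model, "other")] += n
    grand_total = sum(totals.values())
    return {fam: n / grand_total for fam, n in totals.items()}

for family, share in share_by_family(token_counts, families).items():
    print(f"{family}: {share:.1%}")
```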
