At graduation, university student brags about using ChatGPT to finish final project; social media reacts, ‘That's scary'

Mint | 14 hours ago

A UCLA student has gone viral after bragging at his graduation ceremony about using ChatGPT to finish his final project.
The student proudly revealed this during the event while other students cheered. The video spread quickly on social media, with many users saying the younger generation is too dependent on AI and may struggle to think for itself.
Some joked that it's all fun until doctors or engineers do the same, risking real harm to society. Most people agreed that using AI for everything at university, without actually learning anything, is worrying and not something that should be celebrated.
'Start eating healthy yall, your future doctor is probably using ChatGPT right now,' quipped one user on Instagram.
'What are they gonna do fr bro already graduated,' wrote another.
One user commented, 'So many years, just to ruin the value of your diploma for the sake of some buzz on social media.'
'Neck deep in debt only to get out with underdeveloped skills, nice one,' came a sarcastic comment.
Another user wrote, 'That's scary. Professional jobs without the professionals.'
When FearBuck reshared the video on Twitter (now X), it gained more than 71 million views.
One user wondered, 'Is it Legal to use AI In university?'
'Don't act like you all weren't using Google when you were in school. People have always cheated. He'll be weeded out eventually,' came from another.
Use of ChatGPT
A first-year university student earlier shared on Reddit that they regularly used ChatGPT to understand difficult topics and finish assignments quickly. The user asked whether students had a harder time at university before ChatGPT. Opinions were divided.
'This can really harm you in the long term, and if it turns out that ChatGPT has plagiarised, which it does do, you could be given a severe punishment,' wrote a Reddit user.
Another user, who disagreed, wrote, 'Essentially, you do end up having to add another layer of checking. But, you can definitely use it in an ethical way.'


Related Articles

Algebra, philosophy and…: These AI chatbot queries cause most harm to environment, study claims

Time of India | 5 hours ago

Queries demanding complex reasoning from AI chatbots, such as those related to abstract algebra or philosophy, generate significantly more carbon emissions than simpler questions, a new study reveals. These high-level computational tasks can produce up to six times more emissions than straightforward inquiries like basic history questions.

A study conducted by researchers at Germany's Hochschule München University of Applied Sciences, published in the journal Frontiers (seen by The Independent), found that the energy consumption and subsequent carbon dioxide emissions of large language models (LLMs) like OpenAI's ChatGPT vary based on the chatbot, user, and subject matter. An analysis of 14 different AI models consistently showed that questions requiring extensive logical thought and reasoning led to higher emissions. To mitigate their environmental impact, the researchers have advised frequent users of AI chatbots to consider adjusting the complexity of their queries.

Why do these queries cause more carbon emissions by AI chatbots?

In the study, author Maximilian Dauner wrote: 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions. We found that reasoning-enabled models produced up to 50 times more carbon dioxide emissions than concise response models.'

The study evaluated 14 large language models using 1,000 standardised questions to compare their carbon emissions. It explains that AI chatbots generate emissions through processes like converting user queries into numerical data. On average, reasoning models produce 543.5 tokens per question, significantly more than concise models, which use only 40 tokens. 'A higher token footprint always means higher CO2 emissions,' the study adds.

The study highlights that Cogito, one of the most accurate models with around 85% accuracy, generates three times more carbon emissions than other similarly sized models that offer concise responses. 'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies. None of the models that kept emissions below 500 grams of carbon dioxide equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly,' Dauner explained.

Researchers used carbon dioxide equivalent to measure the climate impact of AI models and hope that their findings encourage more informed usage. For example, answering 600,000 questions with DeepSeek R1 can emit as much carbon as a round-trip flight from London to New York. In comparison, Alibaba Cloud's Qwen 2.5 can answer over three times more questions with similar accuracy while producing the same emissions.

'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dauner noted.

ChatGPT might be making you think less: MIT study raises ‘red flags' about AI dependency

Time of India | 6 hours ago

As AI tools become part of our daily routines, a question is starting to bubble up: What happens when we rely on them too much? A new study from MIT's Media Lab takes a closer look at how tools like ChatGPT may be affecting our brains. And what the researchers found is worth paying attention to.

The study focused on how people engage mentally when completing tasks with and without AI. It turns out that while ChatGPT can make writing easier, it may also be reducing how much we think. According to the research team, participants who used ChatGPT showed noticeably lower brain activity than those who did the same task using Google or no tech at all. The findings suggest that depending on AI for tasks that require effort, like writing, decision-making, or creative thinking, could weaken the very mental muscles we're trying to sharpen.

ChatGPT users show lowest brain activity in MIT's study

The experiment involved 54 participants between the ages of 18 and 39. They were split into three groups and asked to write essays in response to prompts similar to those on standardised tests.

Group 1 used ChatGPT to generate their answers.
Group 2 relied on Google Search to find and compile information.
Group 3 worked without any tools, using only their knowledge and reasoning.

While they worked, each participant wore a headset that tracked electrical activity across 32 areas of the brain. The aim was to see how engaged their minds were during the process. (The research was led by Dr. Nataliya Kosmyna along with a team that included Ashly Vivian Beresnitzky, Ye Tong Yuan, Jessica Situ, Eugene Hauptmann, Xian-Hao Liao, Iris Braunstein, and Pattie Maes.)

ChatGPT may be hurting your creativity, MIT researchers warn

The results were clear: the group that used ChatGPT showed the lowest brain activity of all three groups. In particular, areas linked to memory, creativity, and concentration were significantly less active. In contrast, those who wrote without help from AI showed the highest mental engagement. They had to organise their thoughts, build arguments, and recall information, all things that activated the brain more deeply. Even the group using Google Search showed more engagement than the AI group, possibly because the process of looking for and evaluating information keeps the brain involved.

There was another telling detail. Many in the ChatGPT group simply pasted the prompts into the tool and copied the output with little to no editing. Teachers who reviewed their essays said they felt impersonal, calling them 'soulless.' Dr. Kosmyna put it bluntly: 'They weren't thinking. They were just typing.'

AI dependency: short-term efficiency, long-term cost

Later in the study, researchers asked participants to rewrite one of their essays, this time without using any tools. The ChatGPT users struggled. Many couldn't remember their original arguments or structure. Since they hadn't processed the material deeply the first time, it hadn't stuck. Kosmyna described this as a red flag: 'It was efficient. But nothing was integrated into their brains.' That raises a broader concern: if AI is doing the heavy lifting, are we still learning? Or are we just moving text around while our cognitive skills fade in the background?
The growing concern among psychiatrists and educators

Dr. Zishan Khan, a psychiatrist who works with students, says he's already seeing signs of AI overuse in younger people. 'The neural pathways responsible for thinking, remembering, and adapting—they're weakening,' he explained. The fear is that early and frequent reliance on tools like ChatGPT might lead to long-term cognitive decline, especially in developing brains.

MIT's team is now expanding their research to see how AI affects people in other fields. They've already started looking at coders who use tools like GitHub Copilot. So far, Kosmyna says the early results there are 'even worse' in terms of mental engagement.

A word of warning for classrooms and beyond

Interestingly, the MIT researchers shared their findings before going through the full peer review process, something that's uncommon in academic research. But Kosmyna felt the potential impact was urgent enough to make an exception. 'I'm really concerned someone might say, 'Let's introduce ChatGPT into kindergarten classrooms,'' she said. 'That would be a terrible mistake. Young brains are especially vulnerable.'

To prove just how easy it is to lose the depth of complex research, the team did something clever: they planted subtle factual 'traps' in the study. When readers ran the paper through ChatGPT to summarise it, many versions came back with key errors, including details the researchers never even included.

What does this mean for the future of AI use?

Does this mean we should stop using ChatGPT? Not at all. The tool isn't the enemy. It can be incredibly helpful, especially when used wisely. But this study reminds us that how we use AI matters just as much as whether we use it.

Here are a few takeaways from the researchers:
Use AI as a partner, not a replacement. Let it offer ideas, but make sure you're still doing the core thinking.
Stay actively involved. Skipping the process of learning or writing just to get a result means you're not absorbing anything.
Be cautious in education. Children need to build foundational skills before leaning on technology.

‘Hatke' vibes: Nothing Headphone 1 allegedly leaks ahead of July 1 launch

Hindustan Times | 9 hours ago

Nothing recently confirmed that it will launch its first-ever pair of headphones, the Nothing Headphone 1, on July 1, alongside the company's flagship, the Nothing Phone 3. Now, just days before the launch, alleged images of the headphones in two colourways, white and black, have surfaced online. The images, five in total, were posted by an Instagram account that goes by the name @nothing_fan_blog.

In the images, the Nothing Headphone 1 appears in two colourways: the first looks white, while the second is black. This is in line with what the brand has launched so far, including its earbuds, which it has offered in black and white variants.

The headphones retain Nothing's signature transparent aesthetic. If there is one thing that has been constant across Nothing's products, from its phones to its audio gear, it is the transparent look, and that appears to carry over to the Nothing Headphone 1. The design looks genuinely distinct from other headphones on the market and should stand out when worn.

A Closer Look

As for the construction, the headphones appear to have a two-part build: the base looks to be metal, while the top part appears to be plastic. On closer inspection, two buttons are visible on the side of the headphones. The last image also shows a connected wire, though it is unclear whether it terminates in a 3.5 mm jack or a USB-C port. Judging from the wire, it is likely a 3.5 mm headphone jack, which would allow wired connectivity to an output device.

It currently isn't clear how much the Nothing Headphone 1 will cost, but reports so far have suggested it could be priced around 299 euros.
