Latest news with #AI-generated


Express Tribune
28 minutes ago
- Science
MIT AI study: Using tools like ChatGPT is making you dumber, study reveals
A new study from the Massachusetts Institute of Technology (MIT) suggests that frequent use of generative artificial intelligence (GenAI) tools, such as large language models (LLMs) like ChatGPT, may suppress cognitive engagement and memory retention.

In the experiment, published by MIT, researchers monitored the brain activity of participants as they wrote essays using different resources: one group relied on LLMs, another used internet search engines, and a third worked without any digital tools. The results revealed a consistent pattern: participants who used GenAI tools displayed significantly reduced neural connectivity and recall compared to those who relied on their own cognitive abilities.

Brain scans taken during the experiment showed that LLM users exhibited weaker connections between brain regions associated with critical thinking and memory. While their essays scored well in both human and AI evaluations, often praised for their coherence and alignment with the given prompt, the writing was also described as formulaic and less original. Notably, those who used LLMs struggled to quote from or recall their own writing in subsequent sessions. Their brain activity reportedly "reset" to a novice state regarding the essay topics, a finding that contrasts sharply with participants in the "brain-only" group, who retained stronger memories and demonstrated deeper cognitive engagement throughout.

Participants who used search engines showed intermediate neural activity. Though their writing lacked variety and often reflected similar phrasing, they exhibited better memory retention than the LLM group, suggesting that the process of searching and evaluating sources provided more mental stimulation. In a later phase of the experiment, the groups were shuffled.
Participants who had initially used GenAI tools showed improved neural connectivity when writing without digital aids, an encouraging sign that cognitive function can rebound when AI dependence is reduced. The findings could carry important implications for education and the workplace.

"BREAKING: MIT just completed the first brain scan study of ChatGPT users & the results are terrifying. Turns out, AI isn't making us more productive. It's making us cognitively bankrupt. Here's what 4 months of data revealed: (hint: we've been measuring productivity all wrong)" — Alex Vacca (@itsalexvacca) June 18, 2025

With GenAI tools increasingly integrated into school assignments and professional tasks, concerns about cognitive atrophy are rising. Some students now generate entire essays with tools like ChatGPT, while educators rely on similar software to grade and detect AI-generated work. The study suggests that such widespread use of digital assistance, even when indirect, may hinder mental development and reduce long-term memory retention.

As schools and organisations continue to navigate the integration of AI tools, the MIT research underscores the importance of balancing convenience with cognitive engagement. Researchers suggest that while GenAI can be a useful aid, overreliance could have unintended consequences for human memory and creativity.

an hour ago
- Business
Music streaming service Deezer adds AI song tags in fight against fraud
LONDON -- Music streaming service Deezer said Friday that it will start flagging albums with AI-generated songs, part of its fight against streaming fraudsters. Deezer, based in Paris, is grappling with a surge in music on its platform created using artificial intelligence tools it says are being wielded to earn royalties fraudulently. The app will display an on-screen label warning of "AI-generated content" and notify listeners that some tracks on an album were created with song generators.

Deezer is a small player in music streaming, which is dominated by Spotify, Amazon and Apple, but the company said AI-generated music is an "industry-wide issue." It's committed to "safeguarding the rights of artists and songwriters at a time where copyright law is being put into question in favor of training AI models," CEO Alexis Lanternier said in a press release.

Deezer's move underscores the disruption caused by generative AI systems, which are trained on the contents of the internet, including text, images and audio available online. AI companies are facing a slew of lawsuits challenging their practice of scraping the web for such training data without paying for it.

According to an AI song detection tool that Deezer rolled out this year, 18% of songs uploaded to its platform each day, or about 20,000 tracks, are now completely AI-generated. Just three months earlier, that number was 10%, Lanternier said in a recent interview. AI has many benefits, but it also "creates a lot of questions" for the music industry, Lanternier told The Associated Press. Using AI to make music is fine as long as there's an artist behind it, but the problem arises when anyone, or even a bot, can use it to make music, he said. Music fraudsters "create tons of songs. They upload, they try to get on playlists or recommendations, and as a result they gather royalties," he said.

Musicians can't upload music directly to Deezer or rival platforms like Spotify or Apple Music.
Music labels or digital distribution platforms can do it for artists they have contracts with, while anyone else can use a "self service" distribution company. Fully AI-generated music still accounts for only about 0.5% of total streams on Deezer. But the company said it's "evident" that fraud is "the primary purpose" for these songs, because it suspects that as many as seven in 10 listens of an AI song come from streaming "farms" or bots rather than humans. Any AI songs used for "stream manipulation" will be cut off from royalty payments, Deezer said.

AI has been a hot topic in the music industry, with debates swirling around its creative possibilities as well as concerns about its legality. Two of the most popular AI song generators, Suno and Udio, are being sued by record companies for copyright infringement and face allegations that they exploited the recorded works of artists from Chuck Berry to Mariah Carey. Gema, a German royalty-collection group, is suing Suno in a similar case filed in Munich, accusing the service of generating songs that are "confusingly similar" to original versions by artists it represents, including "Forever Young" by Alphaville, "Daddy Cool" by Boney M and Lou Bega's "Mambo No. 5." Major record labels are negotiating with Suno and Udio for compensation, according to news reports earlier this month.

To detect songs for tagging, Lanternier said, Deezer uses the same generators used to create songs to analyze their output. "We identify patterns because the song creates such a complex signal. There is lots of information in the song," Lanternier said. The AI music generators seem unable to produce songs without subtle but recognizable patterns, which change constantly. "So you have to update your tool every day," Lanternier said. "So we keep generating songs to learn, to teach our algorithm. So we're fighting AI with AI."

Fraudsters can earn big money through streaming.
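Deezer hasn't published its detector, but the approach Lanternier describes, generating tracks with the same public models and then training a classifier on the artifacts they leave, can be sketched at a toy level. Everything below, including the feature names and the nearest-centroid classifier, is illustrative and is not Deezer's actual pipeline.

```python
# Toy sketch of "fighting AI with AI": label a pool of known AI-generated
# and human tracks, summarize each as a feature vector, and classify new
# tracks by nearest class centroid. Purely illustrative; Deezer's real
# detector and features are not public.
from statistics import mean

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [mean(col) for col in zip(*vectors)]

def classify(track, centroids):
    """Return the label whose centroid is closest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(track, centroids[label]))

# Hypothetical per-track features, e.g. [spectral_flatness, tempo_variance]:
ai_tracks = [[0.9, 0.1], [0.85, 0.15], [0.95, 0.05]]   # generator output
human_tracks = [[0.4, 0.6], [0.3, 0.7], [0.5, 0.5]]    # licensed catalog

centroids = {"ai": centroid(ai_tracks), "human": centroid(human_tracks)}

print(classify([0.88, 0.12], centroids))  # ai
print(classify([0.35, 0.65], centroids))  # human
```

Since the generators' telltale patterns shift constantly, a system like this would, as Lanternier notes, need the `ai_tracks` pool regenerated and the centroids recomputed every day.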
Lanternier pointed to a criminal case last year in the U.S., which authorities said was the first ever involving artificially inflated music streaming. Prosecutors charged a man with wire fraud conspiracy, accusing him of generating hundreds of thousands of AI songs and using bots to automatically stream them billions of times, earning at least $10 million.

Business Standard
3 hours ago
- Business
Google trains Veo 3 AI video generation model using YouTube content: Report
Google has reportedly been using YouTube content to train its artificial intelligence (AI) models, including Gemini and the Veo 3 video and audio generator. According to a report by CNBC, a YouTube spokesperson confirmed that Google relies on its bank of YouTube videos to train its AI models. However, the spokesperson added that Google does not use every single video on YouTube, only a subset, for training purposes. The report further claims that many creators whose videos may have been used in this manner remain unaware that their content has been used without their consent or any compensation.

Creators were never notified?
According to YouTube, this information has been conveyed to creators previously, but experts who spoke to CNBC said it is not widely understood by creators and media organisations that the US technology giant trains its AI models on its YouTube video library. In September last year, YouTube stated in a blog post that content uploaded to the platform could be used to "improve the product experience … including through machine learning and AI applications."

A major drawback is that creators who upload videos to YouTube have no way of opting out of letting Google use their content to train its AI models, an option that some competitors, such as Meta, offer. Notably, YouTube does allow creators to opt out of sharing their content with third-party companies seeking to train their own AI models.

According to YouTube, there are around 20 billion videos on the platform, and it is unclear at the moment how many of them are being used to train Google's AI models. CNBC cited experts as saying that even if Google used just one per cent of those videos, that would amount to around 2.3 billion minutes of content, roughly 40 times the training data used by competing AI models.
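The scale claim above can be sanity-checked with quick arithmetic; note that the average video length below is an assumption inferred from the reported figures, not a number from the CNBC report.

```python
# Sanity check of the CNBC-cited scale estimate.
total_videos = 20_000_000_000   # ~20 billion videos on YouTube (per YouTube)
sample_fraction = 0.01          # "even if Google uses one per cent"
avg_minutes_per_video = 11.5    # assumed average length implied by the figures

sampled_videos = total_videos * sample_fraction
total_minutes = sampled_videos * avg_minutes_per_video

print(f"{sampled_videos:,.0f} videos")   # 200,000,000 videos
print(f"{total_minutes:,.0f} minutes")   # 2,300,000,000 minutes (~2.3 billion)
```

In other words, the 2.3-billion-minute figure is consistent with sampling 200 million videos at an average length of about 11.5 minutes each.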
The report claimed that CNBC spoke to a number of leading creators and IP professionals, and found that none of them were aware, or had been informed by YouTube, of the possibility that their content was being used to train Google's AI models.

Why does it matter
YouTube's use of user-uploaded videos to train AI has raised concerns, especially after Google unveiled its powerful Veo 3 video generator. The tool can create fully AI-generated cinematic scenes, including visuals and audio. With around 20 million videos uploaded to YouTube daily by creators and media companies, some fear their content is being used to build technology that might one day rival or replace them. CNBC cited experts as saying that even if Veo 3's results don't directly copy existing content, the AI-generated output can power commercial products that may rival the very creators whose work helped train it, without their permission, credit, or payment. This no-way-out trap begins as soon as a creator uploads a video to YouTube: by doing so, the person grants YouTube a broad licence to the content.

What does the past record show
According to The New York Times, Google has reportedly transcribed YouTube videos to train its AI models. Mashable India points out that this practice raises legal concerns, as it may infringe on creators' copyrights. The use of online content for AI training has already led to lawsuits related to licensing and intellectual property. Other players like Meta and OpenAI have also faced heat for using intellectual property to train their AI models without consent from creators or authors.


India Today
4 hours ago
- Entertainment
Midjourney launches V1 AI video generation model right after Disney accuses it of plagiarism
Midjourney, the AI startup famous for its surreal image-generation tools, is making a bold leap into video. Recently, the company unveiled V1, its long-awaited video-generation model that promises to breathe life into static images. It's a big move for Midjourney, as it throws the company into direct competition with other big hitters like OpenAI, Runway, Adobe and Google.

V1 is designed as an image-to-video model, allowing users to transform either their own uploaded pictures or Midjourney's AI-generated images into short five-second video clips. Like its sibling image models, V1 is only accessible via Discord for now, and it is web-only at launch.

And it's not just videos Midjourney has in its sights. In a blog post, CEO David Holz set out some pretty ambitious goals for the company's AI, saying V1 is just the next stepping stone toward real-time "open-world simulations." The company also revealed plans to branch into 3D renderings and real-time generative models down the line.

While Midjourney's image tools have long appealed to artists and designers, the company has taken a slightly different tack with video. Many of its rivals, such as Sora by OpenAI, Runway's Gen-4, Firefly by Adobe and Veo 3 by Google, are going after commercial filmmakers and studios with highly controllable AI tools. Midjourney, however, is positioning itself as more of a creative playground for those looking for something a little more experimental.

V1 AI video generation model: Pricing and availability
Midjourney is pushing ahead, but video generation doesn't come cheap. V1 consumes eight times more credits per clip than Midjourney's still-image tools, so subscribers will burn through their monthly allowances far faster.
At launch, Basic subscribers, who pay $10 (around Rs 866) per month, can access V1, but unlimited video generation is limited to the $60 (around Rs 5,200) Pro and $120 (approximately Rs 10,400) Mega plans, and only in "Relax" mode, which produces videos more slowly. However, the company says it will review this pricing structure in the coming weeks as it gathers feedback from users.

As for the tools themselves, V1 offers a surprising level of control. You can opt for an "auto" mode that lets the AI generate motion for you, or a "manual" mode that accepts text prompts to dictate exactly how you want your animation to move. Plus, there are settings for adjusting movement intensity: "low motion" for subtle shifts, or "high motion" for more energetic effects. Clips last five seconds by default but can be extended up to 21 seconds in four-second increments.

Disney accuses Midjourney of plagiarism
That said, Midjourney is entering the video arena under a legal cloud. Only a week ago, Disney and Universal sued the startup over its image-generation models, claiming they can produce unauthorised versions of famous characters like Darth Vader and Homer Simpson. It's part of a growing backlash across Hollywood, as studios grow nervous about AI tools replacing human creatives and AI companies face questions about training data and copyright.

Early examples of V1's output suggest Midjourney is sticking to its trademark surreal aesthetic rather than aiming for hyper-realism, the sort of style that fans of the platform have come to love. The initial reaction from users has been mostly positive so far, though it's still too early to tell how V1 will stack up against more established players like Runway and Sora.

