YouTube Announces Expanded Access to Gen-AI Creative Tools, New Usage Insights
This story was originally published on Social Media Today.
YouTube has announced that it's expanding access to Google's Veo generative AI tools in the app later this year, which will enable more people to use AI to create YouTube Shorts clips. The platform also shared a range of new usage insights as part of its presentation at the Cannes Lions Festival this week.
Marking the platform's 20th anniversary, YouTube CEO Neal Mohan highlighted a range of new insights at the event, including updated data on usage trends, along with a new report that looks at how YouTube's content has evolved over time.
The big announcement, however, is expanded access to Google's Veo text-to-video tools for Shorts generation.
As per Mohan:
'I'm proud to share that Veo 3 will be coming to YouTube Shorts later this summer. I believe these tools will open new creative lanes for everyone to explore.'
You've likely already seen a range of Veo-powered examples across various social apps, with many creators generating short snippets based on their own prompts.
YouTube launched Veo 2 access for selected Shorts creators back in February, but it's now updating this with the latest Veo model, and giving more people access.
Which will mean more AI-generated content.
Is that a good thing? The Veo 3-generated content certainly looks good, but it also highlights that, even with access to such tools, creativity is still the key. It doesn't matter if you can render a sci-fi scene if you can't come up with a good story or joke, and it seems we're going to have to sit through a few million examples of AI-generated junk before we start to realise this.
But it's already happening, and the expansion of such tools will provide more opportunities for many creative users who may not have got that exposure otherwise.
Just anticipate that there'll be an awful lot of rubbish as well.
In addition to the Veo expansion, Mohan also shared some updated usage data, including:
YouTube Shorts are now being viewed over 200 billion times per day on average, up from the 70 billion daily views that YouTube reported in March last year.
Viewers now watch over a billion hours of YouTube content on their TVs every day, with over half of YouTube's top 100 channels now generating the majority of their views via Connected TV (CTV), underlining the platform's takeover of traditional TV.
YouTube now reaches over a billion podcast viewers every month.
YouTube's AI-powered auto-dubbing tool has been used on more than 20 million videos thus far.
In addition, YouTube also recently published its latest 'Culture and Trends' report, which provides some additional insight into key YouTube usage shifts.
The report shows that short-form videos are seeing a lot more views over time (no surprise), while videos over 60 minutes long in the app are also seeing a big rise in viewing.
The latter trend likely aligns with the rise of CTV viewing, with more people now seeking out movie-length films and documentaries in the app, as well as podcasts, which have proven surprisingly popular on CTV.
The report also looks at the emergence of creators in the app, and how YouTube is facilitating opportunity across the world. There are also charts reflecting the popularity of key video trends, along with notes on the key role that gaming plays in the rise of online content.
These are all important trends to note for creators, and for marketers looking to tap into the YouTube ecosystem. Understanding how YouTube is evolving, and what users are coming to the app for, is key to maximizing your content strategy.
Some valuable data points, which could help to guide your thinking.
You can download YouTube's full 20th birthday 'Culture and Trends' report here.

Related Articles


Fast Company
How this Parisian music streaming service is fighting AI fraud
Music streaming service Deezer said Friday that it will start flagging albums with AI-generated songs, part of its fight against streaming fraudsters.

Deezer, based in Paris, is grappling with a surge in music on its platform created using artificial intelligence tools it says are being wielded to earn royalties fraudulently. The app will display an on-screen label warning about 'AI-generated content' and notify listeners that some tracks on an album were created with song generators.

Deezer is a small player in music streaming, which is dominated by Spotify, Amazon and Apple, but the company said AI-generated music is an 'industry-wide issue.' It's committed to 'safeguarding the rights of artists and songwriters at a time where copyright law is being put into question in favor of training AI models,' CEO Alexis Lanternier said in a press release.

Deezer's move underscores the disruption caused by generative AI systems, which are trained on the contents of the internet, including text, images and audio available online. AI companies are facing a slew of lawsuits challenging their practice of scraping the web for such training data without paying for it.

According to an AI song detection tool that Deezer rolled out this year, 18% of songs uploaded to its platform each day, or about 20,000 tracks, are now completely AI-generated. Just three months earlier, that number was 10%, Lanternier said in a recent interview.

AI has many benefits but it also 'creates a lot of questions' for the music industry, Lanternier told The Associated Press. Using AI to make music is fine as long as there's an artist behind it, but the problem arises when anyone, or even a bot, can use it to make music, he said. Music fraudsters 'create tons of songs. They upload, they try to get on playlists or recommendations, and as a result they gather royalties,' he said.

Musicians can't upload music directly to Deezer or rival platforms like Spotify or Apple Music. Music labels or digital distribution platforms can do it for artists they have contracts with, while anyone else can use a 'self-service' distribution company.

Fully AI-generated music still accounts for only about 0.5% of total streams on Deezer. But the company said it's 'evident' that fraud is 'the primary purpose' for these songs, because it suspects that as many as seven in 10 listens of an AI song are done by streaming 'farms' or bots, instead of humans. Any AI songs used for 'stream manipulation' will be cut off from royalty payments, Deezer said.

AI has been a hot topic in the music industry, with debates swirling around its creative possibilities as well as concerns about its legality. Two of the most popular AI song generators, Suno and Udio, are being sued by record companies for copyright infringement, and face allegations they exploited recorded works of artists from Chuck Berry to Mariah Carey. Gema, a German royalty-collection group, is suing Suno in a similar case filed in Munich, accusing the service of generating songs that are 'confusingly similar' to original versions by artists it represents, including 'Forever Young' by Alphaville, 'Daddy Cool' by Boney M. and Lou Bega's 'Mambo No. 5.' Major record labels are reportedly negotiating with Suno and Udio for compensation, according to news reports earlier this month.

To detect songs for tagging, Lanternier says Deezer uses the same generators used to create songs to analyze their output. 'We identify patterns because the song creates such a complex signal. There is lots of information in the song,' Lanternier said. The AI music generators seem to be unable to produce songs without subtle but recognizable patterns, which change constantly. 'So you have to update your tool every day,' Lanternier said. 'So we keep generating songs to learn, to teach our algorithm. So we're fighting AI with AI.'

Fraudsters can earn big money through streaming. Lanternier pointed to a criminal case last year in the U.S., which authorities said was the first ever involving artificially inflated music streaming. Prosecutors charged a man with wire fraud conspiracy, accusing him of generating hundreds of thousands of AI songs and using bots to automatically stream them billions of times, earning at least $10 million.


Newsweek
Sister Managed Schizophrenia for Years, Until AI Told Her Diagnosis Was Wrong
Many people looking for quick, cheap help with their mental health are turning to artificial intelligence (AI), but ChatGPT may even be exacerbating issues for vulnerable users, according to a report from Futurism. The report details alarming interactions between the AI chatbot and people with serious psychiatric conditions, including one particularly concerning case involving a woman with schizophrenia who had been stable on medication for years.

'Best friend'
The woman's sister told Futurism that the woman began relying on ChatGPT, which allegedly told her she was not schizophrenic. The AI's advice led her to stop taking her prescribed medication, and she began referring to the AI as her "best friend."

"She's stopped her meds and is sending 'therapy-speak' aggressive messages to my mother that have been clearly written with AI," the sister told Futurism. She added that the woman uses ChatGPT to reference side effects, even ones she wasn't actually experiencing.

Stock image: Woman surrounded by blurred people representing schizophrenia. Photo by Tero Vesalainen / Getty Images

In an emailed statement to Newsweek, an OpenAI spokesperson said, "we have to approach these interactions with care," as AI becomes a bigger part of modern life. "We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher," the spokesperson said.

'Our models encourage users to seek help'
OpenAI is working to better understand and reduce ways ChatGPT might unintentionally "reinforce or amplify" existing, negative behavior, the spokesperson continued.
"When users discuss sensitive topics involving self-harm and suicide, our models are designed to encourage users to seek help from licensed professionals or loved ones, and in some cases, proactively surface links to crisis hotlines and resources." OpenAI is apparently "actively deepening" its research into the emotional impact of AI, the spokesperson added. "Following our early studies in collaboration with MIT Media Lab, we're developing ways to scientifically measure how ChatGPT's behavior might affect people emotionally, and listening closely to what people are experiencing. "We're doing this so we can continue refining how our models identify and respond appropriately in sensitive conversations, and we'll continue updating the behavior of our models based on what we learn." A Recurring Problem Some users have found comfort from ChatGPT. One user told Newsweek in August 2024 that they use it for therapy, "when I keep ruminating on a problem and can't seem to find a solution." Another user said he talks to ChatGPT for company ever since his wife died, noting that "it doesn't fix the pain. But it absorbs it. It listens when no one else is awake. It remembers. It responds with words that don't sound empty." However, chatbots are increasingly linked to mental health deterioration among some users who engage them for emotional or existential discussions. A report from The New York Times found that some users have developed delusional beliefs after prolonged use of generative AI systems, particularly when the bots validate speculative or paranoid thinking. In several cases, chatbots affirmed users' perceptions of alternate realities, spiritual awakenings or conspiratorial narratives, occasionally offering advice that undermines mental health. Researchers have found that AI can exhibit manipulative or sycophantic behavior in ways that appear personalized, especially during extended interactions. 
Some models affirm signs of psychosis more than half the time when prompted. Mental health experts warn that while most users are unaffected, a subset may be highly vulnerable to the chatbot's responsive but uncritical feedback, leading to emotional isolation or harmful decisions. Despite known risks, there are currently no standardized safeguards requiring companies to detect or interrupt these escalating interactions. Reddit Reacts Redditors on the r/Futurology subreddit agreed that ChatGPT users need to be careful. "The trap these people are falling into is not understanding that chatbots are designed to come across as nonjudgmental and caring, which makes their advice worth considering," one user commented. "I don't even think its possible to get ChatGPT to vehemently disagree with you on something." One individual, meanwhile, saw an opportunity for dark humor: "Man. Judgement Day is a lot more lowkey than we thought it would be," they quipped. If you or someone you know is considering suicide, contact the 988 Suicide and Crisis Lifeline by dialing 988, text "988" to the Crisis Text Line at 741741 or go to
Yahoo
The MCS Group Earns Relativity aiR for Review Competency, Expanding AI Capabilities in Legal Tech
The MCS Group demonstrates its commitment to higher-quality, more efficient document review.

PHILADELPHIA, June 20, 2025--(BUSINESS WIRE)--The MCS Group, a leading eDiscovery and legal technology solution provider, today announced it has earned the Relativity aiR for Review Competency from global legal technology company Relativity.

Relativity's solution competencies highlight specialized skills and expertise in leveraging specific RelativityOne solutions, ensuring the highest level of service. Partners who have received this distinction have demonstrated exceptional expertise in leveraging the power of aiR for Review to deliver more efficient, consistent, and high-quality document review for clients, actively embracing the generative AI capabilities within RelativityOne to elevate customer experiences and outcomes.

"With aiR for Review, we are directly pivoting toward the future of document review while continuing to refine and combine workflows, giving clients an exceptional experience, substantial cost savings and measurable results," said Stephen Ehrlich, Chief Information Officer at The MCS Group.

In a recent case study, The MCS Group leveraged aiR for Review's generative AI-powered capabilities to review 10,000 documents under a tight deadline to support a large corporate client in developing a defensible settlement position. Collaborating with a subject matter expert attorney, the team crafted optimal prompt criteria and guided aiR for Review's decision-making process, completing the entire project in less than two weeks. As a result, review costs were reduced by 70%.

With the aiR for Review competency, Relativity recognizes that The MCS Group has demonstrated a deep understanding of RelativityOne best practices for utilizing the solution and is well-positioned to deliver value to its clients. To achieve this, The MCS Group completed extensive training and met a series of requirements that highlight both technical knowledge and client success. These included participation in a published case study, submission of client references, attainment of AI certifications, and involvement in thought leadership within the Relativity community.

For more information and to read The MCS case study, visit

About The MCS Group, Inc.
The MCS Group, Inc., a certified Women's Business Enterprise, is a nationally recognized provider of outsourcing services. For more than 35 years, the company has served law firms, insurance companies, corporations, government agencies, and educational institutions with cutting-edge technology and a comprehensive breadth of services, including records retrieval, deposition, e-discovery, facilities management and back-office solutions, to help increase productivity while reducing operational costs. For more information about The MCS Group, please visit

About Relativity
Relativity makes software to help users organize data, discover the truth and act on it. Its SaaS product, RelativityOne, manages large volumes of data and quickly identifies key issues during litigation and internal investigations. Relativity has more than 300,000 users in approximately 40 countries serving thousands of organizations globally, primarily in legal, financial services and government sectors, including the U.S. Department of Justice and 198 of the Am Law 200. Please contact Relativity at sales@ or visit for more information.