Google trains Veo 3 AI video generation model using YouTube content: Report
Google has reportedly been using YouTube content to train its artificial intelligence (AI) models, including Gemini and the Veo 3 video and audio generator. According to a report by CNBC, a YouTube spokesperson confirmed that Google relies on its bank of YouTube videos to train its AI models. However, the spokesperson added that Google does not use every video on YouTube, only a subset of its videos for training purposes.
The report further claims that many creators whose videos might have been used in this manner remain unaware that their content has been used without their consent or any compensation.
Creators were never notified?
As per YouTube, this information has been conveyed to creators previously. However, according to experts who spoke to CNBC, it is not widely understood by creators and media organisations that the US technology giant trains its AI models on its YouTube video library.
In September last year, YouTube stated in a blog post that content uploaded on the platform could be used to 'improve the product experience … including through machine learning and AI applications.' A major disadvantage here is that creators who have uploaded videos to YouTube have no way of opting out of letting Google use them to train its AI models, an option that competitors such as Meta offer. Notably, YouTube does allow creators to opt out of sharing their content with third-party companies for training their AI models.
As per YouTube, there are around 20 billion videos on the platform, and it is unclear at the moment how many of them are being used to train Google's AI models. CNBC cited experts as saying that even if Google used just one per cent of those videos, it would amount to around 2.3 billion minutes of content, roughly 40 times more training data than that used by competing AI models.
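The scale claim above can be sanity-checked with back-of-envelope arithmetic. The sketch below uses only the figures attributed to CNBC (20 billion videos, a one per cent sample, 2.3 billion minutes) and solves for the average video length those numbers would imply; the variable names are illustrative, not from the report:

```python
# Back-of-envelope check of the figures CNBC cites. The only unknown
# is the average video length, which we derive from the reported numbers.
TOTAL_VIDEOS = 20_000_000_000        # ~20 billion videos on YouTube
SAMPLE_FRACTION = 0.01               # the "one per cent" scenario
REPORTED_MINUTES = 2_300_000_000     # ~2.3 billion minutes of content

sampled_videos = TOTAL_VIDEOS * SAMPLE_FRACTION
avg_minutes_per_video = REPORTED_MINUTES / sampled_videos

print(f"Videos in a 1% sample: {sampled_videos:,.0f}")
print(f"Implied average video length: {avg_minutes_per_video:.1f} minutes")
```

A one per cent sample works out to 200 million videos, so the 2.3-billion-minute figure implies an average length of about 11.5 minutes per video, which is consistent with the report's numbers.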
The report claimed that CNBC spoke to a number of leading creators and IP professionals and found that none of them was aware of, or had been informed by YouTube about, the possibility of their content being used to train Google's AI models.
Why does it matter?
YouTube's use of user-uploaded videos to train AI has raised concerns, especially after Google unveiled its powerful Veo 3 video generator. The tool can create fully AI-generated cinematic scenes, including visuals and audio. With around 20 million videos uploaded to YouTube daily by creators and media companies, some fear their content is being used to build technology that might one day rival or replace them.
CNBC cited experts as saying that even if Veo 3's results don't directly copy existing content, the AI-generated output can power commercial products that may rival the very creators whose work helped train it, without their permission, credit, or payment.
This no-way-out trap begins as soon as a creator uploads a video to YouTube: by doing so, the person grants YouTube a broad licence to the content.
What does the past record show?
According to The New York Times, Google has reportedly transcribed YouTube videos to train its AI models. Mashable India points out that this practice raises legal concerns, as it may infringe on creators' copyrights.
The use of online content for AI training has already led to lawsuits over licensing and intellectual property. Other players, such as Meta and OpenAI, have also faced heat for using intellectual property to train their AI models without the consent of creators or authors.

Related Articles


News18
Google's Gemini Spent 800 Hours Beating Pokémon And Then It Panicked And Failed
Google's newest AI chatbot struggles to stay calm while playing a game designed for children. Artificial intelligence (AI) has come a long way, but even advanced systems can struggle sometimes. According to a report from Google DeepMind, its top AI model, Gemini 2.5 Pro, had a tough time playing the classic video game Pokémon Blue, a game that many kids find easy. The AI reportedly showed signs of confusion and stress during the game.

The results came from a Twitch channel called Gemini_Plays_Pokemon, where an independent engineer named Joel Zhang tested Gemini. Although Gemini is known for its strong reasoning and coding skills, the way it behaved during the game revealed some surprising and unusual reactions.

The DeepMind team reported that Gemini started showing signs of what it called "Agent Panic". In its findings, the team explained: "Throughout the playthrough, Gemini 2.5 Pro gets into various situations which cause the model to simulate 'panic'. For example, when the Pokémon in the party's health or power points are low, the model's thoughts repeatedly reiterate the need to heal the party immediately or escape the current dungeon."

This behaviour caught the attention of viewers on Twitch, who reportedly started recognising the moments when the AI seemed to be panicking. DeepMind pointed out: "This behaviour has occurred in enough separate instances that the members of the Twitch chat have actively noticed when it is occurring."

Even though AI doesn't feel stress or emotions the way humans do, the way Gemini reacted in tense moments looked very similar to how people respond under pressure: by making quick, sometimes poor or inefficient decisions. In its first full attempt at Pokémon Blue, Gemini took a total of 813 hours to complete the game. After Joel Zhang made some adjustments, the AI managed to finish a second run in 406.5 hours.

However, even with those changes, the time it took was still very slow, especially compared with how quickly a child could beat the same game. People on social media didn't hold back from poking fun at the AI's nervous playing style. One viewer commented, "If you read its thoughts while it's reasoning, it seems to panic anytime you slightly change how something is worded." Another user joked by combining "LLM" (large language model) with "anxiety", calling it "LLANXIETY".

Interestingly, this news comes just a few weeks after Apple shared a study claiming that most AI models don't actually "reason" in the way people think. According to the study, these models mostly depend on spotting patterns, and they often struggle or fail when a task is changed slightly or made more difficult.


Time of India
Apple executives held internal talks about buying Perplexity: Report
Apple executives have held internal talks about potentially bidding for artificial intelligence startup Perplexity, Bloomberg News reported on Friday, citing people with knowledge of the matter.

The discussions are at an early stage and may not lead to an offer, the report said, adding that the tech behemoth's executives have not discussed a bid with Perplexity's management. "We have no knowledge of any current or future M&A discussions involving Perplexity," Perplexity said in response to a Reuters request for comment. Apple did not immediately respond to a Reuters request for comment.

Big tech companies are doubling down on investments to enhance AI capabilities and support growing demand for AI-powered services to maintain competitive leadership in the rapidly evolving tech landscape. Bloomberg News also reported on Friday that Meta Platforms tried to buy Perplexity earlier this year. Meta announced a $14.8 billion investment in Scale AI last week and hired Scale AI CEO Alexandr Wang to lead its new superintelligence unit.

Adrian Perica, Apple's head of mergers and acquisitions, has weighed the idea with services chief Eddy Cue and top AI decision-makers, as per the report. The iPhone maker reportedly plans to integrate AI-driven search capabilities, such as Perplexity AI, into its Safari browser, potentially moving away from its longstanding partnership with Alphabet's Google. Barring Google from paying companies to make it their default search engine is one of the remedies proposed by the US Department of Justice to curb its dominance in online search.

While traditional search engines such as Google still dominate global market share, AI-powered search options including Perplexity and ChatGPT are gaining prominence and seeing rising user adoption, especially among younger generations. Perplexity recently completed a funding round that valued it at $14 billion, Bloomberg News reported. A deal close to that figure would be Apple's largest acquisition so far.

The Nvidia-backed startup provides AI search tools that deliver information summaries to users, similar to OpenAI's ChatGPT and Google's Gemini.


Time of India
Sebi mulls guiding principles for responsible usage of AI, ML in securities markets
Sebi on Friday proposed guiding principles for the responsible usage of Artificial Intelligence (AI) and Machine Learning (ML) applications in securities markets to safeguard investors and market integrity. The regulator has also proposed that a "regulatory lite" framework may be adopted for usage of AI/ML in the securities market for any purpose other than business operations that may directly impact customers.

The proposed "guiding principles are intended to optimise benefits and minimise potential risks associated with integration of AI/ML-based applications in securities markets to safeguard investor protection, market integrity, and financial stability," Sebi said in its consultation paper.

At present, AI/ML is being used by market participants mainly for advisory and support services, risk management, client identification and monitoring, surveillance, pattern recognition, internal compliance purposes and cyber security. "While AI/ML has the potential to improve productivity, efficiency and outcome, it is also important to manage these systems responsibly as their usage creates or amplifies certain risks which could have an impact on the efficiency of financial markets and may result in adverse impact to investors," Sebi said.

Accordingly, Sebi proposed high-level principles to guide market participants in putting reasonable procedures and control systems in place for the supervision and governance of AI/ML applications or tools. The proposed guiding principles were suggested by a Sebi-constituted working group after studying existing AI/ML guidelines in India as well as globally.

As part of the proposal, the working group suggested that market participants using AI/ML models should have an internal team with adequate skills and experience to monitor the performance, efficacy and security of the algorithms deployed throughout their lifecycle, as well as to maintain the auditability and interpretability of such models. Furthermore, the team should establish procedures for exception and error handling related to AI/ML-based systems, along with fallback plans so that, if an AI-based application fails due to technical issues or an unexpected disruption, the relevant function is carried out through an alternative process.

It has also been proposed that market participants using AI/ML models for business operations that may directly impact their customers, such as selection of trading algorithms, asset or portfolio management, and advisory and support services, should disclose such usage to the respective customers to foster transparency and accountability.

Market participants should adequately test and monitor AI/ML-based models to validate their results on a continuous basis. Further, it has been proposed that testing be conducted, prior to deployment, in an environment segregated from the live environment to ensure that AI/ML models behave as expected in both stressed and unstressed market conditions. Market participants should also maintain proper documentation of all models and store input and output data for at least five years.

"Since the AI/ML systems are dependent on collection and processing of data, market participants should have a clear policy for data security, cyber security and data privacy for the usage of AI/ML based models," Sebi said, adding that information about technical glitches and data breaches shall be communicated to it and other relevant authorities.

The Securities and Exchange Board of India (Sebi) has sought public comments on the proposals till July 11.