
Ninja warns AI deepfakes could destroy live streaming within a year
Tyler 'Ninja' Blevins, one of the most recognizable names in gaming and live streaming, has issued a strong warning about the future of the industry in light of rapid advances in artificial intelligence.
The Twitch veteran believes AI-generated deepfake videos will soon pose an existential threat to streamers.
Speaking during a recent broadcast, Ninja said AI technology such as Google's Veo 3 is progressing so quickly that it could make streaming 'impossible' within a year if proper safeguards aren't implemented. His main concern centers on the rise of convincing AI-generated videos showing streamers engaging in fake gameplay or making offensive statements.
'If there's no watermark that's unremovable, streaming is going to be over,' Ninja said. 'There will be AI videos of me saying unhinged things, and people will believe them.'
Ninja also highlighted the potential for AI to be used maliciously by those trying to fabricate scandals or spread fake drama. The ability to prompt AI to create such content could lead to reputational damage for many creators.
His comments follow a viral video in which Veo 3 generated a fake Fortnite streamer celebrating a win. While not yet flawless, the video demonstrated just how close AI is to replicating real-time content.
The broader implications are already unfolding. Earlier in 2025, an Australian radio station used an AI-generated host for months without public awareness, highlighting how easy it is to deceive audiences.
As AI continues to blur the lines between real and fake, Ninja's warning emphasizes the urgent need for mandatory content labeling and watermarking tools to protect the integrity of live streaming platforms.

Related Articles


Express Tribune
a day ago
Apple mulls bid for AI startup Perplexity in search shake-up: Bloomberg report
A man walks past an Apple logo outside an Apple store in Aix-en-Provence, France, January 15, 2025. Photo: REUTERS

Apple executives have held internal talks about potentially bidding for artificial intelligence startup Perplexity, seeking to address the company's need for more AI talent and technology, Bloomberg News reported on Friday, citing people with knowledge of the matter. The discussions are at an early stage and may not lead to an offer, the report said, adding that the tech behemoth's executives have not discussed a bid with Perplexity's management.

"We have no knowledge of any current or future M&A discussions involving Perplexity," the startup said in response to a Reuters request for comment. Apple did not immediately respond to a request for comment.

Big tech companies are doubling down on investments to enhance AI capabilities and support growing demand for AI-powered services as they compete for leadership in the rapidly evolving tech landscape. Bloomberg News also reported on Friday that Meta Platforms tried to buy Perplexity earlier this year. Meta announced a $14.8 billion investment in Scale AI last week and hired Scale AI CEO Alexandr Wang to lead its new superintelligence unit.

Adrian Perica, Apple's head of mergers and acquisitions, has weighed the idea with services chief Eddy Cue and top AI decision-makers, according to the report. The iPhone maker reportedly plans to integrate AI-driven search capabilities, such as Perplexity's, into its Safari browser, potentially moving away from its longstanding partnership with Alphabet's Google. Barring Google from paying companies to make it their default search engine is one of the remedies proposed by the US Department of Justice to address its dominance in online search.

While traditional search engines such as Google still dominate global market share, AI-powered search options including Perplexity and ChatGPT are gaining prominence and seeing rising user adoption, especially among younger generations. Perplexity recently completed a funding round that valued it at $14 billion, Bloomberg News reported; a deal at close to that price would be Apple's largest acquisition to date. The Nvidia-backed startup provides AI search tools that deliver information summaries to users, similar to OpenAI's ChatGPT and Google's Gemini.


Express Tribune
a day ago
Teen social media ban clears first hurdle in Australia
Some age-checking applications collect too much data and no product works 100% of the time, but software can be used to enforce a teenage social media ban in Australia, the head of the world's biggest trial of the technology said on Friday.

The view from the government-commissioned Age Assurance Technology Trial, which involved more than 1,000 Australian school students and hundreds of adults, is a boost to the country's plan to keep under-16s off social media. From December, in a world-first ban, companies like Facebook and Instagram owner Meta, Snapchat and TikTok must prove they are taking reasonable steps to block young people from their platforms or face a fine of up to A$49.5 million ($32 million).

Since the Australian government announced the legislation last year, child protection advocates, tech industry groups and children themselves have questioned whether the ban can be enforced, given workarounds like virtual private networks (VPNs), which obscure an internet user's location.

"Age assurance can be done in Australia privately, efficiently and effectively," said Tony Allen, CEO of the Age Check Certification Scheme, the UK-based organisation overseeing the Australian trial. The trial found "no significant tech barriers" to rolling out a software-based scheme in Australia, although there was "no one-size-fits-all solution, and no solution that worked perfectly in all deployments," Allen added in an online presentation.

Allen noted that some age-assurance software firms "don't really know at this stage what data they may need to be able to support law enforcement and regulators in the future. There's a risk there that they could be inadvertently over-collecting information that wouldn't be used or needed."

Organisers of the trial, which concluded earlier this month, gave no data findings and offered only a broad overview that did not name individual products. They will deliver a report to the government next month, which officials have said will inform an industry consultation ahead of the December deadline.

A spokesperson for the office of the eSafety Commissioner, which will advise the government on how to implement the ban, said the preliminary findings were a "useful indication of the likely outcomes from the trial". "We are pleased to see the trial suggests that age assurance technologies, when deployed the right way and likely in conjunction with other techniques and methods, can be private, robust and effective," the spokesperson said.

The Australian ban is being watched closely around the world, with several governments exploring ways to limit children's exposure to social media.


Express Tribune
2 days ago
MIT study suggests using AI tools like ChatGPT may be making you dumber
A new study from the Massachusetts Institute of Technology (MIT) suggests that frequent use of generative artificial intelligence (GenAI) tools, such as large language models (LLMs) like ChatGPT, may suppress cognitive engagement and memory retention.

In the experiment, published by MIT, researchers monitored the brain activity of participants as they wrote essays using different resources: one group relied on LLMs, another used internet search engines, and a third worked without any digital tools. The results revealed a consistent pattern: participants who used GenAI tools displayed significantly reduced neural connectivity and recall compared to those who relied on their own cognitive abilities.

Brain scans taken during the experiment showed that LLM users exhibited weaker connections between brain regions associated with critical thinking and memory. While their essays scored well in both human and AI evaluations, often praised for their coherence and alignment with the given prompt, the writing was also described as formulaic and less original. Notably, those who used LLMs struggled to quote from or recall their own writing in subsequent sessions. Their brain activity reportedly "reset" to a novice state regarding the essay topics, a finding that contrasts sharply with participants in the "brain-only" group, who retained stronger memory and demonstrated deeper cognitive engagement throughout.

Participants who used search engines showed intermediate neural activity. Though their writing lacked variety and often reflected similar phrasing, they exhibited better memory retention than the LLM group, suggesting that the process of searching and evaluating sources provided more mental stimulation.

In a later phase of the experiment, the groups were shuffled. Participants who had initially used GenAI tools showed improved neural connectivity when writing without digital aids, an encouraging sign that cognitive function can rebound when AI dependence is reduced.

The findings could carry important implications for education and the workplace. With GenAI tools increasingly integrated into school assignments and professional tasks, concerns about cognitive atrophy are rising. Some students now generate entire essays with tools like ChatGPT, while educators rely on similar software to grade and detect AI-generated work. The study suggests that such widespread use of digital assistance, even when indirect, may hinder mental development and reduce long-term memory retention.

As schools and organisations continue to navigate the integration of AI tools, the MIT research underscores the importance of balancing convenience with cognitive engagement. Researchers suggest that while GenAI can be a useful aid, overreliance could have unintended consequences for human memory and creativity.