Latest news with #Gen-4


India Today
11 hours ago
- Entertainment
- India Today
Midjourney launches V1 AI video generation model right after Disney accuses it of plagiarism
Midjourney, the AI startup famous for its surreal image-generation tools, is making a bold leap into video. The company has unveiled V1, its long-awaited video-generation model, which promises to breathe life into static images. It's a big move for Midjourney, throwing the company into direct competition with other big-hitters like OpenAI, Runway, Adobe and Google. V1 is designed as an image-to-video model, allowing users to transform either their own uploaded pictures or Midjourney's AI-generated images into short five-second video clips. Unlike the company's early image models, which were accessed through Discord, V1 is web-only at launch.

And it's not just videos Midjourney has in its sights. In a blog post, CEO David Holz set out some ambitious goals for the company's AI, calling V1 just the next stepping stone toward real-time 'open-world simulations.' The company also revealed plans to branch into 3D renderings and real-time generative models down the line. While Midjourney's image tools have long appealed to artists and designers, the company has taken a slightly different tack with video. Many of its rivals, such as Sora by OpenAI, Runway's Gen-4, Firefly by Adobe and Veo 3 by Google, are going after commercial filmmakers and studios with highly controllable AI tools. Midjourney, however, is positioning itself more as a creative playground.

V1 AI video generation model: Pricing and availability

Video generation doesn't come cheap, though. V1 consumes eight times more credits per clip than Midjourney's still-image tools, so subscribers will burn through their monthly allowances far faster.
At launch, Basic subscribers, who pay $10 (around Rs 866) per month, can access V1, but unlimited video generation is limited to the $60 (around Rs 5,200) Pro and $120 (approximately Rs 10,400) Mega plans, and only in 'Relax' mode, which produces videos more slowly. The company says it will review this pricing structure in the coming weeks as it gathers feedback from users.

As for the tools themselves, V1 offers a surprising level of control. You can opt for an 'auto' mode that lets the AI generate motion for you, or a 'manual' mode that accepts text prompts dictating exactly how you want your animation to move. There are also settings for adjusting movement intensity: 'low motion' for subtle shifts, or 'high motion' for more energetic effects. Clips last five seconds by default but can be extended up to 21 seconds in four-second increments.

Disney accuses Midjourney of plagiarism

That said, Midjourney is entering the video arena under a legal cloud. Only a week ago, Disney and Universal sued the startup over its image-generation models, claiming they can produce unauthorised versions of famous characters like Darth Vader and Homer Simpson. It's part of a growing backlash across Hollywood as studios grow nervous about AI tools replacing human creatives, and as AI companies face questions about training data and copyright.

Early examples of V1's output suggest Midjourney is sticking to its trademark surreal aesthetic rather than aiming for hyper-realism, the sort of style that fans of the platform have come to love. The initial reaction from users has been mostly positive so far, though it's still too early to tell how V1 will stack up against more established players like Runway and Sora.
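The generation settings the article describes (auto vs. manual motion, intensity, and the five-second default extendable in four-second steps to a 21-second ceiling) can be summarized as plain data. The field names below are invented for illustration; Midjourney exposes these controls through its web UI, not a public API:

```python
# Illustrative sketch of V1's settings as described in the article.
# All names here are hypothetical, not Midjourney's actual interface.

BASE_SECONDS = 5        # default clip length
EXTEND_STEP = 4         # each extension adds four seconds
MAX_SECONDS = 21        # stated ceiling
CREDIT_MULTIPLIER = 8   # a video clip costs ~8x the credits of a still image

def clip_length(extensions: int) -> int:
    """Clip length after a given number of four-second extensions."""
    length = BASE_SECONDS + EXTEND_STEP * extensions
    if length > MAX_SECONDS:
        raise ValueError(f"V1 clips cap out at {MAX_SECONDS} seconds")
    return length

video_job = {
    "source_image": "my_still.png",   # uploaded or Midjourney-generated
    "motion_mode": "manual",          # or "auto" to let the AI pick the motion
    "motion_prompt": "camera slowly pushes in as fog rolls past",
    "motion_intensity": "low",        # "low" for subtle shifts, "high" for energetic
    "duration_seconds": clip_length(extensions=4),  # 5 + 4*4 = 21, the maximum
}
```

The arithmetic also explains why 21 seconds is the cap: four is the most whole four-second extensions that fit on top of the five-second base.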


CNBC
10-06-2025
- Entertainment
- CNBC
50. Runway
Founders: Cristóbal Valenzuela (CEO), Anastasis Germanidis, Alejandro Matamala Ortiz
Launched: 2018
Headquarters: New York City
Funding: $545 million
Valuation: $3.5 billion (PitchBook)
Key technologies: Artificial intelligence, generative AI
Industry: Enterprise technology, media
Previous appearances on Disruptor 50 list: 0

Many in Hollywood and the broader creative community feel they are under attack from AI. It was only months after ChatGPT's introduction that the major Hollywood unions representing writers and actors both voted to go on strike. But generative AI research and media company Runway says the future of filmmaking, incorporating a wider range of views and stories, will require turning a generative AI lens on it. Runway's three founders, Cristóbal Valenzuela, Anastasis Germanidis and Alejandro Matamala Ortiz, met while studying for their master's in interactive telecommunications at New York University's Tisch School of the Arts. Combining their backgrounds in art and engineering, the three launched Runway the same year they graduated. "Runway is an invitation to artists, and others, to learn about and explore machine learning through more accessible tools. Machine learning is a complex field that will likely continue to impact our society for years to come, and we need more ways to give more people access," Valenzuela said in a statement. The company offers a variety of content-creation tools designed to be used by students and Hollywood directors alike. Just this May, the company released its newest tool, Gen-4, which lets users generate consistent characters, locations and styles throughout their photographs or videos. From a single uploaded reference image, Gen-4 lets users tweak lighting scenarios, recast a character, switch camera angles or even change locations.
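The single-reference workflow described above, one image in, a constrained set of scene edits out, can be pictured as a request like the following. This is a hypothetical sketch for illustration only; the field names do not reflect Runway's actual API:

```python
# Hypothetical illustration of Gen-4's reference-image workflow as described
# in the article. Names are invented; this is not Runway's API.

ALLOWED_EDITS = {"lighting", "recast_character", "camera_angle", "location"}

def make_gen4_request(reference_image: str, **edits) -> dict:
    """Build a generation request from one reference image plus scene edits."""
    unknown = set(edits) - ALLOWED_EDITS
    if unknown:
        raise ValueError(f"unsupported edit(s): {sorted(unknown)}")
    return {"reference": reference_image, "edits": edits}

request = make_gen4_request(
    "hero_shot.png",            # the single reference image
    lighting="golden hour",
    camera_angle="low angle",
    location="rooftop at night",
)
```

The design point is that the reference image, not a text prompt alone, anchors identity, which is what makes characters and styles stay consistent across shots.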
It's not just for movies: brands across industries use multimedia storytelling created with Gen-4, letting customers virtually try on clothing, creating new assets for video games, or helping people virtually design their homes. The AI video generation company's tools also include Act-One, which streamlines the traditional facial-animation process by working from a video of an actor's performance, preserving the actor's facial expressions and realistically transferring the performance to live-action or animated characters. Its Frames tool helps artists maintain a specific style with a simple set of written prompts, extending to subject, scene, lighting and color. Despite the time savings and creative assistance these tools offer, the backlash from creators over AI has been significant. One study commissioned by creator unions and advocacy groups found that 75% of industry executives whose divisions had used generative AI said it had already contributed to the elimination, reduction or consolidation of jobs. Yet Runway has received support from major names in the media industry across movies, television and music. Madonna's Celebration Tour used Runway's tools to generate visuals for the stage. Additionally, Runway's technology was used by editors on the Oscar-winning film "Everything Everywhere All at Once" and on Stephen Colbert's "The Late Show." Art programs at schools including Harvard University, New York University and the Rhode Island School of Design have also begun incorporating Runway into their design and film curricula. There's another problem Runway's founders say its AI can help solve, one the film industry has long failed at: accessibility. Roughly two out of 10 theatrical film directors are people of color, and even fewer are women, according to a study from the University of California, Los Angeles.
"I'm calling it Hollywood 2.0, where everyone is gonna be able to make the films and the blockbusters that only a handful of people were able to before," Valenzuela said in an interview with Variety.


Forbes
28-04-2025
- Entertainment
- Forbes
AI And Hollywood's Next Golden Age
When I asked Amit Jain, the CEO of Luma Labs, what he was building, he didn't say "a video generator." He said a 'universal imagination engine': a studio-grade system that can produce coherent, emotionally resonant scenes from nothing but a prompt. 'Directing becomes prompting,' he told me. 'We're trying to build tools that understand you like a really good DP would.' Luma's Ray 2 model, which builds on a lineage of NeRF-based rendering and physics-informed realism, can now produce short video clips with camera-aware motion and cinematic texture. The tools still aren't easy to control and lack consistency, but they are improving at an astonishing rate. Generative AI models can churn out elaborate special effects without thinking about it, literally. Generative video is becoming its own medium. Over the past six months, we've seen a wave of new models reach production quality at a remarkable pace, including Runway's Gen-4, OpenAI's Sora, Google's Veo, Luma's Ray 2, Pika 1.5, Kling, Higgsfield and Minimax, to name just a few. More models are emerging every month. Some excel at animation, others at live action. Adobe's Firefly and Moonvalley's Marey are notably trained only on licensed media. Runway has a deal with Lionsgate. Blockbuster director James Cameron, who recently joined the board of Stability AI, says AI won't eliminate jobs — but he knows better than most how long it takes, and how much it costs, to move atoms around in meatspace. The flight to AI will mirror the flight of physical production from Hollywood to cheaper locations, only this time, the locations aren't real. If you can't tell the difference, there is no difference.
According to Bogdan Nesvit, founder and CEO of Holywater — whose AI-native streaming apps My Muse and My Drama have over 5 million downloads — 'Ninety percent of our content will be AI-generated within two years.' Holywater is already using AI to produce short-form series for its own platform. Nesvit told me even his mother can no longer tell what's synthetic and what's filmed. Holywater's next move is a platform for TikTok stars, YouTube creators and short-form filmmakers who want to work directly with AI-generated characters and sets. Nesvit describes it as 'Hollywood in your room.' A director speaks aloud — "Wide shot. Morning light. Forest clearing." — and the scene appears: not as a sketch, but as a finished scene. Film director Rob Minkoff (The Lion King, Stuart Little, Forbidden Kingdom, Haunted Mansion) offered a similar perspective at a Chapman University event two weeks ago, saying, 'You'll be directing in a virtual space the way you would direct on a soundstage — just faster, cheaper, and without the physical limits.' At the Harvard XR Symposium, OpenAI researcher Jeff Bigham projected that 'personalized generative video will become standard in entertainment within three years.' At the same event, Meta CTO Andrew Bosworth described a future where creators describe scenes and characters in natural language and watch them unfold instantly.
Entertainment executives are well aware of what's coming. At a Bloomberg conference last fall, DreamWorks founder Jeffrey Katzenberg said, 'AI will cut the cost of making animated movies by 90%.' Sony Pictures CEO Tony Vinciquerra echoed the sentiment, noting, 'The biggest problem with making films today is the expense. We will be looking at ways to produce both films for theaters and television in a more efficient way, using AI primarily.' Soon creators will be able to prompt fully navigable 3D virtual worlds with realistic physics and intelligent NPCs. Startups like Cecilia Chen's Cybever and Fei-Fei Li's World Labs are already laying the groundwork. Chen told me recently on The AI/XR Podcast that Cybever's goal is 'to let creators generate entire virtual cities in real time, then walk through them and interact with AI agents as easily as building a deck in PowerPoint.' This brings us back to Luma's Jain. He's not just talking about movies; he's talking about filming dynamic avatars inside reactive 3D worlds — environments more like games than traditional films. Storytelling will evolve into storyliving, where games meet movies, like HBO's Westworld, and we might start questioning who and what is real. A new golden age of Hollywood is dawning, and as much as it pains me to say it, this golden age will come at enormous human cost. Hollywood isn't alone. There's no business that can't be helped by massively cutting costs. Meaning: humans. Fortunately for many, the old ways fade slowly — but in our accelerated age, no one can say what 'slowly' means anymore. The deeper threat to the entertainment industry remains the war for attention — and AI is arming the insurgents. Hollywood can be understood as an ecosystem of capital, technology, IP, distribution, and celebrity. Celebrities will remain central. Studios will continue to own beloved IPs.
But three of the studios' advantages — capital, production, and distribution — are under siege and will inevitably fade in a world reordered by AI. There's much more to explore. In the coming weeks, I'll publish a series of follow-up essays examining what this disruption could mean for Hollywood, gaming, big tech, and pop culture.


Forbes
15-04-2025
- Entertainment
- Forbes
Runway AI's Gen-4: How Can AI Montage Go Beyond Absurdity?
The recent release of Runway AI's Gen-4 has ignited both excitement and existential questions about the future of cinema and media in general. With the company now valued at $3 billion following a $308 million funding round led by the private equity firm General Atlantic and backed by industry heavyweights like Nvidia and SoftBank, AI's march into Hollywood appears unstoppable. The film industry, alongside all creative sectors from digital marketing to social media, stands at a technological crossroads. As artificial intelligence begins to reshape every aspect of visual storytelling and change the landscape of entertainment and digital commerce, we must assess its potential and pitfalls. Major production companies are rapidly adopting AI video tools. Fabula, the acclaimed studio behind the Oscar-winning A Fantastic Woman and the biopic Spencer, just announced a partnership with Runway AI to integrate AI across its production pipeline. Lionsgate signed a deal with Runway last fall to explore AI-powered filmmaking. Experimental directors like Harmony Korine have already debuted AI-assisted films at Venice last year. The broad applications of AI video are already impressive, from pre-visualizing scenes for Amazon's House of David to creating advertisements for Puma. Yet beneath these flashy demonstrations lies a more fundamental question: can AI-generated content evolve beyond technical spectacle to deliver truly meaningful stories? Runway's Gen-4 represents significant progress in several areas: character consistency, scene coherence, and visual fidelity.
An example Runway AI released shows two main characters staying consistent across shots of walking, running, petting a cow and lighting a match, all while maintaining the look of a steppe in gloomy weather. Yet these technical improvements don't address the core challenge: AI excels at generating individual moments but struggles with coherent, sustained storytelling. It can create a stunning shot of giraffes and lions roaming New York City, but can it make audiences care about a city turned into a zoo? AI video risks repeating the early mistakes of computer-generated imagery (CGI), prioritizing visual gimmicks over in-depth messages. As barriers to creative production and filmmaking disappear, we may face a flood of visually polished but emotionally hollow content, derivative works optimized for algorithmic efficiency, or compelling synthetic media that lacks a human touch. AI videos can wow first-time viewers, but can they make audiences want to watch them again? Can AI films ever produce classics that draw generations of moviegoers? Current multimodal AI research centers on innovations in film, media, and video games. A recent project spearheaded by researchers from Nvidia, Stanford and UCSD uses Test-Time Training layers in machine learning models to generate 60-second animations of Tom and Jerry. To achieve this, the team trained the model on 81 cartoons released between 1940 and 1948, adding up to about seven hours of footage. The model generates and connects multiple 3-second segments, each guided by storyboard annotations describing plot, setting, and camera movement. The technique highlights significant potential for scaling video production and animated-series creation. But the technology also reveals critical flaws that persist across AI video generators such as Sora, Kling, Runway, and Pika. One limitation is continuity errors.
For example, rooms, landscapes, and lighting shift unnaturally between 3-second segments. Physics defiance is another problem: in the Tom and Jerry AI videos mentioned earlier, Jerry's cheese floats, or morphs into different sizes and textures, at segment boundaries. A third issue is narrative disjointedness. Because algorithms must segment content in order to learn it effectively, interpret the prompt, and generate video accurately, AI models struggle to show logical scene progression. These traces of what I call AI montage also appear in Runway AI's videos: elephants walking across Times Square are abruptly followed by a cheetah running across a bridge, one scene set in cloudy weather and the next on a sunny day. The changes neither push the storyline forward nor convey any logic. The absurd, the fragmented, and the incongruous are what AI video generators are currently good at producing. For now, AI struggles to replicate the coherence of even a five-minute cartoon, let alone a feature film. Still, AI-generated video shows strength as a medium for critiquing both itself and the societies that produce it. Director Jia Zhangke's recent AI film, made using Kling AI, imagines a future run by robotic caretakers. The film provokes audiences to reflect on the crisis of aging populations, societal neglect, and the erosion of empathy in an era of breakneck competition, capitalism, and exploding automation. Jia's film shows robot companions taking the elderly for a walk or helping them harvest crops, in lieu of real sons and daughters. Such a theme is grounded in societal challenges today. The film critiques the substitution of human connection with automated machines and transactional relationships, and raises concerns over relentless stress and long hours in workplaces.
Just as Charlie Chaplin used industrialization-era tools to critique industrialization in Modern Times, today's filmmakers can use AI to critique the conditions of its own existence. Consider how synthetic news anchors might expose media manipulation, or how endlessly recombinable streaming content could comment on algorithmic culture. Just as science fiction critiques environmental disaster, human greed, and inequality, the most compelling AI films will likely be those that embrace their own artificiality to engage with real social problems. Rather than fearing obsolescence, filmmakers might focus more intensely on what machines cannot replicate: the nuance of human emotion, the complexities of human nature, the weight of lived experience, and the cultural resonance of authentic storytelling. History suggests that film and media have always adapted to technological upheaval, from silent to sound, black-and-white to color, celluloid to digital, each time emerging with new creative possibilities. The question is no longer whether AI will change filmmaking, but how filmmakers will harness it to tell stories that matter.
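Stepping back to the mechanics discussed earlier: the Tom and Jerry experiment composes a 60-second animation from consecutive 3-second segments, each tied to a storyboard annotation, and continuity errors arise exactly at the seams because each segment is generated separately. A minimal sketch of that scheduling, with a data layout assumed for illustration rather than taken from the researchers' code:

```python
# Sketch of the segment-and-stitch scheme described in the article: a long
# animation is planned as consecutive 3-second segments, one storyboard entry
# (plot, setting, camera move) per segment. Layout here is illustrative.

SEGMENT_SECONDS = 3

def plan_segments(total_seconds: int, storyboard: list) -> list:
    """Split a target duration into 3-second segments, one storyboard entry each."""
    if total_seconds % SEGMENT_SECONDS != 0:
        raise ValueError("duration must be a multiple of the segment length")
    needed = total_seconds // SEGMENT_SECONDS
    if len(storyboard) != needed:
        raise ValueError(f"need exactly {needed} storyboard entries, got {len(storyboard)}")
    return [
        {"start": i * SEGMENT_SECONDS, "end": (i + 1) * SEGMENT_SECONDS, **entry}
        for i, entry in enumerate(storyboard)
    ]

# A 60-second cartoon requires 20 annotated beats; each boundary between
# consecutive segments is a potential continuity break.
board = [{"plot": f"beat {i}", "camera": "static"} for i in range(20)]
segments = plan_segments(60, board)
```

The sketch makes the article's point concrete: 20 segments means 19 internal boundaries where rooms, lighting, and props can drift, since no single generation pass sees the whole timeline.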
Yahoo
10-04-2025
- Business
- Yahoo
Runway AI Secures $300M From Nvidia And Fidelity To Revolutionize Film With Gen-4 Model And Studio Deals Raising Its Valuation To $3B
Runway AI, a generative-video startup based in New York, has just secured more than $300 million in funding. General Atlantic led the round, with support from big-name backers such as Nvidia (NASDAQ:NVDA), Fidelity Management & Research Company, Baillie Gifford, and SoftBank Vision Fund 2, according to Seeking Alpha. The new funding raises Runway's valuation to $3 billion, Bloomberg reported. The investment showcases strong investor confidence in AI-generated media as a promising frontier for innovation. Runway plans to use the funding to boost research and expand its reach across the media and entertainment industries. The company is also growing its team of researchers and engineers to accelerate its development pipeline. Runway AI recently released Gen-4, its most advanced media-generation model to date. Gen-4 delivers precise, consistent visual output across characters, locations, and objects in multi-scene compositions, and is meant to enable professional-level storytelling and production at scale. The company also announced the expansion of Runway Studios, which focuses on creating original film and animation projects using the startup's proprietary foundation models. Runway Studios is key to proving the company's AI models in action, in real-world commercial settings. According to Bloomberg, Runway AI's tools have already been used in a number of high-profile projects, including scenes for Amazon's (NASDAQ:AMZN) upcoming House of David series, visuals for a Madonna world tour, and branded content for Puma. Runway's involvement in these productions shows the increasing demand for scalable media solutions that deliver quality and efficiency.
Runway AI has also signed an agreement with Lionsgate to train AI models on the studio's content; these models can then be used in future movie productions. CEO and co-founder Cristóbal Valenzuela told Bloomberg that the company is developing more partnerships with studios interested in exploring the possibilities of AI-generated content, but he didn't disclose names. Valenzuela reiterated the company's commitment to collaborating with creative professionals to push the boundaries of modern storytelling, with the Gen-4 model and the ongoing studio collaborations at the heart of that effort, Bloomberg reported. Runway AI's investor base includes a mix of new and existing backers who see generative video as a high-growth sector. Nvidia's involvement reflects its continued push into physical AI and its broader ambitions beyond generative text and image tools. Fidelity and Baillie Gifford, both known for placing long-term bets on innovative companies, have also made clear that they believe in Runway's ability to lead in AI media generation. SoftBank Vision Fund 2, managed under SoftBank Group, has been increasingly focused on supporting companies creating foundation models and production tools for AI. General Atlantic's role in this round underscores the growing interest in digital transformation via AI. Runway AI's momentum positions it among a select group of startups transforming the way media is created, produced, and distributed. With its $3 billion valuation and backing from some of the biggest names in tech and finance, the company is poised to shape the future of the creative industries. The focus remains on advancing AI research while delivering tools that help creators meet rising demand for high-quality content at scale.
This article originally appeared on Benzinga. © 2025 Benzinga. Benzinga does not provide investment advice. All rights reserved.