
Latest news with #RunwayAI

An AI Film Festival And The Multiverse Engine

Forbes

07-06-2025

  • Entertainment
  • Forbes


In the glassy confines of Alice Tully Hall on Thursday, the third annual Runway AI Film Festival celebrated an entirely new art form. The winning film, Total Pixel Space, was not made in the traditional sense. It was conjured by Jacob Adler, a composer and educator from Arizona State University, stitched together from image generators, synthetic voices, and video animation tools — most notably Runway's Gen-3, the company's text-to-video model (Runway Gen-4 was released in March).

Video generation technology emerged publicly in 2022 with Meta's crude video of a flying Corgi wearing a red cape and sunglasses. Since then, it has fundamentally transformed filmmaking, dramatically lowering barriers to entry and enabling new forms of creative expression. Independent creators and established filmmakers alike now have access to powerful AI tools such as Runway that can generate realistic video scenes, animate storyboards, and even produce entire short films from simple text prompts or reference images. As a result, production costs and timelines are shrinking, making it possible for filmmakers with limited resources to achieve professional-quality results and bring ambitious visions to life. The democratization of content creation is expanding far beyond traditional studio constraints, empowering anyone with patience and a rich imagination.

Adler's inspiration came from Jorge Luis Borges' celebrated short story The Library of Babel, which imagines a universe where every conceivable book exists in an endless repository. Adler found a parallel in the capabilities of modern generative machine learning models, which can produce an unfathomable variety of images from noise (random variations in pixel values, much like the 'snow' on an old television set) and text prompts. 'How many images can possibly exist?' the dreamy narrator begins as fantastical AI-generated video plays on the screen: a floating, exploding building; a human-sized housecat curled on a woman's lap. 'What lies in the space between order and chaos?'

Adler's brilliant script is a fascinating thought experiment that attempts to calculate the total number of possible images, unfurling the endless possibilities of the AI-aided human imagination. (A back-of-the-envelope version of that calculation appears at the end of this article.) 'Pixels are the building blocks of digital images, tiny tiles forming a mosaic,' continues the voice, which was generated using ElevenLabs. 'Each pixel is defined by numbers representing color and position. Therefore, any digital image can be represented as a sequence of numbers,' the narration continues, the voice itself a sequence of numbers that describe air pressure changes over time. 'Therefore, every photograph that could ever be taken exists as coordinates. Every frame of every possible film exists as coordinates.'

Winners at the 3rd Annual International AIFF 2025

Runway was founded in 2018 by Cristóbal Valenzuela, Alejandro Matamala, and Anastasis Germanidis, after they met at New York University's Tisch School of the Arts. Valenzuela, who serves as CEO, says he fell in love with neural networks in 2015 and couldn't stop thinking about how they might be used by people who create. Today, Runway is a multi-million-user platform used by filmmakers, musicians, advertisers, and artists, and it has been joined by other platforms, including OpenAI's Sora and Google's Veo 3. What separates Runway from many of its competitors is that it builds from scratch: its research team, which comprises most of the company, develops its own models, which can now generate up to about 20 seconds of video.
The result, as seen in the works submitted to the AI Film Festival, is what Valenzuela calls 'a new kind of media.' The word film may soon no longer apply. Nor, perhaps, will filmmaker. 'The Tisches of tomorrow will teach something that doesn't yet have a name,' he said during opening remarks at the festival.

Indeed, Adler is not a filmmaker by training but a classically trained composer, a pipe organist, and a theorist of microtonality. 'The process of composing music and editing film,' he told me, 'are both about orchestrating change through time.' He used the image generation platform Midjourney to generate thousands of images, then used Runway to animate them. He used ElevenLabs to synthesize the narrator's voice. The script he wrote himself, drawing from the ideas of Borges, combinatorics, and the sheer mind-bending number of possible images that can exist at a given resolution. He edited it all together in DaVinci Resolve. The result? A ten-minute film that feels as philosophical as it is visual.

It's tempting to frame all this as the next step in a long evolution: from the Lumière brothers to CGI, from Technicolor to TikTok. But what we're witnessing isn't a continuation. It's a rupture. 'Artists used to be gatekept by cameras, studios, budgets,' Valenzuela said. 'Now, a kid with a thought can press a button and generate a dream.' At the Runway AI Film Festival, the lights dimmed, and the films came in waves of animated hallucinations, synthetic voices, and impossible perspectives. Some were rough. Some were polished. All were unlike anything seen before. This isn't about replacing filmmakers. It's about unleashing them.

'When photography first came around — actually, when daguerreotypes were first invented — people just didn't have the word to describe it,' Valenzuela said during his opening remarks at the festival. 'They used this idea of a mirror with a memory because they'd never seen anything like that. … I think that's pretty close to where we are right now.' Valenzuela was invoking Oliver Wendell Holmes Sr.'s phrase to convey how photography could capture and preserve images of reality, allowing those images to be revisited and remembered long after the moment had passed. Just as photography once astonished and unsettled, generative media now invites a similar rethinking of what creativity means.

When you see it — when you watch Jacob Adler's film unfold — it's hard not to feel that the mirror is starting to show us something deeper. AI video generation is a kind of multiverse engine, enabling creators to explore and visualize an endless spectrum of alternate realities, all within the digital realm. 'Evolution itself becomes not a process of creation, but of discovery,' his film concludes. 'Each possible path of life's development … is but one thread in a colossal tapestry of possibility.'
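The film's central calculation is easy to approximate. The snippet below is a back-of-the-envelope sketch, not Adler's actual figures: the resolution and color depth are illustrative assumptions, chosen only to convey the scale of 'total pixel space.'

```python
from math import log10

# Back-of-the-envelope version of the film's thought experiment:
# how many distinct images exist at a fixed resolution and color depth?
width, height = 1920, 1080      # a full-HD frame (illustrative choice)
colors_per_pixel = 2 ** 24      # 24-bit color: ~16.7 million values per pixel

pixels = width * height
# Each pixel independently takes one of `colors_per_pixel` values, so the
# total is colors_per_pixel ** pixels. That integer is far too large to
# print, so report its number of decimal digits instead.
digits = int(pixels * log10(colors_per_pixel)) + 1
print(f"Possible {width}x{height} images: a number with {digits:,} digits")
# -> roughly 10 ** 14,981,179 possible images
```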

Runway AI's Gen-4: How Can AI Montage Go Beyond Absurdity

Forbes

15-04-2025

  • Entertainment
  • Forbes


Jane Rosenthal and Cristóbal Valenzuela speak onstage during the 2024 AI Film Festival New York panel at Metrograph on May 9, 2024, in New York City.

The recent release of Runway AI's Gen-4 has ignited both excitement and existential questions about the future of cinema and media in general. With the company now valued at $3 billion following a $308 million funding round led by the private equity firm General Atlantic and backed by industry heavyweights like Nvidia and SoftBank, AI's march into Hollywood appears unstoppable. The film industry, alongside all creative sectors from digital marketing to social media, stands at a technological crossroads. As artificial intelligence begins to reshape every aspect of visual storytelling and change the landscape of entertainment and digital commerce, we must assess its potential and pitfalls.

Major production companies are rapidly adopting AI video tools. Fabula, the acclaimed studio behind the Oscar-winning A Fantastic Woman and the biopic Spencer, just announced a partnership with Runway AI to integrate AI across its production pipeline. Lionsgate signed a deal with Runway last fall to explore AI-powered filmmaking. Experimental directors like Harmony Korine debuted AI-assisted films at Venice last year. The broad applications of AI video are already impressive, from pre-visualizing scenes for Amazon's House of David to creating advertisements for Puma. Yet beneath these flashy demonstrations lies a more fundamental question: can AI-generated content evolve beyond technical spectacle to deliver truly meaningful stories?

Runway's Gen-4 represents significant progress in several areas: character consistency, scene coherence, and visual fidelity. In one example Runway AI released, two main characters remain consistent across shots in which they walk, run, pet a cow, and light a match, while the look of a steppe in gloomy weather stays faithful throughout. Yet these technical improvements don't address the core challenge: AI excels at generating individual moments but struggles with coherent, sustained storytelling. While it can create a stunning shot of giraffes and lions roaming through New York City, can it make audiences care about a city turned into a zoo?

AI video risks repeating the early mistakes of computer-generated imagery (CGI), prioritizing visual gimmicks over in-depth messages. As barriers to creative production and filmmaking disappear, we may face a flood of visually polished but emotionally hollow content, derivative works optimized for algorithmic efficiency, or compelling synthetic media that lacks a human touch. While AI videos can wow first-time viewers, can they make audiences want to watch them again? Can AI films ever produce classics that draw generations of moviegoers?

Current multimodal AI research centers on innovations in film, media, and video games. A recent project spearheaded by researchers from Nvidia, Stanford, and UCSD uses Test-Time-Training layers in machine learning models to generate 60-second animations of Tom and Jerry. To achieve this, the team trained the model on 81 cartoons produced between 1940 and 1948, which add up to about 7 hours of footage. The model generates and connects multiple 3-second segments, each guided by storyboard annotations describing plot, setting, and camera movement. The technique highlights significant potential to scale video production and the creation of animated series.
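That segment-by-segment structure can be sketched in outline. The following is a schematic sketch only, not the researchers' code: the model call is a stub and the storyboard fields are illustrative. It shows the shape the article describes, 3-second clips generated one at a time from storyboard annotations, with each clip conditioned on the previous clip's last frame, the seam where the continuity errors discussed next tend to appear.

```python
from dataclasses import dataclass

@dataclass
class StoryboardSegment:
    """Per-segment annotations of the kind the storyboard supplies."""
    plot: str
    setting: str
    camera: str

def generate_clip(segment: StoryboardSegment, context_frame=None,
                  seconds=3, fps=24):
    """Stand-in for the video model: returns `seconds * fps` frames.

    A real system would condition on the text annotations and on
    `context_frame` (the previous clip's last frame) for continuity.
    """
    prompt = f"{segment.plot} | {segment.setting} | {segment.camera}"
    return [f"frame<{prompt}>" for _ in range(seconds * fps)]

def generate_episode(storyboard):
    frames = []
    for segment in storyboard:
        # The boundary between 3-second clips is exactly where the
        # continuity and physics errors described below tend to appear.
        context = frames[-1] if frames else None
        frames.extend(generate_clip(segment, context_frame=context))
    return frames

storyboard = [
    StoryboardSegment("Tom chases Jerry", "a 1940s kitchen", "wide tracking shot"),
    StoryboardSegment("Jerry hides the cheese", "the same kitchen", "close-up"),
]
print(len(generate_episode(storyboard)), "frames")  # 2 clips x 3 s x 24 fps = 144
```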
A poster for Joseph Barbera and William Hanna's 1950 cartoon 'The Framed Cat'.

But the technology also reveals critical flaws that persist across AI video generators such as Sora, Kling, Runway, and Pika. One limitation is continuity error: rooms, landscapes, and lighting shift unnaturally between 3-second segments. Physics defiance is another problem: in the Tom and Jerry videos mentioned above, Jerry's cheese floats or morphs into different sizes and textures at segment boundaries. A third issue is narrative disjointedness. Because segmenting content is necessary for algorithms to learn it effectively, understand the prompt, and generate video accurately, AI models struggle to show logical scene progression. These traces of what I call AI montage also appear in Runway AI's videos: elephants walking across Times Square are abruptly followed by a scene of a cheetah running across a bridge, one set in cloudy weather and the next on a sunny day. The changes do not push the storyline forward, nor do they convey any logic. The absurd, the fragmented, and the incongruous are what AI video generators are currently good at producing. For now, AI struggles to replicate the coherence of even a 5-minute cartoon, let alone a feature film.

AI-generated video does show strength as a medium for critiquing both itself and the societies that produce it. Director Jia Zhangke's recent AI film, made using Kling AI, imagines a future run by robotic caretakers. The film provokes audiences to reflect on the crisis of aging populations, societal neglect, and the erosion of empathy in an era of breakneck competition, capitalism, and exploding automated technologies. Jia's film shows robot companions taking the elderly for a walk or helping them harvest crops, in lieu of real sons and daughters. Such a theme is grounded in today's societal challenges. The film critiques the substitution of human connection with automated machines and transactional relationships, and it raises concerns over relentless stress and long hours in workplaces.

Just as Charlie Chaplin used industrialization-era tools to critique industrialization in Modern Times, today's filmmakers can use AI to critique the conditions of its own existence. Consider how synthetic news anchors might expose media manipulation, or how endlessly combinable streaming content could comment on algorithmic culture. Just like science fiction that critiques environmental disasters, human greed, and inequality, the most compelling AI films will likely be those that embrace their own artificiality to engage with real social problems.

Rather than fearing obsolescence, filmmakers might focus more intensely on what machines cannot replicate: the nuance of human emotion, the complexity of human nature, the weight of lived experience, and the cultural resonance of authentic storytelling. History suggests that film and media have always adapted to technological upheaval, from silent to sound, black-and-white to color, celluloid to digital, each time emerging with new creative possibilities. The question is no longer whether AI will change filmmaking, but how filmmakers will harness it to tell stories that matter.

Runway AI Secures $300M From Nvidia And Fidelity To Revolutionize Film With Gen-4 Model And Studio Deals Raising Its Valuation To $3B

Yahoo

10-04-2025

  • Business
  • Yahoo


Runway AI, a generative video startup based in New York, has just secured more than $300 million in funding. General Atlantic led the round with support from big-name backers such as Nvidia (NASDAQ:NVDA), Fidelity Management & Research Company, Baillie Gifford, and SoftBank Vision Fund 2, according to Seeking Alpha. The new funding raises Runway's valuation to $3 billion, Bloomberg reported. The investment signals strong investor confidence in AI-generated media as a promising frontier for innovation.

Runway plans to use the new funding to boost research and expand its reach across the media and entertainment industries. The company is also growing its team of researchers and engineers to accelerate its development pipeline.

Runway AI recently released Gen-4, its most advanced media generation model to date. Gen-4 delivers precise and consistent visual outputs across characters, locations, and objects in multi-scene compositions, and it is meant to enable professional-level storytelling and production at scale. The company also announced the expansion of Runway Studios, which focuses on creating original film and animation projects using the startup's proprietary foundation models. Runway Studios plays a key role in proving the company's AI models in real-world, commercial settings.

According to Bloomberg, Runway AI's tools have already been used in a number of high-profile projects, including scenes for Amazon's (NASDAQ:AMZN) upcoming House of David series, visuals for a Madonna world tour, and branded content for Puma. Runway's involvement in these productions reflects the increasing demand for scalable media tools that deliver quality and efficiency.

Runway AI has also signed an agreement with Lionsgate to train AI models using the studio's content. These models can then be used in future movie productions. CEO and co-founder Cristóbal Valenzuela told Bloomberg that the company is developing more partnerships with studios interested in exploring the possibilities of AI-generated content, but he didn't disclose names. Valenzuela reiterated the company's commitment to collaborating with creative professionals to push the boundaries of modern storytelling, with the Gen-4 model and the ongoing studio collaborations at the heart of that effort, Bloomberg reported.

Runway AI's investor base includes a mix of new and existing backers who see generative video as a high-growth sector. Nvidia's involvement reflects its continued push into physical AI and its broader ambitions beyond generative text and image tools. Fidelity and Baillie Gifford, both known for placing long-term bets on innovative companies, have also made clear that they believe in Runway's ability to lead in AI media generation. Vision Fund 2, managed under SoftBank Group, has been increasingly focused on supporting companies creating foundational models and production tools within AI. General Atlantic's role in this round underscores the growing interest in digital transformation via AI.

Runway AI's momentum positions it in a select group of startups transforming the way media is created, produced, and distributed. With its $3 billion valuation and backing from some of the biggest names in tech and finance, the company is poised to shape the future of the creative industries.
The focus remains on advancing AI research while delivering tools that help creators meet the rising demand for high-quality content at scale. This article originally appeared on Benzinga. © 2025 Benzinga. Benzinga does not provide investment advice. All rights reserved.

Nvidia, SoftBank and Other Investors Propel Runway AI to $3 Billion Valuation

Yahoo

04-04-2025

  • Business
  • Yahoo


Runway AI secured $308 million in funding from General Atlantic and other investors, lifting its market valuation to more than $3 billion as of April 4. The funding round drew significant investment from Nvidia (NASDAQ:NVDA) together with Fidelity Management & Research, Baillie Gifford, and SoftBank Vision Fund 2, Bloomberg reports.

Expanding Runway Studios is the primary target of the funds; the studio runs AI-driven film and animation projects on models built by the company's research team. The company also intends to advance media generation and AI research by hiring additional engineers and researchers.

With the Gen-4 model, Runway says creators can now generate consistent characters, objects, and other scene elements with reliable output across shots. Runway's technology has been used in multiple projects, including Amazon's House of David, Madonna's concert visuals, and Puma's promotional advertisements. Runway has also partnered with Lionsgate (LGF.A, LGF.B) to train AI models on the studio's content, opening prospects for their use in future film development.

This article first appeared on GuruFocus.

Christie's AI art auction draws big-money bids — and thousands of protest signatures

Yahoo

26-02-2025

  • Business
  • Yahoo


In Christie's New York gallery, a robot is painting a 10-by-12-foot canvas, adding more oil paint each time a $100 bid is placed on the work. But its creative vision doesn't come from the artist who programmed it. It comes from a technique called outpainting, which employs artificial intelligence to generate elements that blend with existing content on a canvas (a minimal sketch of the technique appears at the end of this article). It's just one method used in the 34 works in Christie's latest venture: the first major auction that exclusively features art made using AI.

'We've seen throughout time that there's a lot of artistry in working with mechanical means for creating artwork,' said artist and roboticist Alexander Reben, whose aforementioned painting is up for bidding. 'And I think what really matters is your intention and what you do.'

The auction house — known for selling fine art, luxury goods, and antiques — opened 'Augmented Intelligence' on Feb. 20. The sale has raked in hundreds of thousands of dollars in bids. But not everyone is pleased with those results.

'Many of the artworks you plan to auction were created using AI models that are known to be trained on copyrighted work without a license,' states an open letter addressed to Christie's and signed by more than 6,400 artists. The letter called for the auction to be cancelled. Reid Southen, who helped organize the letter, said he believes a third of the works featured use generative AI models trained on copyrighted works. He named Midjourney, OpenAI's Sora, Runway AI, and Stable Diffusion as examples.

'Christie's can hold themselves accountable to a higher standard and engage with these things in a way that is supportive of artists as a whole, and doesn't package these exploitative models into their auction alongside people that are doing things ethically,' Southen said. Southen, a Michigan-based film industry concept artist, said he and many of his peers have lost work and had their income 'slashed in half' over the past two years due to AI.

Art isn't the only industry bracing for change. According to a World Economic Forum report released last month, 41% of employers expect to downsize their workforce as AI begins to replicate roles, while 69% said they plan to recruit talent skilled in AI tool design and enhancement.

But Christie's sees AI as a natural progression in art history. Nicole Sales Giles, Christie's director of digital art, said she welcomes debate around the auction as a sign that AI will transform art to the industry's benefit. 'I'm not a copyright lawyer, so I can't comment on the legality, but from a theft-influence angle, artists have been influenced by other artists for centuries,' Sales Giles said.

Many of the artists featured in the auction used their own data — including personal photography, curated collages, and their own poetry — to train their AI models. 'The AI I've been using for almost 10 years was not trained on other artists' work,' said digital artist Daniel Ambrosi, whose work is part of the auction. 'It was not even created to make art in the first place.' Ambrosi fed his photography of Central Park into Google's DeepDream at two different scales. The AI recognizes the image and moves pixels around in hallucinogenic ways. 'It's like I'm the leader of a jazz band,' he said. 'I write original compositions, and I have this virtuoso saxophonist who knows where I'm going with the song, but is going to improvise, surprise and delight me.'
But even if an artist uses their own work as an input, that doesn't guarantee that the AI model they are using was not built on data containing copyrighted works. On Feb. 12, Thomson Reuters won a copyright battle against a legal research firm that used its materials to train an AI model without permission. Tech companies have been accused of using data sets with large amounts of human writing to train AI chatbots without compensating those who wrote the original works. Developer OpenAI wrote in a U.K. filing last year that it would be 'impossible' to train top AI models without copyrighted works, and the company's website has stated that using publicly available internet materials to train AI models is fair use under U.S. copyright law.

To Reben, AI models pull from such large datasets that it's difficult to find an individual's work. As OpenAI's first artist in residence, Reben worked extensively with beta AI technologies for making art. Now, he's an artist in residence at Meta. He said it comes down to the artist to assess what is fair use. 'Using other works to create new works is part of history,' Reben said. 'Creating things which change expression, which move the idea forward, is an exception in copyright law.'

But even if AI is set to become part of the fine art world, Southen said, it should be integrated ethically. That means holding AI companies accountable for licensing the data they extract value from, and compensating artists fairly. Until then, he said, it's time for Christie's to 'pump the brakes.'
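As promised above, here is a minimal sketch of how outpainting works in general. It uses the open-source diffusers library and is not the system behind Reben's robot painting: the checkpoint, canvas sizes, and file names are illustrative assumptions. The idea is to place the existing image on a larger canvas and ask an inpainting model to fill only the masked border, so the new content blends with what is already there.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load a Stable Diffusion inpainting checkpoint (illustrative choice).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

source = Image.open("painting.png").convert("RGB")  # existing canvas content

# Outpainting = inpainting on an enlarged canvas: center the source
# and leave a blank border for the model to fill.
canvas = Image.new("RGB", (512, 512), "white")
canvas.paste(source.resize((256, 256)), (128, 128))

# Mask convention: white = repaint (the border), black = keep (the source).
mask = Image.new("L", (512, 512), 255)
mask.paste(Image.new("L", (256, 256), 0), (128, 128))

result = pipe(
    prompt="an oil painting, extended seamlessly beyond its original edges",
    image=canvas,
    mask_image=mask,
).images[0]
result.save("outpainted.png")
```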
