Latest news with #Runway


India Today
4 hours ago
- Entertainment
- India Today
Midjourney launches V1 AI video generation model right after Disney accuses it of plagiarism
Midjourney, the AI startup famous for its surreal image generation tools, is making a bold leap into video. The company has unveiled V1, its long-awaited video-generation model that promises to breathe life into static images. It's a big move for Midjourney, as it throws the company into direct competition with big hitters like OpenAI, Runway, Adobe and Google.

V1 is designed as an image-to-video model, allowing users to transform either their own uploaded pictures or Midjourney's AI-generated images into short five-second video clips. Unlike its sibling image models, V1 is not accessible via Discord for now and is web-only at launch.

And it's not just videos Midjourney has in its sights. In a blog post, CEO David Holz set out some ambitious goals for the company's AI, saying V1 is just the next stepping stone toward real-time 'open-world simulations.' The company also revealed plans to branch into 3D renderings and real-time generative models down the line.

While Midjourney's image tools have long appealed to artists and designers, the company has taken a slightly different tack with video. Many of its rivals — such as Sora by OpenAI, Runway's Gen-4, Firefly by Adobe and Veo 3 by Google — are going after commercial filmmakers and studios with highly controllable AI tools. Midjourney, however, is positioning itself as more of a creative playground for those looking for something a little more playful.

V1 AI video generation model: Pricing and availability

Despite this, Midjourney is pushing ahead. Video generation doesn't come cheap, though. V1 consumes eight times more credits per clip than Midjourney's still-image tools, so subscribers will burn through their monthly allowances far faster. At launch, Basic subscribers — who pay $10 (around Rs 866) per month — can access V1, but unlimited video generation is limited to the $60 (around Rs 5,200) Pro and $120 (approximately Rs 10,400) Mega plans, and only in 'Relax' mode, which produces videos more slowly. The company says it will review this pricing structure in the coming weeks as it gathers feedback from users.

As for the tools themselves, V1 offers a surprising level of control. You can opt for an 'auto' mode that lets the AI generate motion for you, or a 'manual' mode that accepts text prompts to dictate exactly how you want your animation to move. There are also settings for adjusting movement intensity: 'low motion' if you want subtle shifts, or 'high motion' for more energetic effects. Clips last five seconds by default but can be extended up to 21 seconds in four-second increments.

Disney accuses Midjourney of plagiarism

That said, Midjourney is entering the video arena under a legal cloud. Only a week ago, Disney and Universal sued the startup over its image-generation models, claiming they can produce unauthorised versions of famous characters like Darth Vader and Homer Simpson. It's part of a growing backlash across Hollywood as studios grow nervous about AI tools replacing human creatives — and AI companies face questions about training data and copyright.

Early examples of V1's output suggest Midjourney is sticking to its trademark surreal aesthetic rather than aiming for hyper-realism, the sort of style fans of the platform have come to love. The initial reaction from users has been mostly positive so far, though it's still too early to tell how V1 will stack up against more established players like Runway and Sora.
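For readers trying to square the clip-length and credit figures above, here is a minimal back-of-the-envelope sketch in Python. It is purely illustrative and is not Midjourney's actual API or billing code; the constants simply restate the numbers quoted in the article (five-second clips, four-second extensions up to 21 seconds, and a roughly 8x credit cost versus still images).

```python
# Illustrative sketch only: not Midjourney's API or billing code.
# The constants restate the figures quoted in the article above.

BASE_CLIP_SECONDS = 5        # clips last five seconds by default
EXTENSION_SECONDS = 4        # each extension adds four seconds
MAX_CLIP_SECONDS = 21        # clips top out at 21 seconds
VIDEO_CREDIT_MULTIPLIER = 8  # "eight times more credits" than a still image

def clip_length(extensions: int) -> int:
    """Length of a V1 clip after a given number of extensions, capped at 21s."""
    return min(BASE_CLIP_SECONDS + extensions * EXTENSION_SECONDS, MAX_CLIP_SECONDS)

def video_credit_cost(image_job_credits: float) -> float:
    """Rough credit cost of one video job, relative to one image job."""
    return image_job_credits * VIDEO_CREDIT_MULTIPLIER

if __name__ == "__main__":
    # Four extensions take a clip from 5 seconds to the 21-second cap.
    print([clip_length(n) for n in range(5)])  # [5, 9, 13, 17, 21]
    # If an image job cost 1 credit, a clip would cost about 8.
    print(video_credit_cost(1))                # 8
```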


CNET
3 days ago
- Business
- CNET
Adobe's New AI Mobile App Lets You Use 6 New AI Models, Including Google's Veo 3
Adobe just dropped a ton of AI news, so let's dive into what that means for you. First up, the company is launching brand-new Firefly AI mobile apps for iPhone and Android. You can download these apps now for free and use Firefly to create AI images and videos on the go. Plus, the app comes with a few free generative credits for you to experiment with Adobe's AI. Next, Adobe is expanding its roster of third-party AI partners to include six new models from Ideogram, Pika, Luma and Runway. Google's latest AI models are also joining the lineup, including the internet-famous Veo 3 AI video generation model with native audio capabilities and the Imagen 4 text-to-image model. Finally, its moodboarding AI program, Firefly Boards, is generally available today after months in beta. Here's everything you need to know about Adobe's newest batch of Firefly AI updates. For more, check out our favorite AI image generators and what to know about AI video models.

Firefly AI for iOS and Android users

Adobe's Firefly mobile apps will let you access its AI image and video capabilities from your phone. A mobile app felt like the next natural step, since Adobe saw that mobile web usage of Firefly noticeably increased after Firefly's video capability launched in early 2025. Not every Firefly feature will be available at launch, but for now, we know that these features will be included: text-to-image, text- and image-to-video, generative fill, and generative expand. You can download the app now from the Apple App Store and Google Play Store. The app is free to download, but you'll need a Firefly-inclusive Adobe plan to really use the app. In the hopes that you'll sign up for a full plan, Adobe gives you 12 free Firefly generation credits (10 for images, two for videos), which doesn't shake out to many of each. So you can use those to see if Firefly is a good fit for you. Firefly plans start at $10 per month for 2,000 credits (about 20 videos), increasing in price and generation credits from there. Depending on your Adobe plan, you may already have access to Firefly credits, so double-check that first.

Adobe's six new AI models from Google, Runway and more

Adobe's also adding new outside AI creative models to its offerings, including image and video models from Ideogram, Pika, Luma and Runway. You might recognize the name Runway from its deal with Lionsgate to create models for the entertainment giant. Ideogram, Pika and Luma are all other well-known AI creative services. Google's Veo 3 AI video generator is also joining, bringing its first-of-its-kind synchronized AI audio capabilities, along with the latest generation of Google's AI image model. This is the second batch of third-party models that Adobe has added to its platform. Earlier this spring, Adobe partnered with OpenAI, Google and Black Forest Labs (creator of Flux) to bring those companies' AI models to Adobe. What's unique about this is that all third-party models have to agree to Adobe's AI policy, which prevents all the companies from training on customers' content -- even if the individual companies don't have that policy on their own, it's standardized across all models offered through Adobe. This is also true for the new models added today. For AI-wary professional creators who make up the majority of Adobe users, that's a bit of good news. You'll need a paid Firefly plan to access outside models; otherwise, you'll just have access to the Adobe models.
Here are all the AI models available through Adobe.

For images:
- Adobe Firefly Image 3
- Adobe Firefly Image 4
- Adobe Firefly Image Ultra
- Flux 1.1 Pro
- Flux 1 Kontext
- Google's Imagen 3
- OpenAI's image generation model
- Ideogram 3 (new)
- Google's Imagen 4 (new)
- Runway's Gen-4 Image (new)

For video, you can use:
- Adobe Firefly Video
- Google Veo 2
- Google Veo 3 (new)
- Luma AI Ray 2 (new)
- Pika's text-to-video generator (new)

Adobe's own Firefly AI models are trained on a combination of Adobe Stock and other licensed content. You can learn more in Adobe's AI guidelines and approach to AI.

AI moodboarding gets a boost

Other Adobe updates include the general release of its moodboarding program, Firefly Boards, which has been in beta since April. Moodboarding is a practice that lets you cluster together different elements, like colors and shapes, to evoke specific moods and aesthetics. It's a good initial step for planning content and campaigns. You can use the infinite canvas to brainstorm and plan content, and you can generate images and videos in Boards using Adobe and non-Adobe models; the setups are very similar to generating in the regular Firefly window. Boards are collaborative, so you can edit with multiple people. A new one-click arrange button can help you organize and visualize your files more easily, a much-requested feature that came out of the beta. Firefly Boards are synced with your Adobe account, so you can select a photo in a Board, open it in Photoshop and edit it. Those changes will then be synced back to your Firefly Board in less than a minute, so you can always see the latest version of your file without being limited to editing in Boards. For more, check out Premiere Pro's first generative AI feature and the best Photoshop AI tools.
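To put the credit figures above in perspective, here is a tiny, purely illustrative Python sketch. The per-video numbers are inferred from the "2,000 credits (about 20 videos)" estimate quoted in the article, not from Adobe's official rate card, so treat them as rough approximations.

```python
# Back-of-the-envelope sketch of the Firefly credit math cited above.
# Illustrative only: the per-video figures are inferred from the article's
# "2,000 credits (about 20 videos)" estimate, not Adobe's published pricing.

PLAN_PRICE_USD = 10          # entry-level Firefly plan, per month
PLAN_CREDITS = 2_000         # generative credits included in that plan
VIDEOS_PER_PLAN_APPROX = 20  # the article's rough estimate

implied_credits_per_video = PLAN_CREDITS / VIDEOS_PER_PLAN_APPROX     # ~100 credits
implied_cost_per_video_usd = PLAN_PRICE_USD / VIDEOS_PER_PLAN_APPROX  # ~$0.50

print(f"Roughly {implied_credits_per_video:.0f} credits per video, "
      f"or about ${implied_cost_per_video_usd:.2f} each on the entry plan.")
```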


Yahoo
4 days ago
- Entertainment
- Yahoo
Why Hollywood Is Training for Jobs That Don't Yet Exist
Last week's column on the rise of vertical dramas prompted a heartfelt text from a friend who's worked in film and TV sound for two decades. 'Blerrgh. Time to reskill, methinks,' he wrote. 'This article made me really see the writing in the sky once and for all.'

I reminded him not to shoot the messenger. Truthfully, I left my conversation with Yun Xie feeling curious, even excited, about what might come next for the format. Vertical dramas may never evolve into masterpieces of storytelling (a tall order when a script must deliver an emotional cliffhanger every 90 seconds), but the business around them is growing. And with that growth comes the potential for new companies, new markets, and new opportunities, even if we can't see them clearly yet.

And that's the real problem: We can't see. The last five years have been an Arrakis-level sandstorm — Covid, strikes, AI, the slow collapse of legacy film and TV. It's exhausting to keep moving forward without a clear view of the path ahead.

We can take some comfort in the fact that even futurists are fumbling. Last week, I attended both CAA's Amplify conference and Runway's AI Film Festival. They cater to different audiences, but their underlying messages were the same: The future is coming, we think it's exciting, but we still can't say exactly what it looks like.

'We are now training for jobs that do not yet exist,' said Bruce Markoe, IMAX's head of post and image capture, speaking at the AI Film Festival in Santa Monica on June 12. In a casual press chat before 10 shorts screened at The Broad Stage, he and Runway founder Cristóbal Valenzuela both admitted they didn't know what's next. However, they argued that history suggests we should remain optimistic.

'People were freaking out when talkies were around,' Markoe said. 'The argument was people are going to lose their jobs and the reality said yes, there were jobs that changed and… there [are] jobs that need to change. We assume that efficiency means lower of everything and… it's actually the opposite. There are going to be new industries. The thing is it's really hard to understand those industries. We have never experienced them before. Trying to understand visual effects in the 1920s was unthinkable until we got there.'

Valenzuela said he believed that 'there's going to be all kinds of new positions that need to be created to work with AI tools that are not existing today. Is it equal? I can't tell you that. I dunno. But there's definitely a shift that's going to happen.'

Kind of? The shorts were fine. Aesthetics have improved since last year. So has interest: Valenzuela said they received over 6,000 submissions, compared to a few hundred in 2023. Still, the tech has a way to go. Basic elements of cinematic language, like character consistency, often fall short. Many films felt like conceptual collages seeking the right tools to bring their visions to life. One highlight was Riccardo Fusetti's 'Editorial,' which visualizes the thoughts racing through a young woman's mind before she answers a question. The concept was sharp, although the chaotic imagery and uncanny valley left it feeling more like a promising rough draft.

A similar mood emerged at CAA's Amplify conference at Montage Laguna Beach on June 10.
Speaking with CAA agent Alex Mebed, Microsoft AI CEO Mustafa Suleyman traced a familiar pattern: From the printing press to podcasts to generative AI, new tech lowers the barrier to entry and opens the door to more creators. And with that comes aggressive competition, major disruption, and redistribution of jobs, power, and income. 'We've got to be open about that,' Suleyman said.

At this point, everyone is open about that. The question is, how much longer do we have to wait?

Maybe that impatience is misplaced. Even as vertical dramas and AI have their days, A24's old-fashioned 'Materialists' — Celine Song's original IP, a rom-com, shot on 35mm for Pete's sake — opened this weekend to $12 million. That success is every bit as real as, say, the latest round of studio layoffs.

In its analysis last weekend, the L.A. Times reflected on last year's mantra, 'Survive until '25,' and suggested that it's morphed into something bleaker: 'Exist until '26.' Solid advice, but maybe it's time to retire the aphorisms. Rhymes aren't making the future arrive any faster.

See you next week, Dana

5. Should You Work for Free? by Kathi Carey
Building a career in the arts requires finding a way to charge money for something you likely loved enough to do for free at one point. This post from filmmaker Kathi Carey's Indie Film Substack explores the nuances of charging for creative work and the challenge of maximizing your earning potential without costing yourself relationships and opportunities that could emerge from unpaid work.

4. Micro-Series: A Manifesto by Jon Stahl
Did last week's In Development about vertical dramas leave you curious about the future of short-form storytelling? Newsletter favorite Jon Stahl offers his vision for a future of 60-second storytelling, in which the medium could expand to other genres and grow its storytelling ambitions.

3. Let's Talk About… The New Media Circuit and What That Means for Indie Film by KLA Media Group
Everyone trying to make an independent film has been bombarded with the (true) sentiment that your job doesn't end after the film premieres or finds distribution. Filmmakers need to be more involved in their films' marketing than ever before, and this post from the always excellent Marketing & PR for Indie Films, Creatives & Small Businesses newsletter offers a look at some of the new avenues that have emerged for filmmakers to promote their work.

2. Building a Forever Franchise by Will Harrison
This article from the Brands to Fans Substack offers a detailed look at one of the biggest paradoxes facing Hollywood: studios and streamers are more reliant on IP than ever before, yet the value of legacy entertainment brands is waning as a new generation of audiences grows up in an era in which movies are no longer the dominant form of pop culture. Harrison breaks down the process of finding a new franchise in 2025, explaining how producers are looking to newer sources of IP and monetizing it differently.

1. Theatrical Distribution for Independent Filmmakers | Annalisa Shoemaker by Kinema for Filmmakers
Festival sales and other traditional forms of distribution are harder than ever to come by, but there is still value to be created by filmmakers who take theatrical distribution into their own hands. This interview with independent distribution consultant Annalisa Shoemaker breaks down the considerations that go into planning your own theatrical run and the pitfalls to avoid.


NBC News
4 days ago
- Entertainment
- NBC News
For some in the industry, AI filmmaking is already becoming mainstream
Across Hollywood, talking about it publicly can sometimes be taboo. Using it without disclosing that you did could make you the center of controversy. And protesting its use has been the norm. But even amid widespread vocal pushback against generative artificial intelligence, industry leaders say its use in film and TV is slowly becoming mainstream. More filmmakers are using evolving AI tools, and studios are partnering with AI companies to explore how they can use the technology in content creation.

'It's being used by everybody that doesn't talk about the fact that they're using it,' Michael Burns, vice chairman of Lionsgate, said during a panel at the third annual Runway AI Film Festival in Los Angeles last week. Lionsgate, which is behind hits like the 'John Wick' and 'Hunger Games' franchises, signed a deal with Runway last fall allowing its video generation model to train on the studio's movies and TV shows. Burns joked that AI tools are like the Ozempic of the film industry, referring to the popularity of the semaglutide-based weight loss drug.

Burns was among hundreds — including a mix of creatives and execs — who attended the AI video company's showcase of user-submitted short films made with generative tools. The festival, which was also held in New York City this month, ballooned from 300 film submissions in its first year to 6,000 submissions this year, its organizers said.

While using AI in film isn't completely new, the technology has continued to stoke concerns among creatives. AI was a sticking point during the 2023 writers and actors strikes against studios, with creatives seeking assurances that their work wouldn't be replaced by the technology. Runway CEO Cristóbal Valenzuela, however, is optimistic about AI's impact on the labor force, telling reporters before the Los Angeles festival that history has 'proven once and again' that industries can adapt to new technologies.

AI-generated video-making has taken off even as it remains controversial. The technology has given rise to everything from music videos to brand advertisements to nonconsensual deepfakes. Though AI videos have frequently been marked by telltale distortions, such as extra fingers or nonsensical physics, Google's latest video generation model, Veo 3, shocked the internet last month with how seemingly flawless its outputs were.

'There are going to be new industries' as a result of AI, Valenzuela said. 'Just the hard thing is it's really hard to understand these industries when they're new; we have never experienced them.'

The company has increased its presence in Hollywood in the past few years. Burns said the partnership between Lionsgate and Runway is an attempt to create higher-quality content for lower prices. 'Even a year or two years ago, there was no chance that the output was going to be able to be projected on the big screen without you seeing gaps or somebody with three arms or a dragon that didn't look like a dragon,' Burns added. 'And now, today, it's a completely different place.'

Runway also recently reached a deal with AMC Networks, giving it access to Runway's AI tools for use in marketing materials and TV development processes, such as pre-visualization or special effects ideation.

All 10 of the films shown at the festival included generative video, but not all were made entirely with AI. The shorts, which were created in a variety of animated and photorealistic styles, appeared to lean into the more absurdist themes made possible by generative tools.
One followed the perspective of a chicken on its way to prison. Another offered life lessons through a small insect's journey. And another painted a scene of human souls desperate to reclaim their bodies after Earth's collapse. Other AI companies have also upped their visibility in the industry in recent years. OpenAI, which is behind ChatGPT, held its own AI film screenings this year in New York, Los Angeles and Tokyo to tout its popular text-to-video model Sora. The tool, launched in early 2024, stirred both buzz and panic when the company first teased its hyperrealistic generation capabilities. Last year, the Tribeca Film Festival partnered with Runway and OpenAI to highlight more short films that leveraged AI. Even some film schools appear to be hopping on the AI bandwagon. Elizabeth Daley, dean of the University of Southern California's School of Cinematic Arts, said AI is being embedded in various courses, including one focused on AI creativity. She said the school encourages students to explore AI as long as it doesn't become 'an excuse not to work.' 'We need to stay in that conversation. We need to stay in the struggle to make sure that the tools that are developed are actually the tools that writers, directors, producers, cinematographers, animators need to do their work,' Daley said at a panel at the Runway film festival. 'And those will create other jobs. No doubt.'


Mint
5 days ago
- Business
- Mint
Adobe Slips After Revenue Outlook Fails to Sway AI Skeptics
(Bloomberg) -- Adobe Inc. shares fell the most in three months after the creative-software company gave a sales outlook for the current quarter that failed to calm investors who have been skeptical it can hold its own against AI-focused upstarts.

Adobe has become a central focus of investors debating whether artificial intelligence tools will disrupt traditional software industry leaders. Design applications like those from Canva Inc. and image-creation tools from AI firm Midjourney Inc. have gained steam, while Adobe has woven generative AI tools through its products, including Photoshop. In February, it introduced separate subscriptions for its AI video generator, trying to compete with similar tools from rivals including OpenAI and Runway.

'Somehow Adobe has been snagged as an AI loser,' said Gil Luria, an analyst at DA Davidson, in an interview Thursday on Bloomberg TV. 'We think that's a misunderstanding of the technology,' he added.

The shares slid 5.3% to $391.68 at the close Friday in New York, the biggest single-day decline since March 13. The stock had dropped 12% this year.

Investors responded poorly to Adobe's financial update even though the outlook topped analysts' estimates. Sales will be about $5.88 billion to $5.93 billion in the period ending in August, the company said Thursday in a statement. Wall Street, on average, estimated $5.88 billion. Profit, excluding some items, will be $5.15 a share to $5.20 a share, compared with the average projection of $5.11.

Adobe's family of AI models, called Firefly, has been used to generate more than 24 billion pieces of content, Chief Financial Officer Dan Durn said in remarks prepared for a conference call after the results. That's up from 20 billion in March. Adobe had said then that it expected $250 million in annual recurring revenue from AI products.

Fiscal second-quarter revenue increased 11% to $5.87 billion, compared with an average analyst estimate of $5.8 billion. Profit, excluding some items, was $5.06 per share, while Wall Street anticipated $4.98. The digital media unit, which includes Adobe's flagship creative and document-processing software, posted an 11% increase in sales to $4.35 billion. Annual recurring revenue for the closely watched segment was $18.1 billion, in line with estimates. Revenue from the unit that includes marketing and analytics software rose 10% to $1.46 billion.

'Adobe's AI innovation is transforming industries enabling individuals and enterprises to achieve unprecedented levels of creativity,' Adobe Chief Executive Shantanu Narayen said in the statement.