Latest news with #Midjourney


WIRED
2 hours ago
- Entertainment
- WIRED
'Wall-E With a Gun': Midjourney Generates Videos of Disney Characters Amid Massive Copyright Lawsuit
Jun 20, 2025 4:28 PM A week after Disney and Universal filed a landmark lawsuit against Midjourney, the generative AI startup's new V1 video tool will make clips of Shrek, Deadpool, and other famous creations. Photograph: Anadolu/Getty Images

Midjourney's new AI-generated video tool will produce animated clips featuring copyrighted characters from Disney and Universal, WIRED has found, including video of the beloved Pixar character Wall-E holding a gun.

It's been a busy month for Midjourney. This week, the generative AI startup released its sophisticated new video tool, V1, which lets users make short animated clips from images they generate or upload. The current version of Midjourney's AI video tool requires an image as a starting point; generating videos from text-only prompts is not supported. The release of V1 comes on the heels of a very different kind of announcement earlier in June: Hollywood behemoths Disney and Universal filed a blockbuster lawsuit against Midjourney, alleging that it violates copyright law by generating images with the studios' intellectual property. Midjourney did not immediately respond to requests for comment. Disney and Universal reiterated statements made by their executives about the lawsuit, including Disney's legal head Horacio Gutierrez alleging that Midjourney's output amounts to 'piracy.'

It appears that Midjourney may have attempted to put up some video-specific guardrails for V1. In our testing, it blocked animations from prompts based on Frozen's Elsa, Boss Baby, Goofy, and Mickey Mouse, although it would still generate images of these characters. When WIRED asked V1 to animate images of Elsa, an 'AI moderator' blocked the prompt from generating videos. 'AI Moderation is cautious with realistic videos, especially of people,' read the pop-up message. These limitations, which appear to be guardrails, are incomplete. 
WIRED testing shows that V1 will generate animated clips of a wide variety of Universal and Disney characters, including Homer Simpson, Shrek, Minions, Deadpool, Star Wars' C-3PO, and Darth Vader. For example, when asked for an image of Minions eating a banana, Midjourney generated four outputs with recognizable versions of the cute, yellow characters. Then, when WIRED clicked the 'Animate' button on one of the outputs, Midjourney generated a follow-up video with the characters eating a banana, peel and all.

Although Midjourney seems to have blocked some Disney- and Universal-related prompts for videos, WIRED could sometimes circumvent the potential guardrails during tests by using spelling variations or repeating the prompt. Midjourney also lets users provide a prompt to inform the animation; using that feature, WIRED was able to generate clips of copyrighted characters behaving in adult ways, like Wall-E brandishing a firearm and Yoda smoking a joint.

The Disney and Universal lawsuit poses a major threat to Midjourney, which also faces additional legal challenges from visual artists who allege copyright infringement. Although it focuses largely on providing examples from Midjourney's image-generation tools, the complaint alleges that video would 'only enhance Midjourney['s] ability to distribute infringing copies, reproductions, and derivatives of Plaintiffs' Copyrighted Works.' The complaint includes dozens of alleged Midjourney images showing Universal and Disney characters. The set was initially produced as part of a report on Midjourney's so-called 'visual plagiarism problem' from AI critic and cognitive scientist Gary Marcus and visual artist Reid Southen. 'Reid and I pointed out this problem 18 months ago, and there's been very little progress and very little change,' says Marcus. 'We still have the same situation of unlicensed materials being used, and guardrails that work a little bit, but not very well. 
For all the talk about exponential progress in AI, what we're getting is better graphics, not a fundamental principle solution to this problem.'

Miami Herald
3 hours ago
- Miami Herald
AI skeptic creates chatbot to help teachers design classes
While many educators spent the past two years fretting that artificial intelligence is killing student writing, upending person-to-person tutoring and generally wreaking havoc on scholastic inquiry, the well-known thinker and ed tech expert Michael Feldstein has been quietly exploring something completely different. For more than a year, he has led an open-source project with a group of about 70 educators online to build what's essentially a chatbot with one job: to guide teachers, step by step, through the process of designing their own courses, a privilege previously reserved for just a few instructors at elite institutions.

The experimental software, dubbed the AI Learning Design Assistant, or ALDA, has yet to hit the market. But when it does, Feldstein said, it will be free. With any luck, it could mark a new era, offering teachers at all levels an easy way to design their own homegrown coursework, assessments and even curricula at a fraction of the cost demanded by commercial publishers. Feldstein has worked primarily with college instructors, and his work is widely applicable in higher ed. But it's got potential in K-12 education as well.

He's pushing to democratize instructional design, a little-known academic field in which professional designers build courses by working backwards: they interview teachers to help them drill down to what's important, then create courses based on the findings, The 74 says. When it's ready, he said, ALDA could well shake up the teaching profession, making off-the-shelf AI behave like a personal instructional designer for virtually every teacher who wants one. And for the record, Feldstein said, there's an acute shortage of such designers, so this particular iteration of AI likely won't put anyone out of a job.

'What is this good for?'

Feldstein is well-known in the ed tech community, having worked over the years at Oracle, Cengage Learning and elsewhere. 
A one-time assistant director of the State University of New York's Learning Network, he has more recently garnered a wide audience with his e-Literate blog, required reading for college instructors and ed tech experts. Over the past few years, Feldstein has likened tools such as ChatGPT and AI image generators like Midjourney to "toys in both good and bad ways." They invite people to play and give players the ability to explore what's basically cutting-edge AI. "It's fun. And, like all good games, you learn by playing," he wrote recently. But he cautions that when they're asked to do something specific, they "tend to do weird things" such as return strange results and, on occasion, hallucinate.

As a longtime observer of ed tech, Feldstein has always stepped back and asked: What is this good for? "AI is interesting because there are many possible answers, and those answers change on a monthly basis as the capabilities change," he said. "That makes the question harder to answer. Nevertheless, we need to answer it."

ALDA's focus, he said, has always been on helping participants think more deeply about what teachers do: the AI probes students to find out what they know, then fills in the gaps. "As an educator, if I ask you a question, I'm trying to understand if you know something," he said. "So my question is directly related to a learning objective." By training, teachers naturally modify their questions to help figure out if students have misconceptions. They circle around the topic, offering clues, hints and feedback to help students home in on what they know. But they don't simply give away the answer.

Over the course of the year, he and colleagues have broken down the various aspects of their work, including what they'd outsource if they had an assistant or "junior learning designer" at their side. The AI starts simply, asking "Who are your students? What is your course about? What are the learning goals? What's your teaching style?" 
It moves on from there: "What are the learning objectives for this lesson? How do you know when students have achieved those objectives? What are some common misconceptions they have?" Eventually teachers can begin designing the course and its assessments with a clear focus on goals and, in the end, their own creativity.

Feldstein holds decidedly modest goals for the project. "The idea that we're going to somehow invent a better AI model than these companies that are spending billions of dollars is crazy," Feldstein said. But making course design accessible "is very doable and very useful." He has intentionally brought together a diverse group of instructors that includes both heavy AI users and skeptics.

Among them: Paul Wilson, a longtime professor of religion and philosophy at Shaw University in Raleigh, North Carolina. Though Wilson has taught there for 32 years, he has dabbled in AI over the past few years as it reared its head in classes, assignments and faculty meetings. He came away from Feldstein's sessions over the past few months with the outlines of not one but two courses: a world religion survey, which he designed last summer, and a course in pastoral care. The latter, he said, is a "specialty class" for ministers-in-training who are getting their first taste of interacting with congregation members. "They're doing field work," he said, "and this particular class is going to cover the functions they would have if they were serving in pastoral ministry." The course will cover everything from the business of running a congregation to the teaching and counseling duties of a pastor to the "prophetic" role: preaching and teaching the Bible, shepherding the congregation and offering spiritual guidance.

Wilson said the AI let him tweak the course design in response to test users' suggestions. "By the end, my experience was that I was working with something valuable," he said. He is offering the class this semester. 
"I got a very good course design, with all the parameters that I was looking for," he said. Geneva Dampare, director of strategy and operations at the United Negro College Fund, said the organization invited six instructors from five HBCUs to Feldstein's workshop. Dampare, who has an instructional design background, joined as well. Many faculty at these institutions, she said, don't see AI as the menace that other instructors do. For them, it's a kind of equalizer at colleges that don't typically offer a perk like instructional designers. But by the end of the process last November, Dampare said, many instructors "could comfortably speak about AI, speak about how they are integrating the ALDA tool into the curriculum development that they're doing for next semester or future semesters." This story was produced by The 74 and reviewed and distributed by Stacker. © Stacker Media, LLC.

ABC News
12 hours ago
- Entertainment
- ABC News
Could a new copyright lawsuit from Disney change the way we use AI?
Disney and Universal are suing AI image generator Midjourney, in what could be a landmark case for copyright and generative AI. Could it change how creative industries deal with machine-made 'art'? Also, the Australian government is forcing Apple to loosen its App Store restrictions, allowing iPhone users to download apps from outside the walled garden. What might that mean for developers and everyday users? Plus, a researcher exposes a major privacy flaw, revealing every phone number linked to a Google account using just one Gmail address. And we unpack 'vibe-coding', the strange new world where AI writes code based on vibes, not logic.

GUESTS: Alex Kidman, freelance tech journalist and editor, and Georgia Dixon, Managing Editor of WhistleOut Singapore.

This episode of Download This Show was made on Gadigal land. Technical production by Craig Tilmouth and Carey Dell.


India Today
13 hours ago
- Entertainment
- India Today
Midjourney launches V1 AI video generation model right after Disney accuses it of plagiarism
Midjourney, the AI startup famous for its surreal image-generation tools, is making a bold leap into video. Recently, the company unveiled V1, its long-awaited video-generation model that promises to breathe life into your static images. It's a big move for Midjourney, throwing the company into direct competition with other big hitters like OpenAI, Runway, Adobe and Google.

V1 is designed as an image-to-video model, allowing users to transform either their own uploaded pictures or Midjourney's AI-generated images into short five-second video clips. Unlike the company's sibling image models, which are also accessible via Discord, V1 is web-only at launch.

And it's not just videos Midjourney has in its sights. In a blog post, CEO David Holz set out some pretty ambitious goals for the company's AI, saying V1 is just the next stepping stone toward real-time 'open-world simulations.' The company also revealed its plans to branch into 3D renderings and real-time generative models down the line.

While Midjourney's image tools have long appealed to artists and designers, the company has taken a slightly different tack with video. Many of its rivals, such as Sora by OpenAI, Runway's Gen-4, Firefly by Adobe and Veo 3 by Google, are going after commercial filmmakers and studios with highly controllable AI tools. Midjourney, however, is positioning itself as more of a creative playground for those looking for something a little more playful.

V1 AI video generation model: Pricing and availability

Despite this, Midjourney is pushing ahead. Video generation doesn't come cheap, though. V1 consumes eight times more credits per clip than Midjourney's still-image tools, so subscribers will burn through their monthly allowances far faster. 
At launch, Basic subscribers, who pay $10 (around Rs 866) per month, can access V1, but unlimited video generation is limited to the $60 (around Rs 5,200) Pro and $120 (approximately Rs 10,400) Mega plans, and only in 'Relax' mode, which produces videos more slowly. However, the company says it will review this pricing structure in the coming weeks as it gathers feedback from users.

As for the tools themselves, V1 offers a surprising level of control. You can opt for an 'auto' mode that lets the AI generate motion for you or a 'manual' mode that accepts text prompts to dictate exactly how you want your animation to move. Plus, there are settings for adjusting movement intensity: 'low motion' if you want subtle shifts, or 'high motion' for more energetic effects. Clips last five seconds by default but can be extended up to 21 seconds in four-second increments.

Disney accuses Midjourney of plagiarism

That said, Midjourney is entering the video arena under a legal cloud. Only a week ago, Disney and Universal sued the startup over its image-generation models, claiming they can produce unauthorised versions of famous characters like Darth Vader and Homer Simpson. It's part of a growing backlash across Hollywood as studios grow nervous about AI tools replacing human creatives, and AI companies face questions about training data and copyright.

Early examples of V1's output suggest Midjourney is sticking to its trademark surreal aesthetic rather than aiming for hyper-realism, the sort of style that fans of the platform have come to love. The initial reaction from users has been mostly positive so far, though it's still too early to tell how V1 will stack up against more established players like Runway and Sora.
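The numbers above lend themselves to a quick back-of-the-envelope check. The sketch below takes the two figures the article does report, the roughly 8x per-clip credit cost and the 5-to-21-second clip range in 4-second steps, and works out what they imply. The monthly credit allowance is a hypothetical placeholder, since actual plan allowances are not given here.

```python
# Back-of-the-envelope sketch of the figures reported above.
# The 8x multiplier and the 5s-default / 21s-max clip lengths come from
# the article; the monthly allowance is a HYPOTHETICAL placeholder.

IMAGE_JOB_COST = 1            # one still-image job, in arbitrary credit units
VIDEO_MULTIPLIER = 8          # article: a video clip costs ~8x a still image
HYPOTHETICAL_ALLOWANCE = 200  # placeholder monthly credit budget (not from the article)

video_job_cost = IMAGE_JOB_COST * VIDEO_MULTIPLIER

# How far the same budget stretches for images vs. video clips.
images_per_month = HYPOTHETICAL_ALLOWANCE // IMAGE_JOB_COST   # 200
videos_per_month = HYPOTHETICAL_ALLOWANCE // video_job_cost   # 25
print(f"images: {images_per_month}, videos: {videos_per_month}")

# Clips start at 5 seconds and extend in 4-second steps up to 21 seconds.
clip_lengths = list(range(5, 22, 4))
print(f"possible clip lengths (s): {clip_lengths}")  # [5, 9, 13, 17, 21]
```

Under these assumptions, the same budget that covers 200 images covers only 25 clips, which is the "burn through allowances far faster" effect in concrete terms, and a clip can take exactly five lengths between 5 and 21 seconds.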


Geeky Gadgets
13 hours ago
- Entertainment
- Geeky Gadgets
Midjourney AI Video Model Officially Launches: Refining Storytelling Through Motion
Have you ever imagined bringing a still image to life, transforming a single frame into a dynamic, moving story? Midjourney's latest video AI makes that possible. The tool doesn't just animate images; it crafts visually striking, high-resolution videos with seamless motion and artistic flair. Whether you're a content creator, a marketer, or just someone who loves experimenting with new tech, it opens up creative possibilities that were once reserved for high-budget studios. And it's not just about making videos: it's about rethinking how we tell stories through motion.

In this exploration, Olivio Sarikas looks at how Midjourney's video AI is reshaping the creative landscape by turning static images into polished animations. From its dynamic camera effects to its customizable motion styles, the tool offers a level of control and refinement that sets it apart from traditional video-generation models. But it's not without its quirks; like any emerging technology, it has limitations. So what makes this tool 'crazy good,' and where does it still have room to grow?

Midjourney Video AI Overview

Core Functionality: Turning Images into Motion

Midjourney's video AI distinguishes itself by focusing on transforming images into videos, setting it apart from traditional text-to-video models. This approach lets you use high-quality images as the foundation for video creation, ensuring a visually rich starting point. The tool supports a diverse range of styles, including realistic, 3D, anime, cartoon, and artistic effects, giving you the flexibility to align the output with your creative vision. Whether you are producing content for social media, marketing campaigns, or professional projects, the model adapts to your needs. 
By emphasizing image-based video generation, Midjourney's tool offers a more focused and refined approach, allowing creators to achieve polished results without the complexities often associated with text-to-video models.

Key Strengths of the Model

Midjourney's video AI brings several notable strengths to the forefront, making it a valuable asset for creators aiming to produce high-quality animations. Its standout features include:
- Consistency in Motion: The model generates smooth, detailed animations with minimal visual distortions, ensuring a professional-grade final product.
- High-Resolution Output: Videos are rendered in high resolution, making them suitable for both online platforms and formal presentations.
- Dynamic Camera Effects: Pans, zooms, and transitions add depth and realism to videos, enhancing viewer engagement.
- Artistic Enhancements: A variety of creative filters and effects can be applied to elevate the visual appeal of animations.

These strengths make the tool particularly appealing for creators who prioritize visual quality and creative flexibility, whether novice or experienced professional.

Video: 'Midjourney's New Video AI Is CRAZY Good!' (YouTube) 
Customization Options for Creative Control

One of the most impressive aspects of Midjourney's video AI is its extensive customization, which lets you tailor the video-creation process to your specific requirements. Key features include:
- Motion Modes: Choose between low-motion and high-motion settings to achieve the desired animation style, from subtle movements to dynamic transitions.
- Manual and Automatic Modes: Opt for manual prompts for precise control over the creative process, or use automatic settings for faster results.
- Aspect Ratio and Resolution: Adjust these parameters to meet the demands of your project and ensure compatibility with various platforms.
- Video Sequence Extension: Add new sequences to existing videos, allowing longer, more complex animations.

These features strike a balance between accessibility and creative freedom, making the tool suitable both for beginners seeking simplicity and for experienced creators looking for advanced control. The ability to fine-tune each of these settings helps ensure the final output aligns with your vision.

Technical Insights

The video-generation process is powered by GPU-based rendering for efficient, high-quality output. Each video takes approximately eight minutes to complete, and the model generates four variations per session, letting you select the most suitable version for your project. While the initial outputs have lower resolution, built-in upscaling enhances the final quality, delivering a polished result. 
However, the AI does struggle with complex motions, such as dancing or intricate choreography, which can lead to unnatural movements or inconsistencies. Despite these limitations, its performance on simpler animations remains highly reliable, making it a strong choice for a wide range of applications.

Pricing and Subscription Tiers

Midjourney's video AI is integrated into its existing subscription plans, offering flexibility for users with varying needs and budgets:
- Standard Plan: At $24 per month, this plan offers limited video-generation capability, making it ideal for casual users or smaller projects.
- Pro Plan: At $48 per month, this plan provides unlimited access to both image and video creation, catering to professionals and frequent users.

This tiered approach lets users select a plan that matches their creative goals and budget, making the tool accessible to a broad audience.

How It Compares to Competitors

Compared with competing models such as Kling 2.1 and Veo 2.1, Midjourney's video AI stands out for smoother motion and superior visual quality. While competitors may excel in specific areas, such as text-to-video generation or niche effects, Midjourney's balanced approach to customization, style variety, and resolution makes it a versatile choice for diverse applications. Its ability to produce consistent, high-resolution animations with dynamic camera effects gives it a competitive edge, particularly for creators who value both quality and creative control.

Future Potential and Development

As an evolving technology, Midjourney's video AI is expected to advance significantly in the coming years. 
Future updates may address current challenges, such as motion inconsistencies and difficulties with complex animations, further enhancing the tool's capabilities. Additionally, the integration of new features and improvements in processing speed could expand its appeal to an even broader audience. With ongoing development, this model has the potential to redefine the possibilities of video generation, opening up new creative opportunities for users across various industries. Its current capabilities already position it as a valuable tool, and its future advancements are likely to solidify its status as a leader in the field of image-to-video transformation. Media Credit: Olivio Sarikas