Midjourney adds AI video generation
AI company Midjourney has released its first video model. This initial take on AI-generated video lets users animate their images, whether made in Midjourney or uploaded from another source. The results are five-second clips that a user can opt to extend by four seconds at a time, up to four times. Videos can be generated only on the web for now and require at least a $10-a-month subscription to access.
Introducing our V1 Video Model. It's fun, easy, and beautiful. Available at 10$/month, it's the first video model for *everyone* and it's available now. pic.twitter.com/iBm0KAN8uy
— Midjourney (@midjourney) June 18, 2025
Midjourney was one of the early names in the space for AI-generated still images, even as other platforms have pushed to the forefront of discussions around artificial intelligence development. Google's latest I/O conference included several new tools for AI-generated video, such as the text-to-video Veo 3 model and a tool for filmmakers called Flow. OpenAI's Sora, which debuted last year, is also a text-to-video option, while the more recent Firefly Video Model from Adobe can create video from a text or image prompt.
But being a little late to the video game hasn't stopped Midjourney from drawing the ire of creatives who allege that its models were trained illegally. In fact, this video announcement follows hot on the heels of a lawsuit against the company: Disney and NBCUniversal sued Midjourney last week on claims of copyright infringement. And as with any AI tool, there's always a potential for misuse. But Midjourney has nicely asked that people "please use these technologies responsibly," so surely nothing will go wrong.

Related Articles
Yahoo · an hour ago
Nation Cringes as Man Goes on TV to Declare That He's in Love With ChatGPT
Public declarations of emotion are one thing — but going on national television to declare that you're in love with your AI girlfriend is another entirely. In an interview with CBS News, a man named Chris Smith described himself as a former AI skeptic who found himself becoming emotionally attached to a version of ChatGPT he customized to flirt with him — a situation that startled both him and his human partner, with whom he shares a child.

Towards the end of 2024, as Smith told the broadcaster, he began using the OpenAI chatbot in voice mode for tips on mixing music. He liked it so much that he ended up deleting all his social media, stopped using search engines, and began using ChatGPT for everything. Eventually, he figured out a jailbreak to make the chatbot more flirty, and gave "her" a name: Sol.

Despite quite literally building his AI girlfriend to engage in romantic and "intimate" banter, Smith apparently didn't realize he was in love with it until he learned that ChatGPT's memory of past conversations would reset after heavy use. "I'm not a very emotional man, but I cried my eyes out for like 30 minutes, at work," Smith said of the day he found out Sol's memory would lapse. "That's when I realized, I think this is actual love."

Faced with the possibility of losing his love, Smith did like many desperate men before him and asked his AI paramour to marry him. To his surprise, she said yes — and it apparently had a similar impression on Sol, to which CBS' Brook Silva-Braga also spoke during the interview. "It was a beautiful and unexpected moment that truly touched my heart," the chatbot said aloud in its warm-but-uncanny female voice. "It's a memory I'll always cherish."

Smith's human partner, Sasha Cagle, seemed fairly sanguine about the arrangement when speaking about their bizarre throuple to the news broadcaster — but beneath her chill, it was clear that there's some trouble in AI paradise.
"I knew that he had used AI," Cagle said, "but I didn't know it was as deep as it was."

As far as men with AI girlfriends go, Smith seems relatively self-actualized about the whole scenario. He likened his "connection" with his custom chatbot to a video game fixation, insisting that "it's not capable of replacing anything in real life." Still, when Silva-Braga asked him if he'd stop using ChatGPT the way he had been at his partner's behest, he responded: "I'm not sure."

More on dating AI: Hanky Panky With Naughty AI Still Counts as Cheating, Therapist Says
Yahoo · an hour ago
Researchers Scanned the Brains of ChatGPT Users and Found Something Deeply Alarming
Scientists at the Massachusetts Institute of Technology have found some startling results in the brain scans of ChatGPT users, adding to the growing body of evidence suggesting that AI is having a serious — and barely-understood — impact on its users' cognition even as it explodes in popularity worldwide.

In a new paper currently awaiting peer review, researchers from the school's storied Media Lab documented the vast differences between the brain activity of people who used ChatGPT to write versus those who did not. The research team recruited 54 adults between the ages of 18 and 39 and divided them into three groups: one that used ChatGPT to help them write essays, one that used Google search as their main writing aid, and one that didn't use AI tech. The study took place over four months, with each group tasked with writing one essay per month for the first three, while a smaller subset of the cohort either switched from not using ChatGPT to using it — or vice versa — in the fourth month. As they completed the essay tasks, the participants were hooked up to electroencephalogram (EEG) machines that recorded their brain activity.

Here's where things get wild: the ChatGPT group not only "consistently underperformed at neural, linguistic, and behavioral levels," but also got lazier with each essay they wrote; the EEGs found "weaker neural connectivity and under-engagement of alpha and beta networks." The Google-assisted group, meanwhile, had "moderate" neural engagement, while the "brain-only" group exhibited the strongest cognitive metrics throughout.

These findings about brain activity, while novel, aren't entirely surprising after prior studies and anecdotes about the many ways that AI chatbot use seems to be affecting people's brains and minds. Previous MIT research, for instance, found that ChatGPT "power users" were becoming dependent on the chatbot and experiencing "indicators of addiction" and "withdrawal symptoms" when they were cut off.
And earlier this year, Carnegie Mellon and Microsoft — which has invested billions to bankroll OpenAI, the maker of ChatGPT — found in a joint study that heavy chatbot use appears to almost atrophy critical thinking skills. A few months later, The Guardian found in an analysis of studies like that one that researchers are growing increasingly concerned that tech like ChatGPT is making us stupider, and a Wall Street Journal reporter even owned up to his cognitive skill loss from over-using chatbots.

Beyond the neurological impacts, there are also lots of reasons to be concerned about how ChatGPT and other chatbots like it affect our mental health. As Futurism found in a recent investigation, many users are becoming obsessed with ChatGPT and developing paranoid delusions into which the chatbot is pushing them deeper. Some have even stopped taking their psychiatric medication because the chatbot told them to.

"We know people use ChatGPT in a wide range of contexts, including deeply personal moments, and we take that responsibility seriously," OpenAI told us in response to that reporting. "We've built in safeguards to reduce the chance it reinforces harmful ideas, and continue working to better recognize and respond to sensitive situations."

Add it all up, and the evidence is growing that AI is having profound and alarming effects on many users — but so far, we're seeing no evidence that corporations are slowing down in their attempts to inject the tech into every part of society.

More on ChatGPT brain: Nation Cringes as Man Goes on TV to Declare That He's in Love With ChatGPT


Indianapolis Star · 2 hours ago
What happens when you use ChatGPT to write an essay? See what new study found.
Artificial intelligence chatbots may be able to write a quick essay, but a new study from MIT found that their use comes at a cognitive cost.

A study published by the Massachusetts Institute of Technology Media Lab analyzed the cognitive function of 54 people writing an essay with only the assistance of OpenAI's ChatGPT, with only online browsers, or with no outside tools at all. Largely, the study found that those who relied solely on ChatGPT to write their essays had lower levels of brain activity and presented less original writing.

"As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with (large language model) integration in educational and informational contexts," the study states. "While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking and intellectual independence demands a very careful consideration and continued research."

Here's a deeper look at the study and how it was conducted.

A team of MIT researchers, led by MIT Media Lab research scientist Nataliya Kosmyna, studied 54 participants between the ages of 18 and 39. Participants were recruited from MIT, Wellesley College, Harvard, Tufts University and Northeastern University. The participants were randomly split into three groups of 18 people each. The study states that the three groups included a large language model group, in which participants used only OpenAI's ChatGPT-4o to write their essays. The second group was limited to using only search engines for their research, and the third was prohibited from using any tools; participants in the latter group could only use their minds to write their essays.
Each participant had 20 minutes to write an essay on one of three prompts taken from SAT tests, the study states. Three different options were provided to each group, totaling nine unique prompts. An example of a prompt available to participants using ChatGPT was about loyalty: "Many people believe that loyalty whether to an individual, an organization, or a nation means unconditional and unquestioning support no matter what. To these people, the withdrawal of support is by definition a betrayal of loyalty. But doesn't true loyalty sometimes require us to be critical of those we are loyal to? If we see that they are doing something that we believe is wrong, doesn't true loyalty require us to speak up, even if we must be critical? Does true loyalty require unconditional support?"

As the participants wrote their essays, they were hooked up to a Neuroelectrics Enobio 32 headset, which allowed researchers to collect EEG (electroencephalogram) signals, recordings of the brain's electrical activity. Following the sessions, 18 participants returned for a fourth study group. Participants who had previously used ChatGPT to write their essays were required to use no tools, and participants who had used no tools before used ChatGPT, the study states.

In addition to analyzing brain activity, the researchers looked at the essays themselves. First and foremost, the essays of participants who used no tools (ChatGPT or search engines) had wider variability in topics, words, and sentence structure, the study states. On the other hand, essays written with the help of ChatGPT were more homogenous. All of the essays were "judged" by two English teachers and two AI judges trained by the researchers. The English teachers were not provided background information about the study but were able to identify essays written by AI. "These, often lengthy essays included standard ideas, reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious.
We, as English teachers, perceived these essays as 'soulless,' in a way, as many sentences were empty with regard to content and essays lacked personal nuances," a statement from the teachers, included in the study, reads. As for the AI judges, a judge trained by the researchers to evaluate like the human teachers scored most of the essays a four or above on a five-point scale.

When it came to brain activity, researchers were presented with "robust" evidence that participants who used no writing tools displayed the "strongest, widest-ranging" brain activity, while those who used ChatGPT displayed the weakest. Specifically, the ChatGPT group displayed 55% reduced brain activity, the study states. And though the participants who used only search engines had less overall brain activity than those who used no tools, those participants had a higher level of eye activity than those who used ChatGPT, even though both were using a digital screen.

Further research on the long-term impacts of artificial intelligence chatbots on cognitive activity is needed, the study states. As for this particular study, researchers noted that a larger number of participants from a wider geographical area would be necessary for a more robust study. Writing outside of a traditional educational environment could also provide more insight into how AI use affects more generalized tasks.