Midjourney launches V1, its first AI video generation model, officially entering the generative video market

Yahoo · 5 hours ago

Everyone may be playing with ChatGPT's image generation these days, but when it comes to the original and still most capable AI image generation service, it has to be Midjourney. On Wednesday the company announced its first AI video creation model, V1, officially entering the generative video market. Users simply upload an image or photo, and V1 automatically generates a video roughly 4 to 5 seconds long.
After a photo is uploaded, V1 can generate a video fully automatically, though some settings are available: a manual mode lets users describe specific animation effects in text, and camera movement can also be adjusted. From a single photo, V1 automatically generates a 480p video of about 5 seconds, but users can extend a generated video by four seconds at a time, up to four times, for a maximum length of about 21 seconds. To try V1 now, the USD $10/month Basic subscription gives access, while Pro ($60/month) and Mega ($120/month) subscribers can generate videos without limit in "Relax" mode. Midjourney says it will re-evaluate the video model's pricing over the coming month.
Introducing our V1 Video Model. It's fun, easy, and beautiful. Available at 10$/month, it's the first video model for *everyone* and it's available now. pic.twitter.com/iBm0KAN8uy
— Midjourney (@midjourney) June 18, 2025
Midjourney's ambitions for its AI video model go beyond supplying supplementary footage (B-roll) for film or producing advertisements. According to TechCrunch, Midjourney founder David Holz says the next step for the AI video model is to build AI models capable of "open-world simulations that run in real time." Midjourney is currently facing a copyright-infringement lawsuit from Disney and Universal Pictures; whether that becomes a stumbling block for the new service remains to be seen.
More coverage:
TechCrunch
Disney and Universal jointly sue AI image generator Midjourney, calling it a "bottomless pit of infringement and plagiarism" and seeking more than US$590 million in damages
Louis Koo x AI! A showcase of AI techniques in local filmmaking, with Google Veo 2 helping to produce high-quality video
Google I/O 2025 | Google's Veo 3 AI model can now generate matching audio tracks for videos
Follow Yahoo Tech on social media for the latest tech news and online shopping deals!
🎉📱 Tech Facebook:https://www.facebook.com/yahootechhk
🎉📱 Tech Instagram:https://www.instagram.com/yahootechhk/
🎉📱 Tech WhatsApp community: https://chat.whatsapp.com/Dg3fiiyYf3yG2mgts4Mii8
🎉📱 Tech WhatsApp channel: https://whatsapp.com/channel/0029Va91dmR545urVCpQwq2D
🎉📱 Tech Telegram channel: https://t.me/yahootechhk


Related Articles

Jony Ive Deal Removed from OpenAI Site Over Trademark Suit

Bloomberg

2 hours ago


Marketing materials and video related to a blockbuster partnership between former Apple Inc. designer Jony Ive and OpenAI Inc. were removed from the web due to a trademark dispute. Social media users noticed Sunday that a video and website hosted by OpenAI that announced the artificial intelligence company's $6.5 billion acquisition of Ive's secretive AI hardware startup IO Products were no longer online, prompting speculation about a hiccup in the agreement.

Tesla launches robotaxi rides in Austin with big promises and unanswered questions

TechCrunch

2 hours ago


Tesla has started giving rides in driverless Model Y SUVs in Austin, a decade after CEO Elon Musk began making, and breaking, myriad promises about his company's ability to launch such a service. The rollout will be the first big test of Musk's belief that it's possible to safely deploy fully autonomous vehicles using just cameras and end-to-end AI, an approach that differs from other players in the space like Waymo.

On Sunday, numerous videos shared on social media, as well as sources in the city, confirmed what Musk has been teasing for months: the rides are finally happening, at a surely coincidental flat fee of $4.20 per ride. Tesla sent early-access invitations in the past week to vetted customers, who were able to download and use the new robotaxi app on Sunday to hail rides. It's unclear how many people have received this invitation, but posts on Musk's social media platform X show that many of them went to Tesla's loudest online supporters.

The invitations, along with a new robotaxi information page published on Tesla's website on June 22, confirm the service will operate every day from 6:00 a.m. to 12:00 a.m. but "may be limited or unavailable in the event of inclement weather." Notably, a Tesla employee will be sitting in the right front passenger seat as a "safety monitor."

The robotaxi information page also includes instructions on downloading the app, how to report a lost item, and general rules for riders. It still glosses over the kind of specifics that Waymo (the Alphabet-owned AV company that operates commercial robotaxis in Phoenix, Los Angeles, San Francisco, and Austin) has historically provided.

The robotaxi service will be small to start, according to Musk. The initial fleet will be about 10 or so 2025 Model Y SUVs operating in a narrowly defined area of South Austin.
That's in line with a first-hand account by Ed Niedermeyer, author of "Ludicrous: The Unvarnished Story of Tesla Motors," who is in Austin to monitor the robotaxi rollout. (Niedermeyer is a co-host of The Autonocast with TechCrunch editor Kirsten Korosec.)

Niedermeyer found what appears to be a Tesla robotaxi depot: a nondescript parking lot dotted with trees near Oltorf Street in South Austin. The day before the launch, he spotted several driverless Model Ys, always with an employee behind the steering wheel, entering and exiting the parking lot. Groups of other Tesla Model Y vehicles, most with manufacturer plates, were also parked there. This morning, he spotted the branded Tesla Model Y robotaxis, this time with the employee in the front passenger seat, leaving the holding area.

He observed one of the branded robotaxis, which had not yet picked up a rider, suddenly hitting its brakes two separate times, once in the middle of an intersection. It's unclear why the vehicle behaved that way. However, in a video, which TechCrunch has viewed, both instances occurred as the Tesla passed by police vehicles that were located in parking lots adjacent to the roadway.

Information gaps

Leading up to the launch, Musk shared dribs and drabs about the Tesla robotaxi launch in a few interviews and posts on X. Even now, nearly all of the information about the robotaxi launch has been provided by the company's biggest supporters.
In fact, Tesla has actively tried to suppress information about the robotaxi service. Tesla tried to block TechCrunch's public records request with the Texas Department of Transportation (TxDOT). The company has also tried to block the city of Austin from fulfilling a records request by Reuters, according to the news service.

"Tesla seeks to be as transparent as possible, however, as explained further below, some of the requested information cannot be released because it is confidential information, trade secrets, and/or business information exchanged with the TxDOT in conjunction with conducting business with TxDOT," Taylor White, senior counsel on infrastructure for Tesla, wrote in a letter to the Texas Attorney General's office in April.

One of the more interesting rollout strategies is the company's use of a human "safety monitor." It's unclear what role these safety monitors will play and how much control, if any, they will have. These employees are likely not meant to intervene if the software is about to do something wrong, but they may have access to some sort of kill switch that can stop the car if that happens.

Historically, autonomous vehicle companies like Waymo and Cruise tested their respective self-driving technology by having a human safety operator behind the wheel and a second engineer in the front passenger seat. Eventually, that might be reduced to one person sitting in the passenger seat before removing them altogether. This practice was traditionally used during the testing phase, not commercial operations.

Tesla is not using the futuristic vehicles, dubbed Cybercabs, that were revealed on October 10, 2024. Instead, the 2025 Tesla Model Y vehicles are equipped with what Musk describes as a new, "unsupervised" version of Tesla's Full Self-Driving software. Tesla will not be using its in-cabin camera during rides by default; the company says it will only be used if a rider requests support or in the case of an emergency.
It will use the camera after a ride ends to "confirm Robotaxi's readiness for its next trip."

Tesla is encouraging early-access riders to take photos and video of their experiences, although it says it "may suspend or terminate Robotaxi access" if riders violate its rules, including if they "disseminate content on a social media platform or similar medium depicting a violation of these Rules or misuse of the Robotaxi." (That includes riders agreeing not to smoke, vape, drink alcohol, do drugs, or use the robotaxi in connection with a crime.)

Musk and other Tesla executives praised the milestone on X, with Ashok Elluswamy, the head of the company's self-driving team, posting a photo of the "Robotaxi launch party" from an undisclosed location. "Super congratulations to the @Tesla_AI software & chip design teams on a successful @Robotaxi launch!! Culmination of a decade of hard work," Musk wrote.

But at least one rider on Sunday reported an experience where Tesla's remote support team had to help in some way. It's not immediately clear what happened during that ride, but the same rider later said the ride was very smooth.

Your AI use could have a hidden environmental cost

Yahoo

3 hours ago


Whether it's answering work emails or drafting wedding vows, generative artificial intelligence tools have become a trusty copilot in many people's lives. But a growing body of research shows that for every problem AI solves, hidden environmental costs are racking up.

Each word in an AI prompt is broken down into clusters of numbers called "token IDs" and sent to massive data centers, some larger than football fields, powered by coal or natural gas plants. There, stacks of large computers generate responses through dozens of rapid calculations. The whole process can take up to 10 times more energy to complete than a regular Google search, according to a frequently cited estimate by the Electric Power Research Institute.

So, for each prompt you give AI, what's the damage? To find out, researchers in Germany tested 14 large language model (LLM) AI systems by asking them both free-response and multiple-choice questions. Complex questions produced up to six times more carbon dioxide emissions than questions with concise answers. In addition, "smarter" LLMs with more reasoning abilities produced up to 50 times more carbon emissions than simpler systems to answer the same question, the study reported.

"This shows us the tradeoff between energy consumption and the accuracy of model performance," said Maximilian Dauner, a doctoral student at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study published Wednesday.

Typically, these smarter, more energy-intensive LLMs have tens of billions more parameters (the learned values used to process token IDs) than smaller, more concise models. "You can think of it like a neural network in the brain. The more neuron connections, the more thinking you can do to answer a question," Dauner said.
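The "token IDs" idea can be illustrated with a toy sketch. The vocabulary and ID numbers below are invented purely for demonstration and do not correspond to any real model's tokenizer:

```python
import re

# Made-up vocabulary mapping word pieces to numeric "token IDs".
# Real tokenizers learn tens of thousands of entries from data.
toy_vocab = {"how": 101, "does": 102, "ai": 103, "use": 104, "energy": 105, "?": 106}

def toy_tokenize(prompt: str) -> list[int]:
    """Split a prompt into words and punctuation, then map each piece to an ID."""
    pieces = re.findall(r"\w+|[^\w\s]", prompt.lower())
    return [toy_vocab[p] for p in pieces if p in toy_vocab]

print(toy_tokenize("How does AI use energy?"))  # [101, 102, 103, 104, 105, 106]
```

It is these lists of numbers, not the words themselves, that the data center's hardware crunches to produce a response.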
Complex questions require more energy in part because of the lengthy explanations many AI models are trained to provide, Dauner said. If you ask an AI chatbot to solve an algebra problem, it may take you through the steps it took to find the answer, he said.

"AI expends a lot of energy being polite, especially if the user is polite, saying 'please' and 'thank you,'" Dauner explained. "But this just makes their responses even longer, expending more energy to generate each word."

For this reason, Dauner suggests users be more straightforward when communicating with AI models: specify the length of the answer you want and limit it to one or two sentences, or say you don't need an explanation at all.

Most important, Dauner's study highlights that not all AI models are created equal, said Sasha Luccioni, the climate lead at AI company Hugging Face, in an email. Users looking to reduce their carbon footprint can be more intentional about which model they choose for which task.

"Task-specific models are often much smaller and more efficient, and just as good at any context-specific task," Luccioni explained. If you are a software engineer who solves complex coding problems every day, an AI model suited for coding may be necessary. But for the average high school student who wants help with homework, relying on powerful AI tools is like using a nuclear-powered digital calculator.

Even within the same AI company, different model offerings can vary in their reasoning power, so research what capabilities best suit your needs, Dauner said. When possible, Luccioni recommends going back to basic sources, such as online encyclopedias and phone calculators, to accomplish simple tasks.

Putting a number on the environmental impact of AI has proved challenging.
The study noted that energy consumption can vary based on the user's proximity to local energy grids and the hardware used to run AI models, which is partly why the researchers chose to represent carbon emissions as a range, Dauner said.

Furthermore, many AI companies don't share information about their energy consumption, or details like server size or optimization techniques that could help researchers estimate it, said Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside, who studies AI's water consumption. "You can't really say AI consumes this much energy or water on average — that's just not meaningful. We need to look at each individual model and then (examine what it uses) for each task," Ren said.

One way AI companies could be more transparent is by disclosing the amount of carbon emissions associated with each prompt, Dauner suggested. "Generally, if people were more informed about the average (environmental) cost of generating a response, people would maybe start thinking, 'Is it really necessary to turn myself into an action figure just because I'm bored?' Or 'do I have to tell ChatGPT jokes because I have nothing to do?'" Dauner said.

Additionally, as more companies push to add generative AI tools to their systems, people may not have much choice in how or when they use the technology, Luccioni said. "We don't need generative AI in web search. Nobody asked for AI chatbots in (messaging apps) or on social media," Luccioni said. "This race to stuff them into every single existing technology is truly infuriating, since it comes with real consequences to our planet."

With less available information about AI's resource usage, consumers have less choice, Ren said, adding that regulatory pressure for more transparency is unlikely to come to the United States anytime soon. Instead, the best hope for more energy-efficient AI may lie in the cost savings of using less energy.
'Overall, I'm still positive about (the future). There are many software engineers working hard to improve resource efficiency,' Ren said. 'Other industries consume a lot of energy too, but it's not a reason to suggest AI's environmental impact is not a problem. We should definitely pay attention.'
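To see why per-prompt disclosure would be informative, here is a back-of-envelope sketch. The per-search energy figure and grid carbon intensity below are illustrative assumptions, not measurements; only the "up to 10 times more energy" multiplier comes from the EPRI estimate cited earlier in the article:

```python
# Back-of-envelope sketch; the specific numbers are illustrative assumptions.
search_wh = 0.3            # assumed energy per ordinary web search, in watt-hours
llm_multiplier = 10        # "up to 10 times more energy" per the EPRI estimate
grid_g_co2_per_kwh = 400   # assumed grid carbon intensity, grams CO2 per kWh

llm_wh = search_wh * llm_multiplier          # energy for one LLM prompt, in Wh
g_co2 = llm_wh / 1000 * grid_g_co2_per_kwh   # convert Wh to kWh, then to grams CO2

print(f"~{llm_wh:.1f} Wh per prompt, ~{g_co2:.2f} g CO2 on this grid")
```

Changing `grid_g_co2_per_kwh` (say, to ~50 for a low-carbon grid) changes the result several-fold, which echoes the study's point that the same prompt's footprint varies with location and hardware.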
