Everything, everywhere, all Firefly AI: New Adobe app launches for iPhone and Android

Yahoo · 2 days ago

If you purchase an independently reviewed product or service through a link on our website, BGR may receive an affiliate commission.
Generative AI products that can create amazing images and videos indistinguishable from real ones have practically democratized photoshopping. You no longer need years of training or expensive software to create and edit any kind of image or video. Just issue commands in natural language, and the advanced AI model of your choice will deliver stunning results in seconds.
You'd think these developments would directly threaten Adobe, the creator of Photoshop. But Adobe isn't backing down. Instead, the company has adapted its tools to take advantage of generative AI innovations. Products like Photoshop and Firefly already let you use AI to brainstorm and create images and videos tailored to your needs.
Adobe isn't even trying to one-up the likes of OpenAI, Google, and other AI firms that might seem like competitors. Instead, the company is embracing those alternatives, integrating them into apps like Firefly.
With that approach, Firefly can become a one-stop shop for AI-assisted photo and video creation. Adobe has just expanded the list of AI partners in the Firefly app and released iPhone and Android versions.
Adobe hosted its Max event in London a few weeks ago, where it announced several big updates to Firefly, including support for high-end third-party AI models and a new Firefly Boards feature designed to help teams collaborate on AI-generated content.
Adobe also confirmed at Max that iPhone and Android Firefly apps were coming soon, though it didn't share release dates.
Fast-forward to June 17th, and Adobe has released the Firefly app for iPhone and Android. Along with it, Adobe announced new partnerships with third-party genAI providers for generating and editing photos and videos, plus new Firefly Boards features.
You can use Adobe's own models, also called Firefly, in the Firefly apps to generate photos and videos. But if you prefer something from the competition, Firefly gives you that option too.
Here's the current list of partners, including the new AI models Adobe announced on Tuesday:
Image Models: Black Forest Labs' Flux 1.1 Pro and Flux.1 Kontext; Ideogram's Ideogram 3.0; Google's Imagen 3 and Imagen 4; OpenAI's image generation model; Runway's Gen-4 Image
Video Models: Google's Veo 2 and Veo 3; Luma AI's Ray2; Pika's text-to-video generator
Of those, Ideogram, Luma, Pika, and Runway are new Adobe partners for Firefly.
The Firefly app for iPhone and Android is available to download now, so you can create AI content 'wherever inspiration strikes.'
The mobile app gives you quick access to tools you might already use in the desktop version of Firefly, including Generative Fill, Generative Expand, Text to Image, Text to Video, and Image to Video.
Creators can choose between Adobe's Firefly models and third-party frontier AI from Google and OpenAI.
The Firefly mobile app lets you save your creations to your Creative Cloud account, making it easy to switch between mobile and desktop without interrupting your work.
One big advantage of using the Firefly app instead of going directly to OpenAI, Google, or other genAI tools is that it brings everything together in one place. That's especially useful if you're using multiple content generation platforms for a single project.
That's exactly what Adobe is aiming for. 'We built the Firefly app to be the ultimate one-stop shop for creative experimentation, where you can explore different AI models, aesthetics, and media types all in one place,' said Adobe's vice president of generative AI, Alexandru Costin. 'Every new partner model we add gives creators even more flexibility to experiment, iterate, and push their ideas further.'
Adobe also addressed content safety, saying a durable 'nutrition label' will be attached to everything created in the Firefly apps. The label identifies whether Firefly AI or a partner model was used, though it's unclear whether it will be visibly marked on the content itself.
You'll need an Adobe account and a plan to unlock all Firefly features. Access to third-party models depends on your subscription. In-app purchases include a Firefly Mobile Monthly plan ($4.99) and a Firefly Mobile Yearly plan ($49.99).
Adobe also introduced new features for Firefly Boards, which debuted a few weeks ago.
Firefly Boards lets you generate video using either the Firefly Video model or a model from an Adobe partner. You can also make iterative edits to images using the AI model of your choice.
The feature helps organize your Boards with a single click so everything's ready for a presentation. Adobe Docs can also be linked to Boards.