Adobe Expands Firefly AI Suite with New Image, Vector, and Video Models

Adobe has unveiled major updates to its Firefly generative AI ecosystem, including new image and vector models, a revamped web app, and expanded access to video generation tools. The announcement signals Adobe’s continued push to integrate AI across its creative suite, with the stated aim of giving users more powerful, flexible, and accessible content generation tools.
New Image Model with Improved Control and Quality
At the core of the update is the launch of Firefly Image Model 4, which Adobe describes as a significant leap in image generation capabilities. Compared to its predecessors, this model offers enhanced speed and quality, along with more refined control over elements like camera angles, style, structure, and zoom. It supports image outputs up to 2K resolution.
An advanced version, Image Model 4 Ultra, has also been introduced. It is designed to render highly detailed and complex scenes with improved accuracy and control, making it suitable for professional-grade outputs that require intricate visual elements.
Editable Vector Artwork and Logo Design
In addition to image generation, Adobe introduced a Firefly Vector Model. This tool creates editable vector-based artwork—enabling users to iterate on logos, icons, packaging, and scenes. The model supports design workflows by producing vector files that can be easily modified within tools like Adobe Illustrator.
Video Generation and Expanded Access
The Firefly video model, previously available in a limited beta, is now accessible to all users. It generates video clips from text or image prompts, with controls for camera angles, start and end frames, atmospheric effects, and motion design. Clips can be rendered at resolutions up to 1080p.
Unified AI Web App and Cross-Platform Access
The redesigned Firefly web app now serves as a central hub for all of Adobe’s generative models, including the new ones, as well as third-party models from OpenAI, Google, and Black Forest Labs (Flux). Users can switch between these models seamlessly, and all outputs carry content credentials for transparency and attribution.
Adobe is also developing a mobile version of the Firefly app, further extending accessibility across devices.
Firefly Boards and Collaborative Tools
The company has launched a public test of Firefly Boards, a digital canvas tool for ideation and visual collaboration. It supports AI-generated content, image remixing, and team collaboration, akin to other moodboarding platforms like Cove, Kosmik, and Visual Electric. Boards is accessible within the Firefly web app.
API Availability and Content Authenticity
Adobe is making its Text-to-Image API and Avatar API widely available, and has introduced a Text-to-Video API in beta. These are offered via the Firefly Services collection for developers and enterprise users.
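For orientation, the sketch below shows roughly what a text-to-image request to Firefly Services could look like from a developer’s side. It is a minimal illustration only: the endpoint path, header names, and payload fields are assumptions based on common REST conventions rather than Adobe’s documented contract, and real calls require Firefly Services credentials.

    # Minimal, illustrative text-to-image request sketch (not Adobe's documented API).
    # The endpoint path, header names, and payload fields are assumptions for demonstration.
    import requests

    ENDPOINT = "https://firefly-api.adobe.io/v3/images/generate"  # assumed path
    ACCESS_TOKEN = "..."  # OAuth access token from Adobe IMS (placeholder)
    CLIENT_ID = "..."     # Firefly Services client ID (placeholder)

    payload = {
        "prompt": "isometric illustration of a mountain village at dusk",
        "numVariations": 1,                        # assumed field name
        "size": {"width": 2048, "height": 2048},   # assumed field name (2K output)
    }

    response = requests.post(
        ENDPOINT,
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "x-api-key": CLIENT_ID,
            "Content-Type": "application/json",
        },
        json=payload,
        timeout=60,
    )
    response.raise_for_status()
    # The response would typically contain URLs or encoded data for the generated images.
    print(response.json())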
In a bid to enhance content ownership and attribution, Adobe is also piloting a Content Authenticity web app that allows creators to attach metadata indicating authorship and permissions for AI training usage.
What’s Next
While Adobe has stated that these models will be integrated into its wider product portfolio soon, it has not provided a specific rollout timeline. However, the updates point to a clear direction: deeply embedding generative AI into the company’s creative ecosystem while ensuring transparency, editability, and cross-model compatibility.