Using Firefly's generative AI models, digital artists will be able to create professional-quality illustrations from text prompts. The models can both create and transform audio, video, illustrations and 3D models using prompts similar to those used in Dall-E and ChatGPT, the company said. Adobe Firefly will be part of a series of new Adobe Sensei generative AI services across Adobe's clouds.
Firefly will be integrated across the Adobe suite in programs such as Premiere Pro, Illustrator, After Effects and Photoshop. Creators can sign up on Adobe's site to request access; the features will be accessible through a closed beta program later this year.
The first model of the Firefly family is trained on images from Adobe's Stock photo catalog, along with openly licensed content and media from the public domain, the company said. Adobe says this training set is intended to reduce the risk of copyright disputes, and that stock photographers and artists will be compensated for the use of their works in training these AI models.
Here are some of the initial Firefly features:
- Text to image: Creators can type their vision into a box and generate an image of it in seconds.
- Generate text styles and textures: This feature applies text-to-image generation to typography. Firefly interprets a prompt and applies the described textures and styles to the words, letting creators restyle text into essentially any look with a simple description.
Now, just a month after its initial announcement, Adobe announced today that it is already working on a host of upgrades for Creative Cloud video and audio applications such as Premiere Pro and Adobe After Effects. The additions should be coming to Firefly’s beta program later in 2023.
The new features announced this week are designed to save professional editors time on tasks such as boosting colour levels, inserting placeholder images, adding effects and recommending b-roll for a given project. Creators can simply type their ideas into Firefly’s AI text prompt and let the algorithm work away.
This includes “text to colour enhancements,” a capability that can adjust brightness and saturation levels and shift the apparent time of day in a clip using natural language prompts, Adobe says.
Some newly announced features include:
- Advanced music and sound effects: Using text prompts, users can ask Firefly to generate custom sounds and music to fit a specific mood and scene, either as temporary or final tracks.
- Animated fonts, graphics and logos: Subtitles, logos and title cards can be generated quickly with custom animations based on the creator’s preferences described in words.
- Script and b-roll capabilities: Firefly will accelerate pre-production, production and post-production workflows, using AI analysis of script text to automatically create storyboards and pre-visualizations while also recommending b-roll clips for rough and final cuts.
Adobe says there will be how-to guides to walk new users through the process of using these features.