Adobe Firefly: Unlock Creativity with Unique Generative AI Magic That’s Not Just Different, But Better for You

Adobe Firefly generative AI

In its short life, generative AI imagery has been on a roller coaster of excitement, criticism, and then back to excitement. Every update to major generators like Midjourney and Stable Diffusion brings more excitement, but also more unease, especially in the creative community.

The concerns mainly stem from where these systems get their training data: the literal source. The open internet provides a huge amount of original creative work that systems like DALL-E use for training, and artists have found that their work is used without their permission or control. While these systems don’t exactly copy art, they use their training to create something new, often mimicking styles and sometimes recycling parts of the content they were trained on.

Some artists have tried, without much success, to sue companies like Stability AI, the maker of Stable Diffusion, over copyright issues. Yet the notion that success in generative AI imagery must always come at a significant cost to the artist community is not necessarily true.

Alexandru Costin (Image credit: Lance Ulanoff)

Recently, at CES 2024, I spoke with Alexandru Costin, the VP of Generative AI at Adobe. Costin, a 20-year veteran at Adobe, has been part of the AI group for a year, and it’s been a busy time, one that’s not only affecting millions of Adobe Creative Cloud users but also shaping the company’s future.

“I really think a significant part of our software will be replaced by models; we’re shifting from a software company to an AI company,” Costin shared during our breakfast conversation.

He acknowledges that Adobe has been involved in AI for a while, such as adding the Content-Aware Fill tool to Photoshop in 2016. However, its approach has typically focused on manipulating existing images rather than creating something entirely new. He mentioned, “Our customers don’t want to generate images. They want to edit the image.”

When Adobe stepped into the world of generative imaging, its goal was to use powerful tools and models to improve existing images while also safeguarding creators and their work.

Some people think Adobe joined the generative imaging field later than others, with its innovative platform, Adobe Firefly, arriving after DALL-E, Stable Diffusion, and similar tools. However, Adobe had enough experience with models, training, and results to understand what it didn’t want to do.

Costin mentioned an early Adobe AI project called VoCo, which allowed users to alter audio recordings by providing text prompts. Despite a positive reception at the Adobe Max Conference, concerns quickly arose about VoCo being misused to create audio deepfakes. Although the technology was never released, it prompted Adobe to establish its own Content Authenticity Initiative.

“In 2018, we were reacting, but since then, we’re aiming to stay ahead and be a leader in how AI is approached,” shared Costin.

Exploring the Uncharted: The Unique AI Journey

When Adobe decided to delve into generative AI imagery, it aimed to create a tool for Photoshop users to generate images usable in various contexts, including commercial content. To achieve this, it set a high standard for training material: no copyrighted, trademarked, or branded content. Instead of scraping visual data from the open internet, Adobe opted for a different, in-house source.

Beyond its array of Creative Cloud apps like Photoshop, Premiere, After Effects, Illustrator, and InDesign, Adobe possesses a substantial stock image library called Adobe Stock. This library boasts contributions from hundreds of thousands of individuals, offering a wealth of photos, illustrations, and 3D imagery. Importantly, it is free of commercial, branded, and trademarked content, and it is moderated to exclude hate and adult imagery. “We’re utilizing hundreds of millions of assets, all trained and moderated to have no intellectual property,” Costin stated.
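
For a concrete sense of what that kind of dataset curation might look like, here is a minimal Python sketch that filters a stock-style catalog down to assets that are licensed for training and passed content moderation. The field names and filtering rules are illustrative assumptions on my part, not Adobe’s actual pipeline or schema.

```python
# Hypothetical sketch: filtering a stock-style asset catalog before training.
# Field names (license and moderation flags) are illustrative, not Adobe's schema.

from dataclasses import dataclass

@dataclass
class StockAsset:
    asset_id: str
    licensed_for_training: bool   # contributor has granted training rights
    contains_trademark: bool      # flagged by moderation
    contains_adult_content: bool  # flagged by moderation
    contains_hate_content: bool   # flagged by moderation

def eligible_for_training(asset: StockAsset) -> bool:
    """Keep only assets that are licensed and passed content moderation."""
    return (
        asset.licensed_for_training
        and not asset.contains_trademark
        and not asset.contains_adult_content
        and not asset.contains_hate_content
    )

catalog = [
    StockAsset("a1", True, False, False, False),
    StockAsset("a2", True, True, False, False),   # trademarked -> excluded
    StockAsset("a3", False, False, False, False), # unlicensed -> excluded
]

training_set = [a for a in catalog if eligible_for_training(a)]
print([a.asset_id for a in training_set])  # ['a1']
```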

Adobe utilized this data to train Firefly. It programmed the generative AI system to avoid creating images of well-known trademarked characters (like avoiding putting Bart Simpson in a tricky situation).

Costin mentioned a gray area in terms of copyright, especially for characters like a particular version of Mickey Mouse whose copyright might expire, allowing renders of that mouse down the line.
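
To illustrate the kind of guardrail Costin describes, here is a toy Python sketch of a prompt-time check that refuses requests naming blocked characters. The blocklist and matching logic are my own simplified assumptions; Adobe has not described Firefly’s actual filtering mechanism in this detail.

```python
# Minimal sketch (my illustration, not Adobe's implementation) of a prompt-time
# guard that refuses to render trademarked characters named in a request.

import re

BLOCKED_CHARACTERS = {"bart simpson", "darth vader", "pikachu"}  # illustrative list

def check_prompt(prompt: str) -> bool:
    """Return True if the prompt is allowed, False if it names a blocked character."""
    normalized = re.sub(r"\s+", " ", prompt.lower())
    return not any(name in normalized for name in BLOCKED_CHARACTERS)

print(check_prompt("a skateboarder jumping a ramp at sunset"))  # True
print(check_prompt("Bart Simpson in a tricky situation"))       # False
```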

What sets Adobe Firefly apart from Stable Diffusion and others is an interesting twist: Adobe pays its creators for the use of their work in training its AI. The payment is determined partly by how much of their creative work was used during training.
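
As a rough illustration of how a usage-proportional payment might be computed, here is a small Python sketch that splits a bonus pool across contributors in proportion to how many of their assets were used in training. The pool size, the numbers, and the formula are invented for illustration; Adobe’s actual compensation model is not spelled out here.

```python
# Back-of-the-envelope sketch of a pro-rata payout: each contributor's share of a
# bonus pool is proportional to how many of their assets were used in training.
# The pool size and asset counts are made-up numbers, not Adobe's actual formula.

def pro_rata_payouts(assets_used: dict[str, int], bonus_pool: float) -> dict[str, float]:
    total_assets = sum(assets_used.values())
    if total_assets == 0:
        return {name: 0.0 for name in assets_used}
    return {
        name: bonus_pool * count / total_assets
        for name, count in assets_used.items()
    }

payouts = pro_rata_payouts({"alice": 1200, "bob": 300, "carol": 500}, bonus_pool=10_000.0)
print(payouts)  # {'alice': 6000.0, 'bob': 1500.0, 'carol': 2500.0}
```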

Surprisingly, Adobe also permits creators to contribute generative imagery to Adobe Stock, possibly creating a self-feeding system. However, Costin views this as a win-win situation, stating, “Generative AI enhanced creativity in a big way.”

Swift Progress, No Obstacles: Racing Beyond Boundaries

Costin mentioned that the new models are strong but “need more control.” His team has trained these models to be safer for the education sector, eliminating the capability to generate inappropriate content. At the same time, they’ve left room for higher education, recognizing that adult artists in that context might require more creative options.

Adobe’s tools can’t use a single approach for everyone when it comes to generative AI. Costin clarified how Adobe Firefly deals with location. The models take into account where the user is located and examine “the skin color distribution of people living in your country” to make sure it’s mirrored in the generated images. They also do similar checks with genders and age groups. While it’s challenging to say if these efforts completely eliminate bias, it’s evident that Adobe is making an attempt to ensure its AI reflects the communities of its creators.
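
One simple way to picture that kind of demographic steering is to sample the attributes used to condition “a person” prompts from a target distribution for the user’s country. The Python sketch below is purely conceptual: the distributions are placeholder numbers and the approach is my assumption, not Adobe’s disclosed method.

```python
# Conceptual sketch only: steer generated "person" imagery toward a country's
# demographic mix by sampling conditioning attributes from a target distribution.
# The distributions below are invented placeholders, not real data or Adobe's method.

import random

TARGET_DISTRIBUTIONS = {
    "example-country": {
        "skin_tone": {"light": 0.4, "medium": 0.35, "dark": 0.25},
        "gender": {"woman": 0.5, "man": 0.5},
        "age_group": {"young": 0.3, "middle": 0.45, "older": 0.25},
    }
}

def sample_person_attributes(country: str, rng: random.Random) -> dict[str, str]:
    """Draw conditioning attributes for 'a person' prompts from the target mix."""
    attrs = {}
    for attribute, dist in TARGET_DISTRIBUTIONS[country].items():
        values, weights = zip(*dist.items())
        attrs[attribute] = rng.choices(values, weights=weights, k=1)[0]
    return attrs

rng = random.Random(0)
print(sample_person_attributes("example-country", rng))
```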

Adobe seems to be adapting effectively on that front, but it’s challenging to match the speed of progress with the necessary levels of control and public perception.

“It is impossible to trust what you see,” cautioned Costin. “This will change how people perceive the internet.” His remarks echoed those of his colleague, Adobe Chief Strategy Officer Scott Belsky, who recently pointed out that we’re entering an era where we can no longer believe our eyes: instead of “trust but verify,” it’s going to become “verify then trust.”

Fulfilling Desires: Tailored Solutions

Maybe, though, the journey could be somewhat smoother for Adobe. With Costin leading the way, Adobe is putting less emphasis on creating unique images from scratch and more on the basics of adjusting and improving images. I don’t often use Adobe Firefly to make completely new images, but I frequently use Generative Fill in Photoshop. It helps me match the image size I want by extending an empty space without changing the main part of the photo.
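
For readers curious what that canvas-extension workflow involves under the hood, here is a minimal Python/Pillow sketch that prepares the two inputs an outpainting step typically needs: the photo placed on a larger canvas and a mask marking the empty area to fill. This is my own simplification, not Photoshop’s Generative Fill implementation.

```python
# Sketch of the canvas-extension step behind an "extend the empty space" workflow:
# place the original photo on a larger canvas and build a mask marking the new
# area to be filled. This only prepares the inputs; the fill model itself is out
# of scope and this is not how Photoshop's Generative Fill actually works internally.

from PIL import Image

def prepare_outpaint_inputs(photo: Image.Image, new_width: int, new_height: int):
    canvas = Image.new("RGB", (new_width, new_height), color=(127, 127, 127))
    x = (new_width - photo.width) // 2
    y = (new_height - photo.height) // 2
    canvas.paste(photo, (x, y))

    # White = region the generator should fill, black = keep the original pixels.
    mask = Image.new("L", (new_width, new_height), color=255)
    mask.paste(0, (x, y, x + photo.width, y + photo.height))
    return canvas, mask

photo = Image.new("RGB", (800, 600), color=(30, 90, 160))  # stand-in for a real photo
canvas, mask = prepare_outpaint_inputs(photo, 1024, 1024)
print(canvas.size, mask.size)  # (1024, 1024) (1024, 1024)
```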

Adobe’s future in AI looks a lot like its recent history: incorporating more generative AI into important features and apps that directly address users’ main tasks and requirements.

Concerning Adobe Premiere: “Certainly, we’re focusing on video.” Costin and his team are in discussions with the Premiere and After Effects teams to understand their requirements. Whatever Adobe decides to do in this area will follow the same AI approach as Photoshop. I also inquired about batch processing in Photoshop, and Costin mentioned, “We’re considering it… nothing to announce, but it’s a crucial workflow.”

Despite the rapid development pace and the difficulties involved, Costin maintains a positive outlook on the overall direction of generative AI.

“I am an optimist,” he said with a smile.
