Responsible Innovation in the Age of Generative AI | Adobe Blog (2024)


Image generated using Adobe Firefly.

Generative AI is changing the way we all think about creativity. Type “3D render of a paper dragon, studio style photography” and you’re instantly offered multiple variations of a ferocious origami creature, or combine a few data points with a simple instruction and a chatbot can produce a compelling marketing email. It’s easy to see the power this technology can unlock for individual creators and businesses alike. Generative AI lets people paint with words instead of pixels. On the business side, it lets you connect with customers efficiently through auto-generated texts, emails, and content. And implemented the right way, generative AI brings precision, power, speed, and ease to existing workflows, freeing people to focus on the more strategic or creative parts of their work.

In this article

  • Grounded in ethics and responsibility
  • Transparency builds trust
  • Respecting creators’ choice and control
  • An ongoing journey

Generative AI also opens the door to new questions about ethics and responsibility in the digital age. As Adobe and others harness the power of this cutting-edge technology, we must come together across industries to develop, implement, and respect a set of guardrails that will guide its responsible development and use.

Grounded in ethics and responsibility

Any company building generative AI tools should start with an AI ethics framework. A set of concise, actionable AI ethics principles, combined with a formal review process built into the engineering organization, can help ensure that AI technologies, including generative AI, are developed in a way that respects customers and aligns with company values. Core to this process are training, testing, and, when necessary, human oversight.

Generative AI, like any AI, is only as good as the data it’s trained on. Mitigating harmful outputs starts with building and training on safe and inclusive datasets. For example, Adobe’s first model in our Firefly family of creative generative AI models is trained on Adobe Stock images, openly licensed content, and public domain content where copyright has expired. Training on curated, diverse datasets gives a model a head start in producing commercially safe and ethical results.

But it’s not just about what goes into a model; it’s also about what comes out. Even with good data, a model can still produce biased output that unintentionally discriminates or disparages and causes people to feel less valued. The answer is rigorous and continuous testing. At Adobe, under the leadership of our AI Ethics team, we constantly test our models for safety and bias internally and provide those results to our engineering team to resolve any issues. In addition, our AI features include feedback mechanisms, so that once they go out to the public, users can report any concerns and we can take steps to remediate them. It’s critical that companies foster this two-way dialogue with the public so that we can work together to keep making generative AI better for everyone.

On top of training, companies can build various technical safeguards into their products. Block lists, deny lists, and NSFW classifiers can all help mitigate harmful bias in a model’s output. And if a company is still unsure about or unsatisfied with the output, it can add or require a human in the loop to ensure the output meets expectations.
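The layered safeguards above can be sketched in a few lines. This is a minimal illustration, not Adobe’s implementation: the term list, function names, and status strings are all hypothetical placeholders for whatever a real moderation pipeline would use.

```python
# Illustrative sketch of a deny-list gate with optional human escalation.
# BLOCKED_TERMS is a placeholder; a production system would use maintained
# term lists plus trained classifiers, not simple substring matching.

BLOCKED_TERMS = {"slur_example", "violent_term"}

def passes_deny_list(text: str) -> bool:
    """Return True if no blocked term appears in the text."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def review_output(text: str, needs_human_review: bool = False) -> str:
    """Gate model output: deny-list check first, then optional human escalation."""
    if not passes_deny_list(text):
        return "BLOCKED"
    if needs_human_review:
        return "PENDING_HUMAN_REVIEW"
    return "APPROVED"
```

The point of the structure is the ordering: cheap automated checks run first on every output, and the human reviewer is reserved for cases the company flags as higher stakes.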

And whenever a company sources AI from an outside vendor, whether integrating it into internal workflows or into its own products, verifying that the AI meets the company’s ethical standards should be part of its vendor risk process.

Transparency builds trust

We also need transparency about the content that generative AI models produce. Take our earlier example, but swap the dragon for a speech by a world leader. Generative AI raises concerns about its ability to conjure convincing synthetic content in a digital world already flooded with misinformation. As the amount of AI-generated content grows, it will be increasingly important to give people a way to verify where a piece of content came from and whether it is authentic.

At Adobe, we’ve implemented this level of transparency in our products with our Content Credentials. Content Credentials allow creators to attach information to a piece of content — information like their names, dates, and the tools used to create it. Those credentials travel with the content, so that when people see it, they know exactly where the content came from and what happened to it along the way.

We’re not doing this alone: four years ago, we founded the Content Authenticity Initiative to build this solution in the open, so anyone can incorporate it into their own products and platforms. More than 900 members from across technology, media, and policy have joined together to bring this solution to the world.

And for generative AI specifically, we automatically attach Content Credentials to indicate when something was created or modified with generative AI. That way, people can see how a piece of content came to be and make more informed decisions about whether to trust it.
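At its core, a provenance record like this binds metadata to content so that tampering is detectable. The sketch below is a simplified stand-in for the actual Content Credentials format (which is based on the C2PA standard and uses cryptographic signatures); the field names and dict-based schema here are illustrative assumptions.

```python
# Illustrative sketch: bind provenance metadata to content via its hash,
# so a later viewer can detect whether the content still matches the record.
# This is NOT the real Content Credentials / C2PA format, which uses
# signed manifests rather than a bare dict.

import hashlib
from datetime import datetime, timezone

def make_credential(content: bytes, creator: str, tool: str,
                    generative_ai: bool) -> dict:
    """Build a simple provenance record tied to the content by its SHA-256 hash."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "tool": tool,
        "generative_ai": generative_ai,
        "created": datetime.now(timezone.utc).isoformat(),
    }

def verify_credential(content: bytes, credential: dict) -> bool:
    """Check that the credential still matches the content it describes."""
    return credential["content_sha256"] == hashlib.sha256(content).hexdigest()
```

Because the record carries a flag like `generative_ai`, anyone inspecting the content can see at a glance that it was AI-generated, which is exactly the kind of informed decision the paragraph above describes.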


Image generated using Adobe Firefly.

Respecting creators’ choice and control

Creators want control over whether their work is used to train generative AI. Some want their content kept out of AI training entirely; others are happy to see it used to help this new technology grow, especially if they can retain attribution for their work. Using provenance technology, creators can attach a “Do Not Train” credential that travels with their content wherever it goes. With industry adoption, this will help prevent web crawlers from including works that carry a “Do Not Train” credential in training datasets. Together with exploratory efforts to compensate creators for their contributions, this can help build generative AI that both empowers creators and enhances their experience.
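On the crawler side, honoring the credential is a simple filtering step. The sketch below assumes a hypothetical credential schema (a plain dict with a `do_not_train` flag); the real Content Credentials format is more elaborate, but the opt-out logic a well-behaved dataset builder would apply is the same in spirit.

```python
# Illustrative sketch: a dataset builder dropping any item whose attached
# credential opts out of training. The "credential" dict schema is a
# hypothetical stand-in for the actual Content Credentials metadata.

def filter_training_set(items: list) -> list:
    """Keep only items whose credential does not carry a do-not-train flag."""
    return [
        item for item in items
        if not item.get("credential", {}).get("do_not_train", False)
    ]
```

Note that items with no credential at all pass through unchanged; the flag is an explicit opt-out, which is why broad industry adoption matters for it to be effective.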

An ongoing journey

We’re just scratching the surface of generative AI, and the technology is improving every day. As it continues to evolve, generative AI will bring new challenges, and it’s imperative that industry, government, and communities work together to solve them. By sharing best practices and adhering to standards for developing generative AI responsibly, we can unlock the possibilities it holds and build a more trustworthy digital space.


