Adobe: Policy Update Meant to Stop Child Sexual Abuse Material, Not Steal Content



Adobe accidentally ignited a public backlash after updating its terms of use to imply it could view and harness customer data for its own commercial purposes. Now Adobe says the controversy boils down to a misunderstanding about its effort to crack down on child sexual abuse material. “The focus of this update was to be clearer about the improvements to our moderation processes that we have in place,” Adobe says. “Given the explosion of Generative AI and our commitment to responsible innovation, we have added more human moderation to our content submissions review processes.”

Much of the backlash focuses on how Adobe’s updated terms of use state “We may access, view, or listen to your Content … through both automated and manual methods, but only in limited ways, and only as permitted by law.” Meanwhile, a second clause mentions the company tapping “machine learning techniques” to improve the Adobe service and customer experience.


To users, this access raised red flags, suggesting that Adobe could view customer content, including confidential projects such as Hollywood productions. In response, Adobe says it updated the terms of use over concerns that some customers could harness Adobe products to create child sexual abuse material (CSAM). The update comes as the FBI recently warned that AI-generated CSAM is illegal.

How Adobe says it actually changed the language in the terms of use. (Credit: Adobe)

Many Adobe applications and features have to access a user’s content for processing. This can include “opening and editing files for the user or creating thumbnails or a preview for sharing,” the company says. The same access is needed to run cutting-edge AI features such as Remove Background, Photoshop Neural Filters, which can add a smile to a person’s face, or Liquid Mode, which can reformat PDFs for different devices. However, Adobe notes it also needs to scan users’ content for potential misuse to comply with the law. This includes monitoring for the creation of CSAM. The company can then escalate the matter to a human reviewer who double-checks whether illegal activity is occurring. Adobe can also take the same actions if it detects spam or phishing activity on its products.

In the same blog post, Adobe also reassures users it’ll never use customer data to train its Firefly AI image-generation software. “Adobe will never assume ownership of a customer’s work,” the company added. “Adobe hosts content to enable customers to use our applications and services. Customers own their content and Adobe does not assume any ownership of customer work.”


Ironically, Adobe updated the company’s terms of use to provide clarity. But the changes ended up kicking off a backlash amid public concerns about digital privacy and AI models being trained on user data without consent. In response, Adobe says “we will be clarifying the Terms of Use acceptance customers see when opening applications.” However, the damage may have already been done. Some users say they remain skeptical of Adobe, citing evidence that the company is already training AI models on artists’ work without their consent.

