AI has been lending new meaning to the words ‘disruptive technology’ since it broke into the mainstream, leading some to see AI interference where there is none. Adobe is the latest service provider in the firing line, following a service update that seemingly allows the company unlimited access to user content.
In early June, Adobe pushed an update notice that simply could not be dismissed: “By closing this window, you’ll be unable to continue using Adobe apps and services,” read the notification. The moment the notice started popping up, Adobe users worldwide took to X (formerly Twitter) to share their outrage over the intrusion into their privacy.
The chief reason for the outrage was the first element of the notice, which clarified that Adobe could and might ‘access your content through both automated and manual methods such as for content review’. The subsequent outcry online came from the creatives and professionals who form a large part of Adobe’s user demographic and who read the notice as saying their work was now the software company’s property to do with as it pleased. The update meant that Adobe had a royalty-free right to copy and produce derivative content from its customers’ assets. Adobe even claimed the right to sublicense this content to third-party companies.
Adobe has since explained itself in a webpage linked to its Terms of Use page. According to this explanation, Adobe reviews user content only to keep learning from user experience and improving the products and services it provides. The content that can be subjected to analysis in this manner ‘includes but isn’t limited to’ images, audio files, video, text, or documents, as well as their associated data.
Additionally, Adobe says it will not review data stored locally on personal devices, limiting itself to content processed or stored on Adobe’s servers. According to the FAQ section of Adobe’s content analysis blog post, users can opt out of this function: Creative Cloud and Document Cloud products allow users to switch off content analysis through their account settings without affecting their ability to use Adobe’s features and services. However, this setting is not available to organisation or school accounts.
And that’s not all – the update is hardly as earth-shattering as it seems. The actual change that the new notice refers to took place in 2022. The notice currently making the rounds on the internet only clarifies the language used in the 2022 update, although the clarification looks to have done more harm than good. Adobe has been able to review user content for several years. According to Adobe, user content is reviewed only to provide services such as indexing, which allows users to search their documents, or to update elements from Creative Cloud libraries used across documents. The company also commits to refrain from training its Firefly AI model on customer content. Instead, Firefly training data, Adobe assures its users, is limited to licensed content such as Adobe Stock and publicly available content whose copyright has expired. This training in turn benefits users: Firefly uses its machine learning to make context-aware suggestions, such as predicting the right content to fill areas of an image in Photoshop and offering search suggestions in Acrobat even when the search term is misspelled.
Users, however, take little reassurance from these statements, chiefly due to the limited nature of the opt-out from content review that Adobe provides. According to its FAQ blog post, Adobe may ignore consumer preferences in ‘certain limited circumstances’. User data that has been made public through Adobe Stock and Behance, streamed through Adobe Live, featured on Adobe Express, or submitted as tutorials in Lightroom, for example, is fair game for review. Data generated in beta testing, early access programmes, or prerelease products is also liable to be reviewed by Adobe irrespective of user preferences. This review may be either manual or automated and may cover metadata, structural elements, and categorisations associated with the document. The company also promises to filter out personal information before using user content for product improvement.
These reassurances, however, give end users little reason to believe that Adobe’s content reviews do not amount to spyware, or that their content is not being used to train AI models. After all, Section 4.2 of Adobe’s Terms of Use gives Adobe free rein to “use, reproduce, publicly display, distribute, modify, create derivative works based on, publicly perform, translate” user data at its disposal as it sees fit. And when the update notice popped up on their screens this month, users could not even uninstall the program before giving their consent.
Perhaps it is unfair to single Adobe out for vilification over failing to make its stance on these privacy issues wholly clear. Online service providers from Microsoft and Slack to Reddit have come under fire in recent years for using user data to train AI or for breaching user privacy. In the early 2010s, content monetisation reduced internet users to their attention spans, and it appears the age of AI has reduced them to training data.
For the time being, we can only rely on the X post by Adobe’s chief strategy officer and EVP of design and emerging products, Scott Belsky, which reiterates: “I can clearly state that Adobe does NOT train any GenAI models on customer’s content, and we obviously have tight security around any form of access to customer’s content. As a company that stores cloud documents and assets for customers, there are probably circumstances (like indexing to help you search your documents, updating components used from CC libraries across your documents, among others) where the company’s terms of service allow for some degree of access.”
(Theruni Liyanage)