Getty Images Sues Makers of Stable Diffusion over AI-photos
The Getty Images company is one of the world’s oldest and foremost stock photo resellers. It is now filing suit against Stability AI, the company behind the Stable Diffusion image-generation AI, claiming that Stability “unlawfully” scraped millions of images from Getty’s site.
In other words, Getty is alleging copyright violation. This legal battle shows signs of being a landmark case among potential conflicts between content creation systems and AI platform startups.
Getty recently shared a press statement in which the company explains its allegation that Stability AI “unlawfully copied and processed millions of images protected by copyright” for the sake of training its AI’s image rendering capabilities.
This is the main argument behind Getty’s legal proceedings against Stability AI in the High Court of Justice in London. It is also possible that Getty will file a separate suit in a U.S. court, since Getty is actually U.S.-based.
Getty CEO Craig Peters recently told the tech website The Verge that Getty has sent Stability a “letter before action” notifying the latter company of Getty’s intent to sue in UK court.
Peters explained to The Verge that,
“The driver of that [letter] is Stability AI’s use of intellectual property of others — absent permission or consideration — to build a commercial offering of their own financial benefit,”
The CEO also added,
“We don’t believe this specific deployment of Stability’s commercial offering is covered by fair dealing in the UK or fair use in the US. The company made no outreach to Getty Images to utilize our or our contributors’ material so we’re taking an action to protect our and our contributors’ intellectual property rights.”
Stability AI itself hasn’t yet commented on the case. This is unsurprising, since the dynamics of the situation place Stability on shaky ground.
It is well known that AI-rendering companies have used massive amounts of scraped photography to train their systems. The founder of Midjourney, in particular, admitted as much in late 2022.
The core problem isn’t even the scraping itself, though that is already a legally grey practice online. Rather, it’s the alleged way in which fragments of those scraped images underpin the visuals later generated by AI systems that is driving the copyright violation claims.
Other AI-rendering companies have been much more cagey about admitting their internal training methods because of these very legal issues.
For example, OpenAI, another major player in this space, simply won’t comment on where it gets its training sets from.
OpenAI is also the creator and owner of a system called ChatGPT, which is remarkably good at responding naturally in text to human requests for information.
It’s widely known that ChatGPT garnered its own data from enormous amounts of written digital content of all kinds. This could open up a whole other legal can of worms that partly mirrors the case of AI photos derived from uncredited human works.
Unfortunately for Stability AI, Stable Diffusion’s training set is open source, making it highly visible.
This is why an independent analysis of that data was able to show that photos from Getty Images and other stock sites were indeed digested to train Stability’s algorithms. Oops.
Whether using copyrighted works for training amounts to copyright infringement is the question the media and the courts are now increasingly debating.
Getty CEO Peters compares this case to the early days of digital music, when certain online services let users access music illegally without compensating the rights holders.
This comparison is a bit contorted though. In the case of Napster back in the day, or torrent pirating sites today, digital content is blatantly pirated for unmodified distribution directly to consumers.
AI rendering platforms aren’t specifically doing that. Instead, their AIs base their own creations on images by others. Human artists have drawn on others’ works for centuries, with few plausible claims that being inspired by another work violates its copyright.
A further question that then stems from this is obvious: can the same be claimed for AI rendering, even though it’s being done by an unthinking algorithm run by another corporation? We’ll see how the case law develops.
Getty’s press statement does claim to support AI art too, stating in its second paragraph,
“Getty Images believes artificial intelligence has the potential to stimulate creative endeavors. Accordingly, Getty Images provided licenses to leading technology innovators for purposes related to training artificial intelligence systems…”
On a related note, Getty’s lawsuit isn’t the only one that Stability AI is facing right now.
In a separate case that’s also currently proceeding, three artists are suing Stability AI along with Midjourney and DeviantArt.
1/ As I learned more about how the deeply exploitative AI media models practices I realized there was no legal precedent to set this right. Let’s change that.
Read more about our class action lawsuit, including how to contact the firm here: https://t.co/yvX4YZMfrG
— Karla Ortiz 🐀 (@kortizart) January 15, 2023
DeviantArt might seem like something of an oddball in this other case, but the company has also created its own AI art generator, which it calls DreamUp.
In this latter case, the accusation is similar to Getty’s: that these platforms trained their visual rendering programs on billions of scraped images used without permission.
One final detail worth mentioning about Getty is the slight irony behind its CEO’s claims:
Getty has its own well-established history of misusing others’ copyrighted works in explicitly direct ways without compensation or recognition, and of trying to claim control over public domain imagery.