Copyright Law and AI Art: A System Built for Corporations
This is the third piece in our series on AI art. In the previous pieces, we explored how refusing to disclose AI use can be seen as civil disobedience, and we dissected the misinformation surrounding AI art that distracts us from the real issues.
If there’s one thing that often gets brought up in discussions about AI art, it’s copyright. Critics say that AI art is stealing, that it violates copyright, and that artists should be protected by the law. But here’s the truth: copyright law, as it currently stands, doesn’t actually do much to protect artists. If anything, it’s set up to benefit the corporations and gatekeepers who profit off creative work far more than the creators themselves.
Take the concept of work-for-hire as an example. When I worked as an Art Director for a TTRPG company, one of the requirements was to use work-for-hire contracts with the artists I commissioned. They warned me that some artists would resist this, and they were right—some did push back, and often it meant I had to work with less established artists who were willing to sign. It felt exploitative, like we were taking advantage of younger or less knowledgeable artists, the ones more likely to give up all their rights because they didn’t know any better or didn’t have a choice. I still went ahead with it, and the artists I found who accepted these terms told me they were okay with it—they were allowed to display the originals but not profit from them. Still, it left me with a clear understanding of how copyright law is structured: to protect the owners of the work, not the creators.
Most people don’t fully grasp what this means in practice. It’s the same reason musicians can lose access to everything they’ve created, why photographers can have their entire body of work owned by someone else, and why almost any creative can be left with nothing despite being the original author. Copyright law wasn’t set up to help or protect us—it was set up to protect those with the resources to claim ownership.
The same principles apply when it comes to AI art. People arguing that stricter copyright enforcement is the answer to AI overlook how the system already fails human artists. Copyright is supposed to be about protecting creative work, but in reality, it’s about who has the power to enforce their claims. Large corporations, with their legal teams and deep pockets, are well-equipped to use copyright law to their advantage, while individual artists often can’t afford the costs of defending their rights. This imbalance means that even if we tried to apply copyright law more rigorously to AI, it wouldn’t solve the underlying problem—it would just give more power to those who already have it.
It’s also worth noting that copyright law has always struggled to adapt to new technologies. Photography faced backlash when it first emerged—traditional artists argued that photographs weren’t “real” art and worried that their livelihoods were under threat. The same thing happened when digital art tools like Photoshop entered the scene. Copyright law was never designed with these kinds of advancements in mind, and it’s been playing catch-up ever since. Now, with AI, we’re seeing history repeat itself. The law isn’t equipped to deal with a technology that learns from vast datasets to generate new, unique works, and trying to shoehorn AI into outdated legal frameworks just doesn’t make sense.
Instead of leaning on copyright as a way to shut down AI art, we should be rethinking how we support and value artists in a world where technology is rapidly evolving. What if we shifted the focus from rigid ownership to a model that emphasizes fair compensation and shared benefits? Imagine a system where artists are paid not just for the initial creation of a work, but for the value it generates over time—where their contributions are recognized, even if the work is used or adapted in new ways by AI or other technologies.
Despite these challenges, there are some positive changes happening. Companies like Adobe, Shutterstock, and Wirestock have started compensating artists whose work contributes to AI training datasets. In fact, I’ve personally experienced this: Adobe has paid me twice for the use of my images in training their Firefly model, which gave my Adobe Stock account a nice bump I wouldn’t have otherwise seen. This kind of fair compensation is a step in the right direction, ensuring that creators are recognized and rewarded, even when their work is used in emerging technologies like AI.
Getty Images is another interesting case. While their lawsuit against Stability AI is still ongoing, it’s led to a broader discussion about responsible AI use and fair compensation for artists. Getty has begun implementing measures to ensure that artists whose works are part of training datasets are rewarded. These examples show that change is possible, and that technology can evolve in a way that supports, rather than exploits, creators. But if we try to control it with the same old copyright laws that have always prioritized corporate interests, we’re just going to end up reinforcing the existing power dynamics that keep artists at the bottom. The goal shouldn’t be to shut down AI art—it should be to ensure that artists, both traditional and AI-assisted, are empowered and fairly compensated for their work.
The conversation about copyright and AI needs to move beyond fear and protectionism. We need to start talking about how to create a new framework that actually works for artists—a framework that acknowledges the realities of technological change and focuses on equity, access, and fair treatment. Copyright, as it stands, isn’t the solution. It’s part of the problem.
In our next piece, we’ll look at the strange political divide on AI art—why the right-wing seems more accepting of it while the left-wing is wary, and why that split feels so counterintuitive given the usual values associated with each side.
Don’t just take my word for it. Always fact-check everything you read on the internet through multiple sources. Here’s a list to get you started.
- Snopes – A well-known resource for validating and debunking urban legends, rumors, and news stories.
- FactCheck.org – A project of the Annenberg Public Policy Center that checks the factual accuracy of U.S. political claims.
- PolitiFact – A fact-checking website that rates the accuracy of claims by elected officials and others on its Truth-O-Meter.
- AP Fact Check – Associated Press journalists fact-check claims in news stories, including statements by public figures and viral content.
- Full Fact – The UK's independent fact-checking organization.
- The Washington Post Fact Checker – Known for its Pinocchio ratings, it evaluates the truthfulness of political claims.
- Reuters Fact Check – Offers a range of fact-checking services that debunk misinformation across various topics.
- BBC Reality Check – Provides fact-checking services that clarify claims seen in news stories and on social media.