AI Misinformation and the Real Issues: Debunking Myths and Redirecting Focus

This is the second piece in our series on AI art. In the previous piece, we explored how refusing to disclose the use of AI in art can be seen as a form of civil disobedience—a stand against unfair stigmas imposed on artists.

The narrative surrounding AI art today is flooded with misinformation. Too many people, influenced by fear or misunderstanding, treat AI like an ominous entity—something fundamentally dangerous, out to steal from artists, or replace creativity itself. It’s a narrative driven by emotion rather than fact, and it’s distracting us from the real problem, which isn’t the technology but the system that seeks to use it for exploitation.

There are loud voices online claiming that AI art is theft, that it steals the work of other artists to generate something new. On the surface, it’s easy to buy into this narrative—it sounds righteous, like it’s protecting artists from some Big Bad Machine. But the truth is, this interpretation of AI is based on a fundamental misunderstanding of how the technology works. 

Part of the knee-jerk reaction people have to AI art comes from the difficulty of understanding how this training works. It involves programming languages, algorithms, and complex math, abstract concepts that can be tough to grasp. For those who aren't tech-savvy, it's easy to assume that AI just copies portions of images from the web, because storing and displaying images is the one part of the process they can readily picture.

The reality is that AI doesn’t stitch together bits of existing images—such a process would be inefficient and impractical, especially with the vast number of images involved. Instead, AI generates based on learned representations, a concept that can be difficult for many to grasp. 
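To make "learned representations" concrete, here is a minimal, hedged sketch. It uses PCA as a stand-in for a real generative model (actual image generators use neural networks, not PCA, and the data here is random noise rather than real images), but it illustrates the key point: the trained model stores only a small set of learned components, not the training images, and generation produces a new vector rather than a copy of any training example.

```python
# Illustrative sketch only: PCA as a toy "learned representation".
# Real image models (diffusion, GANs) work very differently in detail,
# but share this property: they store learned parameters, not images.
import numpy as np

rng = np.random.default_rng(0)

# 200 tiny fake 8x8 "images", flattened to 64-dimensional vectors.
data = rng.normal(size=(200, 64))

# "Training": learn a 4-dimensional representation via SVD/PCA.
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
components = vt[:4]  # the learned representation: 4 x 64

# The "model" (mean + components) is far smaller than the dataset,
# so it cannot possibly contain stitched-together copies of it.
model_size = mean.size + components.size   # 64 + 256 = 320 numbers
data_size = data.size                      # 12,800 numbers

# "Generation": sample a point in the learned space and decode it.
latent = rng.normal(size=4)
generated = mean + latent @ components

# The generated vector does not exactly match any training example.
distances = np.linalg.norm(data - generated, axis=1)
print(model_size, data_size, distances.min() > 0)
```

The size comparison is the crux: a generator whose parameters are orders of magnitude smaller than its training set cannot be storing that set, which is why "stitching bits of existing images together" is not a plausible account of how these systems work.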

This gap in understanding leaves ample room for misinformation, especially in an era of social media where everyone feels pressured to have an immediate, authoritative opinion.

It’s not that different from what human artists do, really. We learn by studying others, by absorbing different styles, by experimenting with what’s been done before. We internalize all the art we’ve seen—the compositions, the color palettes, the techniques—and we synthesize it into something that is uniquely ours. AI isn’t replacing that process; it’s accelerating it, assisting it, allowing artists to access inspiration, iterate faster, and test ideas that would otherwise be impossible. But people fear what they don’t understand, and instead of seeing AI as another medium—like digital art or photography—they see it as a threat. And that’s exactly where the real issues get buried.

The loudest arguments against AI art are all about its supposed moral failings, but meanwhile, the actual dangers—corporate control of creative tools, consolidation of power in tech giants, exploitation of creative labor—slip by without nearly as much resistance. Think about it: big companies are the ones developing these AI tools, the ones holding the keys to the technology, and the ones most likely to wield it without concern for ethics or fairness. If the argument is about protecting artists, then the focus should be on making sure those entities are held accountable, that they aren’t using technology to exploit creatives or strip them of their livelihoods. Instead, we get people wasting energy on attacking individual artists who are just trying to use the tools available to them.

And let’s not forget the history of all this. The so-called “threats” of AI art are nothing new. People have been manipulating photos as long as photos have existed, creating composites, playing with reality, and even back then, people worried about authenticity. But as technology became more understood, we learned to accept it as part of the artist’s toolkit. The same applies to digital art—when Photoshop first came onto the scene, traditional artists decried it as fake, as cheating. Now, it’s a staple of the creative world.

We’ve seen this fear time and time again. Every time a new technology disrupts the status quo, people cry out in panic, convinced it’s the end of “real” art. But it’s never the tools that are the issue—it’s how they’re used, who controls them, and whether we’re allowed the freedom to wield them ourselves. AI is just the latest chapter in that long history of creative evolution, but instead of embracing the possibilities it brings, people are being sidetracked by reactionary fears, and all the while, the real danger—capitalism’s exploitation of creativity—marches on.

In reality, misinformation about AI art is a convenient distraction. It allows people to focus their anger on a tool instead of on the power structures that shape how that tool is used. It’s easier to say “AI art is bad” than it is to dismantle a system that treats art as a commodity to be mined for profit, regardless of the impact on the actual artists. It’s easier to police the tools than to challenge the industries that decide how those tools are wielded.

The real conversation we need to be having isn’t about whether AI art is real or fake—it’s about who controls the technology, who benefits from its use, and how we ensure that artists, not corporations, reap the rewards of their creativity. We need to get past the fear-mongering, the knee-jerk reactions, and start asking the deeper questions: How can we make sure these technologies are democratized, accessible, and empowering for all artists, rather than just another means for corporations to cut costs and increase profits?

In our next piece, we'll explore how copyright law intersects with AI art, and why leaning on traditional copyright protections isn't the solution people think it is. We'll examine how current laws are set up more to benefit corporations than individual artists and how this system plays into the wider challenges faced by creators in the age of AI.


Don't just take my word for it! Always fact-check everything you read on the internet through multiple sources. Here's a list to help.