You've probably heard it. In a Discord server. In a Reddit thread. Maybe from a friend who paints traditionally: “You're not an artist. You're just typing prompts. The AI is stealing from real creators.”
If you've spent any time making images with Midjourney, ComfyUI, or Stable Diffusion, you've encountered this argument. Maybe you've started keeping your practice private. Maybe you've stopped sharing your work. Maybe you've questioned whether what you're doing counts. Here's what I've been thinking about: what if that criticism isn't just wrong, but backwards?
The Accusation and What It Gets Wrong
The standard critique goes something like this: AI models were trained on artists' work without consent. When you use these tools, you're participating in theft. Every image you generate is stolen from someone who actually knows how to draw.
I understand why this argument resonates. It appeals to a sense of fairness. It defends people who've spent years developing craft. It sounds like protecting the vulnerable from technological displacement. But here's the thing—it gets the mechanism entirely wrong.
When you learn to draw, you study other artists. You copy their work to understand technique. You internalize patterns, styles, and approaches until they become part of your visual vocabulary. No one calls this theft. We call it learning. What an AI model does is mathematically similar: it processes patterns from training data and develops an internal representation of visual concepts. It doesn't store images. It doesn't copy and paste. It learns relationships—how light falls on surfaces, how perspectives converge, how styles express emotion. The difference is scale and speed, not kind.
This distinction matters. The “theft” framing misrepresents what's actually happening in these systems. And more importantly, it misrepresents what you're doing when you use them. But distinguishing the mechanism from the consent question is important—because the consent question deserves serious engagement on its own terms.
The Consent Question Deserves Real Engagement
Let me be direct about something: the fact that AI models learn through pattern recognition rather than copying doesn't resolve the question of whether artists should have consented to their work being used as training data. That's a separate issue, and it's one I take seriously. Artists who spent years developing distinctive styles have legitimate concerns when those styles become reproducible through a text prompt. The emotional and economic dimensions of this are real—people's livelihoods and creative identities are at stake.
The legal landscape reflects how unresolved this is. Getty Images sued Stability AI for training on its library without license. Multiple class-action lawsuits from visual artists allege that AI companies profited from their work without permission or compensation. Courts are still working through whether training on copyrighted material constitutes fair use or infringement—and the answer may differ across jurisdictions, use cases, and how directly outputs resemble training data.
What I find more productive than the legal arguments, though, are the emerging models for how this could work going forward. Some platforms are building opt-in training programs where artists voluntarily contribute work in exchange for compensation or attribution. Others are developing opt-out mechanisms that let creators exclude their work from training datasets. Adobe's Firefly was trained on licensed Adobe Stock content, openly licensed work, and public domain material, demonstrating that consent-based training is technically feasible. Revenue-sharing models, where artists receive compensation when their styles or techniques influence generated outputs, are being explored but remain early-stage.
None of these models are perfect yet, and reasonable people disagree about which approach is fairest. But the point is that consent and compensation are solvable problems—problems that the industry has a responsibility to address. The people building with AI tools today should advocate for these solutions, not dismiss the concerns that motivate them. You can believe that what you're doing is creative and valuable while also believing that the artists whose work trained these models deserve a seat at the table.
What You're Actually Doing
Let me describe what I see when I watch someone work in ComfyUI. They're making hundreds of decisions. Choosing models. Adjusting parameters. Building node graphs. Iterating through generations. Curating outputs. Compositing results. Refining prompts based on what works and what doesn't. Developing an eye for what's good.
This is creative work. It's different from drawing by hand—but so is photography, and we stopped debating whether photographers are artists about a century ago. The camera didn't replace painters. It freed them to do things photography couldn't—leading directly to Impressionism, Expressionism, and abstraction. Photography became its own art form. And painting, liberated from the requirement to document reality, became more expressive than ever.
We're at a similar inflection point. These tools don't replace human creativity—they change what creativity can do. The question isn't whether AI art is “real” art. The question is: what are you making with these new capabilities that couldn't exist before?
Why the Hostility Is Actually Fear
Here's what I think is really happening beneath the surface of the anti-AI movement. People are afraid. They're afraid that skills they spent years developing might become less valuable. They're afraid of economic displacement. They're afraid of a future where human contribution matters less.
These fears are understandable. They're also not new. Every major technological shift in creative tools has triggered the same response—from the printing press to the synthesizer to digital photography to Auto-Tune. But notice something: in every previous case, the technology didn't eliminate human creativity. It democratized access to creative expression. It lowered barriers. It let more people make things.
The real threat isn't that AI will replace artists. The real threat—to the status quo—is that AI makes visual creation accessible to people who couldn't access it before. People who never learned to draw. People who couldn't afford art school. People with ideas they couldn't previously express. That's not theft. That's liberation.
The Comprehensivist Returns
There's a concept from the architect and systems thinker Buckminster Fuller that keeps coming back to me: the comprehensivist. Fuller argued that industrial society forced humans into narrow specialization, becoming "cogs" that fit into economic machines. We took magnificently capable humans and narrowed them into "data entry clerk" or "commercial illustrator."
AI is the ultimate specialist. It's the perfect cog. And that means something profound: when the machine handles the specialized execution, humans are free to return to comprehensive thinking. To ask why. To connect ideas across domains. To dream about what should exist rather than struggling with how to make it.
If the machine can execute on command, you're free to focus on what's worth commanding. What ideas deserve visual expression. What emotions need exploring. What hasn't been imagined yet. The artists who feel most threatened are often those most attached to craft as identity. But craft was never the point. The point was always vision. Craft was just the available method. Now there are new methods—and the people using them aren't thieves. They're builders.
What Builders Do Differently
Here's the practical distinction I've observed between people who dabble with AI generation and people who build serious creative practices around it:
Builders treat their work as a body of work. They're not just generating images—they're developing a practice. They have aesthetic preferences. They iterate on themes. They maintain archives. They can trace how their work has evolved.
Builders care about provenance. They know which model produced what. They can recreate their process. They keep their prompts. They document their workflows. They understand that “I can't remember how I made that” is a problem.
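Provenance becomes a habit when it's automated. Here's a minimal sketch in Python of a sidecar record written next to each generated image, capturing the model, prompt, seed, and parameters. Every field name here is illustrative rather than any standard schema, and the function is an assumption about how you might wire this into your own pipeline, not part of any particular tool:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_sidecar(image_path, model, prompt, seed, params):
    """Record how an image was made in a JSON file next to it.

    The field names below are illustrative, not a standard schema.
    """
    image_path = Path(image_path)
    record = {
        "file": image_path.name,
        # Hash the image bytes so the record can be matched to the file later,
        # even if the file is renamed or moved.
        "sha256": hashlib.sha256(image_path.read_bytes()).hexdigest(),
        "model": model,
        "prompt": prompt,
        "seed": seed,
        "params": params,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = image_path.with_suffix(".json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar
```

With records like this, "I can't remember how I made that" stops being possible: the answer sits beside the image itself.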
Builders think about infrastructure. They organize their outputs. They back up their work. They maintain systems that let them find what they've created. They invest in tools that support their practice rather than just generating more images.
Builders take their practice seriously. They don't just defend AI art in arguments—they demonstrate its value through what they produce and how they produce it. This is the difference between someone who takes a few snapshots and a photographer who develops a visual voice, maintains an archive, and builds a body of work. If you want to move from dabbling to building, you need infrastructure that supports serious creative practice—you need to know what you've made, how you made it, and how to build on it.
The Path Forward
The debate about AI art isn't going away. Regulatory frameworks are emerging. Content provenance is becoming standard. The question of disclosure—when and how to indicate AI involvement—is being answered through legislation like the EU AI Act. And the consent question will continue to evolve as compensation models mature and legal precedents are set.
But here's what I've come to believe: the best response to criticism isn't argument. It's demonstration. Build a body of work that speaks for itself. Develop a practice with integrity. Maintain the provenance of what you create. Treat your outputs as assets worth organizing and protecting. Advocate for fair compensation models that respect the artists whose work contributed to these tools.
Do the work that serious creators do, and the question of whether you're “really” an artist becomes irrelevant. The work answers it. You're not stealing. You're building something new. And the people who build things—with intention, with craft, with vision—are exactly who the future needs.
Key Takeaways
1. The "AI art is theft" argument misunderstands how these systems learn: pattern recognition isn't copying; it's similar to how humans learn by studying existing work.
2. The consent and compensation question is real and deserves engagement: artists have legitimate concerns about training-data use, and emerging models for opt-in, opt-out, and revenue sharing are worth advocating for.
3. AI tools democratize visual expression rather than replacing human creativity: every major shift in creative tools, from cameras to synthesizers, has expanded access without eliminating art.
4. Serious AI creators are builders, not dabblers: they maintain bodies of work, care about provenance, and invest in infrastructure.
5. The best response to criticism isn't argument but demonstration: build work that speaks for itself while advocating for fair compensation models.
