Answering this is complicated, but I will aim to simplify things. I’ll break it down clearly and offer practical recommendations. In this edition of Render & Reason, we’ll explore:
What is ethical?
I’m training my own LoRAs… that’s ethical, right?
When resources are limited
I’m an independent creator. What steps can I take right now?
I’m a studio exec with significant resources. What can my company start doing today?
Being Ethical Is Good Business
So settle in because there’s a lot to cover. Since this is one of the first installments, I thought it was important to address these questions head-on. In future newsletters I’ll expound on some of the topics mentioned here, but if you’d like to know more about any specific topic now, don’t hesitate to reach out!
*For clarity: this newsletter isn’t sponsored, and I don’t receive payment for recommending products or services. If it’s here, I truly think it’s worthwhile.
What is “Ethical”?
At the start of every ethics and governance class I ever took, the instructor would pose the same foundational question: "What is the definition of ethical?" The point wasn't to find a single dictionary answer, but to establish that ethics, at its core, is a framework for determining right and wrong. It’s a set of moral principles that govern behavior.
Many frameworks have been proposed to answer this question, and they boil down to a few key, interconnected principles:
Consent and Respect for Creators:
The Question: Did creators consent to their work being used for training?
The Problem: AI often uses internet-scraped data without explicit consent. Ethical workflows respect creators’ intentions.
Compensation and Fairness:
The Question: If a creator's work provides the foundational value for a new tool, should they be compensated for that contribution? Is it fair for large commercial entities to profit from uncredited, uncompensated labor?
The Problem: The industry often bypasses licensing, treating human-made data as a free resource. Ethical approaches require fair compensation.
Transparency (The "How it's Made"):
The Question: Is the training data and methodology openly shared?
The Problem: AI models often operate as “black boxes,” obscuring biases or infringements. Ethical AI prioritizes openness and auditability.
*In the EU, every citizen has the “Right to Explanation” for any high-risk decision made by AI. Currently in the US, there are no federal laws that govern “explainable AI.” On May 22nd, the House of Representatives narrowly passed its version of the budget bill, which contains a sweeping 10-year moratorium that would block states from enforcing their own AI laws. It’s currently being revised in the Senate, with significant controversy surrounding this particular point of the larger bill.
Accountability and Responsibility:
The Question: Who’s accountable if AI generates harmful or infringing content?
The Problem: Unclear responsibility creates legal and ethical issues. Ethical workflows clearly assign responsibility to creators and deployers.
The lack of clear answers to these questions is exactly why the topic is so complicated.
I Am Training My Own LoRAs on My Own Data… Isn’t That Ethical?
If you’re not sure what I mean by LoRAs, please check out this article first. Like and subscribe to The 3D Artist newsletter!
This is where the ethics get personal. You’ve carefully collected your own photography, illustrations, or product images. You’ve put hours into training a LoRA to replicate your unique style. The creative input is entirely yours, so the output must be ethical, right?
Your intentions are absolutely in the right place. Using your own work is indeed a big step toward ethical AI. But the issue is deeper…it’s in something you don’t control but must rely on: the base model.
Think of it like this:
The Base Model is the Foundation: A powerful base model, like Stable Diffusion, is a massive, multi-billion-parameter neural network. It has already been trained on billions of images and has a foundational "understanding" of what things are: a cat, a tree, a face, the style of Van Gogh.
Your LoRA is the Custom Architecture: Your LoRA is a small, brilliant modification you build on top of that foundation. It teaches the base model a very specific new concept or style based on your data. It makes the output unique and special, but it cannot exist without the foundation supporting it.
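To make that dependency concrete, here’s a minimal sketch using the Hugging Face diffusers library (the base model ID and LoRA path are illustrative placeholders, not recommendations):

```python
# Minimal sketch, assuming the Hugging Face `diffusers` library.
import torch
from diffusers import StableDiffusionPipeline

# The foundation: a multi-billion-parameter model trained on web-scale
# data you didn't choose and can't fully audit.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative base model ID
    torch_dtype=torch.float16,
).to("cuda")

# Your custom architecture: small weight deltas trained on your own images.
pipe.load_lora_weights("./my_style_lora")  # hypothetical path to your LoRA

# Every output is produced by your LoRA *and* the base model together.
image = pipe("a portrait in my signature style").images[0]
image.save("portrait.png")
```

Notice that deleting the load_lora_weights line still leaves a working image generator, while deleting the base model leaves nothing: whatever ethical questions attach to the foundation travel with every image you make.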
Here’s the ethical issue: that essential base model was almost certainly trained on ethically unclear data.
Popular models like Stable Diffusion were built on LAION-5B…a dataset of billions of images scraped from across the web. This includes public domain and Creative Commons content, but also copyrighted photography, medical images, and countless professional portfolios used without explicit consent.
While your LoRA is ethically crafted, it sits atop a foundation built from ethically questionable materials. This doesn’t mean your LoRA training is inherently unethical, but it complicates the claim of being “fully ethical.” Until powerful base models built on licensed, consented data become standard, even careful creators face ethical compromises.
When Resources Are Limited
In a perfect world, the most ethical choice would also be the most accessible one. But we don't live in that world. Right now, there’s a clear and uncomfortable link between financial resources and access to an ethically clean AI workflow.
Independent artists, writers, and small studios face a difficult choice:
Open-source models: They’re free, flexible, and community-supported but come with ethically gray data histories.
Non-commercial use: Bria is a verifiably ethical and free option but is for personal art, experimentation, or academic research only.
Commercially safe models: Services like Bria (paid), Adobe or Getty offer fully licensed data and clear legal protection but often at costs many creators simply can’t afford.
This tension creates the “Ethical Accessibility Gap”, where the easy, affordable path comes with ethical compromises, and the ethically clearer route has a financial barrier.
Recognizing this gap doesn’t mean accepting it as inevitable. Instead, it’s a call to action, shifting the conversation from “Who’s ethical or unethical?” to “What’s my responsibility given my resources and influence?”
That responsibility naturally differs between independent creators and large studios. The next sections dive into practical steps each group can take, starting right where you are today.
I’m an Independent Creator. What Can I Do TODAY?
This is where the rubber meets the road. As an individual, you can't single-handedly fix the systemic issues of data provenance, but you hold significant power in the choices you make, the tools you adopt, and the standards you advocate for. Your actions, when multiplied across the community, create the demand that shifts the market…This is my highest hope for this newsletter, however lofty a goal.
Here are some concrete things you can do today:
Choose Your Tools with Intention
If your project needs an ethically clear workflow, especially commercially, choose your tools accordingly.
Adobe Firefly: Currently the most accessible ethical option. The model is trained on Adobe Stock content (with compensated creators) and public domain material. If you have Adobe Creative Cloud, you already have access!
Other Licensed Alternatives: Services from Getty Images, Shutterstock, and similar providers increasingly offer individual plans built on licensed libraries, so be on the lookout; I’ll do my best to post updates in this newsletter or on LinkedIn.
Be Proactive: Manage Your Own Data
Ethical practices start with how you manage your own work.
Sign Up for Spawning: Spawning offers tools allowing artists to opt out of AI dataset scraping. By registering your sites and galleries at huggingface.co/Spawning, you’re signaling your consent preferences. It’s not foolproof, but it’s a significant collective step toward data consent.
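Beyond registering, you can also publish machine-readable opt-out signals from your own site. Some dataset-building tools (img2dataset, for example) honor “noai”/“noimageai” directives sent as HTTP headers, though many scrapers simply ignore them. Here’s a minimal sketch for a Flask-based portfolio site (the app itself is a placeholder):

```python
# Minimal sketch: emit "noai" opt-out headers from a Flask portfolio site.
# Treat this as a consent signal, not protection: only compliant
# dataset tools (e.g., img2dataset) respect these directives.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_noai_headers(response):
    # Ask compliant crawlers not to use any response for AI training.
    response.headers["X-Robots-Tag"] = "noai, noimageai"
    return response

@app.route("/")
def index():
    return "My portfolio"

if __name__ == "__main__":
    app.run()
```

The equivalent HTML meta tag, <meta name="robots" content="noai, noimageai">, covers crawlers that parse pages rather than headers.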
Adopt an Ethical Framework for Exploration
Now, let's be realistic. New, powerful open-source models will continue to be released, and their origins will likely remain complex. As creators, we have a responsibility to understand these tools. My own philosophy is that we have to use the hell out of AI now, not just for its future creative potential, but so we can become deeply educated on how it works, what its flaws are, and how we can demand and build better, more ethical versions for our own communities.
This doesn't mean we abandon our principles. It means we engage with our eyes open. The incredible team at Storybook Studios authored a fantastic resource for this called Fair Codex. You should definitely read more about it and follow Storybook Studios, but here are a few key ideas they promote:Transparency: If you use AI in your work, just say it clearly.
Credit: If you’re using someone’s LoRA with their permission, give them credit.
Consent: Pick models built on ethically sourced, consented data whenever possible.
Speak Up: Use your voice to push for better practices and fairness.
By adopting this mindset, you can explore the cutting edge of AI, arming yourself with the knowledge needed to shape its future, without sacrificing your ethical commitments. You are not just a user; you are a tester, a critic, and an advocate helping to steer the ship.
I’m a Studio Exec with Lots of Resources. What Can My Company Do Today?
As a leader of a studio, creative agency, or large company, you’re not just a consumer of AI tools; you’re a driver of industry standards. Your choices carry weight, signaling to everyone else what’s acceptable and valuable. Independent creators navigate existing systems, but you’re uniquely positioned to transform those systems altogether…For good or bad.
Here are concrete, high-impact actions your company can take today.
Mandate an “Ethical-First” Policy
Your purchasing decisions set industry standards. Approach AI ethics just as seriously as security or anti-piracy:
Avoid ethically questionable open-source models for commercial use. The legal and reputational risks aren’t worth it.
Invest in commercial, enterprise-ready AI solutions like:
Adobe Firefly for Enterprise: Commercially safe, scalable, includes custom model training (Style Kits) and indemnification.
Getty Images & NVIDIA Edify: Custom generative models trained exclusively on Getty’s licensed library. It’s ethical AI at scale.
Bria AI and similar B2B providers: Specialize in traceable, ethically sourced AI models integrated directly into your workflow.
Fund the Future: Build or License Your Own Foundation
This is exciting territory for executives driven by profit. Creating your own ethically sourced foundation models unlocks valuable IP, giving your company a powerful competitive edge as a market leader.
Directly license data: Contract with large stock houses or artist guilds to build your own ethically sourced, proprietary foundation models. These become valuable IP assets.
Invest in Ethical AI startups: Use venture capital or corporate development funds to support startups building the ethical AI tools your industry needs.
Champion Radical Transparency and Industry Leadership
Your audience, employees, and shareholders watch closely. Transparency isn’t just ethical; it strengthens your brand:
Publish your AI principles clearly and publicly: Outline your approach to data sourcing, creator compensation, and transparency.
Clearly label AI-generated content: Differentiate between human-made, AI-assisted, and fully AI-generated work. This builds trust and respects creators (see the metadata sketch after this list).
Join or create industry coalitions: Unite with other studios to establish ethical AI standards. A collective voice has greater influence than any single company acting alone.
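On the labeling point: the standard taking shape here is C2PA Content Credentials, but even lightweight embedded metadata is a start while your pipeline catches up. Here’s a minimal sketch using Pillow to stamp a disclosure field into a PNG (the “AI-Disclosure” key is my own illustrative convention, not an industry standard):

```python
# Minimal sketch: embed an AI-disclosure label in a PNG's metadata.
# The "AI-Disclosure" key is illustrative only; for production
# pipelines, evaluate C2PA Content Credentials instead.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("render.png")

meta = PngInfo()
meta.add_text("AI-Disclosure", "AI-assisted: generated base, hand-painted finish")

img.save("render_labeled.png", pnginfo=meta)

# Verify the label survived the round trip.
print(Image.open("render_labeled.png").text["AI-Disclosure"])
```

It’s a small gesture, but it makes the human-made / AI-assisted / fully AI-generated distinction auditable rather than a matter of trust.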
Being Ethical Is Good Business
Creating a Defensible Competitive Advantage
A Unique, Unreplicable Aesthetic: Open-source models like Stable Diffusion and Flux are commodities anyone can use. But a studio training a model exclusively on its unique archives (decades of concept art, character designs, and footage) develops a proprietary creative style. This “secret sauce” is a powerful commercial moat no competitor can replicate.
Total IP Control and Security: Models trained exclusively on licensed or proprietary data remove legal ambiguity, protecting your IP from lawsuits and infringement issues. Every generated asset is guaranteed “clean,” significantly boosting your company’s IP security and long-term value.
Creating New, Direct Revenue Streams
Licensing Your Model: A studio can monetize proprietary models directly, just as OpenAI and Adobe do, by licensing them via API. Smaller agencies, game developers, or production houses eager for a clean and distinctive aesthetic become customers, transforming an internal R&D effort into a profitable business line.
Building Platforms and Ecosystems: Consider Amazon Web Services, which began as an internal tool and is now a global profit center. Similarly, studios can build entire platforms or service ecosystems around their proprietary AI models, creating new lines of revenue and deepening market influence.
Independent Creators Benefit Too
For independent creators, using ethical AI helps you stand out. It shows your clients and audience that you’re thoughtful about where your content comes from. People value transparency, and when you make clear that ethics matter to you, it builds trust. You’re creating a brand people feel good about supporting.
To Wrap Things Up:
A "truly ethical" workflow is more of an ideal to strive for than a current, easily achievable reality. It's easy to look at the challenges and feel that the only pure choice is to walk away. But real change is rarely born from rejection; it comes from the messy, dedicated work of pushing for something better. We cannot shape a future we refuse to participate in. The power to create a more ethical ecosystem lies in our collective commitment to education, to understanding these tools inside and out, and in our resolve to use them more ethically every day, transforming them from something we simply use into something we help improve.