The Patron’s Obligation
On what audiences owe creators, each other, and themselves
There is a version of this essay that begins with AI: with Andrea Bartz and her recent exchange with me, including her line that there is “no ethical way to use generative AI.” That line is worth engaging, and I will engage it. But it is not where this piece begins, because this piece is not really about Bartz, or about me, or about any of the names I’m about to drop and immediately move past. It begins, instead, somewhere older.
In fifteenth-century Florence, the Medici family did something that sounds simple and was not: they decided certain artists were worth funding. Not because their processes were certified by a guild or blessed by a prophet. But because the Medici looked at what these artists made and decided it was worth backing—and entered into a relationship with them that had obligations running in both directions. The artist produced. The patron judged, invested, and stood behind their judgment publicly.
This is what patronage means. It is not charity. It is not passive. Not mere consumption. It is an active relationship between a person who makes things and a person who decides those things are worth sustaining. The church commissioned Michelangelo. Private benefactors funded composers, poets, philosophers. Public institutions eventually formalized this into arts grants, fellowships, residencies. The through-line across all of it is the same: someone looked at a creator, made a judgment, and put something behind that judgment.
You are that person now. Whether you know it or not.
Passives, Parasites, and Patrons
When you follow a creator—when you subscribe, read, watch, listen, click—you have entered into a relationship. Most people do not think of it this way. Most people think of it as a bare transaction: you are a customer, the creator is a vendor, the content is a product. This framing is not only impoverished, it is the source of almost every dysfunction in how audiences and creators relate to each other today.
There is a better frame, and it is older. The word is patron. Not in the watered-down modern sense of “subscriber” or “supporter,” but in the original sense: someone who has made an active judgment about a creator’s worth and has taken an ongoing stake in their continuation. The patron is not passive. The patron has done the work of discernment—has read enough, watched enough, thought enough—to say: this person is worth sustaining. And has put something behind that judgment, whether money, attention, or public advocacy.
Then there is the alternative, the passive, which has a more insidious subset, the parasite.
Passives just consume. They don’t put much thought into who or what. They don’t know the names of the artists, creators, and thinkers they hear, see, or read. They have playlists of artists they couldn’t pick out of a lineup, reading lists of authors that blur together into an amorphous blob of litslop, and journos/academics/etc they might recognize or care about fleetingly, if at all.
I say all this mostly descriptively, less so as moral condemnation. But the parasite is what the passive will become if they let themselves slide even further. They not only consume without investing and read without scrutinizing, but they demand that the legwork be done for them. And they’re sometimes, tho certainly not always, entirely too proud of just hitting “Follow” or clicking on a link to consume a piece of content. They make demands without contributing, basically, and they can poison the well.
To be clear, the parasite is not necessarily a bad person. They are locked in a pattern of lopsided relations. The passive relationship is one that functions fine until it doesn’t. They ask little and give little. But if they ask more, and don’t get it, that’s where the problems start. They feel betrayed, or misled, or let down, and suddenly want someone to answer for it. They offload the work they should have done, like a person walking blindly through the city, even crossing streets, with their head buried in their phone. If they bump into someone, the blame lies elsewhere. “Hey, watch where I’M going!” Selfish and absent-minded. Sometimes malignant, not always malicious.
I used to be closer to the passive end of this spectrum than I’d like to admit. More than a decade ago, I was burning through Kindle stories I could barely remember the titles of a week later. Blog posts, fanfic, etc., consumed the way you consume fast food—quickly, thoughtlessly, without much concern for what was in it or who made it. There is nothing inherently wrong with a more detached relationship to your media diet. It becomes incoherent, though, the moment you start making demands.
Consider the person who eats whatever is in front of them without scrutiny, but expects the government, restaurants, and grocery stores to ensure nothing unhealthy ever reaches them. They are outsourcing a responsibility they haven’t earned the right to outsource, because they haven’t done the work of caring. The person demanding calorie counts at McDonald’s while never once trying to vary their diet by eating elsewhere is not an advocate for health. They are a consumer demanding someone else do the work, really their thinking, entirely for them.
This is what most audience discourse around creators actually looks like. This is what anti-AI discourse looks like. Instead, I propose something better: Like I wrote in my “In Defense of Caring About the Audience” piece, I think you matter. I know it, in fact. And I think you must therefore value yourself, and what you consume, enough to judge it. To appreciate it. To criticize it. And yes, to pay for it. As a true patron would.
It’s Not About the Names
Let me name some names, briefly, and then leave them behind.
Andrea Bartz believes there is no ethical way to use generative AI and has written extensively to that effect. Paul Kingsnorth has launched a campaign called Writers Against AI and pledged never to use AI in his work or knowingly support writers who do. Stephen Bradford Long created the Pro-Human Pledge to support non-AI content. Quinn Que discloses his AI use, noting a minimal, deliberate, supplemental practice without apology. Chad Rye did everything right by disclosing his AI-generated practice, uses LLMs responsibly, and still caught significant backlash for it. Mia Ballard lied, and Hachette terminated her contract.
These are the names currently orbiting this discourse. Some of them are AI users. Some are anti-AI zealots. Yet none of this is about any of us. Our practices, our positions, our public records are inputs for your judgment—data you can use to decide whether we are worth your time and your money. What matters is what you do with that data. Whether you actually do anything with it, or whether you outsource that work to someone else and call it a day.
As recently as last week, I detailed why “Using AI Detectors is Stupid and Wrong.” Because the tools are unreliable, and because their verdicts are premised on the wrong priors. Do you have a relationship with a given creator, or don’t you? Does that creator owe you anything? Is AI usage inherently immoral? I’ve tried to raise, and suggest answers for, these questions in my work. My position is pretty clear and consistent. It’s one that survives this new paradigm because it doesn’t hinge on something as reductive and, frankly, as retarded as “AI = bad” and one drop rules for AI usage. But I outlined the core of it in that essay already, so go read that.
The issue for today is what you’re owed and what you’re gonna do about it. I think you need to decide what level of AI usage you’re comfortable with from creators. I think you need to decide what AI-free content means to you. Whether you’ll put a premium on it, including with money. And whether you’ll trust the creators who purport to provide it (and do so affirmatively, like Long, not implying it or lying via omission a la Ballard). If you don’t trust the people whose content you consume, why are you consuming it in the first place? And if you do, I might ask, “Why Do You Trust Them?” This is a relationship after all. It must be built on more than assumptions.
For me and mine, including my readers, the trust is born of a track record. I’m competent, I’m honest (to a fault). I cite my sources, albeit only when necessary. I once famously said that I’m writing journalistic articles, not Wikipedia articles; I maintain that. Still, all of this, plus my preemptive AI Usage disclosure, adds up to someone who’s worked hard and built trust. I want a relationship with you, one that’s built to last. But it only works if you want that too, and if you care enough to invest in it.
The Genetic Fallacy
There is a name for the error at the center of most AI discourse, and it is the genetic fallacy. The genetic fallacy is the mistake of evaluating something as good or bad based on its origins rather than its substance. An argument doesn’t fail because of who made it. A piece of writing isn’t bad because of what tools assisted in its production. The only question(s) with logical standing: what is this, and is it good?
The genetic fallacy is a fallacy because it produces unreliable conclusions. If your father is a murderer, that tells you something about your father. It tells you nothing certain about you. You might be a murderer. You might be a saint. The genetics—literal or otherwise—do not determine the outcome. We understand this intuitively when it comes to people. We are failing to apply it when it comes to creative work.
The argument that AI-assisted writing is invalid because of how it was made is a textbook genetic fallacy. It substitutes process for product, origins for output, means for ends. It says: I don’t need to evaluate this on its merits, because I already know how it was made, and that’s enough. It isn’t enough. It has never been enough. It is bad logic dressed up as ethics, and the fact that it is widespread does not make it less bad. I will keep banging this drum until the zealots lose, and until the masses wake up.
“There is no ethical way to use generative AI.” That is the Bartz claim, stated plainly. I want to engage it honestly: It is a position that follows from a prior—a belief about what AI systems are, how they were built, what their existence means for human creativity and labor—that I do not share. The prior is upstream of the ethical conclusion. And because the prior is doing the work, we must engage the prior. This is sadly not what most people making, or reading, that argument tend to do, because engaging the prior requires argument, and argument can be lost.
I am not better or worse than Andrea Bartz, or Paul Kingsnorth, or Chad Rye, or Stephen Bradford Long, because of my AI use. If I am better or worse than any of them as a writer, a critic, a thinker—that is for you to determine, based on what I produce. My output. Not my process. A process that is still originated by my mind, written by my hand, and overseen by my will before reaching your metaphorical table.
But yes, I use AI-infused tools, as many/most others do. I use word processors, which include AI. As I’ve noted before, Clippy, one of the first digital assistants from all the way back in the 1990s, was early/proto AI. Most current processors, like Word, Docs, et al., don’t have a cute name or avatar for their AI, but it still runs in the background. Likewise search engines, a tool we all use to navigate the internet writ large. And plenty of us use LLMs for recreation, for contemplation, for research, as a sounding board, and so on. This is the nature of technology. It evolves, and we with it.
You use AI whether you know it or not. I don’t care, cuz I don’t believe in the one drop rule. The zealots care, but they also tend to get rather arbitrary or conveniently capricious about where they draw these lines. What I’m asking is that we be honest with each other about all this. You can’t claim to hate the tree but still wish to eat the fruit. That’s not just hypocritical, it’s fundamentally irrational.
Nature Fallacy and the Banana
Speaking of fruits, let’s talk about nature. The genetic fallacy has a cousin, and it is the nature fallacy: the belief that what is natural is good, and what is artificial is bad. You encounter it constantly, usually without the label. Raw milk is better than pasteurized. Organic produce is healthier than conventional. Lab-grown meat is suspicious because it came from a lab. Let me be specific about the banana.
Every banana you have ever purchased at a grocery store—organic label or not, farmers market or Walmart, expensive or cheap—is the product of human engineering. The Cavendish banana, which is the main, default banana, the big yellow one that exists in your mind’s eye when someone says the word “banana,” was deliberately cultivated to be seedless, uniform, and shippable. A banana that grew without human intervention would look and taste almost nothing like what you are picturing. The “natural” banana is a fiction. The banana you eat is artificial by the logic of the nature fallacy. It is also delicious, nutritious, safe, and beautiful.
Pasteurization heats milk to kill pathogens. Raw milk advocates claim this destroys nutrients and that raw milk is healthier. That claim is conspiracist brainrot, effectively a lie. The scientific and medical analysis does not support it. Pasteurized milk retains the core nutritional profile of raw milk; it just carries fewer risks. The process that seems less “natural” produces the safer, equally nutritious product. The nature fallacy told you the opposite. It is to your benefit to disbelieve fallacious arguments.
Cultivated meat—lab-grown, cell-cultured, produced without slaughter—is currently evaluated by a significant portion of the public as suspect because it is unnatural. They’re blind. The only relevant questions are: is it safe, is it nutritious, does it taste good, is its production more or less harmful than conventional meat production? Those are the only questions not because their answers favor the cultivators, but because their answers matter to public health. To your health. A nature fallacy question is ultimately just aesthetics. Like asking whether a car is blue or green instead of “does it run?”
The AI argument is the cultivated meat argument. It should be judged on what it produces: is the writing good, is it honest, was it disclosed, does it reflect genuine thought and craft? The fact that a tool was involved in its production is not, by itself, a conclusion. It is a starting point for evaluation. Refusing to evaluate—deciding that the process alone settles the question—is not good epistemics or ethics.
There is no writing without tools. There has never been writing without tools. The pen was a tool. The printing press was a tool. The typewriter was a tool. The word processor is a tool. Spellcheck is a tool. And yes, AI is a tool. Each of these, at its introduction, attracted some version of the argument that it was corrupting the authentic practice of writing. Each of these, in retrospect, was simply a tool, judged rightly by what writers did with it. If you want the fruits, you have to accept the tree.
The Trust Penalty
There is a perverse incentive structure operating in creator discourse right now, and it deserves a name. We will call it the trust penalty. It works like this: a creator who discloses their AI use honestly—here is what I do and how I do it, judge me on that basis—faces scrutiny, backlash, and reputational risk that a creator who says nothing does not face. Silence is rewarded. Disclosure is punished. The honest practitioner absorbs costs that the quiet practitioner avoids entirely.
Chad Rye is the clearest example. He disclosed. He was careful and responsible. He caught hell for it. Others said nothing, collected advances, signed contracts, built audiences—and faced no comparable reckoning until, in some cases, the lies became undeniable. Mia Ballard lied. Hachette terminated. That sequence is important: it was not the AI use that ended her contract. It was the false warrant, which is always grounds for contract termination. She signed under pretenses she knew to be false, and when that became clear, the contract was enforced. This is not a complicated story, though many people have worked hard to make it one.
The trust penalty does not just harm honest creators. It actively trains audiences to distrust transparency. If you punish the people who tell you the truth, you are teaching yourself that truth-telling is dangerous—and teaching creators that opacity is safer than honesty. You are building the conditions for more Ballards, not fewer. The detection tools that zealots have invested hope in cannot fix this, because the problem is not detection. The problem is what audiences are incentivizing with their attention and their money.
The zealot response to this asymmetry is to say that disclosure doesn't help anyway—that self-attestation is inherently untrustworthy, that any admission of AI use is a tainted confession, and that we simply cannot take creators at their word. This is where the argument collapses under its own weight. The honor system isn't worthless. It just requires real honor. And real honor is exactly what distinguishes a Chad Rye from a Mia Ballard—not their AI use, not their process, but their relationship to the truth about themselves and their work.
We already extend this kind of trust constantly, and we do not find it strange. When you read a reporter at any major publication, you trust that they made the calls they said they made, reviewed the documents they cited, and did not fabricate their sources. You cannot verify most of it independently. You extend the trust because the alternative—demanding mechanical verification of every claim before extending any credence—produces not a more honest information ecosystem but a paralyzed one. The same logic applies here. If self-attestation about process is worthless, you have no journalism, no scholarship, no criticism. You have only what can be confirmed by a machine, which is almost nothing that matters.
What a Patron Actually Does
So what does the alternative look like? Not a checklist. A set of propositions.
A patron has done enough homework to know who a creator is, not just what they produce. They have some sense of the person—their commitments, their track record, their relationship to their own work—not just the content that lands in their feed.
A patron has made an active judgment rather than a passive habit. They can articulate why they trust a creator and, crucially, what would change that. The threshold exists. They have thought about where it is. They pay, and they do so because they believe in the work and the person behind it. They are backing someone, not just subscribing to a service. There’s a real relationship at play, and it matters.
A patron does not expect the relationship to be scandal-proof. They expect it to be honest enough to survive scrutiny. A creator with a real patronage—one where the audience knows who they’re reading and has made an active judgment—can survive accusations, controversies, even genuine mistakes, because the foundation is not trust in a product. It is trust in a person. This is the protection from “witch hunts” that some people I’ve tussled with, who will remain nameless, should be looking for.
A good patron, when their threshold is crossed, leaves. Quietly or publicly, as the situation warrants. They own that decision, and rather than prosecuting the creator, or demanding that the creative platform where the work is housed take action, the patron does their part. No outsourcing the moral accounting to someone else.
I have done this. There was a YouTuber I watched for years whose views on a particular political subject I ultimately found untenable. He did not misrepresent himself—if anything, he became more himself over time, and I became more clear about where I stood. I stopped watching him (mostly). I did not demand YouTube ban him. I did not write a call to action. I made a judgment, acted on it, and accepted that the responsibility for having watched him as long as I did was mine. That is what epistemic accountability looks like from the audience seat.
What You Owe
You owe no creator your money, your attention, or your defense. But if you are going to be in a relationship with a creator—especially a financial one—you should enter it the way the Medici entered theirs: with eyes open, judgment applied, and accountability accepted for the choices you make. This is fundamental to the patronage model. To the Substack model. To solving the “problem” of AI usage and human-only content, should you value such things.
Stop expecting platforms, algorithms, governments, and detection tools to do your discernment for you. They cannot do it. They were not built to do it. And when you demand that they do it anyway, you are not protecting yourself or the creative ecosystem—you are abdicating the one responsibility that was always yours to begin with. You are becoming a parasite, whether you realize it or not.
The question “is this worth my support?” is yours. It was always yours. Answer it.
Build real relationships with creators whose output you can defend on the merits—not because their process has been certified clean, but because you have read them, tested them, thought about them, and decided they are worth backing. Pay them if you can. Advocate for them if you believe in them. And hold yourself accountable for those choices, including the ones that turn out to be wrong.
The Medici did not fund Leonardo because someone told them he was safe. They looked at what he made and decided it was worth backing. You have exactly that capacity. Use it.
As for me: I am not losing sleep over AI detectors, false accusations, or the court of discourse opinion. My word and my work are the only credentials I’ve ever offered. If that’s enough for you, I’m glad to have you. If it isn’t, that’s a judgment you’re entitled to make—and one I’d rather you make consciously than by default.
Tony Montana, of all people, said it best: “All I have in this world is my balls and my word, and I don’t break them for no one.”
BIBLIOGRAPHY
On the banana/nature fallacy claims
Smithsonian Magazine — “Building a Better Banana.” Authoritative on the Cavendish’s cultivated origins and the Gros Michel replacement history. smithsonianmag.com
Bayer Global — “History of Bananas.” Accessible summary of the 10,000-year domestication history.
On pasteurization/raw milk claims
FDA — “Raw Milk Misconceptions and the Danger of Raw Milk Consumption.” Primary source for the protein equivalence claim: “The protein quality of pasteurized milk is not different from that of raw milk.”
PMC/National Institutes of Health — “Raw Milk Consumption: Risks and Benefits” (peer-reviewed). Confirms no reliable scientific evidence supports raw milk health benefit claims; pasteurization causes no significant nutritional change. pmc.ncbi.nlm.nih.gov/articles/PMC4890836
NC State University College of Agriculture — “What’s the Difference Between Raw and Pasteurized Milk?” Accessible expert summary: “nutritionally, pasteurized milk is really, really close to raw milk.”
On the anti-AI position
Kingsnorth’s own Substack (The Abbey of Misrule) — “Writers Against AI” (February 2026) and “News & Views” (July 2025). Primary sources for his pledge language and the “never knowingly support writers who do” line.
Literary Hub — “Against AI: An Open Letter From Writers to Publishers” (June 2025). The collective document. Over 70 named authors including Lauren Groff, Dennis Lehane, and Jodi Picoult demanding publishers pledge never to release “books that were created by machines.”
Terribleminds (Chuck Wendig) — “My Open Letter To That Open Letter About AI In Writing And Publishing” (December 2025). The zealot position at full individual volume.

