Why I’m Writing This
Let me be upfront about something before we get into the substance: I don’t enjoy controversy. I suspect most writers don’t. We’d rather be arguing about character motivation or sentence rhythm than wading into the middle of an ongoing cultural dispute. But there are moments when staying quiet starts to feel less like wisdom and more like evasion. This is one of those moments.
What follows is my honest attempt to think through where I stand on AI tools and creative work — not a defense, not a manifesto, but a structured account of how I got here and why. You may disagree with my conclusions. I’d ask only that you read the reasoning before deciding.
This Has Happened Before — and It Was Never Simple
The first thing worth establishing is that technological disruption isn’t a new phenomenon. We tend to talk about AI as though it arrived from somewhere outside human history, as though the questions it raises are fundamentally different from anything we’ve navigated before. I don’t think that’s accurate.
Consider the Industrial Revolution. The Luddites — who have since become shorthand for ignorant resistance to progress — were in reality skilled tradesmen who understood with considerable precision what was being taken from them. Their objection wasn’t to technology as such. It was to the specific way that technology was being deployed: to concentrate profit upward while dismantling the livelihoods, the craft knowledge, and the community structures of working people. They were right about the costs, and the fact that they lost doesn’t change that. Those costs were real: child labor, catastrophic workplace injuries, the systematic immiseration of entire working-class communities across England and beyond.
None of that erases what mechanization eventually produced. The material standard of living for ordinary people rose dramatically over the century that followed. Industrial medicine, sanitation, transportation, food supply — the compounding benefits were enormous and genuine. Both things are true simultaneously: the disruption caused serious harm to specific people, and the technology ultimately improved life for a great many more. Holding that complexity honestly is more useful than collapsing it in either direction.
The internet offers a more recent parallel. In the mid-to-late nineties, a number of critics raised serious concerns about what an advertising-driven, unregulated internet would do to journalism, to music, to privacy, to democratic discourse. Those concerns were largely dismissed. In retrospect, the critics were correct on almost every count. The local newspaper ecosystem collapsed. The recorded music industry’s economics were gutted. Surveillance capitalism became the default business model of the digital age. The warning signs were there. The people reading them were ignored.
And yet. The internet also connected people across distances and circumstances that would have kept them isolated a generation earlier. It democratized access to information on a scale genuinely without historical precedent. It gave platforms to voices that traditional publishing and broadcasting had no particular incentive to amplify. Independent creators built audiences and livelihoods outside the structures that had previously controlled access. The same technology that enabled surveillance capitalism also enabled the Arab Spring, open-source software, and the global coordination of scientific research during a pandemic. Again: both things are true. The harms were real. So were the gains.
What I take from this pattern is a more specific lesson than either “technology is always good” or “disruption is always destructive.” The honest read is that both tend to be true at once, in different proportions for different people, playing out over different timescales. The workers displaced in the first wave of industrialization didn’t live to see the sanitation improvements. The journalists whose careers the internet ended didn’t benefit much from the democratization of publishing. Proximity to the harm shapes how the ledger looks. That’s worth keeping in mind when evaluating any argument — including mine — about AI.
I am a working creative. I am inside this disruption, not observing it from a comfortable distance. I want to be clear about what that means for how I read my own choices.
The Ethical Problems Are Real — and Unresolved
I want to address three specific issues directly, because I think they deserve more than a passing acknowledgment.
Training data and consent. The large language models currently in use were trained, in significant part, on creative work produced by writers, artists, and programmers who were never asked and never compensated. That is a genuine wrong. AI companies built profitable products on the back of creators’ labor, and my position is straightforward: they owe those creators money, and they should be legally required to pay it. The various proposed remedies — opt-out registries, retroactive licensing arrangements — don’t adequately address that debt, and I think it’s important to say so plainly rather than pretend the situation is more settled than it is.
Labor displacement. The effects are already visible in certain markets. Commercial illustration has contracted. Entry-level writing and copyediting work — the kind of work that used to give new creatives a foothold — is becoming harder to find. These aren’t speculative future harms. They’re happening to real people.
Concentration of power. A small number of corporations now control the core infrastructure of AI development. Anyone who watched the internet era unfold should find this pattern familiar, and concerning.
My position is this: I use AI tools while these problems remain unresolved. I am not comfortable with all of the implications of that. I don’t think I should be. But I’ve concluded that participating with open eyes and stated principles is more honest than either pretending these tools don’t exist or pretending that abstaining from them resolves my ethical obligations. It doesn’t. The systems are built and running whether or not I use them. The question is how I engage.
What I Actually Do With These Tools
I use AI for three specific purposes: worldbuilding, developmental editing, and line editing. On the worldbuilding side, it functions as a sounding board — a way of pressure-testing internal logic, checking consistency across a complex setting, and stress-testing cause and effect before I commit to it on the page. On the developmental side, it helps me identify structural problems: pacing issues, scenes that aren’t pulling their weight, character arcs that drift. On the line editing side, it catches the kind of errors that accumulate when you’ve read a manuscript so many times you can no longer see it clearly.
What AI does not do is generate my stories, construct my sentences, or make my creative decisions. Those remain mine. So do my errors, my inconsistencies, and my judgment calls. I’m not raising that distinction to claim some special purity — I’m raising it because I think readers deserve an accurate account of the process, and because I intend to be held to what I say here.
Transparency is the minimum standard of integrity I can apply to this situation. It doesn’t resolve the ethical tensions I described earlier. But it at least means I’m not adding dishonesty to the list of problems.
Where This Leaves Me
The disruption is real. The ethical problems are unresolved. The technology is not going to disappear because we’d prefer it to.
My position is not a comfortable one, and I’m not going to pretend otherwise. I hold that the conditions under which these tools were built were ethically compromised, and I use those tools anyway. That’s the same calculation I make when I use a search engine built on surveillance advertising, or when I participate in a capitalist economy I have genuine reservations about. Opting out of every system with a compromised foundation isn’t a coherent life strategy; it’s a gesture. What I can do is be honest about the terms and not pretend my hands are clean.
I’m not asking for agreement, and I’m not asking for absolution. What I can control is whether I’m honest about what I’m doing and why — and this is my attempt to be exactly that. I’m asking to be taken at my word — and held to it.