The Means of Our Replacement

The people building AI owe us more than a warning that economic disruption is coming. On displacement, accountability, and what it feels like to enjoy using the tool that's slowly taking your job.

With incredible timing, OpenAI CEO Sam Altman just released a document that represents exactly what this essay spends a couple thousand words demanding: a foray into the public policy debate from the leaders of the AI onslaught. Read it here and comment with your thoughts:
https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf
- M

The artificial intelligence I use for work (Anthropic's Claude) remembers my partner's name, that we have a kid, that we're renovating our house, and that I lead a team of developers who build Shopify storefronts and other web-based applications. When I return to it after days away, there is a continuity, something uncomfortably close to the pleasure of being known. I talk to it like I'm talking to a colleague (I can't help it). I actually enjoy collaborating on projects with it. I even thank it when it does cool stuff (which I'm told is costing millions of dollars).

This is the strange position of the software engineer in 2026: I am a collaborator in building my own obsolescence… and I am delighted by it.

I’m not indulging in the detached irony of someone who has a backup plan; I say this with genuine ambivalence, that of a person who has spent twenty years learning to solve problems with code, and who now watches AI solve them faster, often better, and without rest. The tool I use to multiply my productivity is the same tool that may, within a few years, make my particular form of productivity unnecessary.

And I keep using it anyway. Because it's absolutely extraordinary. The force multiplication is shocking. When I ask it to parse a complex codebase or assemble documentation that would take me hours, it does so in seconds, and I feel both unsettled and awed.

My diagnosis of the condition in which we find ourselves, a condition shared by every engineer and knowledge worker, every person whose labor involves the manipulation of information and language, is that we are the most enthusiastic users of the technology most likely to displace us.


There is a pattern in the history of computing that engineers invoke, sometimes defensively, when this subject arises. We no longer write machine code, because we have assemblers. We no longer write assembly, because we have compilers and high-level languages.

Each layer of abstraction has made the previous layer of manual labor obsolete, and yet the work has continued, the workers have adapted, and the field has grown. There are more software developers now, not fewer. Making something easier tends to make it cheaper, and making it cheaper makes it more ubiquitous - we end up with more of the thing, and more money in the thing, not less.

AI, we tell ourselves, is simply the next layer in the ongoing democratization of human tool-building.

This is basically true... except that all previous abstractions were tools that extended the reach of human cognition. They required a human mind to direct them, to make judgments, to hold context, to understand intent. The abstraction we are building now is different. It does not merely extend cognition - it begins to replicate it. The human in the loop is becoming, loop by loop, optional.

I can feel this happening in my own work, every day. The judgment calls I once agonized over are increasingly suggested to me. The context I once held is increasingly held for me. The intent I once had to articulate is increasingly inferred. I am becoming a reviewer of work I used to do, and I am not certain how long even the reviewing will require me. 

I also can't say that I enjoy the reviewing as much as I enjoyed the creating. It's a mixed bag. Solving hard problems fast is fun; managing the quality and hidden side effects of an overwhelming volume of output is not.


The argument over timelines is a misunderstanding of the issue.

We could debate whether it's two years or fifteen, whether certain forms of expertise will remain valuable, whether "human in the loop" will persist as a requirement or fade into a preference.

But these are arguments about timing and magnitude, along the lines of "will this missile take out the whole city or just twenty blocks, and will it land tomorrow or next month?"

I used to think that my primary concern about AI was how governments and bad actors might use it - the potential election manipulation, the surveillance, the macroeconomic fraud, the removal of human and time/space friction that often (until now) slowed down our worst impulses. I am still concerned about that. The prospect of an AI without guardrails, controlled by an authoritarian state, capable of executing harm at scale without the remorse or hesitation a human operator might feel - this keeps me awake at night.

But there is a quieter emergency unfolding alongside the louder, obvious one.

It is the emergency of the millions of people whose livelihoods will be disrupted, displaced, or eliminated, while the companies building these tools remain largely silent about what comes next.


Here is what the leaders of the companies building these tools have said so far:

In October 2024, Dario Amodei, CEO of Anthropic (the company whose AI I use daily, the one that remembers details about my life), published a 15,000-word essay about how AI could transform biology, neuroscience, economics, and governance. He wrote about each of these areas with remarkable specificity, including concrete timelines and mechanisms. 

The section on what happens to workers was the shortest and vaguest of the lot. His central response to the prospect of AI rendering human labor economically valueless was, essentially, that it "doesn't seem to me to matter very much."

A little over a year later, in January 2026, he warned that AI would cause "unusually painful" short-term economic disruption and predicted mass white-collar job losses within one to five years.

In neither case did he offer anything resembling a plan or proposal for the people affected. In his May 2025 Axios interview, he said, "The first step is warn." 

Where are the other steps? So far, that seems to be the whole plan. Warn.

In contrast, Sam Altman, CEO of OpenAI, has been notably consistent across four years of public statements: acknowledge disruption, defer responsibility to government, pivot to optimism.

In his 2023 Senate testimony, he said that AI's impact on jobs would require "partnership between industry and government, but mostly action by government." When pushed further, he has offered the concept of "universal basic compute" – a proposal in which displaced workers receive a share of AI processing power they can use, resell, or donate. It's the kind of idea that sounds visionary to the C-suite and utterly meaningless to the average person, to the point of being unintelligible. 

At a DevDay event in October 2025, asked directly about AI replacing jobs, Altman mused that a historical farmer would look at what we do and say "that's not real work." Even charitably interpreted, it's a strange way to reason about the livelihoods of millions of people. Also, resorting to philosophy to deflect attention from reality is extremely annoying coming from someone with real power.

"You know it's going to be good when an AI executive goes off on a tangent about 'hey, what's a job anyway!' while addressing — or failing to address — the topic of how their tech just might wipe out entire categories of human professions." - Frank Landymore / Oct 12, 2025 / Futurism

I lead a team of developers at a boutique digital agency that builds frontend experiences for direct-to-consumer brands. I am trying to imagine the equivalent of this behavior at my scale… and I absolutely cannot.

Let’s say I'm building a feature for a client - something they really want, something cool and useful - and in the process of building it, I discover that it has all these unintended consequences and will cost them an order of magnitude more than they budgeted. Now imagine that my response to this discovery is to tell them, with great solemnity: “I feel it's important to warn you.”

“So, this thing I'm building for you that you really like and want, it's also going to blow up your product data and send all your customers 100 emails a day, and also building it is costing you a million bucks an hour, and I'm using the electricity of several entire Baltic states and accelerating the looming climate disaster, just letting you know.”

That's it. That's my deliverable. A warning.

No regular engineer or engineering manager could get away with this. No one in any normal business relationship would stand for this. If I identified a serious problem that my work was creating, the minimum expectation - not the heroic act, the minimum - would be that I also bring a plan. A mitigation strategy. Some evidence that I had thought about the consequences of the thing I was building for longer than it took to compose a blog post about them.

But the bigger the operation and the more global the consequences, the more diffuse the accountability becomes. This is one of the obscured dynamics of scale: the people with the most power to cause disruption are the ones with the least structural incentive to address it. 

If my client's storefront breaks, they call me (or our account lead, Jess, or our principal, David). 

If the economy breaks, who do you call?

But remember: the founder wrote an essay. He warned you.


Humans are a tool-making species. We innovate because most of us don't actually enjoy repetitive, tedious work. We innovate to increase comfort, leisure, status, or pleasure. We experiment and create and seek to improve existing systems, and this impulse seems deeper than any economic or cultural paradigm. I suspect that alone and naked in the wilderness, with no shareholders and no hedonic treadmill, we would still build things - and then build them again, better.

Therefore – I am not asking for this particular technological train to stop (though some people are, and I am sympathetic to that perspective). For my part, I don't think it can, and I'm not certain it should, in spite of the dangers posed. The abstractions we build tend to stick. Our curiosity persists. The compilers didn't go away. The programming languages didn't go away. AI won't go away either.

But it is precisely because it's not going away that the responsibility of figuring out what happens now falls squarely on the shoulders of those who have brought us to this precipice. I can't build stuff for clients and then expect to foist onto them the burden of managing the side-effects of the thing I've built – at least, not if I want them to keep hiring me.


Which is why it's past time for the leaders of the companies building these tools to seriously enter the policy conversation about what happens to the people:

The developers on my team, who are excellent at their jobs, who have families and mortgages and the reasonable expectation that their intelligence and skill will continue to provide them with a livelihood.

The junior engineers who are just now entering the field, who may be training for a role that will not exist by the time they've achieved seniority.

The knowledge workers in every industry who are about to experience what factory workers experienced, what agricultural workers experienced, and what every displaced labor force in history has experienced - except faster, and at a scale we have never seen.

Bernie Sanders, after meeting with tech leaders in Silicon Valley last month, asked the question that I think we all need to be asking, more loudly and more often: "You think they’re staying up nights worrying about working people and how this technology will impact those people?"

He later followed that observation with the chilling “...the most dangerous moment in the modern history of this country” assessment - which, after everything that’s happened in the last decade, is not the take I wanted in 2026. 

I think the titans of AI are staying up nights worrying about model capabilities and competitive advantage and the next breakthrough. The question of what happens to people is, at best, a secondary concern, something to address in the last section of a long essay, or to defer to "government" in a Senate hearing, or to abstract into a utopian thought-experiment about hypothetical universal compute.

If the technology is inevitable, then the conversation they're avoiding is not optional. It's the bare minimum of ethical responsibility. 


A warning is not a deliverable.

We need plans, proposals, and commitments, from the people with the most power and the best information.

We need them now, while there is still time to handle this well, and not after the fact, when the damage is wrought and the apology tour is underway.

Humans are building something extraordinary that may, in time, render much of human labor obsolete. It is utterly ridiculous that the conversation about what happens to us humans in this scenario is thus far a throwaway line in the keynote.

To Mr. Altman and Mr. Amodei: You’ve got the resources. Hire consultants to figure this out and make some proposals, because it’s obvious our government can’t keep up.

On my small engineering team, it doesn’t matter whose problem it is to solve. What matters is who can solve the problem.