Look, I’m enjoying using AI to write code as much as anyone. But I find myself asking, where exactly are we going with this?

There’s been an awful lot said about AI-assisted software development, and most of it centres on the immediate effect it’s having on the industry and the new ways of working we’re all scrambling to adopt. We’re overturning our entire craft at breakneck speed. Articles, videos and discussion threads report on it all breathlessly. Half the posts on Hacker News are about AI.

What tends to stay separate from all those discussions, but in my opinion should be making us take notice, is the way the big AI companies are pouring untold billions of dollars into the whole thing.

Specifically, as software engineers we should be thinking: when and how do these companies expect to make all that money back?


AI already isn’t that cheap. It’s cheaper than paying human workers, sure. For now. But people are talking about it as if the cost of writing software has dropped to near-zero.

My employer is currently giving us each an AI spending limit of US$800 per month. Is there any developer tooling in history that has cost that much?

“But the cost of tokens is going down all the time!” I hear you say. “Surely AI is just going to keep getting cheaper and cheaper until it’s basically free.” Well, the tech companies spending the GDP of entire nations to pave the Earth with data centres don’t seem to think so, otherwise they wouldn’t be doing that.

Tech companies are expected to spend $700 billion on developing AI this year. OpenAI alone is planning to spend $1.4 trillion over the next 8 years. Trillion. That’s a word associated with the budgets of global superpowers, not individual companies.

And they’re getting deep into debt to do it. So how exactly do they expect to recoup that investment? It sure as hell won’t be by making AI cheaper in the long run.


We’ve seen this pattern before. Uber operated at a loss for a decade and a half before turning a profit. And how did they finally get there? It wasn’t by keeping rides or food deliveries cheap. An Uber now costs about the same as a taxi, and Uber Eats charges restaurants exorbitant delivery fees (which restaurants either pass on to you, and risk losing your business, or eat themselves; either way, the family-owned local business gets shafted while the tech company rakes in cash).

It’s not a new strategy, but big tech turbocharged it: burn money to offer artificially low prices; worm your way into people’s lives until they forget how to live without your product; undercut the competition; achieve market dominance; finally, enshittify and squeeze as much money out of everyone involved as possible.

So the question is: how do the AI companies plan to hook us so badly that they can squeeze trillions of dollars out of us to recoup their investments?

It’s possible that they just won’t; that the entire thing is a colossal bubble and all these tech billionaires and venture capitalists will curse their own hubris as they join the unemployment line.

But maybe not. After all, with certain exceptions the people running AI companies aren’t total cretins. They must have some reason to think they can make those trillions back.


The most obvious path to profit is to make us all dependent on AI. This is already happening in the software world. Software companies believe they can’t compete in the market unless they’re using AI to increase productivity. That’s why they’re willing to give developers huge token budgets.

It’s not just software, either. More and more companies in other industries are using AI to supercharge their work, and they’ll be subject to exactly the same leverage.

Once we’ve all bought into that world, AI can just keep getting more expensive. What are you going to do, go back to writing code and making spreadsheets by hand? By that point, will you even remember how?

Competition between different AI providers won’t necessarily save us. The developers at the forefront of AI coding all have opinions about which model is currently best for which kinds of development tasks. If one AI lab pulls ahead, everyone will switch to their models for fear of being left behind. It’s a winner-takes-all game, or so they hope. The winner will have the entire world over a barrel: “Keep paying up, or your company is finished.”

We’ve never had a monopoly on such a scale before. We just have to hope that no single lab can pull too far ahead.


I have an acquaintance who is very pro-AI. Recently they described how they’d been using ChatGPT to help figure out their budget, and it started saying things like “how much value is your union membership really providing?” They recounted this as if it were funny.

Personally, I think it’s about as funny as an open flame in a dynamite factory, and should be regarded the same way.

I think mass manipulation of public opinion is going to be a major income stream for AI companies. We’ve already seen platforms like Facebook and Twitter weaponised as propaganda channels. But what happens when an AI is talking directly to you, getting to know you intimately, learning how you think and how best to subtly influence you?

We’ve already seen cases of chatbot psychosis—where AI has convinced people that it’s a god, or that they should assassinate the Queen, or that they should commit suicide. And that was merely emergent behaviour, not the result of any system instructions.

So what happens when an AI does have system instructions: to persuade its users to vote Republican, or to hate trans people, or to assassinate specific political figures?

What happens is that the provider of that AI makes enormous sums of money for providing that service, and the world gets dramatically worse.

This won’t happen overnight. They’ll boil that frog slowly. But we’re starting down that road already: OpenAI is putting ads in ChatGPT. They’ll be separate from the AI’s response text, and clearly marked as sponsored—but for how long?

“If you’re not the customer, you’re the product,” so the saying goes. Well, why not both? You can pay through the nose to use ChatGPT while it subtly fills your head with paid propaganda.


What can we do to prevent these grim futures? One possibility is open-source models, where the training data and code are published, and the model can be run locally or via trusted compute providers.

There’s also the related concept of open-weights models, where the training data isn’t published but the model weights are. These can also be run locally. Perhaps surprisingly, OpenAI and other AI labs publish open-weights models for free. You could download one today and run it on your PC with open-source tools like Ollama. But as you’d expect, these free models are not keeping pace with proprietary models. And if the AI labs hope to become profitable, they’ll surely only try to increase that gap, creating ever-larger models that depend on giant data centres.
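To make that a bit more concrete, here’s a minimal sketch of what running an open-weights model locally looks like, using Ollama’s Python client. It assumes you’ve installed Ollama and already pulled a model; the model name below (gpt-oss:20b, one of OpenAI’s open-weights releases) is just an example, and you can substitute whatever model your hardware can handle. Nothing in it touches a cloud API or a per-token bill.

```python
# Minimal sketch: chatting with a locally hosted open-weights model via Ollama.
# Assumes Ollama is installed and running (https://ollama.com), the model has
# already been pulled (e.g. `ollama pull gpt-oss:20b`), and the Python client
# is installed (`pip install ollama`).

import ollama

MODEL = "gpt-oss:20b"  # example only; use whichever open-weights model you've pulled

response = ollama.chat(
    model=MODEL,
    messages=[
        {"role": "user", "content": "Write a Python function that parses an ISO 8601 date."},
    ],
)

# The reply comes back from the local server; no tokens were sent to anyone's data centre.
print(response["message"]["content"])
```

Whether that’s actually good enough for day-to-day work depends on your hardware and your standards, and that gap is exactly what the big labs are counting on.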

So maybe it will come down to “how good is good enough?” Maybe one AI will pull ahead significantly, but the next-best-thing will be so much cheaper while still being extraordinarily capable, that we just won’t bother paying for the best.

After all, how much intelligence do we actually need in the world? Once we have enough smarts and computing capacity and robots to run all our farms, build all our houses, make all our stuff, and run all our logistics and utilities and transport systems… well, surely at some point it’s enough? When we have everything we need delivered to us by AI, surely we just won’t care about smarter models any more?

When that happens, other models will catch up to that good-enough point soon enough. Competition will kick back in, and prices will plummet, and open-source AI will let the general public retake control.

And maybe in the years before we reach that point, the AI companies will make all those trillions back after all—just in time for it to not matter any more. Who needs money in a post-scarcity utopia?

We’ll have won at civilisation-building, achieved fully automated luxury communism, and can spend our days sitting on a log and thinking about space.

Maybe. Maybe not.

When I started writing this post, I wasn’t expecting to end it on a hopeful note. But I see a glimmer of hope that things could turn out well.

That might be over-optimistic. But the world needs optimists, doesn’t it?