In some recent projects, I’ve been using the fantastic OpenRouter, which makes it trivially easy to test and compare the output of different LLMs. It’s an interesting experience, because the differences in quality, speed, and cost between models become immediately visible. It puts me in the mindset of “what’s the cheapest model that gets the job done?”
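For a sense of what that comparison loop looks like in practice, here’s a minimal sketch using OpenRouter’s OpenAI-compatible API. The prompt and model identifiers are illustrative (check OpenRouter’s model list for current names):

```python
# Minimal sketch: send the same prompt to several models via OpenRouter,
# which exposes an OpenAI-compatible endpoint at openrouter.ai/api/v1.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",  # placeholder
)

# Illustrative model slugs, cheapest to priciest.
MODELS = [
    "anthropic/claude-3.5-haiku",
    "anthropic/claude-3.5-sonnet",
    "anthropic/claude-3-opus",
]

prompt = "Summarize this bug report in two sentences: ..."

for model in MODELS:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```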

As I was cycling through the different models, a dark vision of the future of employment dawned on me. What if we humans are just considered another “model” to route a request to, whose output will be compared against all of these other models? People will send a request to cheaper models like Claude Haiku, then to more expensive models like Sonnet and Opus, and finally to the most expensive models of all, humans, who might cost 100x, 1,000x, or 10,000x as much for a given request.

If you do all of your work through a computer today, this is already 100% technically feasible.
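To make that concrete, here’s a deliberately hypothetical sketch of what such a routing table might look like, with a human as just the last, most expensive row. Nothing here is a real product or API; the names, prices, and handlers are made up to illustrate the shape of the idea:

```python
# Hypothetical sketch: a cost-gradient router where humans are one more
# entry in the routing table. All prices are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    name: str
    cost_per_request: float  # dollars, illustrative
    handler: Callable[[str], str]

def llm_handler(model: str) -> Callable[[str], str]:
    def handle(prompt: str) -> str:
        return f"[{model} completion for: {prompt!r}]"  # stub for an API call
    return handle

def human_handler(prompt: str) -> str:
    return f"[ticket filed for a human: {prompt!r}]"  # stub for a task queue

# Sorted cheapest first; the human costs thousands of times the cheapest model.
ROUTES = [
    Route("claude-haiku", 0.001, llm_handler("claude-haiku")),
    Route("claude-sonnet", 0.01, llm_handler("claude-sonnet")),
    Route("claude-opus", 0.05, llm_handler("claude-opus")),
    Route("human", 50.0, human_handler),
]

def route(prompt: str, budget: float) -> str:
    """Pick the most capable handler the budget allows."""
    affordable = [r for r in ROUTES if r.cost_per_request <= budget]
    if not affordable:
        raise ValueError(f"no route fits a budget of ${budget:.4f}")
    return affordable[-1].handler(prompt)  # ROUTES is sorted cheapest first

print(route("Summarize this contract.", budget=0.02))  # picks claude-sonnet
```

The point of the sketch is that nothing in the code cares whether a handler is silicon or flesh; the human is just the row with the biggest price tag.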

The biggest shift here is going from binary thinking to gradient thinking. A lot of people tend to think of the AI replacement question solipsistically, as a binary yes/no: can an AI do my job or not? Through that lens, the answer is usually “no” (at least from their own perspective).

But this is not how those paying for jobs will think, whether micro-jobs like a single LLM request or macro-jobs like a normal salaried position. They’ll think of it as a gradient. Can an AI do this job? The answer is already yes: it can take the inputs and produce the same kind of outputs a human would.

The only question is, how good of a job can it do? And at what price?

When you start to think that way, things get a lot scarier.