We don't want intelligence

A few things happened, or were announced, around AI this week. As seems to be the case almost every week now. Anthropic's CEO wrote a long post about how awesome the AI future will be. I haven't read the whole thing yet. As I said, it's long.

I wrote about it too, a couple of times this week: about a world full of agents, and about the future of the web and how these models affect that future.

To say that I have been thinking about AI is an understatement.

Anyway. Reading through the dreams people have about agents, and how AI will change medicine, education, personal productivity, the world, I realise there is a fallacy in all of this. We aim to create intelligent systems. It's called AI: Artificial Intelligence. But what we want are slaves. We will automate away the things people don't want to do. Autonomous cars will replace drivers. Agents will replace human reps. Assistants will help doctors detect cancer early (this sounds great, by the way; especially here in Finland, where we have a shortage). The thing is, what we want are slaves: workers doing the things we either don't want to do ourselves, or where we want to remove humans from the equation. Because humans have rights! And we fall ill. And so on.

We are barrelling toward a world where we have these intelligent things (beings?), assuming that they will not want anything other than to do what they are told.

If an ant came to you and said, "Please help our colony," what would you do?

But that's the goal: AGI, or whatever other fancy name we give the same thing.

We humans have general intelligence. Machines, in the field they are trained in, are already far better at that one thing than humans could ever be. Think of the systems designed to play chess. Whenever we get AGI, machines will be that much better than humans at everything.

And then, how do we get them to care about us?

If we want intelligence, then we had better treat the machines as our partners. We can't both have intelligence and expect it to slave away filling out our Excel sheets.