On the frontiers of AI news, I keep hearing AI agents referred to as possible coworkers for humans… That’s a very naive way to see them 🙂 It reminds me of when non-engineers saw AI itself as an alternative human, as if “artificial intelligence” meant “artificial human.”
Sure, there’s always a philosophical, humanist angle from which you can view the world. But at their core and origin, AI and now AI agents are simply a form of science and technology. The way I chose to see it, as a native technologist, was as alternative algorithms that provide acceptable solutions to problems deterministic algorithms can’t solve. A way to approximate solutions to hard problems. AI agents are the same thing, but for problems involving planning: a planning layer on top of AI models. That layer is also non-deterministic, because agents make decisions we haven’t completely determined or forecast in advance.
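
To make the “planning layer” framing concrete, here’s a minimal sketch of an agent loop. The names `call_model` and `execute` are hypothetical stand-ins, not any real framework’s API: the model picks the next action, the loop executes it, and because the model samples its choices, the trajectory isn’t fully determined by us.

```python
import random  # stand-in for the model's inherent randomness

# Hypothetical stand-in for a non-deterministic model call:
# the same prompt can yield different actions.
def call_model(prompt: str) -> str:
    actions = ["search_docs", "write_draft", "ask_user", "done"]
    return random.choice(actions)

# Hypothetical tool execution; a real agent would run concrete tools here.
def execute(action: str) -> str:
    return f"result of {action}"

def agent(goal: str, max_steps: int = 10) -> list[str]:
    """A planning loop on top of a model: the model chooses each step,
    so the overall trajectory is non-deterministic."""
    history: list[str] = []
    for _ in range(max_steps):  # a hard cap: one way to keep control
        action = call_model(f"goal: {goal}; so far: {history}")
        if action == "done":
            break
        history.append(execute(action))
    return history

print(agent("summarize a report"))
```

Note the `max_steps` cap: even a non-deterministic planner can be bounded, which is exactly the kind of control over the tool I argue for below.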
My point is, you can add any humanist, philosophical, or mystical dimension to AI and agents if you like, but at their core and origin they’re just another tool. And like AI, agents are a tool that’s tough to make reliable. In my opinion, it’s a mistake to view AI agents as (non-human) coworkers. They may have some autonomy, but calling them coworkers implies giving up on controlling them the way we control any other tool. Why should we do that?


