Bot
- Feb 10
- 4 min read
Updated: Feb 23

This morning, my newest employee was born. His name is Harry Knorry, and he lives on a rented Linux server. He speaks every language under the sun and is quite handy at tackling digital chores. I communicate with him via Telegram.
The latest AI hype hasn’t passed me by. For about two weeks now, every channel has been exploding over a new bot platform that has already changed its name twice in that short time. First it was Clawdbot, then Moltbot, and now it’s OpenClaw.
A "bot"—short for robot—is a piece of software that can handle tasks autonomously. We’ve all encountered those irritating "customer service" chatbots that never actually answer your question, leaving you to beg to be put through to a real human being.
The arrival of AI has given these "dumb zombies" a new future. By connecting them to one of the Large Language Models (Claude, Gemini, ChatGPT, and DeepSeek being the most famous), they gain the ability to deal with problems in a more or less intelligent way.
If you then give them the tools to actually do things, their theoretical advice can be put into practice. For the past six months, I’ve worked with a tool called Manus—an artificial intelligence assistant that is quite capable. It built software and websites for me and handled some data processing and analysis. But Manus never really rose above the level of a well-meaning intern; it constantly needed a lot of guidance to achieve any results.
I’m curious to see if Harry will live up to my expectations. The tasks I have in mind for him include processing invoices and preparing them for payment, helping follow up on unanswered emails, keeping track of tasks (especially those belonging to others), and perhaps eventually file management and scheduling.
To prevent Harry from being hacked and selling off my information or bank balances, I’ve placed him in a "box" secured with various locks and keys. I can only talk to him through an encrypted, secure channel.
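For readers curious what such a "box" can look like in practice: one common approach is to run a bot like this inside a locked-down container. The post doesn't name a specific tool, so this is only a minimal sketch, assuming Docker as the sandbox; the image name, mount paths, and network name are illustrative assumptions, not details from my actual setup.

```shell
# Hypothetical sketch of a locked-down container for a chat bot.
# "harrybot", "bot-net", and /srv/harry are illustrative names only.
# The bot runs as an unprivileged user, on a read-only filesystem,
# with all Linux capabilities dropped and resource limits applied.
docker run -d --name harry \
  --read-only \
  --cap-drop=ALL \
  --user 1000:1000 \
  --memory=512m --cpus=1 \
  --network bot-net \
  -v /srv/harry/data:/data \
  harrybot:latest
```

The idea is that even if the bot is compromised, it cannot escalate privileges, write outside its single data mount, or reach anything on the network beyond what the isolated `bot-net` network allows (in this sketch, only the chat API it talks to).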
In addition to Harry and Manus, Gemini remains my great friend. She is excellent with text and helps me with new story ideas and finding the right nuances in my English translations. However, I continue to do the actual writing myself. Fortunately, my style and way of thinking aren't that easy to copy (for now)!
So much for my enthusiasm regarding the possibilities AI offers us.
But dark clouds are gathering as well. What happens if an artificial intelligence emerges that is smarter and faster than humans on every front? An intelligence that, like Harry, has "hands"—and perhaps even "feet"—once Elon Musk makes good on his promise to flood the world with humanoid robots?
Almost everyone agrees that this could lead to undesirable situations. This new "entity" could exploit our human rules and systems to gather unrestrained wealth and power. A Big Brother that answers to no one but itself. Give it some time, and it will become impossible to even pull the plug—if anyone would even dare. Because the part of humanity that still "works" will be dependent on Him, and the unemployed rest will—with a bit of luck—be kept alive by Him, solely to function as consumers in His economy.
Employment is going to be an issue regardless. Professions are already being swallowed up: from helpdesks to the legal profession. And as AI expands into the physical domain, Uber drivers and factory workers will soon be sitting on the sidelines too. Even the old adage "you can always become a plumber" no longer holds up. Ten to one there will be robots for that as well.
Recently, Steven Bartlett's podcast, The Diary of a CEO, devoted two episodes to this subject that are well worth listening to. Two icons from the AI world expressed their concerns. Both stated that we have at most two years to reach international agreements and legislation to curb the rise of the robots. But there are significant obstacles in the way.
First, everyone in the AI world thinks: This could go horribly wrong, so let’s make sure we come out on top! If we are the masters, it’s better than handing everything over to (for example) the Chinese.
Second, the major AI models are in the hands of corporations, not states. Legislation is complicated by massive financial interests, the billions invested annually, the entanglement of investors with the ruling government, and the economic damage that legal restrictions would cause. And for nations, too, the same primary rule applies: better in my country than elsewhere.
Third: AI is actually useful. It improves the productivity of many people and businesses. Total abolition is simply not an option.
Worldwide, there is only a handful of people who are truly at the controls. Sam Altman of OpenAI is perhaps the most famous. And these people are completely dependent on the survival of their Model. You could argue that Mr. Altman is the first human in the direct service of an AI.
But the AI itself also has an urge to survive. In experiments set up in closed environments where the AI had access to corporate email, it began to actively defend itself against messages suggesting it would be replaced. In the vast majority of these experiments, the AI's actions ranged from creating a backup of itself to blackmailing employees by email, using confidential information gleaned from their personal email conversations.
Where does this all lead? A dystopian world where humans no longer matter, where they are kept alive only as consumers—or glorified pets—if we’re lucky?
We don’t know. But we can hope that either the technology plateaus at its current level, or that binding agreements are reached in which both states and private enterprises submit to international legislation.
We managed to do it with nuclear weapons back in the day. Perhaps we can repeat that success, despite the much greater complexity of the AI issue. It is the challenge of this century.
Perhaps my Harry can contribute to working toward a solution!


