Are You a Human?
April 2, 2026
Computers mastering natural language is exciting, but also scary. I would like to share my thoughts on what that means.
What is possible?
LLMs can now produce natural language that is indistinguishable from what a human produces. Because of that, they can automate workflows that previously required human interaction or highly customized software.
This means that interactions that were previously human-to-human are now becoming human-to-agent - and probably soon, agent-to-agent.
This might seem abstract, but the following areas are becoming more and more automated: sales, customer support (chat and phone), HR recruitment, legal work, and engineering.
Let's take a look at sales. You might have noticed an explosion in your inbox: every organization is now trying to sell through an agent, so inboxes fill up with automated messages trying to connect and close a deal.
Customer support is probably the best-known example, since it is straightforward to set up an agent to look at your knowledge base and answer questions based on it, both by voice and chat. Luckily, you can still usually tell when you are talking to an AI voice, but that will probably not remain the case for long.
In software engineering, it means that the barrier to entry for writing code is basically disappearing. Everyone can generate code now. Giving models enough architectural context is still being worked on, but it looks solvable. The implications of this are hard to grasp.
Basically, this means that LLMs can be used to automate most things.
How do we know if what we read is from a human?
Well, the honest answer is: we do not.
With AI getting better and better at writing like a human, it is becoming almost impossible to tell whether something was written by AI. Currently, we can still spot some giveaways in text from LLMs, or at least in texts where people are not trying to hide it. But in theory, it becomes impossible, since the LLM can be instructed to write the way you usually do, especially if it has access to some of your own writing.
How can we make sure that we are talking to a human?
When reading something on the internet, it will be impossible to know whether it is from a human or not, because you would have to trust the publisher. And even if you trust the publisher, you still would not know for sure.
When chatting with someone on the internet through Facebook Messenger, WhatsApp, or LinkedIn, you might know them in real life, but you still cannot know whether they are using a bot powered by an LLM to automate the conversation.
If you have a voice call with someone you do not know, it is also hard to know. Most automated calls still use relatively low-quality AI voices, so it is often possible to tell for now. If you are talking to someone you have known for years, it is currently still possible to hear whether it is really them or not.
Video calls help a bit more, because you can see body language and facial expressions together with the voice, and that is harder for AI to fully replicate at this stage. At the very least, it still requires a more sophisticated setup and more effort. Again, it depends on whether it is someone you already know. If it is someone you know, you can more easily tell whether they are trying to automate the interaction. If it is someone you do not know, it may be much harder to spot.
The safest option at the moment, and probably for at least the next 5-10 years, is to meet in person in the physical world. We are still far away from automating that without anyone noticing.
So what can we do to combat this?
To ensure secure channels, we have to find a way to trust the other person.
One way of doing that is to agree on a set of secrets that can be exchanged whenever we suspect a conversation has been compromised. The only solution I can think of is to exchange those secrets written down on paper. For example, once a month you meet in person and write down a list of secrets, one for each day. Then, when you are speaking and are in doubt about whether you are talking to an agent or the real person, you make a video call and confirm the secret of the day.
This seems cumbersome, but I cannot see a better solution.
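The scheme above boils down to a lookup against a pre-shared, per-day secret. Here is a minimal sketch of that check; the dates and secret phrases are hypothetical, and in practice the list would live only on paper, never in code:

```python
from datetime import date

# Hypothetical paper list of daily secrets, agreed on in person.
# Hard-coded here only for illustration.
secrets_by_day = {
    date(2026, 4, 1): "blue-kettle",
    date(2026, 4, 2): "quiet-harbor",
    date(2026, 4, 3): "copper-fox",
}

def verify_secret(claimed: str, today: date) -> bool:
    """Check a claimed secret against today's entry on the list."""
    expected = secrets_by_day.get(today)
    return expected is not None and claimed == expected

# During a doubtful call on April 2nd, ask for the secret of the day:
print(verify_secret("quiet-harbor", date(2026, 4, 2)))  # True
print(verify_secret("blue-kettle", date(2026, 4, 2)))   # False
```

In a real exchange the comparison happens in your head, not in software; the point of the sketch is only that trust reduces to matching a secret that was shared through a channel an agent cannot reach.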
Meeting in person is the best way to ensure that you are speaking to the real person. Everything else is becoming hard to trust. It is simply too easy these days to set up chat automation and phone calls, so you often will not know. It takes about an hour to set up OpenClaw, even for a non-technical person, and connect it to WhatsApp with a phone number and an email.
This means that circles of trust will probably get smaller. There will be more noise, and it may seem like communities are booming and businesses are speaking more to each other, but actually much of that may just be noise.
Building a circle of trust becomes harder, because you increasingly have to meet in person. People will not realize this until they have been fooled a couple of times.