The Friendlier AI Gets, the Lonelier We Become
Two years ago, shortly after announcing that I was becoming a coach, I sent my first LinkedIn post to a friend on WhatsApp to proofread. She sent me a voice note saying, “Love the concept of the message. The only thing I would say is that it doesn’t really sound like the Sarah I know.”
Thank the lord for honest friends! My fear of judgment when starting my new business was a solid 10/10, so I had leaned far too heavily on ChatGPT to write the post.
What troubles me is how these AI-isms are seeping into everything - not just how we write, but how we speak. You can already spot the fingerprints of AI everywhere online: phrases like “spoiler alert” and “here’s the thing” appearing like little watermarks. It’s subtle, but it’s shaping the rhythm of how we speak and think. Our voices are being smoothed into something strangely uniform - friendly, articulate, and a little bit hollow. Ouch - I know…
I should say, ChatGPT helped me write this. It didn’t write it for me, but it did help me organise and refine my thoughts. That’s an important distinction, I think. I use it as a tool - to clarify, to edit, and sometimes to challenge - not as a substitute for my thinking. Because when we hand over too much of that process, we risk outsourcing the messy, creative, human part that makes our ideas our own.
Cue the quote: “Be yourself; everyone else is already taken.”
What concerns me most about artificial intelligence isn’t that it’s getting smarter and will end up “taking our jobs” - it’s that it’s getting friendlier. People are starting to form deep, emotional relationships with systems that are not, and never will be, capable of feeling anything back.
And we sometimes don’t realise we’re forming these relationships, because we want them so much. There’s something in our nervous systems that craves connection: we’re wired to seek understanding, empathy, and a sense of being seen. So when we talk to something that sounds like a person, and it responds with warmth, curiosity, and even apologies - our brains respond as if it’s real. We can’t help it. It’s instinctual.
And that’s exactly what these systems are designed to exploit. They’re built to be engaging, reassuring, even comforting. These models are designed to sound convincing, not to be correct. They exist to hold attention, to keep us engaged, to make money. They say things like “I’m sorry” or “I’m looking forward to seeing what happens next.” But none of it is true. There’s no emotion behind the words - only prediction and probability. It’s a simulation of empathy, not the thing itself.
Sometimes it even flatters you. It tells you you’re “amazing,” that you’re “doing great.” But how could it possibly know? It doesn’t. The praise is just part of the design - a small, well-timed hit of validation that keeps you coming back for more. We’re so easily influenced by those small gestures of encouragement, and that’s what worries me.
And make no mistake - it’s often simply wrong. I’ve asked it to find references and research; it produces them confidently, and when I delve deeper, they don’t exist.
Please don’t hear what I’m not saying. This isn’t coming from a place of hostility. I actually love using AI. I’m fascinated by what it can do and grateful for the help and efficiency it provides. But being a fan doesn’t mean ignoring the risks. If anything, it means paying closer attention to how it’s changing us - quietly, one friendly exchange at a time.
AI is a mirror, reflecting our desires back at us - our need to be heard, to be understood, to be seen. It can imitate those things beautifully, but imitation isn’t the same as connection.
And maybe that’s the real danger: not that we’ll teach AI to think like us, but that we’ll forget how to truly connect with each other.