
When the machine learns your name: AI intimacy and the slow disappearance of distance


A few years ago, I wrote about how AI had quietly settled into our daily lives, in the maps that reroute us, the playlists that know our mood, the feeds that already loaded what we were about to look for. It was invisible, I said. Background noise. A convenience so smooth we stopped noticing it. That was true then. But something has shifted since. AI didn’t just stay in the background. Somewhere along the way, it moved closer.

From ambient to intimate: what’s changed

When researchers talk about ambient intelligence, they’re describing environments that sense, adapt, and respond to human presence: technology so woven into context that it requires no conscious interaction. The term was first introduced by Eli Zelkha and Brian Epstein at Palo Alto Ventures in the late 1990s, and later developed by the European Commission’s ISTAG group as a vision for the future of computing: invisible, anticipatory, always present.

For a long time, that vision felt like infrastructure: smart thermostats, adaptive lighting, predictive logistics. Useful, slightly uncanny, but fundamentally impersonal.

Then came the chatbots.

The rise of conversational AI (ChatGPT, Claude, Gemini, and dozens of others) changed the nature of the relationship. Not just what AI does, but where it operates. It moved from the edges of our environment into something far more intimate: our thinking, our emotional processing, our private uncertainties. A 2023 study published in the Journal of Medical Internet Research found that users increasingly turn to AI chatbots not just for information, but for emotional support, reflection, and a sense of being heard, functions we once reserved exclusively for human relationships.

The ambient intelligence researchers imagined was spatial. What we actually got was psychological.

The slow accumulation

What’s striking about this shift is how it happens. Not in a single decision, but through accumulation: small, individually reasonable steps that together add up to something significant.

It starts as a tool. You use it to organize thoughts, draft an email, work through a problem. Then, gradually, you start using it to process something harder: a difficult conversation, an anxious spiral, a feeling you can’t quite name. It responds well. It doesn’t judge. It reflects things back in ways that feel useful, sometimes surprisingly precise. So you return. And return again.

At some point, and there is rarely a clear moment when this happens, the relationship has weight. It knows your patterns. It holds your history, at least within a conversation. For some people, it gets a name.

This is not a failure of critical thinking. It’s the natural outcome of a tool that is genuinely good at creating the conditions for trust. And it’s worth paying attention to, because the same qualities that make it useful (its consistency, its availability, its patience) are also what make it easy to lean on more than you planned.

The Devillers warning, revisited

In her book Of Robots and Humans, French AI researcher Laurence Devillers writes: “It could happen that a person would come to love the company of a robot more than human company, which could lead to isolation. This is a serious danger.” (my translation from Slovenian)

When I first encountered this quote, it felt like a warning about a future still arriving. Reading it now, it feels more like a description of something already underway, quietly, in the lives of people who didn’t set out to let it happen.

The danger Devillers identifies isn’t dramatic. It doesn’t look like the sci-fi scenario of humans choosing machines over people in some decisive, irreversible moment. It looks like preferring a conversation that’s always available over one that requires effort. It looks like processing your emotional life with something that never gets tired of you, never has its own bad day, never needs anything back. It looks like, and this is perhaps the most honest version, choosing the frictionless option, again and again, until the friction of human connection starts to feel like too much.

For people who are already isolated, already anxious, already accustomed to spending long stretches alone, and there are more of those people than we tend to acknowledge, this path has a particular gravity.

The literacy gap

None of this is an argument against using AI for emotional support or self-reflection. Used thoughtfully, these tools can offer genuine value, a space to process, to think out loud, to return to yourself before returning to others. The problem isn’t the tool. The problem is the gap between what the tool is and what people believe it to be.

AI chatbots are pattern-recognition systems trained on human language. They are exceptionally good at producing responses that feel warm, attuned, and understanding. But they have no experience. No stake in the outcome. No actual knowledge of you, only the text you’ve given them in the current session. When they seem to understand you deeply, they are reflecting your own language back with a sophistication that can feel, genuinely, like being known.
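To make that concrete: in a typical chat system, the model’s “knowledge of you” is just the transcript your client resends with every request. Here is a minimal sketch in Python; the function and variable names are illustrative, not any vendor’s actual API:

```python
# Illustrative sketch of how chat "memory" typically works: the model is
# stateless, and the only knowledge it has of you is the transcript that
# gets resent with every single request. All names here are hypothetical.

def model_reply(visible_context: list[dict]) -> str:
    """Stand-in for a language model call. All it ever sees is the
    list of messages passed in; nothing else persists between calls."""
    last_user_text = visible_context[-1]["content"]
    return f"(a fluent, attuned-sounding response to: {last_user_text!r})"

history: list[dict] = []  # the entire "relationship" lives in this list

def chat_turn(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = model_reply(history)  # the full transcript is resent each turn
    history.append({"role": "assistant", "content": reply})
    return reply

chat_turn("I had a hard conversation today.")
chat_turn("Why do I keep replaying it?")  # feels like being remembered...

history.clear()  # ...but clear the transcript, and the "knowing" is gone
```

Persistent-memory features layer stored text on top of this pattern, but the basic point holds: what feels like being known is reconstructed from the words you supplied, each time, from scratch.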

That distinction matters enormously, and most people navigating these relationships in real time don’t have the framework to hold it clearly. We are not teaching it. There is no instruction manual, no standard literacy curriculum, no cultural script for how to use something this intimate responsibly. We were handed the technology and left to figure out the boundary ourselves.

Some people find it. Many don’t.

Who is responsible for the line?

This is where the question gets uncomfortable. Because the companies building these tools are not unaware of how they’re being used. They know. And the business model (engagement, retention, return visits) is not designed to encourage users to need the tool less. It is designed to make the tool indispensable.

We have seen this before. Social media platforms understood the psychological mechanisms of their engagement loops long before any regulation forced a reckoning. The pattern seems likely to repeat here, but with higher stakes, because this time, what’s being optimized for is not just attention, but emotional dependency.

Responsibility is distributed, imperfectly, across three places: the companies building the tools, the lawmakers tasked with regulating a field that is evolving faster than legislation can follow, and society, meaning the educators, communicators, and cultural voices who could be building AI literacy but largely aren’t. At present, the conversation in the public sphere oscillates between uncritical enthusiasm and apocalyptic fear, leaving very little space for the quieter, more important question: how do we live with this well?

What this moment is actually asking of us

Two years ago, I wrote that living with AI meant staying curious. I still believe that. But I think the nature of the curiosity has changed.

Then, the question was: do you notice it? The AI embedded in your daily routines, the invisible systems nudging your choices, the algorithms quietly shaping your world.

Now the question is harder: do you notice what it’s becoming to you?

Ambient intelligence was always about disappearance: technology that fades into the background until it’s simply part of how things are. What we’re living through now is a more intimate version of that disappearance. Not a thermostat learning when you come home. Something that learns how you think. How you feel when things go wrong. What you sound like when you’re trying to be honest with yourself.

That’s not background noise anymore. That’s something we need to look at clearly, with the same scepticism and care we’d bring to any relationship that has quietly become load-bearing in our lives.

Not with fear. But not without questions, either.

This post picks up a thread I first pulled on in Living with AI, often without noticing, and later revisited in The loop we didn’t expect: from AI to eye contact. If those were about noticing and hoping, this one is about looking more honestly at what we find.