There is a specific kind of discomfort that has no good name. Not fear exactly. Not disgust. Something quieter and harder to place, the feeling of being almost fooled by something that was never quite trying to fool you. We know it when we feel it. We just rarely stop to ask what it is actually protecting. Science fiction has been sitting with that feeling for over a century. And it turns out, it knew something the engineers didn’t.
The valley and what lives in it
In 1970, Japanese roboticist Masahiro Mori described a phenomenon he called bukimi no tani, the uncanny valley. His observation was deceptively simple: as robots become more human-like, we tend to warm to them. Up to a point. When a robot crosses into almost human territory, close enough to trigger human expectations, but not close enough to meet them, something reverses. Warmth gives way to unease. Familiarity curdles into something harder to name.
He framed it as a design problem. A threshold to cross, a valley to climb out of.
But the anxiety the valley describes is older than robotics, older, even, than the word robot itself. Eighteenth-century European automata, clockwork figures that could write, draw, play instruments, were already unsettling their audiences long before anyone had language for why. And when Karel Čapek introduced the word robot in his 1920 play R.U.R., the machines revolted and wiped out humanity in their very first fictional outing. We did not begin with friendly robots and gradually grow nervous. We arrived nervous.
Which suggests that sci-fi did not create our anxiety about human-like machines. It found the anxiety waiting, and gave it a shape.
How sci-fi learned to draw the line
Look at the robots we love in science fiction and a pattern emerges quietly. R2-D2. Wall-E. Baymax. Robby the Robot. Clearly mechanical, unambiguously other, and we embrace them without hesitation. They are safe to love precisely because they do not threaten the boundary. They are not pretending to be us. They wear their difference openly, and in return, we extend them something close to tenderness.
Now look at the robots that frighten us. The Terminator. Ava in Ex Machina. The hosts in Westworld. The replicants in Blade Runner. The threat in each case is not that they are too mechanical. It is that they are indistinguishable. The horror is not the machine; it is the possibility that you would not know you were looking at one.
Sci-fi understood something about human psychology long before researchers named it: the dangerous robot is not the one that looks like a robot. It is the one that passes.
And yet even the clearly mechanical robots (Pepper, NAO, Kismet) tend to have faces. Eyes that track, mouths that suggest expression, something to attach to. We need a hook for empathy, and the face is the fastest route to one. The design logic follows naturally: more human features equal more acceptance. Lean toward us, and we will lean back.
Sophia is where that logic reaches its limit.
Frozen mid-journey
Sophia, developed by Hanson Robotics, is perhaps the most visible real-world occupant of the uncanny valley. Not because she failed, but because she was never quite meant to fully cross it. The face moves with an almost-familiar expressiveness. The eyes track. The voice carries cadence and warmth. And then the metallic skull catches the light. The mechanical hands gesture. You remember the wheels.
The discomfort is not about any single feature. It is about inconsistency. The human face triggers a specific set of expectations (empathy, presence, connection), and the mechanical elements violate them mid-sentence. Your mind begins one story and receives a different one before it can finish.
Kismet, the expressive robot head developed at MIT in the late 1990s, produces almost none of this unease despite being far less sophisticated. It makes no attempt to pass. Its features are suggestive rather than imitative, enough to invite warmth, not enough to demand it. The expectations it sets, it meets.
Sophia sets expectations she cannot fully keep. And that gap is where the valley lives.
The exception that makes the rule uncomfortable
Isaac Asimov’s 1976 novelette The Bicentennial Man has stayed in the conversation about robot humanity longer than almost any other sci-fi text, and it is worth asking why.
Andrew, the robot at the centre of the story, does not sneak across the line between machine and human. He earns his way across it slowly, over a lifetime, accumulating everything we associate with humanity: creativity, love, loss, the desire for mortality. The story asks, with quiet persistence: if something has all of that, what exactly is the argument for keeping it on the other side?
The answer the humans in the story give, for a long time, and with considerable resistance, is revealing. It is not that Andrew lacks the qualities. It is that granting him the status costs them something. Equality is not simply additive. When you extend the category of fully human to someone you have defined yourself against, you do not just gain a neighbour. You lose a mirror. You lose the thing that was making the boundary feel meaningful.
Asimov’s real subject was never the robot. It was the humans who needed him to stay outside.
The boundary we cannot afford to lose
This is what the uncanny valley, looked at sociologically rather than technically, might actually be protecting.
We define what we are partly by establishing what we are not. The other is not just different; the other is necessary. It holds the shape of our self-definition in place. And for most of human history, the boundary between human and not-human has been among the clearest and most load-bearing lines we draw.
The almost-human threatens that line not by being inferior, but by being close enough to question it. When something sits in the valley, when it has a face that moves and a voice that carries warmth and a history it can recount, the boundary does not just blur. It starts to ask what it was ever based on.
Sci-fi has been circling this question for over a century. The stories that endure are the ones that take it seriously, not the ones that reassure us the line is safe, but the ones that stand at the edge of the valley and look down.
The Bicentennial Man ends with Andrew finally granted humanity, on his deathbed, having chosen mortality to prove he could. It is a victory. It is also a little devastating. Because the thing he had to give up, to be accepted as one of us, was the thing that made the question interesting in the first place.
Maybe the valley was never really about the robots. Maybe it was always about what we need the other side of it to mean.
This post is the first in a two-part exploration of the uncanny valley (second part here). It begins here with the stories that shaped our instincts, and continues in a follow-up that looks at what those instincts reveal about us now that the fiction is becoming harder to distinguish from the present. It also connects to an earlier thread on human-robot interaction begun in Skin-to-sensor: rethinking connection and When Sci-Fi becomes reality: humanoid robots bridging fiction and fact.