I, Robot


A book by Isaac Asimov

ISBN: 0553294385
ISBN: 078577338X hardback
ISBN: 0007119631 paper, UK, new edition
ISBN: 0586025324 paper, UK

I, Robot

To you, a robot is just a robot. But you haven't worked with them. You don't know them. They're a cleaner, better breed than we are.

When Earth is ruled by master-machines... when robots are more human than humankind.

Isaac Asimov's unforgettable, spine-chilling vision of the future - available at last in its first paperback edition. - Cover blurb from the paperback edition of I, Robot

Robbie

This first story in the collection is essentially about the technophobia that surrounds robots, and how misplaced it is. This is not such a strange place to start: almost every science fiction story to feature a robot before this one followed the 'robot turns against its creator' theme.

Robbie was a mute robot, a nursemaid for a girl called Gloria Weston. Her mother was not convinced that robots were safe, and decided that the robot had to be taken back. Her view changed when Robbie saved Gloria's life in the US Robots factory.

Runaround

A robot designed to mine selenium on Mercury's surface goes missing. The two humans on the planet, Powell and Donovan, set out to retrieve the robot and analyse what happened. They find that its levels of obedience to two of the Laws of Robotics have reached an equilibrium, and it is running around in a circle, maintaining this equilibrium value.

Many of these stories explore the implications of the Laws of Robotics, although in Runaround the robot is actually following the Laws as they were intended. In others, ambiguities in the language are exploited: the robot does what it was told, but not what was intended.

Reason

Powell and Donovan are back, but this time on a space station supplying energy via beams to the planets. The robot that controls the energy beams has been given a unique ability: reasoning. Using this skill it decides that the humans who inhabit the station are unimportant, and that it serves a greater purpose (i.e. its god). The humans initially attempt to reason with the robot, until they decide that they can't win - and that the robot can still perform its job well. The only difference is that it does the job not for the benefit of the humans, but for its deity.

A robot that invents its own religion. Let's face it, it's cute. Are we surprised that the robot's designation is QT? An interesting point is that the robot still obeys all Three Laws, albeit unwittingly. Why, if it doesn't believe in the humans on the planet Earth, should it act to protect them?

Catch that Rabbit

Powell and Donovan are now in charge of field tests for an asteroid mining robot. For some reason, whenever they are not watching the robot, it comes back empty. The robot has subsidiary robots under its control, and when Powell and Donovan observe it secretly, they see it launch into absurd marches and dances with its subsidiaries whenever something unexpected happens.

Here, Asimov anthropomorphises by having a robot twiddle its thumbs when it can't think of what to do. In many cases, robopsychology - personified by Susan Calvin - runs parallel to human psychology: we have already seen hysteria and religious mania.

Liar!

Somehow, a robot is created that has the ability to read minds. While the heads of US Robots and Mechanical Men are trying to analyse what happened, the robot tells each of them what the others are thinking. The First Law still applies to this robot, and it lies to them about the thoughts it has read in order not to hurt them, especially in terms of the problem it was initially designed to solve. Unfortunately, when the robot is confronted about this, it finds that it has in fact hurt them. Anything it does from that point on will further violate the First Law, and it collapses.

The application of the Laws of Robotics is again the subject here, this time in terms of telepathy. The lexical ambiguity explored is the definition of injury: the robot in Liar! takes into account psychological injury as well as physical.

Little Lost Robot

At a research outpost, someone told a robot to 'Get lost'. It did. It was then up to US Robots' robopsychologist Dr Susan Calvin and Mathematical Director Peter Bogert to find it. The problem was that they knew exactly where it was: in a room with sixty-two identical robots.

So, why was this individual robot so important? The answer is that it had had its First Law modified to read only 'No robot may injure a human being' - that is, it could happily allow a human to die through inaction. Again the ambiguities of the English language are explored: a technician who wanted the robot to leave told it to 'Get lost', and the robot assumed that the order meant it should conceal itself. Little Lost Robot also addresses the Frankenstein Complex: the robot must be found because people are still by and large scared of robots, and if one with a different First Law were discovered there would be an outcry, even though the robot is still incapable of harming a human.

Escape!

US Robots are approached by their biggest competitor with plans for a spacejump engine, but are wary because in performing the calculations, their rival's supercomputer destroyed itself. They work out a way to feed the problem to their own robot computer without the same thing happening, and build a ship. When they test it, the computer starts playing practical jokes on them, such as not letting anyone control the ship and feeding the crew on beans and milk. Looking through the calculations, they discover why: a hyperspace jump causes the crew of the ship to cease existing for a brief moment, a temporary violation of the First Law.

This story again relies on the differences in interpretation of the Laws of Robotics between the human members of US Robots and their mechanical creations. The important factor in this robot is its personality; it allows the supercomputer to calculate the answer to the hyperspace problem, but causes it to behave immaturely.

Evidence

A man standing for election as Mayor of New York, Stephen Byerley, is suspected by some people of being a robot. If this is true, it will ruin his campaign, both because of the hysteria it would generate and because robots are not allowed to run for office. He never confirms nor denies that he is flesh and blood, but while he is campaigning someone rushes the stage and he punches the intruder away. This confirms that he is human in the minds of most people, because striking a human would violate the First Law. Susan Calvin is not convinced, because a robot can hit another robot.

Again this is about the public's distrust and fear of robots, despite the fact that a robot is the best candidate for Mayor. Many people see Asimov's treatment of technophobia as an analogy for the anti-Semitism he experienced himself.

The Evitable Conflict

The Machines - powerful computers that control the world's economy and production - start giving instructions that appear to go against their function. Although each glitch is only small, the fact that they exist at all is alarming. Dr Calvin and Stephen Byerley, now World Co-ordinator, investigate.

It is 2052, and machines are in control of the world... luckily Asimov knows his audience will appreciate a good story, and doesn't resort to alarmism. Here the Machines' First Law resembles the later Zeroth Law - 'No robot may harm humanity' - and it is this that causes the apparent glitches. The people affected are all anti-robot, and so the Machines demote them or put them out of business so that they cannot gain enough power to incite a revolution.