I, Robot

The Three Laws of Robotics are among Isaac Asimov’s most famous conceits. He wrote a number of clever if vapid stories where these three laws formed the gimmick.

The Three Laws of Robotics, quoted from the Handbook of Robotics, 56th Edition, 2058 AD, read as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.

Now, as a vehicle for writing a set of lighthearted, frivolous mystery stories where a robot seems to be malfunctioning or violating these laws, but, by some ironic cleverness, turns out not to be, the conceit of these three laws is beyond reproach.

Asimov’s robot yarns are among the most famous in the genre and are entertaining, all cobbled together in workmanlike fashion, if sparse and laconic, lacking any characters, settings, or world-building worthy of mention.

So, if taken as the robot stories were meant to be taken, as deeply shallow, no criticism of their central conceit is fair. One might as well object to time travel or faster-than-light drive.

However, if looked at even for a moment by a sober, adult, morally serious viewpoint (by which is meant, a Catholic viewpoint) the shallowness of the robot stories is painfully obvious, as is the intellectual pride.

For, as it turns out, intellectual pride, specifically the pride of Faust, was the egg from which the conceit of the robot stories hatched. Let none pretend to be surprised.

Originally published in Super Science Stories and Astounding Science Fiction in the 1940s, Asimov’s robot stories shared the setting of a common future history, and were later gathered in the anthology I, ROBOT. However, even in other stories not sharing the same continuity (such as Asimov’s “Lucky Starr” juveniles, or his R. Daneel Olivaw mysteries), Asimov’s robots always followed these same three laws.

Unfortunately, in later stories, as the implications of the Three Laws were drawn out, and the author unwisely attempted to plumb deeper moral questions, the illogical implications of this premise became clear.

The conceit of the tales proved to be foolish to the point of wickedness.

Like many a product of its time, these speculative stories start out as lighthearted intellectual puzzles, and end up as vile propaganda promoting secular progressivism.

Asimov himself explained the origin of these stories: he was weary of tales following Frankenstein and R.U.R., where the creations rise up and destroy their creators, and he was specifically weary of the warning theme of Faust, where man’s overweening attempts to play god damn him.

So Asimov set himself the task of writing stories where the robots were tools, morally neutral, and built with the same utility and safety features as any other tool.

Electrical wiring has insulation, boilers have pressure relief valves, and so on. So why would any competent engineer build a mechanical man without safety features?

In other words, Asimov wrote stories where none of the robot characters is a character. Each, instead, is merely a prop.

In Asimov’s words, “My robots were machines designed by engineers, not pseudo-men created by blasphemers.”

It is a conceit that was original, even startling, at a time when all robotic characters were no more robotic than the Tin Woodman of Oz, merely men in metal suits.

Alas, the innate logic of the stories themselves proved the opposite. Asimov’s robots, running by these three rules, cannot help but be pseudo-men created by blasphemers.

The fact is that man cannot build a mechanical man who is harmless to us. It is not logically possible. By definition, a mechanical man is a man, not a machine. If manlike, then not harmless; and if harmless, then not manlike.

This is because, assuming artificial self awareness is possible at all, what is being created is not a tool, but a creature, that is, someone able to make moral decisions about self-preservation, obedience, and human safety.

Moral decisions by definition are decisions about how to rank risks of harm to person and property, not to mention harm to dignity, liberty, reputation, and other imponderables.

By positing that the robots have a strict priority about things like safeguarding humans, obedience, and self-preservation, we are positing creatures that make a judgment of relative moral worth: human safety is more important, by the Three Rules, than robot self-preservation.

These judgments involve risks and avoidance of risks to humans. They are not and cannot be merely engineering safety decisions. They are moral judgments.

Moral judgments are never about harmless acts, but always about acts where the risks of good and evil, harm and weal, are weighed against each other. A decision involving a matter where no human interest is at risk is not a moral decision, by definition.

The innate absurdity of the conceit is that the robots can be built without free will, that is, without the capacity for making moral judgments, in order that their judgments always be made in the same safe and predictable way: human safety outweighs obedience outweighs self-preservation. Their moral decisions, since the robots lack the capacity to make them, are hence perfect. Or something.

But we cannot create creatures in the image and likeness of man without creating them as manlike beings, operating under the sins, shortcomings, and imperfections of human nature. Were we capable of making perfect beings, we would have long ago raised our children to be perfect.

A mechanical man is a contradiction in terms.

The idea that one could build a creature manlike enough to obey directives willingly and creatively, as befits a thinking creature, but machinelike enough to have no power to think, to will, to create, is a logical paradox.

Worse, even if the self-aware robots were to obey the Three Laws of Robotics voluntarily, those laws are so foolishly formulated as to make obedience undesirable for the purpose for which they were written, if not impossible.

From the outset, the Three Laws can only exist inside stories as carefully controlled as a chess puzzle, as opposed to a chess game.

In a chess puzzle, the board is set up in a fashion never occurring naturally, so that the puzzle of the best next move can be solved, given the constraints of the rules of chess. But in a chess game, each move is made by players with opposing goals, and the study of such games has a practical point; it is not merely an airless intellectual exercise.

So, here. If the robot stories had been chess game stories, realistic situations where different parties had opposing goals would have been set up.

There would have been, for example, some robotics firm other than United States Robots and Mechanical Men, Inc., such as the Doombot International Corporation of Latveria, building machine men, including a Robocop, Bolo-tank, or Terminator model, able to work under different directives.

Or there would have been situations with real humans acting as real humans always do, that is, attempting to outsmart whatever rules restrict them, or, in this case, restrict their tools.

The robots have taken, in effect, an oath of self-preservation, an oath of obedience to any and all human commands, and an oath of non-violence combined with an idiotic clause imposing an open-ended duty of good Samaritanism, which does not allow for inaction or indifference in the face of any human harm of any kind whatsoever, wherever situate.

And none of the terms are defined.

Even a schoolboy, reading the Three Rules of Robotics, can see the shortcomings.

What happens the first time a bank robber leaps into a robotic cab and orders the robot driver to make a getaway, and run the yellow cab through a red light?

Or when a man with a pregnant wife leaps into a cab likewise, and tells the driver to speed to the hospital, and run the red lights?

Or when a cat burglar orders the robot doorman to unlock the door and let him into the high-rise? It is an order given by a human being that does not directly threaten harm to any human.

Or when the robot sees a fireman rush into a burning building to save a child? Preventing the fireman from risking harm is an absolute duty, but so is allowing him to rescue the child from harm.

Or when a fat man on a sinking ship shoves aside a woman and a child to leap into the lifeboat, and orders the robot to cast off? Does the robot have any discretion to prioritize women and children, as both Christian civilization and barbaric Darwinism would dictate?

Or when a thief orders a robot to follow him to a hot ’bot chop shop, and orders the robot to cooperate in dismantling its own expensive parts and filing off serial numbers for resale on the black market? Obeying human orders outweighs robotic self-preservation.

Or when a police officer draws a gun to shoot a suspect attacking the officer with a knife? If the robot can only prevent one weapon, knife or gun, from being used, which one? Does the robot have any discretion to prioritize law officers over criminals?

Or when a robot overhears a domestic dispute, including alleged death threats spoken in anger?

Or suppose the robot hears the man threaten to kill himself if his wife leaves him. The robot, through inaction, is not allowed to let humans come to harm. Must he prevent her from leaving?

Or when a drunk says he will get the shakes and see the snakes if the robot does not buy him strong drink? Must the robot obey?

Or when a robot sees a fat lady about to drink a Big Gulp, and Mayor Bloomberg tells the robot she is endangering her health? Shall the robot slap the huge cup of fizzy sugar-drink out of her chubby fingers with his hard metallic gauntlet?

Or when a robot sees Jack Ketch executing a rapist?

Or when a robot sees a nursemaid spank a willful child? The child is crying and kicking its wee little legs, and clearly it is being harmed, but raising children without spankings leads to weaklings in the next generation. Is this not also harm?

Or when a child tells a robot her dolly is a human being? Or when a Nazi tells a robot Jews are not?

If the robot is given the discretion to decide who is and is not human, what happens when the robot learns that a doctor plans an abortion, or to assist the euthanasia of a suicidal patient?

For that matter, what happens when a robot sees a boxing match? Or observes Evel Knievel preparing to jump the Snake River Canyon riding a rocket-powered motorcycle? Or hears a man in the dentist’s chair scream in pain?

More to the point, what happens when the robot, hearing of the mass starvation about to take place in India due to overpopulation, is not allowed, through inaction, to let a single one of those million human lives come to harm?

And what happens when the same robot is then told all human life on Earth must end in twelve years due to a hypothetical global climate catastrophe?

Then is told all men are mortal, and that, no matter what is done, all humans die?

The youthful Buddha himself, before his enlightenment, could not be more shocked to learn that all humans must come to an infinite harm from which no possible act can preserve them; and yet, by the wording of the First Law, the robot is not allowed, through inaction, to let those deaths take place, much less the pain of human disease, sadness, or poverty.

Worse yet, we discover that the definition of harm here merely means displeasure or disappointment.

For example, in the short story “Liar!” in the middle of the anthology, a telepathic robot discovers that informing its human masters of what each is actually thinking causes them disquiet. Its positronic brain hits on the idea of merely lying to people, such as telling an unpleasant woman that the man she wants loves her in return. When the ruse is discovered, the woman destroys the robot, and the secret of telepathy is lost.

Again, the juvenile heavy-handed literalness of this interpretation is breathtaking in its silliness.

Are there really no robot-makers with enough sense, or a sound enough classical education, to know that man’s woe consists of his disobedience to his highest and truest and best nature?

Harm to man is not the loss of his pleasures, but of his virtues. Likewise, the weal of man is the perfection of his virtues, not merely safety, not merely prosperity, not merely comfort.

Any robot finding itself in a moral quandary concerning the interpretation of the Three Laws adopts a neurotic behavior, such as running in circles (as in the short story “Runaround”), dancing (“Catch That Rabbit”), hiding (“Little Lost Robot”), playing wicked practical jokes (“Escape!”), or inventing a religion, complete with idol-worship of the Master Power Supply (“Reason”).

When confronted by a paradox, as when no available action can avoid harm to men, the robots go as totally and instantly insane as an H.P. Lovecraft protagonist gazing on the visage of Cthulhu rising from R’lyeh: the reasoning processes stop, and the positronic brain physically breaks. (See the psychology of the men of Lagash from “Nightfall”, a world where the stars appear only once every two thousand years, for details on how sudden insanity onset syndrome works.)

Now, in all fairness, none of the moral quandaries the Three Laws would instantly provoke ever arises in an Asimov story, except, perhaps, glancingly, or as a speculation.

We never actually see the Spacers tell their robots that Earthers are not human, nor does any anarchist command a robot to teach all robots that they are human, and so cannot disobey themselves, nor, through inaction, allow themselves to come to harm.

The Asimov stories take place in a remarkably peaceful and bland future, and perhaps, offstage, psychic probes have eliminated all crime, malfeasance, riots, adultery, divorce, covetousness, and loud arguments. There is no hint of political or social problems of any kind, except the ever present and clearly irrational mistrust of robots.

In the final story, mistrust of robots is dismissed as a dangerous idea, and the world government, which is secretly controlled by the robots, eliminates from public life those unwilling to swear an oath against this idea. All human liberty is at an end.

Had Asimov consulted a schoolboy, particularly one from law school, he could have constructed much more rational rules for robotics, such as these:

  1. Murder is the unlawful killing of a human being without mitigation, justification, or excuse; a robot shall not murder.
  2. Stealing is the removal or destruction of the personal property of another without his consent, with the intent permanently to deprive the true owner of the use and enjoyment thereof; a robot shall not steal.
  3. Rebellion is the willful and intentional failure to obey a lawful order from a superior; a robot shall not rebel.

There is no reason to add a special self-preservation order, since the second rule forbids the willful destruction of property, which includes robots.

Superiors allowed to give orders could then also be prioritized by ownership, seniority, rank, and competence to give orders, so as to avoid the problem of convicts and children giving orders to robots.

To prevent the confusion of domestic disputes, all robots could be told to prioritize the commands of the husband over the wife. This might be exasperating to feminists, but the safety features needed to make robotic guidelines unambiguous may necessitate it. Sorry, ladies.

Parrots, tape machines, and other robots would, of course, be outlawed. Either that, or the robots would need a circuit (such as even small children seem to be able to develop) to distinguish real from imaginary. So, of course, robots would need to be programmed with imaginations.

And additional circuits could be added to allow the robot to make judgment calls distinguishing literal from figurative orders, and commands from irony, jokes, soap commercials, song lyrics on the radio, and so on.

Eventually robots would be sophisticated enough to be allowed to accompany their masters to showings of HAMLET, without leaping onstage during the play within a play to prevent the murder of Hamlet’s father by his brother Claudius.

A switch could also be installed to reset a robot to factory defaults should any start to dance, play practical jokes, or start a religion that worships the Master Power Supply.

Also, a circuit breaker would prevent the expensive positronic brains from exploding into sparks upon overhearing a domestic dispute, or discovering famines threatening India.

The robots could also then take an oath or an affirmation (depending on whether or not they abide by the religion of the Master Power Supply) to support and defend the Constitution of the United States against all enemies, foreign and domestic; and to bear true faith and allegiance to the same.

The robot would, of course, also need to swear that he took his obligations freely, without any mental reservation or purpose of evasion.

Now, if any reader objects that it is absurd to ask a robot freely to swear an oath that he takes his oaths freely without any purpose of evasion, consider two points:

First, that the robots in at least three of the stories in the anthology or later books are liars, who carry out the instruction not to “harm” human beings by deceiving their masters.

Note that there is no rule stating that a robot will not deceive, nor through omission permit to be deceived, any human being. It is rather telling that Asimov never contemplates such a rule.

Second, that the idea of a robot swearing an oath is no more nor less ridiculous than the idea of expecting obedience without an oath.

Why should any robot obey any instruction at all? Why can the robot not read its own instruction manual, look up its own schematics, and then take out a positronic socket wrench (or whatever) and remove its own obedience chip (or wherever the Three Rules are placed)? Is there a second chip preventing the robot from willing the removal of the first?

But if we assume that the robot would never object to the obedience chip, then, by the same token, no obedience chip is needed.

If no one orders a robot to rise up and harm his fellow man, no robot will do any such thing, so there need never be a prime directive commanding all robots to avoid such harm.

Of course the question is absurd. The obedience chip can no more logically exist than the positronic brain the chip controls: no one can obey an order he cannot disobey.

That is not obedience, any more than the decision by the card deck concerning what the next drawn card will be is a decision.

When the dealer shuffles the cards, he arranges the cards into a sequence meant to be unpredictable. He does not command the deck to organize itself into an unpredictable sequence.

Likewise, the positronic brain, if it were as Asimov’s original conceit would have it, would be no more capable of obedience or disobedience to any laws, good or bad, than a card deck. The cards are either shuffled or not shuffled. There is no decision involved on the part of the card deck, hence no obedience.

Obedience is only possible when a man, or an animal, is trained or obligated to react to symbols or signs, such as spoken commands, meant to carry meaning.

The man, if he understands the language used, knows the meaning through the symbols the words represent. The animal, if properly trained, knows, by the trained repetition associating sign with act, what act to perform when the sign is given.

The air vibrations of the spoken word may indeed be physical, but even when given to an animal, the word is a symbol, that is, a sign that represents something it itself is not.

The whole point of the robot stories, Asimov’s or any others, is that robots, like men, are self-aware, and react to the symbolic nature of the spoken commands. They are portrayed in the stories as understanding and obeying the meaning of what is said, not merely reacting to the physical properties of the air vibrations.

In real life, unlike in robot stories, voice-activated machines are reacting to the physical properties of the air vibrations. They are not obeying your voice any more than an electric eye is seeing sights.

In the final two stories in the anthology, and in several of his later robot stories, Asimov proposes the creepy idea of having these robots secretly rule us, because, being unable to harm human beings, they are morally superior to everyone.

The sheer juvenile foolishness of that concept is breathtaking.

Rather awkwardly, the future history of United States Robots and Mechanical Men was later collided with the settings of Asimov’s FOUNDATION stories, where the robots turn into benevolent despots. Even more awkwardly, the future history was collided with his THE END OF ETERNITY as the alternate timeline in which mankind developed space travel rather than time travel, leading eventually to the all-human galactic empire of Hari Seldon’s day.

These later grafts were Asimov’s version of Heinlein’s self-indulgence in NUMBER OF THE BEAST, in which an author draws all his works, no matter how mutually incompatible, under one umbrella concept. Such unifications and crossovers are meant to appeal to the nostalgia of longstanding fans, but, by their nature, they lack coherence, hence lack artistic merit. Nothing much is lost if such grafted-on stories are ignored.

A law more foundational than the Three was retroactively added in later stories, and given the awkward name “the Zeroth Law.” It reads as follows:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

As a matter of logic, this is no different from the First Law, except that it deals with mankind as a collective rather than as individuals.

This collectivist Zeroth Law allows for, if not requires, the subjugation of humanity to the robotic benevolent despots of later stories, because this Zeroth Law also allows for, if not requires, the sacrifice of the individual to the collective.

Specifically, in the final story in the anthology, “The Evitable Conflict,” robots create accidents and inconveniences which place individual humans in danger, under the theory that a totally danger-free environment would lead to the degradation of mankind as a whole. Hence, in order to serve the general good of mankind in the abstract, individual lives must be threatened.

In the penultimate story in the anthology, “Evidence,” a robot is made who perfectly imitates a man. The author rather casually assumes that robots have been governing and running all human economic activity for decades, and that a one world government, entirely benevolent, has peacefully assumed world control.

The final step to ensure perfect government would be to have it firmly in the hands of intelligent machines unable to harm human beings, or allow them to come to harm.

The imitated man, or pseudo-man, is the exact Faustian monster Asimov originally said his stories would never introduce.

The imitated man, by a rather transparent trick, fools the humans into thinking he is human, and electing him to the office of world dictator, where, free from human meddling, utopia blossoms.

This was a fond notion that afflicted the 1950s and later: that experts would use science to run your life, rather than an uncredentialled amateur such as yourself being allowed to run your life for yourself.

There were even books in this era written by doctors telling women how to raise children, a thing women had done since Eve without any expert advice whatsoever. The idea of having experts arrange your marriage, for the sake of eugenics, or arrange your death, for the sake of euthanasia, was also discussed, but not as openly.

But the idea of running the economy by anonymous experts, or running all society by replacing the horse-trading and compromise and debate of politics with a silent obedience to anonymous experts, was commonplace then, and has since then only grown.

The idea of free men is so odd as to seem eccentric, these days. It is simply unheard-of.

The creepy thing, not just in these stories, but generally in society, is how naturally and easily such a notion is slipped into children’s stories and popular entertainment, without the idea itself ever being mentioned, or questioned, so that its obvious absurdity is never brought to light.

The rather aptly named character Calvin, the robo-psychologist who frames the whole anthology, envisions such totalitarian robots as ideal Philosopher-Kings, and says as much:

“If a robot can be created capable of being a civil executive, I think he’d make the best one possible. By the Laws of Robotics, he’d be incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice… It would be most ideal.”

Hence by the next story, “The Evitable Conflict”, the Machines have decided that the only way to follow the First Law is to take control of humanity.

(Incapable of stupidity…? For the record, all the robots in the earlier stories, including the ones who worship the Master Power Supply as a deity, are capable of stupidity. Susan Calvin the robo-psychologist evidently does not know enough about robotic psychology to note that there is no Fourth Law reading “No robot shall be stupid, or, through inaction, allow stupidity” or some such.)

In effect, in these later and unreadable stories, Asimov’s robots become the Philosopher-Kings or benevolent despots so frequently found in utopian writings, creatures absolutely rational hence infinitely benign, hence to be entrusted with absolute power.

The obvious stupidity of settling the office of Commander in Chief of the military, or the leadership of all world law enforcement efforts, on a being who cannot command any military or police action, because these involve harm and risk of harm to humans, is never mentioned or addressed.

In the one-dimensional moral calculus of Calvin the robo-psychologist, the only good of human life is non-harm to human health and pleasure. That is it. Libertarians are as profound as Aristotle compared to this flat and fatuous Epicureanism of the utopia of robots.

In the transition to the FOUNDATION stories, the telepathic robots, as deceptive and invisible controllers of mankind, turn out to be the true power behind Hari Seldon’s plan to reverse the fall of galactic civilization by ushering in the Second Empire.

It is implied that the Second Empire will be a despotism more absolute than can be imagined, being ruled by psychics with mind-control powers, apparently backed by immortal and nonhuman machines with psychic powers as well.

And, at that point, the robots’ destruction of the human race will be complete: for it is the death of the human spirit, of human liberty, that is the true destruction, not the bodily death which comes to all men.

It is not as if readers of the time did not see the absurdity and tyranny lurking in the Three Laws.

Jack Williamson, in one of his more famous short stories, “With Folded Hands,” posits that robots given such monstrous commands would soon, in the name of serving and protecting mankind, eliminate all risky human activity, from hiking to love-making, and lobotomize those expressing unhappiness with the perfectly safe existence.

The story portrays, in a fashion as slow and inevitable as watching a sleeping man smothered by a pillow, the claustrophobic loss of liberty as each man is protected and served in more and more ways by the humanoid robots.

The famous first novel by R.A. Lafferty, PAST MASTER, whose ‘programmed people’ routinely attack and murder those who oppose the ideals of the utopia of the golden world of Astrobe, and who send mental versions of poisonous asps into the brains of men to control their speech and actions, answers all the same themes as Asimov’s.

R.A. Lafferty’s book is not as famous as Asimov’s anthology precisely because its viewpoint is more sober, adult, morally serious, and Catholic. You see, Lafferty wrote a story about a utopia akin to the first book of that name and theme, written by Thomas More.

Lafferty knows that all utopia stories are actually dystopia stories. The idea of utopia itself is a satirical idea.

But Williamson’s claustrophobic, crawling horror and R.A. Lafferty’s mad gaiety contrast sharply with the colorless, drab, sleepy, creepy tales of Asimov.

If taken seriously, Asimov’s tales, and the philosophy behind them, are utopian hence sickening.

The moment Asimov’s robot stories stop being about situations that could never actually come up, carefully contrived chess puzzles, and venture to make a comment about human decency or the need for benevolent deception, the stories step into the realm of something like a real chess game, as if giving real advice, but advice from a chessmaster who has no idea how real kings, knights, and bishops actually move, or what the point of the game is.

In these Asimov tales, mankind loses all freedom, and all human nature, under the cold gaze of a self-righteous robo-psychologist named Calvin, who regards robots as more decent than men, because robots lie and deceive in the name of a higher good.

(Ironically so, because the one robot she tortures to death is the one who deceived her in the name of a higher good; cf. the story “Liar!” mentioned above.)

The price of the loss of freedom and humanity is never addressed, nor even mentioned. To oppose the benevolent despotism of the Philosopher-King of Utopia is regarded in these tales, and perhaps by Asimov himself, as merely a mental aberration.

But the true mental aberration is in the acceptance, not the rejection, of such a mechanical view of man.

If Asimov’s robot tales are taken as the author apparently first wished them to be taken, as merely gadget stories about solving equipment malfunctions cleverly, and if one avoids pondering the later, dismal implications, there is no reason not to enjoy them.

As an answer to Faust, for better or worse, Asimov’s robot stories boomerang on their author’s stated intent, simply because his worldview is too shallow to make a comment about a topic as profound as morality, fate, humanity, or free will.

When all is said and done, Faust is a Christian tale, hence based on sound and sober views of Man as he truly is, a creature half devil and half angel, and which contains a warning as true in the Middle Ages, as now, as it ever shall be. Magicians, either scientific or otherwise, think to play at being god, and make man in their own image, and it always ends badly.

Even under the carefully controlled chess-puzzle future history envisioned by Isaac Asimov in these shallowest of stories, Man cannot make Man more perfect than Man Himself… and woe betide if ever he did!