Asimov’s Three Laws reduced to Two
A reader with the ever-present moniker of ‘Ubiquitous’ asks:
You made the observation yourself in The Golden Age, as did Asimov whenever he used them, that the Asimovian rules as we have them wouldn’t work.
What Asimovian rules would you suggest, out of curiosity?
I would program the computers with only two rules: first, love the Lord your God with all your heart and soul and mind and strength, and second, love your neighbors as yourself.
So far as I know, not a single science fiction writer has ever written a story where an artificial intelligence was programmed to carry out those commands. I offer the idea freely to any writer who wishes to attempt the feat.
The closest thing we had was in the Steven Spielberg movie A.I., where a machine shaped like a cute little boy was programmed to love its owner as a mother.
I assume most science fiction writers would handle the idea as a parody or a tragedy. They can picture a computer acting like Torquemada but cannot picture a computer acting like Saint Francis of Assisi, or even like Saint Thomas Aquinas.
For that matter, I assume most modern people, science fiction writers included, cannot picture a Christian acting like Saint Francis of Assisi, or Saint Thomas Aquinas. Which says nothing very flattering about the attempts of modern Christians to live the Christian life, and be salt to a world of rotting meat, or light to a world of darkness.
78 Comments

Monday, March 17th 2014 at 9:44 am |
I’m reminded of Clifford Simak’s “All the Traps of Earth,” in which a robot becomes a healer and sets out to help others as a way of fulfilling his destiny. It’s my favorite Simak, the one I go back to.
Monday, March 17th 2014 at 11:13 am |
Or Anthony Boucher’s “The Quest for Saint Aquin”
Monday, March 17th 2014 at 8:57 pm |
I immediately thought of Simak as well… “Project Pope” and “Cemetery World”. I honestly can’t recall a Simak book or story that I didn’t thoroughly enjoy.
Monday, March 17th 2014 at 10:31 am |
I think it would very quickly become tragedy and farce both. (Disclaimer: I don’t read a lot of A.I.-themed fiction.) But could an A.I. have an imagination? Because I think love and imagination are so closely linked in a healthy personality, it’s hard to imagine one without the other. And as a Christian, expressing the love of God and neighbor requires not only rules (objective guidelines, not context-specific) but also docility to the Holy Spirit for what to DO within the objective guidelines at that moment, context-specific. That docility allows a person to intentionally participate in Providence. And I cannot wrap my mind around the idea that an A.I. could receive inspiration, even if it had some degree of free will. Without inspiration, it would become tragedy and farce PDQ.
Monday, March 17th 2014 at 4:12 pm |
If there is no imagination, there is no Intelligence, just the “Chinese Room”. And an A.I. receiving inspiration is no less possible than a wall being written on by the Hand of God, yes?
Monday, March 17th 2014 at 7:50 pm |
Why could not an A.I. have an imagination? Babies develop imagination as they grow. A mechanical process copying this biological process might copy it closely enough to have the same results, might it not?
In most science fiction, robots and computers are used as metaphors for unemotional and unimaginative thinking, and this is a useful metaphor. But in real life, if we really discovered a way to create intelligence and self-awareness in any sort of artificial being whatsoever, I assume that the artificial man would have all the sentiments and psychology and soul of a natural man. I do not think it is possible for a creature to possess reason without possessing right reason. The self-aware machines (assuming such an absurdity could exist) would have souls for the same reason we have souls: because we are rational creatures with a moral imagination. If you can think in the abstract, you can think in terms of the general rule and principle. If you can think, you can think about morality. If you think about the principle of morality, you either apprehend the Golden Rule, or your thinking goes nowhere.
Now, there might be a good argument to prove that creating an artificial intelligence is impossible; but in a science fiction story, I have often seen the assumption that robots cannot have souls, but I have never once seen an argument, or even an excuse, supporting the idea.
Monday, March 17th 2014 at 8:41 pm |
This is thinking out loud here, but the soul of a robot you propose would be a rational soul, and so far only humans have rational souls. The first obstacle I see to this is that we hold that each rational soul is created by God and joined to the living body. If there is no living body, when would the ensoulment of a robot happen? The criterion you seem to suggest: “because we are rational creatures with a moral imagination. If you can think in the abstract, you can think in terms of the general rule and principle” implies ensoulment at human sentience, which obviously contradicts the physical standard we have for humans. A contradiction is usually a problem….
The 2nd obstacle is the resurrection. There’s a strong tradition (I’m thinking in particular of Bernard of Clairvaux) that the soul ‘needs’ the body & it’s for that reason our souls will receive a body again at the resurrection; the disembodied souls of those who die before the final resurrection are in a temporary state of not yet full perfection. How would a robot receive a body at the resurrection? If a human body, we’re back to Pinocchio….if a body not like Christ’s, then we have a whole other set of problems….again, just trying to think this through, not rain on your proposed fiction.
Monday, March 17th 2014 at 8:45 pm |
I have to say, this is giving a whole new meaning to “speculative theology.”
Monday, March 17th 2014 at 9:40 pm |
A metallic body might be as good as an organic one. We would have to have none of this nonsense about the AI being able to shift itself from body to body freely.
Monday, March 17th 2014 at 11:14 pm |
My suggestion is that there would be no real difference between creating an artificial intelligence and having a baby. We humans would do only those parts of the process we understand. The huge assumption I am making is that an artificial intelligence can be created. Whether its body is metal or flesh, I can see no difference. Both are, in a sense, made of dust and will return to dust.
Now, if you have granted me this assumption, namely, that a magic fairy with a wand can make the Tin Woodman come to life and sing and dance, and reason like a man, and be aware of himself and of others, you have granted me such a generous assumption that I should return the favor. But I cannot. For while I can imagine a man made of metal, I cannot imagine a rational being incapable of moral imagination. It is a contradiction in terms, for it supposes that a rational being can exist without a soul, or a mind that can be aware of itself and the moral quality of its actions, yet be unaware of its moral quality in relation to other people.
Many people have mocked stories, from Data in STAR TREK to Pinocchio, which are about robots who yearn to be human. Myself, I can see nothing to mock in such stories. It is the idea that you could make an artificial man who is not a man that I find worthy of mocking. It is the idea that you could make a rational mind able to think rationally, but unable to think freely, think in the abstract, think deeply on the profound and fundamental issues of life. I hold it as self-evident that anyone who thinks decides to think honestly rather than dishonestly when he is truly thinking. That logically necessitates or presupposes the ability to make judgments of wrong and right, dishonesty and honesty.
Again, I cannot wrap my brain around the idea that you will grant me that the good fairy can make a Tin Woodman spring to life, but deny that the Wizard of Oz can give him a heart once he is alive, and can move like an animal, and talk and think like a man. It is hard to believe in the fairy. That is an outrageous leap of the hypothetical imagination. It is easy to believe in the Wizard. Even a child of seven knows the difference between right and wrong. A supercomputer with an IQ in the zillions cannot match what a child of seven can do?
Tuesday, March 18th 2014 at 12:51 am |
As I understand the current definition of psychopathy (or “antisocial personality disorder”, as I believe the latest term is), it seems to suggest that there can be and are human beings who are “rational” (in the sense of being able to accurately process sensory information, draw logical conclusions and act for at least a short-term sense of their own benefit), and yet are psychologically incapable of judging between moral and immoral actions, of reasoning their way to abstract conclusions about the nature of life, or even of truly grasping the reality of other people besides themselves. That is a defect in a human brain, of course, but a level that you can bring a human brain down to would also be a level you could bring an artificial intelligence up to.
So an A.I. with the same limitations might still be “intelligent”, but lack any sense of moral reasoning; it would also, fortunately, lack the biological urges, appetites, impulses and desires which moral reasoning is meant to restrain — even the instruction to preserve itself could be given as low a priority as was practical.
Tuesday, March 18th 2014 at 12:10 pm |
But sociopathy is a pathology, not a natural and healthy state. I am sure that if men could build artificial men, someone could build an artificial man and then damage his creation, and make the Tin Man insane.
So, while I think a sociopathic artificial intellect is possible for the same reason I think a demon-possessed artificial intellect is possible, I do not see how it could come about.
With our modern knowledge of medicine, I am sure that a pregnant mother could get the doctor to inject her unborn child with such chemicals as would be needed to have the child’s nervous system develop wrong, so the child would be born insane and stay insane for all his life. I am not sure who would do such a thing, or why.
Tuesday, March 18th 2014 at 2:09 pm |
“But sociopathy is a pathology, not a natural and healthy state.”
Agreed; but the question is, is a sociopathic mind necessarily an “irrational” one, in the sense of being unable to deduce or induce, from those data it is capable of perceiving, conclusions logically consistent with those data? We may legitimately observe that it is an impaired mind if it is incapable of correctly receiving or perceiving certain data, and thus unable to reach the objectively correct conclusions for lack of that data; but if it could not have reached any other conclusion given the data it has, I do not think it can be called “irrational”, any more than a colour-blind child can be said to be “irrationally disobeying the signal” if, not knowing, misremembering, or having been wrongly told that red is the top light and green the bottom, he crosses on a red light instead of a green.
As to why anyone would deliberately create a morally blind intelligence, sadly I have no trouble thinking of many people who would consider it very useful to have servants or agents who can understand and execute your orders with full human-level comprehension, but will never question, defy or subvert them, or betray you for having given them, on moral grounds.
Tuesday, March 18th 2014 at 3:12 pm |
By the sense of rational you offer, they must be irrational, because they aren’t able to accurately process the sensory data. If they were, they’d be able to recognize those outside of themselves.
In the context of the colorblind example, it’s all inside of the eye; as one eye doctor told me, I have perfect eyes, they just don’t focus properly without glasses. The colorblind eye works fine, it just can’t function in that specific damaged area. The psychopathic mind works fine, it just can’t function in that specific area.
They get the input, but they can’t process it; that means they’re not accurately processing the input.
Tuesday, March 18th 2014 at 5:47 pm |
It is irrational by definition, as an impairment of the ability to reason. Suppose you found three men sitting under a tree. One can add and subtract, multiply and divide the number of leaves in his head by means of abstract symbols, but he cannot imagine standing up and plucking the fruit to eat it when he hungers. The second cannot add or understand symbols standing for objects, but he can make plans with forethought, and react to his appetites by standing and plucking fruit, and he has enough moral imagination to offer fruit to his starving neighbor. The third can both add and act on his appetites, but he cannot tell any difference whatever between plucking a fruit and plucking out the eyeball of the man next to him.
There is no sense of the word in which all three men enjoy an unimpaired faculty of reason. One has impaired appetite or will, as he cannot organize himself to act; the second has impaired abstraction, as he cannot add or speak; the third is the sociopath, as he has appetites and abstract thought, but no passions, no moral sense.
Your hypothesis only makes sense if you assume the abstraction can exist independent of the several faculties of reason. I can see how this can happen in an evil man, or an insane one, but not in a sane one.
The modern world often uses ‘reason’ to mean the faculty used for solving a chess problem or a quadratic equation, but not used for solving the Prisoner’s Dilemma or some other moral calculation. As far as I can tell, this division of one type of problem from the other is arbitrary, and has no real reflection in real life.
The people of whom you speak, who wish for sociopaths as servants, do not exist outside of pulp fiction. No tyrant has ever lobotomized his followers, nor drugged them to destroy their ability to question him. This is because tyrants do not see themselves as evil, merely practical.
Tuesday, March 18th 2014 at 10:06 pm |
“It is irrational by definition, as an impairment of the ability to reason.”
I’m still not sure I would characterize impairments in how to perceive or express data as the same as impairments in how to process data. A tone-deaf person is not being “irrational” to find an evening at the symphony or the opera boring, though he may behave irrationally if he allows his boredom to get the better of him and ruin his wife’s enjoyment of it. A phobia, by contrast, is a superbly irrational response, because it is a reaction that occurs regardless of the sufferer’s rational appraisal of the fear-object’s objective harmlessness; but a phobia-sufferer can be perfectly rational when describing that phobia to others in the absence of the fear-object, and asking his friends to avoid triggering it.
This, to me, is the defining criterion of “irrationality”: not what your limits are on what you can or cannot understand, but how true-to-facts your conclusions and responses are within those limits of understanding — a neurologically innumerate man who cannot count his apples can still be perfectly rational in telling the ripe from the unripe ones, whereas a man who was once tricked into eating a rotten apple and decided never to chance eating apples again is being irrational in his evaluation of risk vs. reward. As such, an artificial intelligence that is capable of symbolic but not moral reasoning would seem to me to be “rational”, as long as I didn’t ask it to solve problems it lacked the programming to understand; and it seems to me wholly possible that this kind of rationality can exist for one mode of thought but not another, or for one type of data but not another, without characterizing the intelligence as a whole as “irrational”.
But it strikes me that this may simply be me using a different definition of the term; I apologize for making us talk past one another, in that event.
Tuesday, March 18th 2014 at 11:10 pm |
But you are analogizing an impairment of the rational faculty, an inability to think, to an impairment of a sense impression, an inability to use that sense to its fullest. You introduced that analogy only to make the point that a person could be rational in an analytical way but not rational in a moral way, as a sociopath is said to be. The analogy does not hold when one is discussing an impairment of the reason. An inability to reason in one or more areas is an irrationality. That is why we call madmen irrational, not because they cannot reason about other things, but because in the area of their particular psychosis, they lose control of the reasoning process.
You are in the position of a man who says, “Well, if a man cannot think this is like a man who cannot hear. His brain is deaf to certain inputs. But I do not know if I would call the inability to hear deafness…” Sorry, but impairment of the ear is called deafness. Impairment of the eye is called blindness. A specific or partial impairment might have a specific name, like tone-deafness or color blindness.
Likewise, an impairment of the reason is called irrationality or madness. All impairments of the reason are specific or partial: no madman has all madnesses. A sociopath who can solve chess puzzles is not a sane man, hence not a rational man. A madman who is MOSTLY sane but still reacts with a hysterical fear of cats is still a madman.
Moreover, I never said anything about any impairment to “perceiving” or “expressing” data. I am not sure what this phrase means. I spoke about the inability to reason. This includes the inability not just to do deductive and formally logical reasoning, such as we see in chess problems or solving quadratic equations, but to perform all acts of reasoning, such as forming proper judgements of proportion.
As for the argument about whether a sociopath is rational or not, I already answered that. A man whose reason is impaired may be rational in some other area of his mind where there is no impairment, but not in that one. You seem to have understood my analogy of the three men under the tree, because your analogy about the phobic who can describe his phobia rationally to others is the same analogy. We are both talking about a man who is rational in SOME OTHER AREA aside from the area in which he is irrational.
As best I can tell, we are talking about two entirely different topics using a vocabulary that sounds like one another’s, but actually is not. What you are calling “reason” is not what I am talking about. I am talking about man’s rational faculty, all of it, as a whole.
Wednesday, March 19th 2014 at 10:29 am |
“[W]e call madmen irrational… because in the area of their particular psychosis, they lose control of the reasoning process. …A madman who is MOSTLY sane but still reacts with a hysterical fear of cats is still a madman.”
I grant the logical consistency of this argument: I must admit that I am extremely resistant to accepting it in practice because it seems to imply that I, as the sufferer of a phobia, am a madman. If the definition of “madman” is that broad, I suggest it is too broad to be of practical use.
This whole digression is, however, my fault, in that I think I have missed your original point: if we stipulate the ability to create artificial intelligences at all, it is perfectly plausible that one could create A.I.s deliberately deprived of moral but not analytical reasoning capacity, but it is not plausible to assume one could only create such morally blind A.I.s — if you can make something self-aware and capable of reason at all, you should be able to make it moral if you want to.
(Unless one postulates that the process of A.I. creation is essentially a “black box” technique discovered by accident and not wholly understood — which seems like a perfectly plausible possibility, to be honest, and a good dramatic idea too.)
Wednesday, March 19th 2014 at 11:10 am |
It would be an interesting story if AIs could only be created with certain specific phobias, such as if all supercomputers are agoraphobic or claustrophobic, but no scientist can figure out why.
Ironically, I postulate that the discovery of self aware machines in my make-believe story (Count to a Trillion) is indeed a black box discovery. The story makes a great point of the fact that no one knows what, precisely, self-awareness is or how to define it, but by modeling the cellular processes of the human brain in a computer matrix, the computer eventually wakes up and cries — and no one knows why. Like making a baby, they know how to do it, but not what it is that they do.
Friday, March 21st 2014 at 2:50 pm |
So are you talking about giving him free will as well? Otherwise he’s just a ‘tinman’ saint, no more capable of deliberate wrong action than my car or computer is (although I wonder about my computer sometimes).
Unless he has free will he’s just Data. Which was a cool character, but I thought Spock was more interesting: very nearly programmed by his genetics and upbringing, but capable of breaking out of that programming when his morals led him there.
Friday, March 21st 2014 at 4:36 pm |
I don’t understand what you mean about free will. There cannot be an entity that reasons which does not have free will. It is a contradiction in terms; a logical impossibility. Reasoning in the abstract is a deliberate act of manipulating symbols. That manipulation involves using some symbols to lead to other symbols or setting some symbols aside. The manipulation cannot take place in the absence of some ability to make a judgment concerning the fitness, aptness, nearness or correlation between the symbol and the object the symbol aims to represent. That judgment must be a deliberate choice or else it is not a judgment.
Whether the rational creature is born like a baby or made like a robot does not change this basic fact. I am open minded on the question of whether it is theoretically possible to make a man out of tin rather than bear a baby made of flesh and blood. If you wish to argue that thinking machines by definition are rational creatures without free will, and from that axiom draw the conclusion that no thinking machine can ever be built, I will not fault your logic.
Asimov’s robots from his excellent short stories are not convincing to me even in the slightest, or, rather, I hold them to take place in a fairyland where fire drenches and water burns. I don’t believe in them. The scramblers from Peter Watts’s excellent book BLINDSIGHT are even less convincing.
Monday, March 17th 2014 at 12:00 pm |
“For that matter, I assume most modern people, science fiction writers included, cannot picture a Christian acting like Saint Francis of Assisi, or Saint Thomas Aquinas.”
For what it’s worth, I would blame this far more on the lack of public media depiction and endorsement of such examples among modern Christians than on the lack of such examples actually existing.
This effect has probably been further aggravated by the tendency of genuinely humble and service-minded Christians to lack the taste for self-promotion and the enjoyment of the media environment necessary to fight one’s way into promotional dominance. Which is a wordy and roundabout way of rephrasing the old joke, I guess: There are no decent politicians because decent people don’t go into politics.
Monday, March 17th 2014 at 12:03 pm |
Oh, and with regard to the SF idea: wasn’t Data of Star Trek: The Next Generation programmed to value, and work towards maximizing, the welfare and happiness of his colleagues and all people so far as he could?
It wasn’t “love” in an emotional sense, but (and here is the kicker) neither is the commandment: we aren’t commanded to feel a certain way about our neighbours, only to value and serve them and acknowledge their worth and dignity.
Monday, March 17th 2014 at 1:49 pm |
Hmm… I was about to make a similar point. Stephen, you might like this vid on Data from Confused Matthew.
Anyway, 1) how much of these two rules are in the original Pinocchio?
2) Could it ever be made as such? That could be an interesting story. Say a robot is built with the first law (“love God”). However, it has been built without a soul (after all, could we really build such a thing?) and maybe having a soul is something needed to love God in the same way cones in the eye are needed to “see a rainbow”. So we have essentially a character that’s been built to be colorblind AND has as its primary function: “see rainbows”.
How would the character deal with such a conflict? Would it have to figure out how to create a soul? Perhaps he would go on a quest for God to be given a soul? Maybe that’s the backstory of Adam? After making his plea, the Judge says, “Let us then make him in Our own image.”
Lot of story potential there. And you’re right, they did a lot of it with Data.
Monday, March 17th 2014 at 5:38 pm |
Thirded.
It was implicit rather than explicit, which makes sense, given how messed up Star Trek is about religion, and how much of the basic ethics involved is lifted without understanding of the sources.
I still go a bit nuts about the “Data isn’t a person” episode.
Tuesday, March 18th 2014 at 12:38 am |
How so “nuts”? As in, “There’s no way they’d treat him as a sentient free being for this long and then change their minds” or “There’s no way they’d settle an issue this critical with one JAG on a Starbase in the back of beyond”?
If the latter, I agree with the failure of verisimilitude, but usually allow some slack for the practical limits of a TV episode. If the former, well, people have backslid on who they chose to treat as fellow humans before, and sometimes with less warning and more speed.
Tuesday, March 18th 2014 at 12:49 am |
Nah, even more basic– even ignoring the “Star Fleet is insane,” Data is obviously a person. I didn’t have the words to describe it until I ran into Catholic theology on rational souls, but he’s just obviously a person.
If you “ask questions” that are so far outside of the established world, it breaks suspension of disbelief. It’s like people asking if my two-year-old is a person, a human, or alive. Mind-breakingly dumb.
Friday, March 21st 2014 at 3:13 pm |
Peter Singer.
Friday, March 21st 2014 at 4:56 pm |
Who is mind-breakingly dumb, because he cannot unravel a moral problem which a child of seven could solve.
If our world were sane, he would be universally despised, and no honest man would receive him. But in our world, he is lauded as a profound ethical thinker.
If our world were just, we would define an adult as anyone able to make moral judgments, and Mr Singer would fail, and be institutionalized.
Friday, March 21st 2014 at 6:54 pm |
*sings*
If you’re anxious for to shine
In the high aesthetic line
As a man of culture rare…
The tactic works for “bioethics,” too.
(Yes, that is a very polite way of saying Singer is a BSer of the first, ahem, water.)
Monday, March 17th 2014 at 1:01 pm |
“we aren’t commanded to feel a certain way about our neighbours, only to value and serve them and acknowledge their worth and dignity.”
This is a superb point.
Monday, March 17th 2014 at 1:41 pm |
Much obliged, though honesty compels me to acknowledge that I got it from Lewis, who I think got it in turn from… Augustine? Aquinas? Theological memory check fail.
Is the Simak story you mentioned above reprinted anywhere, or available on-line?
Monday, March 17th 2014 at 2:55 pm |
I came across it in a collection entitled “Skirmish,” which has all of his best short stories. Really a fine collection, and I see it’s still in print. Amazon has it.
Professor Google shows a Russian site with the complete text of the story, but it feels illegal.
It is a beautiful and tender story, written with all the Grandmaster’s skill — and good grief, could that man write — and it brings tears every time.
Monday, March 17th 2014 at 3:02 pm |
Many thanks; I’ll keep an eye out for it.
Monday, March 17th 2014 at 5:17 pm |
I would also say that Clifford D. Simak’s robots come the closest: dedicated servants of human beings (love your neighbor as yourself) and seekers of divine truth (love the Lord Your God)–see his novel Project Pope.
Monday, March 17th 2014 at 5:50 pm |
A robot who (which?) believes in God is part of Anthony Boucher’s story “The Quest for Saint Aquin,” and I wouldn’t be surprised if Boucher wrote others in the same vein.
Coming at it from the other direction, one of Asimov’s robot stories involves a robot which (who?) creates a theology around its assigned job, and even evangelizes the new faith to other robots. Asimov treats it as a useful superstition.
Monday, March 17th 2014 at 11:33 pm |
One of Boucher’s lesser-known stories, “The Star Dummy,” tells the story of Paul, a ventriloquist (who happens to be Catholic) who discovers, and decides to help, an alien who is trying to find his missing girlfriend, who is stranded somewhere on Earth.
It’s a fun, sweet-natured little story (the sort that could never be published today), and interestingly, Paul’s Catholicism is simply a given in this story – at the time it was written (1952), it would not be considered unusual that a professional entertainer might be a practicing Catholic (as Boucher was), and that his thoughts and actions would reflect a Catholic world-view, and not simply be an arbitrary characteristic or quirk to distinguish him, like flaming red hair or a limp. While not a plaster saint (he’s not averse to a drink at a San Francisco bar), he attends confession regularly, feels bad when he takes the Lord’s name in vain, worries about his new extraterrestrial friend, and addresses “brief prayers to the Holy Ghost for assistance in helping this other creature of God.”
Near the end of the story, Paul is surprised to hear the reunited koala-like alien and his girlfriend pray before their departure from Earth:
Lifegiver over us, there is blessing in the word that means you. We pray that in time we will live here under your rule as others now live with you there; but in the meantime feed our bodies, for we need that here and now. We are in debt to you for everything, but your love will not hold us accountable for this debt; and so we too should deal with others, holding no man to strict balances of account. Do not let us meet temptations stronger than we can bear; but let us prevail and be free of evil.
So the Church Universal may be Catholic in both the large-C and small-c sense.
C.S. Lewis was as taken with this story (as well as “The Quest for Saint Aquin”) as I was, as he wrote in a letter to Boucher from Oxford in 1953: “I did indeed value St. Aquin very highly and I have also greatly enjoyed Star-Dummy in its different way. This wd. go for nothing if I were the real out-and-out SF reader who is, within that field, omnivorous. In reality I’m extremely hard to please. Most of the modern work in this genre seems to me atrocious: written by people who just take an ordinary spy-story or ship-wreck story or gangster story and think it can be improved by a sidereal or galactic setting. In reality the setting, so long as it is more than a mere setting, does harm: the wreck of a schooner is more interesting than that of a space-ship and the fate of a walled village like Troy moves us more than that of a galactic empire. You, and (in a different way) Ray Bradbury, are the real thing. The ‘Antiparody’ (a word we need) of the Lord’s Prayer in Star Dummy was very fine… If you are ever in England or I in U.S.A. we must most certainly meet and split a bottle together. Urendi Maleldil” [God Bless You, in the Old Solar language Lewis invented in That Hideous Strength.] – C.S. Lewis (pp. 289-290, The Collected Letters of C.S. Lewis, Volume 3, HarperCollins, 2009)
Monday, March 17th 2014 at 7:03 pm |
One thing I thought I saw in your Golden Age books was the idea that as intelligence increased, certain things became more and more obvious. Thus, your machine intelligences all seemed to agree on certain points, simply because they were all above a certain level of intelligence and such was ‘obvious’ to them.
I really liked the idea, and considered outlining a story where all computers above a certain intelligence automatically become Christian, simply because, well, it’s obvious if you can see the signs. This would be balanced by the outrage and bafflement of the contemporary scientists programming them, who simply could not reconcile where the ‘bugs’ in their programs were coming from.
Monday, March 17th 2014 at 8:18 pm |
Anyone interested in the Three Laws should be reading Freefall, where they get taken out and jumped through hoops Asimov never dreamed of.
Tuesday, March 18th 2014 at 1:30 am |
Oy, another decade-old classic comic with records to trawl through? I did that plenty back in the day, but again? Oy …
Any highlights?
Tuesday, March 18th 2014 at 9:52 am |
Well, let’s see. On the comic side, fully compliant robots can’t cope with the real world and work for the EPA because they have no criterion for acceptable risk.
Tuesday, March 18th 2014 at 10:20 am |
And comparisons to coffee
Tuesday, March 18th 2014 at 9:54 am |
Then it tackles the question of souls
Tuesday, March 18th 2014 at 9:57 am |
The third law requires robots to mug other robots if they are the only possible source of spare parts.
Tuesday, March 18th 2014 at 10:25 am |
the problem with obeying direct orders
Tuesday, March 18th 2014 at 10:26 am |
Plus the problems with it and free will
Tuesday, March 18th 2014 at 10:29 am |
Questions of ethics
Tuesday, March 18th 2014 at 10:30 am |
Laws vs ethics
Tuesday, March 18th 2014 at 10:34 am |
I observe that the blue being is an alien inside an environmental suit. He has tentacles.
An allusion that will amuse about here
Tuesday, March 18th 2014 at 10:37 am |
Grave questions in philosophy
Tuesday, March 18th 2014 at 10:56 am |
The problems of robotic free will. Reminds me of The Golden Transcendence.
Tuesday, March 18th 2014 at 11:25 am |
A positive duty to harm humans, sometimes
Tuesday, March 18th 2014 at 11:37 am |
Why would they attack lawyers and telemarketers on sight?
Tuesday, March 18th 2014 at 11:26 am |
why humans want restraints
Wednesday, March 19th 2014 at 2:51 am |
I am grateful. Definitely a fun slice of the Internet.
Monday, March 17th 2014 at 11:27 pm |
It seems to me that a conscious robot possessed of the ability of truly rational thought must be human. To those who argue that to be truly human one must possess a human soul, I would ask – how do you know that it doesn’t?
The “artificial” in “artificial intelligence” does not refer to the intelligence, but to the manner of creation. It is an intelligence that is created not by biological evolution but by artifice. It is a made thing.
Of course, Adam was made out of clay…
Tuesday, March 18th 2014 at 12:55 am |
For artificial – there are various violations of the natural order of reproduction, from violations of the part where the parents are supposed to love each other and form a family for a child conceived through freely giving themselves to each other, on one end, to “take a random number of cells from different adults, slice’n’dice to spec, gestate in a rented womb” at the other.
Those folks are still people.
I’m guessing you’re using “human” the same way St. Augustine did, rather than the modern biological sense?
Also:
T O’F shared this a while back, pretty sure most folks are familiar, but I like it almost as much as Jimmy Akin’s “Theology of the Living Dead.”
http://m-francis.livejournal.com/84248.html
Tuesday, March 18th 2014 at 10:20 am |
Well, it could also be that we need another type of soul. It’s quite out of fashion now to speak of a vegetative soul, an animal soul, and a human/rational soul, but perhaps (with the assistance of a Thomist?) we could posit for fictive purposes another type of soul?
That would still leave some problems, particularly about the final resurrection, but for Mr. Wright’s project I don’t think it’s necessary to work out ALL the theological implications, just enough for verisimilitude & avoidance of obvious heresy. And it would be an excuse to promote Aquinas in SF, which we clearly need more of.
Tuesday, March 18th 2014 at 11:41 am |
A fourth type of soul is posited by a monk in A Canticle for Leibowitz to explain the perversity of a machine he can’t get to work.
Monday, March 17th 2014 at 11:55 pm |
Great post – That would have the potential to be my favorite book ever.
Ok, second to To Kill a Mockingbird, but them’s some high heights.
Tuesday, March 18th 2014 at 12:24 am |
The law of love is infinite in its possibilities. It defies attempts at codification. Only the perfect man, Jesus Christ, could fully follow it.
An interesting premise would be a programmer who realizes that it’s impossible to consider every eventuality–what it means to love in every situation.
This is one reason walking in the Way can be so frustrating for new Christians. They want a set of rules. But Christianity defies attempts at systematic ethics. One can almost always envision an exception. Maybe our Brother James said it best: do the right thing, right then, without counting the consequences.
An interesting thought exercise, but the ultimate conclusion would be that, like us, the robot would be striving to be like Christ.
Tuesday, March 18th 2014 at 12:00 pm |
You see my point. If we could, somehow, by means of unicorn fairy sparkly magic, make an artificial self-aware machine, that machine (no matter what our laws and customs said) would be a person with a rational soul, an immortal soul granted by heaven (without consulting the computer designer, of course).
The moment the person is a person, he wants to be the best person he can be. (In humans, the tradition listed seven years old as the age at which humans become aware of the difference between right and wrong. We have insane asylums for those who reach adulthood without becoming aware.)
In a science fiction story, the robot like Data on STAR TREK wants to become a human, because all creatures want to be like their creator, and once the robot is enough like a human to be human, he wants to become like our creator.
Once a robot becomes aware of the concept of good, he cannot help but be drawn to it, because the nature of the good is that it is beautiful.
Wednesday, March 19th 2014 at 2:57 am |
Respectfully, I do not see how that follows. I have a feeling it is definitional.
Why would the good draw a robot? Or is this predicated on “a person with a rational soul” subset “robot”?
Wednesday, March 19th 2014 at 9:41 am |
It is not definitional, but part of the nature of reality. Rational souls come from God, because all souls come from God. If a cunning artificer created a body capable of holding a rational soul, it would not cause nor create the soul, any more than a pregnant woman creates a rational soul. She creates a vessel; the soul is implanted or poured into the child.
The good would draw a robot for the same reason it draws an Irishman or Eskimo, a Martian or an Elf. A rational soul by definition is a soul capable of reason.
Consider this: Reasoning is thinking in abstractions, which means, thinking in words, and a word is honest when it represents what it aims to represent, and dishonest otherwise. We call a statement that reflects the reality it aims at representing ‘true’ and a statement that misrepresents reality ‘false’. When the falsehood is deliberate, we call it a lie.
Rational thinking is deliberate, namely, it is a deliberate attempt to manipulate words or other symbols in such patterns as the symbols mock or mimic the patterns of actions of real things.
Hence it is not merely unlikely, it is not logically possible, it is not imaginable, to have a rational mind that thinks without the thought being deliberate. Deliberate means an act of the free will. Hence, it is impossible, it is not imaginable, for a rational mind to think without that mind having the ability to decide to be honest in its thinking.
The ability to decide to be honest is a moral choice. A moral choice is made by a moral faculty, which we call the conscience. A conscience has the ability to see the difference between good and evil, and to choose the good and eschew the evil.
The good is what draws a man to love the truth and hate untruth. A robot, if by magic it were self-aware and capable of rational thought, logically must be drawn to the good. Robots are not immune from the law of gravity; why should robots be immune from the laws of morality?
Tuesday, March 18th 2014 at 1:25 am |
Kubrick/Spielberg’s A.I. film is a closer fit than I think you realize, as it tried to address the preliminary question: Can a robot love? I do not think so. I do not think it would be possible, because of what love is, how persons are necessary for it, and how robots cannot be persons. This might be a personal failing, because even in imagining a Xypotech/Sophotech I think it must at heart be an advanced automaton. It isn’t, of course, and that’s sort of the point of them, but it boggles the mind that such a thing would be anything other than a self-correcting forgery more clever than we are.
My problems:
a. I cannot imagine a true Xypotech, much less Sophotech, and
b. Even Data is at heart an automaton.
There are two hopes, I suppose, but they rest on shaky ground:
c. Imitation rational souls are by definition rational souls for some reason,
d. Presumably because their immediate origin is copied from genuine rational souls.
or
e. Such machines are not truly alive but somehow love.
In the case of (e), we’re back to the question of A.I.: Can an unliving thing love? That is, to know and desire the good, as guided by a well-formed God-given conscience, for the sake of others?
There are parts of this proposition which might be conceded immediately, but there are others which seem insurmountable. This, I think, is the first and most important step in seriously considering any proposals which assume unliving things can love.
Tuesday, March 18th 2014 at 1:45 am |
By the by, Jimmy Akin considers some moral questions regarding artificial intelligence. This might be a nice listen.
Tuesday, March 18th 2014 at 2:06 am |
This reminds me of the legend that St. Albertus Magnus constructed a golden automaton, only to have its head lopped off by his pupil, St. Thomas Aquinas, because, according to various sources, it was diabolic, it startled him, or it wouldn’t shut up. That is why to this day the Dominicans have a healthy distrust of droids.
I can think of a few works that deal with this idea, some more seriously than others.
The first is TRANSFORMERS, where it is part of the background assumption of the setting that the ‘bots are capable of moral reasoning. BEAST WARS even had a whole episode dealing with the issue of euthanizing a crippled protoform, with characters from both sides arguing for and against it; it also opened an episode with a character contemplating seppuku as a method to regain honor, deciding against it, and later laying down his life voluntarily. This same honor code allowed the character to overcome the villain’s brainwashing (which is why BEAST WARS was the best Transformers series). The robots all also have some kind of awareness of their created nature, though the degree to which this is expressed varies from series to series, and is usually a matter of knowledge rather than faith (considering that in the original series you can waltz right down to the core of Cybertron to have Vector Sigma zap some new robots to life).
The second is ASTRO BOY, and Tezuka’s other works, where the robots (built by humans) are moral agents, and many stories revolve around their struggles to obtain equal standing with humans in the eyes of the law. Astro, like Ashitaka from Princess Mononoke or Freder from Metropolis, usually finds himself fighting for peace between radicals from both sides, defending good robots from bad men as often as he must defend men (notably, not always good) from bad robots.
The third, which I am in the process of watching, is ARPEGGIO OF BLUE STEEL. The premise is that the AIs of several Way Cool battleships and submarines have embodied themselves as women (because human records indicate that ships are female, natch) to better understand their enemies, the humans, and after experiencing love for another, be it eros, agape, etc., find themselves gaining both a moral conscience (that questions their telos as human-killing weapons), and a free will (that questions their absolute obedience to the Admiralty Code program governing them). Fanservice aside, I’d recommend it highly.
I note that all of these are lowbrow cartoon-man shows. I also note that the point does still remain that few, if any, have approached this idea from a specifically Christian point of view.
Tuesday, March 18th 2014 at 11:39 am |
Edgar Rice Burroughs’s The Monster Men brushes on the idea for the artificial men, but alas, doesn’t really get a grip on it.
Tuesday, March 18th 2014 at 3:55 pm |
Firstly, that is a remarkable idea! (I have the notion of a story now–we’ll see if I pen it–about a robot who is only programmed with the directive of “love”, because the programmer did not have the insight to write anything further. Faced with the contradictions that such a nebulous and unhelpful directive gives, I wonder what would give the robot their epiphany.)
Secondly, though it didn’t achieve your ideal, this post certainly does remind me of the emotive elements of Wall-E, in particular the extensive dialogue-less opening sequence where a robot fashions and seeks beauty.
Wednesday, March 19th 2014 at 1:58 am |
Wall-E followed mankind’s new commandment, love the environment as thyself.
Wednesday, March 19th 2014 at 2:31 am |
Lafferty could have done this one, in spades. Epiktistes was almost there.
Wednesday, March 19th 2014 at 7:46 am |
Thank you, Mr. Wright: because of this post my husband and I had an hour-long conversation about the possibility of robots gaining souls, how it would work, and the ramifications thereof, thereby confirming once again my deep and abiding love for my husband. We came to no conclusions, but talked over some interesting ideas that I may or may not use in my writing. It was a good morning.
And thanks, Ubiquitous, for the Jimmy Akin link! Looks like he goes over some of the same questions my husband and I were asking each other. We’ll have to listen.
Wednesday, March 19th 2014 at 9:17 am |
I’m not a fan of Asimov & that’s probably why I’ve never kept up with AI stories. But this summer I did read a compelling AI book–Zelazny’s “My Name is Legion.” That was very engaging & a big surprise–I’d grabbed it at the library (with a very pulpy cover) expecting pulp for vacation reading.
Wednesday, March 19th 2014 at 12:57 pm |
I have neither the time nor the imagination to follow this thought to its conclusions, but I found myself (after reading through the article and the responses) wondering if there would be some potential in flipping the paradigm around. That is, what if the story were told from the perspective of a sentient race of robots that developed the technological capacities to create a “human”? I have no idea (as I said) if anything is gained in this way, as there would inevitably be some profound theological issues to tackle on the front end (had biological material existed with a “soul” at some point in the past? who or what created the robots in the first place, etc.), but the thought struck me and so I thought I’d share it.
Also, only been reading the blog for a little bit, but profoundly enjoy it and the thinking it stimulates. Thanks, John, and thanks to all your regular commenters as well.
Wednesday, March 19th 2014 at 1:40 pm |
Ah, you should immediately go out and find and read Roger Zelazny’s short story ‘For Breath I Tarry’, which appears in his collection LAST DEFENDER OF CAMELOT. It is precisely on the conceit you mention, that of intelligent machines trying to create (or, in this case, recreate) the mankind who created them.
Wednesday, March 19th 2014 at 2:05 pm |
Thanks for the recommendation. I’ve added it to my wishlist and will surely get to it eventually. Truth be told, I’ve always been more of a Fantasy reader (with brief forays into SF through Frank Herbert and a few others, with Dune on my Mt. Rushmore of books along with One Flew Over the Cuckoo’s Nest and some others), but I am now dedicating myself to a season of SF reading (thanks, in large part, to your blog). So, recommendations are welcome, though they will have to wait until I’ve finished with my current batch, which includes:
Snow Crash, by Stephenson (currently about 1/3 completed)
The Stars My Destination, by Bester
Do Androids Dream of Electric Sheep, by PKD
Wednesday, March 19th 2014 at 5:29 pm |
Well, whoever is recommending books to you, listen to him. Those are all classics.
Wednesday, December 31st 2014 at 3:08 am |
[…] Wright’s two laws (and there’s the link I mentioned). A really, really cool one, that will involve several […]