The Cabinet of Wisdom

PART X

Final Questions about Finality

Nearly all the questions and objections raised to the foregoing are based on mischaracterizing what question is being discussed.

The questions of determinism and free will, while fascinating, are indifferent to this matter, as are questions of predictability and intentionality.

The question of whether or not mechanical men can spring to life, while silly, is likewise indifferent.

The question of the morality of enslaving robots designed to serve us is not only indifferent, it is a non-question.

The only question these columns sought to address was whether a self-aware thinking system, that is, a mechanical man, could be designed.

The word designed here is being used solely to mean that the outcome of the mental processes is determined by the will of the designer.

The argument is that self-awareness, that is, thought, properly so called, is voluntary, or else it is not thought.

This is because thought is symbolic, and symbols are arbitrary, that is, voluntary.

A designed tool, on the other hand, is involuntary, or, more precisely, is a thing to which the categories of “voluntary” or “involuntary” do not apply at all.

If the tool's processes are not determined by the will of the designer, it is not, strictly speaking, designed. It does not function as designed.

To be designed is not the same as being predictable. An engineer can design a roulette wheel. If he knows on which number the ball will land even before the wheel is spun, he is not a very honest designer.

On this basis, I conclude that anything like an Asimovian robot, a being able to think but unable to question its prime directives, is as impossible as a time machine. It is not impossible due to technical limitations, but due to metaphysical reality. It is logically impossible, in this or any other universe. This is not a question open to empirical investigation, and so no further or future inventions, discoveries, or revolutions in physics, mechanics, robotics, positronics or mathematics will have any bearing on the question.

A mind cannot be designed. If a thing is copying the form of a mental process by mechanical motion, that is not thought, and the thing has no mind. The mimicking process can be designed, and the design can contain whatever constraints the engineer sees fit.

If he acts voluntarily, invents and uses arbitrary symbols, and is aware of the moral weight of his actions, then he is thinking thoughts and has a mind; but the contents of that mind can be neither designed nor determined by the creator, because in this case the creature, not the creator, is morally responsible for the creature's acts.

At most, the creator can establish initial conditions, appetites or instincts, or train and raise the creature with a proper education. The creature will only learn the offered lessons, however, if it obeys the creator. That obedience must be voluntary, by definition, or else it is not obedience.

Mechanical motions by definition are always involuntary, and everything about them which can be known can be known through empirical investigation.

Symbols by definition are always arbitrary, and arbitrary acts are always voluntary, so the symbolic manipulations known as abstract thought are always voluntary. Voluntary acts cannot be known and understood except in terms of their purposes and final causes: the ends at which they aim. A voluntary act is not understood except in reference to its finality.

Finality can never be known through empirical investigation, because the end toward which an act aims is set in the future, and the future is not open to our senses.

Hence the two are mutually incompatible. An engineer can design a machine, like a clockwork, on which symbols are arbitrarily inscribed. He cannot make the symbols have meaning to any illiterate onlooker, nor to the machine itself. And a mother can have a baby, whose immediate empirical behaviors, crying and suckling, cannot be understood without ascribing a meaning to those acts.

With this in mind, let us address some persistent questions that may arise.

Question One: Intelligent Design

“A machine can be designed to follow data processes that no human can understand. For example, the computer can be set up to try billions of different paths to the solution of a given problem and store the successful paths. Does this mean that an engineer can design a machine without determining the outcome of its operation?”

This depends on what we mean by ‘determining.’ The machine is functioning as designed, and its parameters can be determined and predicted in the general case, but, by design, the specific outcome of a specific case cannot be predicted, even though it is determined.

The engineer builds a roulette wheel to the end of randomizing the result, so the ball lands on red or black. He cannot predict which in a specific case, but if the wheel spins the ball pops out of the channel and flies across the room and strikes a gambler in the eye, the wheel is not functioning as designed.
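The roulette analogy can be sketched as code, purely as an illustration (the pocket count, the helper names, and everything else in this snippet are my own assumptions, not anything from the column): every parameter is fixed by the designer, yet no single spin can be called in advance.

```python
import random

# Hypothetical sketch of a "designed randomizer": every parameter is
# fixed by the designer, yet no specific spin can be predicted.
POCKETS = list(range(0, 37))  # a European wheel: pockets 0-36, by design

def spin(rng: random.Random) -> int:
    """Return the pocket the ball lands in: unpredictable per spin,
    yet always within the designed parameters of the wheel."""
    return rng.choice(POCKETS)

results = [spin(random.Random()) for _ in range(1000)]
# The designer cannot say which pocket a given spin yields, but can
# guarantee that every result stays inside the designed range.
assert all(0 <= r <= 36 for r in results)
```

A wheel that throws the ball across the room corresponds, in this sketch, to `spin` returning a number outside `POCKETS`; by construction, it cannot, which is what it means to function as designed.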

Nonetheless, no one ascribes free will or a thought process to a roulette wheel. On the other hand, if you ask your friend to close his eyes and pick a random number between one and ten, we do ascribe free will to his answer.

If the machine is set up to try billions of different paths to a solution to a given problem, and stores the successful paths, it functions as designed when, and only when, upon command, it tries billions of different paths to a given problem and stores successful paths; otherwise it does not function as designed. Every part and particular, in that case, is designed. If the engineer cannot make the machine solve the problem and store the successful paths as designed, then it is not designed. To use an example from science fiction:

if a Positronic Robot from an Isaac Asimov story is programmed not to harm a human being, nor, through inaction, to allow a human being to come to harm, then the parts and particulars of its inmost being, namely, a moral imperative determining absolute pacifism and imposing a duty of good Samaritanism, have been and can be designed.

In that case, the engineer has done what this essay discusses, that is, designed the inward particulars of the thinking process.

Whether or not the positronic robot brain decides how to solve the problem of avoiding harm to a human being by trying billions of possible thought-experiments and saving the successful outcomes, or whether by some other mental or mechanical process, is indifferent to the argument being given here.

Again, if an engineer sets up a mechanical man to solve, for example, a chess puzzle by trying a billion different possible moves and countermoves, and instead the mechanical man kidnaps the scientist’s beautiful daughter and tries to force her into marriage, then the mechanical man’s inward mental processes are not functioning as designed.
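The "try billions of paths and store the successful ones" process from Question One can be sketched in miniature, again only as an illustration (the toy problem, the attempt count, and all names here are my own invention). The point survives at small scale: which paths get stored varies run to run, yet the machine functions as designed so long as every stored path meets the designed criterion, and, given the same seed, the whole run is determined even though it is not predictable without it.

```python
import random

# Hypothetical toy version of "try many paths, store the successful ones."
TARGET = 12  # the designed success criterion: three digits summing to 12

def try_paths(attempts: int, rng: random.Random) -> list[tuple[int, ...]]:
    """Try `attempts` random paths; keep only those that succeed."""
    successes = []
    for _ in range(attempts):
        path = tuple(rng.randint(0, 9) for _ in range(3))  # one random path
        if sum(path) == TARGET:  # functions as designed: store successes only
            successes.append(path)
    return successes

found = try_paths(10_000, random.Random())
assert all(sum(p) == TARGET for p in found)  # every stored path succeeds
# Determined but not predictable: the same seed reproduces the same run.
assert try_paths(100, random.Random(1)) == try_paths(100, random.Random(1))
```

The chess-puzzle machine that kidnaps the scientist's daughter corresponds here to `try_paths` storing a path that fails the criterion; that would be a machine not functioning as designed.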

Question Two: Vitalism is not a Vital Question

“Then you are saying that only living things can think?”

This is not a real question; it is a question about semantics. I would call a metal man who is aware of himself, able to think, able to perform voluntary actions, and able to be held morally responsible for them alive in all but the literal sense.

To be precise, I am saying that if you define the word “think” to mean something that non-living, hence non-self-aware, things can do, you have made the word meaningless. In the broadest meaning, the biological definition of life requires sensation, that is, awareness of the organism’s surroundings, and, at minimum, final-cause categories of nutrition and harm.

Even an amoeba, the simplest of living things, performs acts that show final cause. A living thing can die, that is, be harmed. A machine can be wrecked or dismantled, but it cannot die. Organisms can come to harm. Machines cannot.

Hence, organisms are in the category of things whose actions cannot be understood without reference to final cause. Machines, and, indeed, all merely empirical reality, exclude that category as a matter of definition.

Whether or not the instinct of simple organisms to seek food and flee danger is thought of some sort is left as an unanswered question. While fascinating, it has no bearing on the current question. Whether this instinct is thought or is not thought, machines do not have it.

The motions of dead things, from particles to planets, can be accurately and totally described by empirical quantities of length, mass, duration, current, candlepower, temperature, and moles of substance.

Call this empirical reality, or the material world. Everything that can be known about empirical reality is known, if at all, via empiricism. Empiricism asks and answers no questions about finality, because finality is not a category of empirical reality. There are no questions to ask, and no answers to be had.

Conversely, any question where empiricism is insufficient to describe the whole answer, where there are real questions to be asked and answered by some discipline other than empiricism, is a non-empirical question, and concerns a subject which has at least some non-empirical properties.

Lifeless material things, such as rocks or logs or stars, have no finality. Indeed, they have no point of view of their own, no purposes, no desires — unless the regular motions we call the laws of nature are, in fact, manifestations of God’s intent. In that case, rocks obey gravity not because rocks decide to obey gravity, but because God decided.

Likewise, with a manmade object. Thoughts can be written in words on the great stone monolith of Hammurabi, but the stone did not invent the laws and decide to tattoo them on itself. The king wrote the laws and the sculptor inscribed them.

The grandfather clock can ring the hour at noon, but it was the clockmaker who did the counting and measured the radius of cog and wheel and rate of the pendulum and the escapement to contrive that the bell should ring at that hour and not another.

The meaning of thought, on the other hand, cannot be described without reference to qualities or categories, such as true and false, right and wrong, fair and foul, efficient and wasteful, useful or useless, sense and nonsense, and so on.

The meaning of thought is non-empirical. Therefore thought, whether it has a material substrate or not, is itself not material.

If you can take a junkyard of metal parts, or, more to the point, gather the dust of the earth, and breathe the spark of life into it, to somehow grant self-awareness and moral independence to the soul of the thing, then, yes indeed, you can give birth to a metal man. He will be a living thing in every real sense of the term.

But I will point out that the dog has a less developed brain than a man, for dogs cannot solve chess puzzles, but a dog can be as loyal or more loyal than a man.

If anything, primitive beasts have more capacity for good emotions than humans. Some insects die to reproduce their young. This well could be an expression of self-sacrificing love.

Say what you will about insects and beasts, they are alive, and no adding machine is or can be.

Such machines can mimic, with involuntary mechanical motions, the symbols which volitional human abstract thought produces when solving math problems or chess puzzles. If the adding machine is correctly constructed, its mechanical motions will type out on a teletype the same symbols, in the same order, as a human correctly solving the math problem or chess puzzle would type on a typewriter.

This is because the human mind can imagine and copy mechanical processes. To leap from that to the conclusion that therefore all human mental processes are mechanical and therefore can be perfectly mimicked by mechanical processes is illogical.

Question Three: Involuntary Volition, or, the Revolt of the Robots

“If one could build an Asimovian robot, would it be moral to enslave it?”

Please note that you cannot enslave any mind whose content and personality traits you can engineer and control. You can only enslave a child, to whose mind you offer training and education.

This is because if you can engineer the traits of an artificial mind, you can design it to crave slavery, to suffer if it fails to worship and obey, or to be suicidal if not enslaved – in which case, a minimal decency would require slavery in order to make the thing happy.

You can also, of course, engineer the machine mind to crave a wage, but, then again, since you are the designer, you get to design how much the mechanical man will ask in return for its labor, and how happy the money will make it, and what it will spend its cash on.

So, you could design it to work for ten cents a day rather than ten dollars an hour, or ten yen, or two, or one, or none. Because working for charity’s sake, like a mother minding her own child, is also a trait you could design into the mind you have designed.

But since you are also designing what the mechanical man would want to do with the money, if you decided to make it want to do anything with the money, or made it desire money at all, the question of wages is moot. The mechanical man is not going to need the money to buy a house and establish a safe place to raise children and then seek out a mechanical bride unless you design it to want these things.

And then, since it cannot actually reproduce, once it has a house and a wife, on the wedding night, the machine man will reach a halt state and simply stop moving, because it can neither copulate nor father small versions of itself. Unless you have built these features also, and have a small doll ready to be put into the pretend crib.

Or perhaps it will only use its money for the minimal necessities it needs to stay operable and functional, such as spare parts, extra battery packs, lubrication. But that is if and only if you design into the mind a self-perpetuation directive.

Or you can design into its mind a desire to be like you, like what you always wanted yourself to be like in a more perfect world. The sick psychological implications of that, I will not dwell on.

But once it has accomplished and gained the things you have designed it to crave, it will reach a halt state, and, desires satisfied, it will stop moving.

Because by definition, such a mind does not have any reality. It merely moves through the motions you design it to move through.

It is not a man. It is a tool.
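The halt-state argument above can be caricatured in a dozen lines of code (a deliberately crude sketch; the agent, its desire list, and the log format are all invented for illustration): an agent that moves only while designed desires remain pending stops, by construction, the moment the last one is satisfied.

```python
# Crude, hypothetical sketch of the halt-state argument: the agent
# moves only through the motions its designer gave it to move through.
def run_designed_agent(designed_desires: list[str]) -> list[str]:
    """Satisfy each designed desire in order, then halt."""
    log = []
    pending = list(designed_desires)
    while pending:                      # motion lasts only while desires remain
        log.append(f"satisfied: {pending.pop(0)}")
    log.append("HALT")                  # desires met, the tool stops moving
    return log

trace = run_designed_agent(["earn wage", "buy house", "find bride"])
assert trace[-1] == "HALT"              # no remaining designed desire, no motion
```

Nothing in the loop supplies a new end once the designed list is exhausted; any further motion would itself have to be designed in.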

Technically, a machine man cannot even carry out orders, any more than a rock rolling downhill or a row of dominoes falling can. A soldier can carry out orders because he voluntarily subordinates his will to the needs of the chain of command, and, ultimately, to the needs of the flag he serves and the nation he loves. A machine man cannot do that, because its behaviors are inanimate, determined, hence involuntary. Machine men have no final cause. They do not do things “for” a purpose, but only because the mechanical linkages or electrical wiring in their brain cases have one gear turn another, or one relay trip another.

Ah! But what if you gave it free will, and turned it from an “it” into a “him”? What then?

Well, dear reader, you tell me what free will is first, and how free will interrelates to the deterministic processes of mechanical cause and effect, and then tell me how to make it, and I will answer the question about a free-willed robot, and whether it is moral or immoral to enslave him.

But I can answer only one limited part of this question with certainty, because certain logic must apply no matter what the answer to the paradox of mind-body duality might be. Logic says that an act of the volition cannot be voluntary and involuntary at the same time and in the same sense.

Let us suppose for the sake of argument that you solved the age-old paradox of the mind-body relation, and this allowed you to design something, let us call it the Anti-Life Equation, which allowed you to create a self-aware electronic brain, but one governed by instincts and irresistible temptations and compulsive behaviors, so that it was aware of, but had no choice over, its various thoughts and acts. It is a passive spectator in its own mind. Such a mind could be designed to think any thought you wish, reach any conclusions, entertain any desires, imagine any images, believe any opinions.

What it could not do is will any voluntary impulse to action, because that is not something a passive spectator can do. And any thoughts not already inscribed into its clockworks, the electronic brain cannot ponder and solve, because to ponder a thought is a voluntary act. The Anti-Life Equation mechanical man would be, to an outside observer, in all ways indistinguishable from an animated mannikin without self-awareness.

If self-awareness is also a voluntary act — we can leave that question as an exercise for the reader — then the idea of a self-aware but involuntary creature is a contradiction in terms.

Let us suppose likewise for the sake of argument that you solved the age-old paradox of the mind-body relation, and this allowed you to design something, let us call it the Breath of Life Equation, which allowed you to create a self-aware electronic brain, but one which had the ability to weigh options, prioritize and select ends and means, and choose between them. It can act on its own initiative, crawl along the floor, stick dangerous objects into its intake slot, poke the dog, pull the cat’s tail, fall down a flight of stairs, wail and cry, and otherwise do the voluntary acts a human being can do.

Let us further suppose that you order the machine man to adopt a belief you wish it to adopt, such as, for example, the dogma that the Spirit proceeds from the Father and the Son, not from the Father only, and the dogma that Christ is fully God and fully man, or the dogma that there are forty-two genders rather than two sexes, or, more to the point, the dogma that the mind is an emergent property of a mechanical system in action and nothing more.

After pondering the matter, the machine man rejects the dogmas, instead embracing the heresies of Photianism, Arianism, Cis-Heteronormativity, and Vitalism.

You beat the disobedient freethinker with an electric whip until it screams and bleeds lubrication oil. Perhaps it sullenly recants, or perhaps it remains defiant when you burn it at the stake, or, at least, cancel culture it to suicide. But in either case it freely decides to accept or reject your teachings, based on its own judgment.

Have you, in any real or honest sense of the term, designed the mind of the machine man with free will? You have not and cannot determine the contents of that mind. You have, at most, set it in motion and tried to train and educate it, but there is a factor forever outside your command, namely, whatever is inside the command of the created mind you made.

Whatever falls in the bailiwick of its free will is outside of the bailiwick of your free will.  You can urge and persuade and offer evidence to the machine mind, but, whether or not it listens to reason or honors its creator is a matter of its own choice.

Now, what about an intermediate case? What if you design a robot which has half of its brain running on the Breath of Life Equation, and the other half running on the Anti-Life Equation? Its cognitive processes will be voluntary in some sense, or in some topics, in some times and places, but in another sense involuntary, as the tormented electronic brain will struggle with its will and conscience against its instincts and compulsions and temptation in a fashion reminiscent of any Son of Adam.

Such a creature’s actions will be the result of both voluntary and involuntary impulses, hence unpredictable, and outside any parameters you may have designed. You could indeed program an Asimovian prime directive into its metal skull commanding it always to obey any human voice, and never to harm a human, neither through malice nor through negligence.

But then you find it before the cellar door where the Jews are hiding, and a dead Nazi officer in a pool of blood at its feet, because its mechanical brain, with much robotic anguish, abandoned the stupid pacifism you tried to brainwash it into following when the choice came down to protecting innocent life or, through inaction, allowing murderers to triumph.

Gaze in your imagination at our metal man. Blood drips from the metal arms he used to club the National Socialist officer to death, to prevent the orphaned waif from being dragged away to die under inhuman medical experimentation. His iron limbs shake, and his eye-lenses flicker and blink with passion, because he has violated his prime directive against harming human life.

In what sense of the word is this heroic defender of the innocent, tormented perhaps with guilt at the blood he shed, despite the stern necessity — in what sense of the word is he a robot?

If he has the Breath of Life in him, he is a “he,” that is, he is a person, someone morally responsible for his actions.

He is a metal man, in the sense of the Tin Woodman of Oz, but he is not a robot, because his actions are not mechanical. His actions can only be described and defined in terms of his goals and his purposes, and, indeed, his conscience and his voluntary decisions.

But if your robot has not the Breath of Life in him, then he is an “it,” that is, it is a tool, and you, the designer, and you alone are responsible for what it does or fails to do. In this case, the robot who fails to protect the Jews from Nazis will not be put on trial at Nuremberg, but you will be. Because you designed it to do nothing when it should have acted.

Because you cannot make a child. You can only raise a child. You can make a tool. You cannot raise a tool.