Blindsided by Blindsight

This is only in part a book review; in part it is a meditation on some of the topics raised by the books involved.

By some odd coincidence, I read BLINDSIGHT by Peter Watts (available on the web here http://www.rifters.com/real/Blindsight.htm ) the same day I read THE CUBE AND THE CATHEDRAL by George Weigel. The contrast between the two books, and the world views represented, could not be more clear.

SPOILER WARNINGS !!!

I discuss the surprise ending of BLINDSIGHT below, so for pity’s sake, if you mean to read this book, do not read this review.

 

Short version: Worst. Ending. Ever.

After a strong beginning, this book goes off the rails, crashes and burns, and the dazed reader, like passengers surviving a train wreck, numbly follows as the plot wanders out into the middle of a barren wasteland, where it dies. It is perhaps the most disappointing ending of any science fiction book I have ever read.

This book was nominated for a Hugo. It got a starred review from Publishers Weekly. I am dumbfounded. There are some things in this book that I did not like for reasons personal to me; but there were other things wrong with the book, violations of the most basic rules of story-telling, that should have disappointed even a reader who shared not one of my personal tastes.

Long version:

Let me discuss the book’s plot and characterization, its strong points and weak points, and then make a general comment about what is wrong with the whole worldview underpinning the book.

Plot:

In the near future, neurotechnology allows for radical restructuring of both damaged and healthy brains. A handful of misfits is sent out on a suicidal mission to make First Contact with an alien race. The aliens have taken no overt hostile action, but the Earthmen are wary.

Aboard the Earth vessel are (1) the artificial intelligence called The Captain, who is incomprehensible and silent (The Captain speaks only once during the whole book); (2) a member of a rediscovered and genetically revived (Jurassic Park style) race of hominids who preyed on mainstream humanity, called vampires (vampires are nocturnal and suffer epileptic fits at the sight of right angles); (3) a linguist with multiple personality disorder, rendered orderly by means of neuro-tech; (4) a soldier; (5) a biologist with multiple enhanced senses, able to interface with an astonishing number of probes and instruments; and (6) the main character, a sociopath unable to feel any human empathy, whose role in the mission is to observe and report back to Earth.

The vessel is sent out to the Oort cloud, where it encounters a superjovian planet, a failed star, the center of an immense magnetic field. Orbiting the superjovian planet is an alien artifact like a thorny crown. Some sort of large-scale planetary engineering is going on, and the thorny artifact seems to be growing.

The main character has several flashbacks. In his youth, he suffered a radical hemispherectomy: half his brain was cut out and replaced with neuro-tech wires. The main character either is (or has convinced himself that he is) unable to feel human emotions. He merely notes certain patterns of behavior, hears voices without understanding them, and reacts in whatever way the observed patterns seem to indicate. We see glimpses of his troubled childhood, his parents’ dysfunctional marriage, his cold and petty relationship with a girlfriend.

For example, when his mother downloads herself into a disembodied existence in a mainframe called “Heaven” the main character is bitterly angry about it, repulsed, but does not admit to himself that he has any emotion about it, or any emotions about anything. The future world is one where real life is not that interesting compared to the godlike Nirvana and Elysium of disembodied electronic existence. The author foreshadows that some day very soon there will not be enough people in the outer flesh-and-blood world to “keep the lights on”, that is, to keep the download nirvana running.

My compliments to the author here: this was a brilliant conceit, brilliantly executed.

But to return to the plot:

The team establishes communication with the aliens very early, but the linguist soon decides that the responses from the aliens are a “Chinese Room”, that is, responses dictated by a non-self-aware system, something that repeats back (in a highly sophisticated way) meaningless (to the alien) word-sounds, but so cleverly put together according to the rules of grammar that, to the humans, they seem like intelligent speech. The “Chinese Room” of the alien vocal system warns the humans in no uncertain terms to stay away from the artifact.

(For those of you who have not heard the term before, “Chinese Room” refers to a thought experiment by John Searle: a system that does not understand communication, but can imitate communication by rote. More on this below.)

The main character has some hallucinations of the aliens before meeting them, but nothing ever comes of this. Apparently he was able unconsciously to deduce what they looked like long before any human saw any of them, but nothing comes of this fact.

The Vampire is the second in command, and the only one allowed to talk to the captain. The vampire is smarter than baseline humans by several orders of magnitude, and so the sons of Adam merely take it on faith that the Vampire is acting in their best interest. The Vampire orders one provocation after another against the aliens: the crewmen break into the artifact, suffer horrific hallucinations (due to the intense magnetic fields passing through the area), accomplish nothing in particular, kidnap one alien, kill a few others, and bring two aliens back to the ship for examination. The aliens are starfish-like or squid-like beings.

The biologist concludes the aliens are non-self-aware. The linguist tortures them, trying to establish basic communication. When the aliens are asked to number the objects present in the cell, for example, the alien does not number itself among the things present. The vampire concludes that these starfish are merely biological machines, part of the overall alien structure, not their centers of consciousness.

After an astonishing display of superintelligence, the two non-self-aware aliens break out of their holding cells; the artifact attacks the ship. The soldier prepares the ship to ram the artifact, kamikaze style, and blow both human and alien to kingdom come. There is no explanation given for this. It is not clear (to this reader, at least) why the humans continued to escalate their provocations against the aliens. It was not clear why the aliens struck back or what they wanted.

For no particular reason, the Vampire attacks the main character and wounds him in the hand. For no particular reason, this attack sends him into curl-up-in-a-ball weeping and feeling sorry for himself for a period of time (weeks or months; I was not clear on this point). Then the Vampire announces that the reason why he attacked the main character was to shock him back into empathy with human beings. So, when the main character reads a letter from his father, he is able to cry. The Vampire says he needs the main character to return to Earth and make them understand what the First Contact team learned.

What the First Contact team learned is that self-awareness is not only unnecessary for evolution, it is actually an impediment. There is no such thing as free will. We are all biological machines controlled by our random neurological programming. Consciousness is merely a hindrance. The aliens are a far superior race because they are non-self-aware. They are a “Chinese Room”, an empty system with no point of view.

The aliens probed the humans because human communication, which contains many self-referencing words like “I” and “me”, had to be interpreted by the aliens as a form of attack. Or something. That point was not very clear. For whatever reason, First Contact is impossible with these aliens, because no one can talk to them, since talking is (according to them) a form of aggression. On the other hand, it is also stated in the same paragraph that the aliens can form alliances and mutually beneficial arrangements.

The main character, for no particular reason, has lost his ability to mimic human understanding by means of copying their formal rules. On the other hand, his newly found human empathy does not really allow him to empathize with the crew either.

The Vampire is killed by the soldier, except maybe not, because the soldier denies it. The ship’s AI speaks through the dead body of the vampire and tells the main character to depart in a side-boat. The ship’s AI says that the reason why it gave orders through the vampire was that humans would not have taken orders from a machine. The main character flies back toward Earth.

At this point, the ship and the artifact destroy each other. For no reason. The main character announces, again for no reason, that the aliens will not retaliate or take any further warlike action against the humans. The aliens are controlled by a strictly logical “game theory” approach to life (or non-life, in their case) and the game theory says the aliens will not attack humans again.

I can only assume I totally and utterly misunderstood what the author was trying to say here, or maybe I mistook irony for some literal statement. I can only report what my understanding of the book was. Your mileage may vary.

According to my understanding of the book, it is stated (1) that the aliens are innately hostile to the human beings, because the humans talking to each other, when overheard by the aliens, will be interpreted by them as hostile; (2) that the aliens are not self-aware, possess no consciousness, and therefore do not interpret things; (3) that the aliens can talk, or, at least, play word-games with humans, rather the same way a “Chinese Room” will react in what seems (to you, but not to it) a rational response to a rational question; (4) that the aliens, after being attacked in a suicide attack, will not retaliate; (5) that the main character has to rush home and tell everyone on Earth about this all-important point. Only he, with his human empathy, can make people understand this all-important point. What the all-important point was, or why it was important, was not clear. Maybe he was supposed to tell them that the aliens are unaware of the human beings and are non-self-aware, in which case they are no threat. Maybe he was supposed to tell them that the mere fact of humans possessing consciousness provoked the aliens, so they were a threat. Maybe he was supposed to tell them how to approach the aliens, or to keep away, or not to keep away.

If each of these five points mentioned in the last paragraph seems to you to contradict one or more of the others, then you have entered the same twilight zone of confusion that I have.

Anyway, just to make sure that this whole pointless plot is even more pointless, while on the trip home, the main character picks up radio messages.

The first is that the “lights have gone out.” For no reason having anything to do with the plot, or the aliens, or anything, it is simply the case that some disaster back on Earth crashed the electronic heaven, killing the main character’s mother, and countless billions of recorded souls. I guess we are supposed to say “too bad” except this was a big so-what moment, because it had nothing to do with anything in the plot.

Second, the radio reports that more and more people are returning to real life and that, for the first time in years, the population is growing rather than declining. I guess we are supposed to say hurrah, except that this was a big so-what moment for the reader. It was disconnected from anything that happened.

Third, the main character hears reports of spaceships fleeing the earth, as humans are fleeing the vampires, who have finally risen in revolt against their creators, Frankenstein-style. Then, the vampires have won, and the human race is dead, and the main character continues floating in his coffin-ship, in suspended animation, toward earth, the last human alive. At this point, the reader can only yawn, or laugh, or shake his head, depending on how much imaginative effort he wants to put into trying to create, in his own mind, some sort of emotional reaction to a pointless off-stage disaster that overtakes a nameless population of people for no reason. Certainly the author, whose job it is to make the reader able to imagine the fear and power of such apocalyptic scenes, does not stir a finger to help us out. The decline and fall of the human race might have been an interesting book, or even an interesting trilogy: but it cannot possibly be an interesting sentence tacked without craft or passion onto a pointless ending of the plotless book.

And… the end!

The main character never actually lands on Earth. The book is his diary, which he recites in space. We don’t know what becomes of him and we do not care.

So, just to recap: the reason for the mission is hidden from the characters, and the reader never finds out either. It must not have been to make first contact, because the humans provoke the aliens for no particular reason and commit kamikaze for no good reason. The aliens are both said to be a threat and said to be no threat at all. This was not two characters debating ambiguous evidence; it was just that the author either did not make up his mind or (more likely) the nihilistic world-view of the story would not allow for either possibility, since either peace or war is meaningful, and the author’s theme was meaninglessness. Nothing is accomplished in the mission, no communication is made with the aliens, but neither is a communicationless solution to the problem (whatever the problem is) found or even discussed. I would have been much more impressed had the humans, or the ship’s AI, manipulated the “Chinese Room” of the aliens to reach a mutually beneficial trade. You do not have to make a contract with bees, for example, to feed them and get their honey.

Strong Points:

In part the reason why I was so disappointed with this pointless ending was that the beginning held so much promise.

First, all the characters are quirky in the fascinating and repellent fashion that makes, say, ax-murderers fascinating. Everyone is either a mass-murderer (the vampire and the soldier), a traitor (the soldier and the vampire), a sociopath (the main character), a schizoid (the translator), or a freak of some sort or another. Any sort of story where the broken members of a suicide mission, the dirty-dozen misfits, learned to work together, healed their broken brains, overcame a problem, and found some sort of redemption could have been a stirring and moving tale. Well, that is not this tale, but the beginning held promise.

The author brilliantly adds little touches to his invented world, touches of realism: for example, most people are “real world virgins”, because they have sex in virtual reality with perfected computer versions of whoever and whatever happens to strike their fancy. The main character is puzzled and peeved with his girlfriend when she finds him cheating on her with a fantasy version of her: a computer version with none of her real-life annoying habits. She also cannot resist asking her boyfriend to undergo minor neuro-chemical tweaks, because she wants to domesticate and improve him, make him happier: a type of meddling interference he both regards as sinister, and regards as inevitably built in to the female nature.

The drollery of a girl being cheated on by a guy with an electronic fantasy version of her is good science fiction.

All would-be science fiction writers should read these scenes and study how Peter Watts adds these little touches and executes these effects. It is merely the properly chosen word here, a casual comment there, and the whole world opens up, dizzying and strange, to the reader’s inner eye. The new world is utterly unexpected and yet perfectly expected. I cannot compliment the artistry strongly enough.

Let me pause to say why this is good, because the craft and care shown by the author in these scenes is about the only thing I can compliment in the whole pointless, nihilistic book. Science Fiction has one unique property. There is one thing SF does that no other genre, not Westerns, not Romances, not even horror, can do. Science Fiction can create in the reader that feeling of wonder and disorientation you remember from when you first learned that, despite all appearances, the world was round, not flat, and that stars were not tiny dots, but distant suns, immense as our own or larger, immeasurably distant in space.

Science Fiction is all about a sensation of losing your bearings, shifting your paradigms. Imagine the disorientation when Darwin first hinted that man was descended, not from Adam and Eve, but from apes and monkeys.  Imagine the disorientation when Copernicus yanked the solid earth out from her place at the center of the universe and sent her spinning off in an orbit around the sun. That sensation of having the earth yanked out from underfoot is the unique Science Fictional sensation. The new paradigm is not just weird, it is also weirdly logical.

In a science fiction story, the reader is asked to accept a new world: what if telepathy were real? What if men could teleport? Then, in the midst of the weirdness, a weird logic. In a world where telepaths can solve crimes, Alfred Bester tells us how a criminal could get away with it (THE DEMOLISHED MAN). In a world where criminals can teleport, Alfred Bester tells us how a criminal could be locked up (STARS MY DESTINATION). The true art of the science fiction writer is in the little, telling details. In TO LIVE FOREVER by Jack Vance, sex is not a taboo subject, but among the immortals and the mortals who want to be immortal, telling jokes about death and dying is taboo.

When the writer does it well, you hear the little detail, and you go: of course. Of course it would be that way.

Peter Watts also has science fictional brilliance (yes, brilliance, I say), not just in small things but in large. That world-jerked-from-underfoot feeling is hard to accomplish in these jaded modern times. Mr. Watts has a large theme that is just such a paradigm shift. He asks us to accept the science fictional premise that human consciousness is an evolutionary mistake.

His idea goes like this: we notice that we humans are the most skilled at what we do when we think about it the least. An artist flies by inspiration, surprised by his own art. An athlete is “in the zone”, his body acting faster and more expertly than his conscious mind could ever tell his hands and feet to move. Intuition gives us complex insights we could never reason our way to see in a step-by-step fashion. A man suffering from cortical blindness may have what is called “blindsight”: he has no conscious awareness of anything he sees, but if you throw something toward his face, he will duck aside or raise a hand to catch it, all by reflex.

So, the next step in the idea is this: consciousness is a makeshift, an evolutionary mistake, a waste of precious brain cells, a waste of resources. The truly advanced and truly efficient alien races would all see by blindsight. They would all talk by rote, not aware of what they were saying. They would all act by instinct, and their instinct would allow them to maintain a level of superintelligence far above the slow, plodding, dull reasoning of creatures crippled by consciousness.

Now, no matter what you think of this position philosophically, we can say two things about it: (1) it fulfills, and fulfills brilliantly, the basic requirement of the science fiction writer’s task. The writer has presented us with an astonishing new world, a daring new concept, and challenged orthodox belief at a fundamental level. SF is about asking “What If?”. Well, the question “What if self-awareness were an evolutionary dead end?” is a perfectly cromulent SF question. (2) It undermines any possible drama the story might have, leaving the reader cheated.

Now for the weak points:

I cannot speak for other readers. Me personally, I would say that the characters in this book were all people I would like to have a policeman shoot to death, and then I would put a revolver in the dead character’s hand, get his fingerprints on it, to make it look like self-defense. I think I could be persuaded to lie to a jury under oath to help cover up such an act of police brutality, and later invite the bad cop over for a beer and a chicken dinner.

Now, this is not necessarily a bad thing. Bad guys are part of literature, and you don’t have to like a character for the character to be interesting. But the flip side of that rule is that if the bad guy is repellent, he still has to be interesting, and here is where the book falls entirely flat: readers with an intensely negative and nihilist world-view have a particular love of such ugly species of humanity, or of inhumanity, but an ordinary reader will soon grow bored.

Bored to tears. Because there is no character development. None. Nada. Zip. The Vampire is a cipher the whole story. We never find out what his motives are for anything. He does random things, and nothing in particular comes of them. The soldier betrayed her own men and had them killed by an enemy during a torture session in order to get information from the enemy who was being tortured. Nothing in particular comes of this either. The soldier never seems to learn better, or repent, or grow, or change, or anything. The psycho translator had half a dozen personalities, not a single one of which had any personality worthy of note: I cannot remember a single thing about any of them, not even their names. The biologist tried to befriend the sociopath main character, but to no avail. He dies. Whoop-de-freaking-do.

The most powerful stories are all stories of character development. A man who learns better. A criminal who reforms. A selfish soldier who learns to love his brothers-in-arms and throws himself on a hand-grenade. Anyone who overcomes a flaw. Anyone who redeems his past crimes. There is nothing even remotely like that here. The earth government who sent out this crew of lunatics would have accomplished exactly the same result by sending a missile to shoot down the alien artifact.

So the character development was a cheat. The plot was also a cheat. Here is why: in order for drama to be drama, it must be meaningful. A story is not like jazz music, an impromptu set of sounds soothing to the ear, with emotive but without cognitive content. The cognitive part of storytelling involves drama: rising action and falling action. The protagonist is out to accomplish some goal meaningful to him; the antagonist puts obstacles in the way. Conflict. The protagonist (in a comedy) succeeds or (in a tragedy) fails. But if he succeeds, he must succeed for a reason, usually a virtue, if only persistence. If he fails, he must fail for a reason, usually a character flaw, a tragic flaw.

But in order for there to be meaning in the plot, the characters have to inhabit a meaningful universe. Unfortunately, the universe postulated for BLINDSIGHT is a meaningless universe. It is a Chinese Room universe.

Now then, I think any reader, no matter his opinion on such high matters as necessity and free will, materialism or realism, will feel cheated by this book, merely because the basic rules of storytelling are violated. There is no plot conflict, no resolution, nothing the main character does means anything, nothing anyone does means anything, the plot points contradict each other blatantly, stupid things happen for no reason, and nothing comes of them.

The basic rule of writing is known as the gunrack rule. If you show the readers that there is a gun in the gunrack in scene one, the gun must be fired by scene three. Otherwise, leave the gun out of the scene. It serves no point; it is distracting.

Peter Watts violates this over and over again.

  • Example: in the opening scene one of the characters announces that mission control betrayed them: they’ve been suckered. Nothing comes of this. Nothing at all. It is not as if the characters thought the mission was meant for one thing and found out it was another. So why put in the line where the mission crew are told they’ve been “suckered”?
  • Example: There is a scene where one character hallucinates seeing a bone in the ship’s wiring. Nothing comes of this.
  • Example: there is a scene where an intense magnetic field makes another character think she is dead, even though she is still able to talk and move. Creepy, no? But nothing comes of this.
  • Example: the aliens threaten one particular crewmember with violent death. Nothing comes of it. Instead, another crewman dies. Nothing comes of it. Why put in the death-threat? What point did it serve?
  • Example: the main character deduces that the soldier is going to mutiny. When the vampire does die, the soldier claims to be innocent of mutiny, and the main character (I think) believes it. So, what was all that false foreshadowing for? Unless the soldier was also unaware of her own decision to commit mutiny? In which case, who cares?

When I say nothing comes of it, I mean specifically that if an Evil Editor had swapped that event with any other event, earlier or later, no reader would be able to tell. Each event was merely there for mood. If the meaningless bone hallucination had come before or after the meaningless death threat, there would have been no difference to the plot.

Now, in addition to these violations of basic rules, that might annoy any reader, there were particular things that will only annoy readers who share my personal tastes and opinions.

For example, I personally find radical materialism, the idea that our brains are empty of self-awareness and free will, an idea not worth dwelling on. It is worth exactly ten seconds of thought. In ten seconds, you can say, “A man who says ‘I have no free will’, if he is telling the truth, was compelled or programmed to say those words, and therefore he is not, in any meaningful sense, telling the truth. On the other hand, if he is lying, he is also not telling the truth. Therefore the statement is false.” In five seconds, you can say, “A man who says ‘I have no awareness’ is not aware of what he is saying. The statement is less than false.”

Now, as a science fiction premise, I don’t hold a grudge against radical materialism. Heck, I do not believe that people can actually read minds, but that does not mean I won’t read SLAN or GALACTIC PATROL or watch Mr. Spock do the Mind Meld; I do not believe in life on Barsoom, but I will still read A PRINCESS OF MARS. But, as a science fiction premise, I do take exception to an author who will not follow through with his premise.

Here is an example of a failure to follow through: if the aliens are defined as being a “Chinese Room”, and they tell the humans (even if they don’t understand what they are saying) to stay away, then we have to assume that the meaningless (to the alien) words are connected to equally meaningless (to the alien) actions: meaningless or not, the warning and the consequence are actually linked!

I mean, a sign that says BEWARE OF DOG is not written by the dog. The dog does not understand the sign. He is a dumb animal. The sign does not understand the words; it is an inanimate object. But the sign is warning you that the dog will bite you, and, if the sign is true, the dog will indeed bite you if and when you ignore the sign. In other words, just because you run across a piece of alien technology that is running on autopilot, and the autopilot has been programmed to warn you away, it simply does not follow, it simply makes no sense, for any characters to conclude that the autopilot is a liar. Whoever programmed the autopilot has done it for a reason.

Even if, in some incomprehensible way, the autopilot programmed itself, or is just operating on instinct, it still would have an incentive to tie its (to it) meaningless words to its operations, in order to operantly condition any intruders to behave as expected. It should shoot intruders when it says “Stop or I’ll shoot” in order to make the energy it took to utter those words useful, even if it has no idea of what it is saying.

Just because you are talking to a Chinese Room, and you hear it say, “Stop, or I will shoot”, you cannot conclude that the room was not also programmed (at the same time it was programmed to talk) to shoot you if you don’t stop.

Let me explain the Chinese Room reference:

John Searle asks the following question: suppose you had a room that could pass the Turing Test. Written questions in Chinese are passed into the mail slot of the room, and, after a while, a written answer comes out, and the Chinese reader is satisfied that the answers are intelligent. Inside the Chinese Room, however, is nothing but a series of filing cabinets holding cards on which are written Chinese characters, and a notebook or set of notebooks with a set of rules. In the room is a man who does not read Chinese. The rules tell the man that when he sees a note whose first ideogram is a (to him) meaningless squiggle of a certain shape, he is to go to a specific cabinet, open a certain file, go to a certain page, copy the character written there, go to another page and copy that character, and so on. The rules can be as complicated as you like. If the man sees a second ideogram of such-and-such a squiggle, he is to go not to file A but to file B, open folder 1, copy page 3, and so on.

We can easily imagine the opening of any such bit of “Chinese Room” dialog. If ideogram A means “How are you?”, open file 1, page 1, where is written ideogram B, which means “I am fine; how are you?” To the man in the room, the conversation is without any understanding. Ideogram A provokes reaction B. That is all the dialog means to the man. To the Chinese speaker, however, the Chinese Room seems quite polite. When you ask it “How are you?” the empty room replies “I am fine, how are you?”
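To make the rote character of the exchange concrete, here is a minimal sketch in Python. The “ideogram” names and the tiny rule book are my own illustrative stand-ins for Searle’s filing cabinets, not anything from Searle or Watts; a real rule book would be enormously more elaborate, but the principle is the same: lookup and copying, with no understanding anywhere in the loop.

```python
# Minimal sketch of the rote lookup described above (illustrative only).
# The "filing cabinets": a rule book mapping an incoming squiggle to the
# squiggle that must be copied out in reply. The man applying the rules
# need not know what either squiggle means.
RULE_BOOK = {
    "ideogram_A": "ideogram_B",  # e.g. "How are you?" -> "I am fine; how are you?"
    "ideogram_C": "ideogram_D",
}

def chinese_room(note: str) -> str:
    """Follow the rules mechanically; no understanding is involved."""
    # An unrecognized squiggle gets a stock reply, also chosen purely by rule.
    return RULE_BOOK.get(note, "ideogram_shrug")

print(chinese_room("ideogram_A"))  # prints "ideogram_B"
```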

Does the man walking from file to file understand Chinese, no matter how intricate the rules he follows? The answer is no. Do the filing cabinets understand Chinese? No. Searle argued that a computer that could pass the Turing Test was nothing more or less than a Chinese Room, something that reacted but could not act, something that looked like it understood, but did not understand.

Now, much ink has been spilled over the meaning of the Searle thought-experiment, and, in my humble opinion, all of it is wasted ink. Searle (and his supporters) say that the thought experiment proves that the man in the room need not understand Chinese in order to pass the Turing Test. This means that the Turing Test does not actually test for consciousness. Turing (and his supporters) say that the room “as a whole” (whatever that means) “understands” (whatever that means) the Chinese language, and that it means nothing in particular that the man himself does not understand Chinese. Does one brain cell in the brain of an English speaker understand English? Both are missing an obvious point. Both are arguing about whether a letter understands what is written in the letter. Whoever filled the filing cabinets and wrote the grammar rules for the Chinese Room understands Chinese. The letter-writer understands the letter, not the piece of paper.

Turing and his meditations on whether computers would be aware if they seemed to an observer to be aware never seem to rise above the crudest imaginable materialism: they never seem to contemplate that computers have to be programmed by someone. The Chinese Room is not “polite” if rule one is to answer the meaningless squiggle in file A (“How are you?”) with the meaningless squiggle in file B (“I am fine; how are you?”). The only person who is polite is the Chinaman, whoever he is, who wrote the ideogram, not meaningless to him, that he placed carefully and deliberately in file B. If the Chinaman, without any notice to John Searle (or whatever poor boob is trapped in the Chinese Room), had written instead, “I am fine; you are a swine!” then the “Room” would be impolite.

The real question about the Chinese Room is whether or not speech that is not rote speech can be reduced to an algorithm. The real question, in other words, is whether John Searle, trapped in the Chinese Room, merely by following even absurdly complex rules of sentence construction, could coin a new term, or use an old word in a poetical way that showed insight, a new meaning not present before. Now, neologisms can indeed be coined by rote. Children make such coinages, usually in the form of cute mistakes, all the time. There is no reason the Chinese Room could not put “ize” in file 5, and establish rule 101, “add file five to any word X”, where the rules for X pick out those nouns we want to turn into verbs. “Nounize”, “Vulcanize”, “Paragraphize” are all coined terms that I have here and now Turingized. You might be able to guess their meaning. I have meaningized them.
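By way of illustration, here is a minimal sketch of such rule-101-style rote coinage in Python. The word list, the suffix “file”, and the function name are my own inventions for the example, not anything from Searle or Watts; the point is only that the coinage requires lookup and concatenation, not insight.

```python
# Minimal sketch of rote coinage (illustrative only).
FILE_FIVE = "ize"  # the suffix stored in "file 5"
NOUNS_TO_VERBIFY = {"noun", "vulcan", "paragraph", "turing", "meaning"}

def rule_101(word: str) -> str:
    """Mechanically append the contents of file 5 to any eligible word X."""
    if word.lower() in NOUNS_TO_VERBIFY:
        return word + FILE_FIVE
    return word

print(rule_101("paragraph"))  # "paragraphize": coined by rote, no insight required
```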

Poetry is a different question. The whole point of poetic expression is that a new aspect of meaning has been brought out of an unusual use of a word, or out of a new phrase. If it is something you can reduce to an algorithm, it is not poetry. Indeed, the thing that makes you wince when I use the term “meaningize” is the very lack of poetry in that coinage; it is mechanical, predictable, soulless.

Which brings us back to the problem of setting your story in a soulless Chinese Room sort of universe.

Having a main character who thinks that he is a Chinese Room is interesting. He is a man with a severe psychological problem, sociopathy, which he wrongly explains away with a delusional belief: the belief that he has no self-understanding and no free will. Making the main character a man who claims to have no self-understanding and no free will is a bold and amusing move. Then the character development rests on when and how the crazy main character breaks out of his delusion and realizes he is a human being, with free will, responsible for his actions, and able to change the plot and bring it to a conclusion.

In this book, this sort of happens, and then we are cheated. Instead of some sensible reason for the main character to snap out of his delusion, the author merely asserts that a vampire attack will snap you out of being sociopathic.  Okay; whatever. Once you are snapped out, something is supposed to happen. Instead, everybody dies, and so who cares? They do not even die for any particular reason.

To add insult to injury, if the author steps out from behind his Wizard-of-Oz curtain and announces (as Peter Watts does in an appendix) that the main character is right, and that there is no such thing as free will (his exact words: “free will looks pretty silly”), albeit Watts admits that scientific opinion is divided on this point, then the author, in effect, tells you that his story has no drama.

When the character has a false opinion that the narrative in the story shows is false, we can assume the writer is trying an ‘unreliable narrator’ technique, one which such authors as Gene Wolfe expertly handle. But when the author has a false opinion that the narrative in the story shows is false, we can only wonder whether the author understood the point of his own story.

(In the present case, if the point of the novel is that running your mind on autopilot, seeing with Blindsight, is superior to having free will and human emotion, the logic of the story should demand that the demented main character not only NOT be snapped out of his emotionless detachment, but that he act and operate better in this sociopathic state than he acts when he has emotions.)

I have a second pet peeve with the determinist, fatalist, materialist, ‘Chinese Room’ view of the world.

Let us suppose for the sake of argument that Hobbes, Calvin, Lucretius and Peter Watts are correct, and that there is no free will. In the same way and for the same reason that a robot judge would have to be programmed to find a robot criminal guilty, even if neither of them had free will, so too would robot authors have to tell their robot audiences stories about characters with free will, even if science proved that free will is an illusion.

The reason: there is no drama, no tension, no human feeling, no story, if there is no free will.

If a nature documentary describes an inanimate natural process with vivid detail, and fascinates the viewer with the magnificence and mathematical intricacy of the inanimate world, no matter how good the documentary is, as a drama, it sucks. We can watch a crystal growing or a volcano forming, but even if the crystal breaks or the volcano erupts, there is no drama because there is nothing at stake. There is nothing to gain or lose.  Where there is no life, there is no pain. Where there is no free will, there are no moral quandaries.

What kind of character can live in a Chinese Room universe? Only someone who accomplishes nothing, whose life means nothing, whose death means nothing. Only a sociopath, a vampire, a traitor, a madwoman, or gibbering alien shapes whose human-sounding words are meaningless.

I submit that an intensely nihilistic world view, a “Blindsight” world, necessitates (1) intensely dislikable characters and (2) no plot and (3) no character development. Why? Because anything else would make a mockery of such a world. If the characters were likable, that would make their lives meaningful. If there were character development, that would make such themes as sin and redemption or heroism and sacrifice satisfying and meaningful. If there were a plot, that would make such themes as the little tailor who kills a giant, the self-made man who works hard and earns his just rewards, or the cunning detective who solves the crime and brings the wrongdoer to justice, again, satisfying and meaningful.

Suppose you read a scene where a little girl cut her sandwich in half and offered half her lunch to the little black girl that everyone else in the nursery school was picking on. That small act is only meaningful if the little girl is not a meat machine programmed by cell-malfunctions to suffer an epiphenomenon of kindness. If the girl is a meat robot, then her little act of kindness is not only meaningless (a vending machine that accidentally gives you two candy bars is not being kind; it merely slipped a gear), it is actually pathetic, something to evoke scorn and pity, because the foolish girl performs a meaningless act to which she wrongly ascribes meaning, for which she wrongly gives herself credit, and from which she wrongly learns the meaning of kindness.

Any reader touched by a scene featuring a simple act of kindness will instinctively rebel against any nihilistic theme present in the story around it. Any artist might be wary of such a jarring note. So, in a nihilistic universe, the writer is better advised to have all his characters be pukes, and all their actions pointless.

Which is exactly what Peter Watts did in BLINDSIGHT.

Final thought:

At the same time I read BLINDSIGHT, I was also reading THE CUBE AND THE CATHEDRAL by George Weigel. Weigel makes the argument that Europe, by rejecting Christianity, not only rejects its own past, and the fountainhead of its greatness, but undercuts the necessary foundations for the post-Christian Enlightened secular world view. In sum, the argument is that if Europe rejects Christianity in the name of tolerant equality extended to all races, it unwittingly rejects any good reason to embrace notions like toleration, equality, and universal brotherhood, because these things are unique artifacts of the Christian world view and make little or no sense outside it.

The contrast was instructive. Weigel was talking about the moral atmosphere of an age. The Judeo-Christian worldview, whether true or not, lends itself to the drama I mentioned as an example above, a schoolgirl sharing her sandwich with an outcast. Such acts of charity are of paramount importance to the Christian myth and Christian moral reasoning. On the other hand, the moral atmosphere that breathes out of the Chinese Room is deadly. Whether radical materialism is true or not, it is a moral atmosphere conducive only to a remorseless Darwinian and Marxist power struggle, or to Nihilist emptiness. The utterly pointless and utterly self-destructive acts described above are the perfect example of the moral reasoning of creatures seeing with blindsight.

The world view that starts by rejecting the supernatural as superstition, ends by rejecting human nature, human feeling, human reason and all human matters as immaterial. There is no room for God — but there is also no room for Man. In the Chinese Room, there is no room for charity.

101 Comments

  1. Comment by jordan179:

    But both consciousness and kindness are quite reasonable products of evolution. The mind has evolved to model the outside world and thus aid survival, since the more intelligent the mind the better it can predict the consequences of its possible actions; the conscious mind is able to choose what it will try to model and do. The less conscious a mind, the less able it is to aid in survival.

    As for kindness, beings that are mutually kind — that cooperate — are more likely to survive than those which do not cooperate. This is so simple, being the step right above non-cooperation, that it doesn’t even touch the hard parts, which are the evolution of cheating, the detection of cheating, and outrage when cheating is detected.

    Non-conscious and emotionless beings would also be non-intelligent, and very unlikely to build spaceships, etc.

    • Comment by superversive:

      Actually, consciousness has so little survival value that it took upwards of half a billion years after the emergence of all the phyla of earthly life in the Cambrian Explosion before it began to appear at all. Neither the trilobites nor the bees nor the dinosaurs had any use for it. As a product of evolution, consciousness is not merely superfluous but downright bizarre.

      As for the business about non-conscious beings being non-intelligent and not building spaceships, that only matters to you because you happen to value intelligence and you want spaceships to be built. But it is only as an after-effect of consciousness that you are able to make such value-judgements at all. Non-conscious beings don’t build spaceships, but they also don’t fret over not having them.

      In fact, none of your reasons for supposing consciousness to be a useful thing supply any grounds for your obvious belief that it is also a good thing. They don’t even justify the position that utility itself is good. However consciousness arose, once it is with us it leads us into a domain of abstraction where we can no longer appeal to the mere material facts of nature, but have to contend with the laws that govern the abstractions — which we must then conceive of as having an existence as real as consciousness itself. That is the one difficulty that materialists can never get around.

      • Comment by arhyalon:

        >Actually, consciousness has so little survival value that it took upwards of half a billion years after the emergence of all the phyla of earthly life in the Cambrian Explosion before it began to appear at all.

        There is no necessary relationship between the time that a particular trait takes to appear and its usefulness for survival.

        Take warm-bloodedness. Clearly a useful survival trait, allows organisms that have it to exist in many places and under many conditions in which cold blooded creatures cannot survive — yet it came along way after trilobites.

        What matters is not how late the trait appears, but whether, once an organism evolves said trait, it provides that organism with a distinct advantage over organisms that lack it.

        • Comment by superversive:

          Even as to that, the cockroaches might be disinclined to agree that our intellect gives us an advantage over them.

          Trilobites were not warm-blooded for the same reason that no marine creatures except mammals are: it is much less advantageous in a marine environment, where the extremes of temperature found on land simply do not occur. Warm-bloodedness is a useful survival trait for animals occupying a particular range of niches in a particular range of habitats.

          As a survival device for a species, intelligence is much less effective than prolific breeding. Its chief value lies in increasing the species’ potential to avoid overspecialization and consequent vulnerability to relatively small ecological changes. Consciousness, in the human sense, has even less demonstrable utility than that.

          Homo sapiens did not gain a survival advantage when he became conscious; the advantage appeared considerably later, when that consciousness was put to use as a medium for a second kind of evolution, the evolution of technology. This was not an inevitable development — there are peoples on earth to this day that have largely rejected it — nor was it a biological development. We have borrowed our survival capacity from our cultures and our tools, neither of which is explicable in terms of biological evolution.

      • Comment by jordan179:

        Actually, consciousness has so little survival value that it took upwards of half a billion years after the emergence of all the phyla of earthly life in the Cambrian Explosion before it began to appear at all. Neither the trilobites nor the bees nor the dinosaurs had any use for it. As a product of evolution, consciousness is not merely superfluous but downright bizarre.

        I disagree. To begin with, consciousness requires a fairly large and complex brain, and hence could not evolve rapidly. Also, it carries a fairly large metabolic cost both to grow and to feed that brain. Yet, consciousness has appeared in two classes (Aves and Mammalia): hence it must be fairly useful.

        We don’t, incidentally, know when it appeared. The common ancestor of mammals and birds (which lived over 300 million years ago) almost certainly wasn’t conscious: the same goes for the mammal-like creatures of the Permian and Early Triassic. Mammals may have had some conscious representatives before the K-T; likewise dinosaurs, if they had the superior avian brain plan, may have attained consciousness (particularly the brainy troodontidae, which might have been as smart as the parrots).

        Today, sapient animals include Man, the other apes, possibly some other primates, probably the proboscidea and delphinidae, and at least some of the psittacidae and the corvidae. The more we do animal cognition studies, the more intelligence we find. So sapience or semi-sapience is a widespread trait in certain Orders.

        Why? Because the purpose of a brain, in utilitarian terms, is to provide a map or model of the environment (both internal and external) to enable the animal to respond appropriately to events in that environment. The smarter the brain, the better the model, and the better the responses produced from that model. A sentient brain generalizes the set of responses to create an identity, which provides better coordination of those responses; a sapient brain thinks about the act of thinking, which focuses thinking more flexibly and usefully and allows the solution of novel problems.

        As for the business about non-conscious beings being non-intelligent and not building spaceships, that only matters to you because you happen to value intelligence and you want spaceships to be built. But it is only as an after-effect of consciousness that you are able to make such value-judgements at all. Non-conscious beings don’t build spaceships, but they also don’t fret over not having them.

        Actually, I mentioned spaceships because Mr. Wright was discussing a novel in which beings claiming to be non-conscious were building them. Spaceships are of obvious survival value because they enable multi-cellular life (*) to spread beyond their worlds of origin, creating a broader set of habitats and hence staving off extinction longer. A non-conscious being might not “fret” over lacking something it needed to survive, but it would then perish. Lack of awareness of a danger does not negate the danger.

        Materialism does not deny the existence of patterns in the material, patterns which may be at least as important as the material itself. A book, for instance, is about more than just the physical processed wood pulp and ink; it is about the patterns that the ink makes on the paper, which convey the book’s meaning. So too do the patterns the neurons make in your brain convey your identity — dare I say your soul? Whether or not it has a transphysical component, it certainly either originates or is mirrored in the brain, else brain damage would have no effect on behavior.

        ===
        (*)
        Unicellular life, especially prokaryotes, may be able to spread across interplanetary, perhaps even interstellar, distances through Arrhenian dispersal.

    • Comment by davidns84:

      As for kindness, beings that are mutually kind — that cooperate — are more likely to survive than those which do not cooperate. This is so simple, being the step right above non-cooperation, that it doesn’t even touch the hard parts, which are the evolution of cheating, the detection of cheating, and outrage when cheating is detected.

      As Superversive pointed out with consciousness, kindness is not a prerequisite to cooperation and cooperation is not even a prerequisite for survival, as there are many species which make do without it. In fact, there are quite a few situations where kindness may be a hindrance to survival. Moreover, our preference for kindness, cooperation, and altruism, along with commonly professed reasons for it, over and above any supposed evolutionary advantage, is not explainable as a survival mechanism.

      • Comment by davidns84:

        To Add: I just wanted to share that thought, not get into a discussion, so if I don’t happen to respond to any later replies, please don’t take it as me being rude.

      • Comment by jordan179:

        … kindness is not a prerequisite to cooperation and cooperation is not even a prerequisite for survival, as there are many species which make do without it.

        Indeed. However, almost all mammals (like us) care for their young for at least a brief period, and hence need at least a maternal instinct; most large mammals (again, like us) care for their young over an extended period, and hence need a strong predisposition to maternal care. Finally, social animals (again, like us) need to have a complex repertoire of behaviors for dealing with other members of their species.

        The most efficient way of handling such a repertoire of behaviors is through “emotions” which can serve as registers of past interactions with other individuals. This implies at least “liking” or “disliking” (and perhaps “love” and “hate”) toward other individual animals, with the appropriate responses (liked individuals should be treated “kindly” and disliked individuals “cruelly”).

        Note how this facilitates the memetic (rather than merely genetic) playing of “games” (in the game-theory sense) with other individuals of the same species. Memetic gameplaying is almost always better than genetic gameplaying because its response time is much faster.

        In fact, there are quite a few situations where kindness may be a hindrance to survival.

        Indeed, and in those situations, other tendencies tend to dominate. Note however that “survival” means of the genetic pattern rather than the individual, allowing for altruism towards kin; also note that (in the wild) most individuals of the same species one interacts with are at least cousins of fairly close degree, by human standards.

        Moreover, our preference for kindness, cooperation, and altruism, along with commonly professed reasons for it, over and above any supposed evolutionary advantage, is not explainable as a survival mechanism.

        Sure it is! The great apes are the most strongly k-oriented childraisers of the primates, and humans are the most strongly k-oriented childraisers of the great apes. Each human being (in a Paleolithic hunter-gatherer group) represents a tremendous investment of the time and energy of other humans (in some sense, of all the humans in the group) during his long childhood; and hence his life has value to other humans — letting him die because of some trivially temporary problem which could be easily remedied by a little kindness would be an incredibly inefficient allocation of energy (from the POV of gene-selection). Hence, evolution selects humans for kindness.

        Does this mean that we are always kind? No, we can be quite cruel. And the situation in which we are most likely to be cruel is when we deal with people outside our “group,” who in a Paleolithic society would also be people more distantly related to us and to whose survival as children our efforts have not been bent. Cruelty, in this case, acts as a check on our kindness, preventing us from wasting our efforts on those not members of the group.

        Being smart, of course, there is a strong memetic (cultural) in addition to genetic (biological) component to our behavior. We develop host-customs, war-customs, pacts, alliances, treaties, and taboos to govern just when and to what extent we should be cruel or kind. This is also adaptive, though sometimes it can grow non-adaptive variants (which are eventually selected against and lost).

        This can be seen in other sapient animals, including the other great apes, the elephants, and some parrots and crows. Heck, it can even be seen in some merely sentient animals, including of course wolves (which is why dogs fit into our culture so well). Cooperation and kindness are the glue that binds animal societies together — as they do our own.

        • Comment by John C Wright:

          Evolution of anti-Evolutionary passions

          How do you explain kindness toward strangers, particularly ones with whom we have no kin in common? This is not a natural feeling in children, but one that must be inculcated with moral training.

          • Comment by jordan179:

            Re: Evolution of anti-Evolutionary passions

            How do you explain kindness toward strangers, particularly ones with whom we have no kin in common? This is not a natural feeling in children, but one that must be inculcated with moral training.

            To some extent, it is because the emotions we evolved in the days when we almost always interacted with kin remain today, when we almost always interact with non-kin. But to a larger extent, this is cultural, and in humans cultural evolution has largely superseded biological evolution.

            Culturally speaking, there are obvious advantages in being able to develop trust beyond kinship boundaries. Note that not all human cultures have this as a default state: generally, the more primitive the society, the less trust automatically extended between non-kin (there are usually guest- and host-customs to enable such trust when desired by both parties).

            Cultural-evolutionary pressure to enlarge the trust group and hence the “social brain” (see Robert Wright’s Non-Zero for a detailed examination of this process in both cultural and biological evolution) is very strong. The history of the human race is largely a history of increased effective societal sizes and growing trust, for exactly this reason.

            This can clearly be seen today: the most advanced human societies (the nations of the modern West) have very large trust-groups, to the extent that most people behave trustingly toward complete strangers in stores or on the streets; the less advanced human societies (such as those of the Middle East or Africa) see extreme treachery between strangers (and even disfavored members of one’s own family) as routine.

            Benevolence and trust aren’t anti-evolutionary; that’s a misunderstanding of evolutionary selection. What is anti-evolutionary is blind benevolence and trust, but then not even the modern West is favorable to that. Along with benevolence and trust evolved vindictiveness and outrage, to deal with cheating on the social contract.

            Intelligent trust increases the size of the social group, and hence greatly enhances survival both for the group and for the individual member of that group, on the average.

          • Comment by razorsmile:

            Re: Evolution of anti-Evolutionary passions

            This is not a natural feeling in children, but one that must be inculcated with moral training.

            As with just about everything else, there are exceptions to this rule.

    • Comment by m_francis:

      But both consciousness and kindness are quite reasonable products of evolution.

      So are unconsciousness and carelessness.
      + + +
      The less conscious a mind, the less able it is to aid in survival.

      There are creatures lacking any mind whatever that have survived for eons.

      As for kindness, beings that are mutually kind — that cooperate — are more likely to survive than those which do not cooperate.

      Except when it works the other way. The problem with ex post facto just so stories is that they can justify any outcome whatsoever.

      • Comment by jordan179:

        But both consciousness and kindness are quite reasonable products of evolution.

        So are unconsciousness and carelessness.

        “Unconsciousness,” save as a means of riding out a hostile environment, is generally the precursor to either “death” or “recovery.” As for “carelessness,” what you’re talking about is the r-centered reproduction strategy, which looks “stupid” to us because we are the ultimate k-centered reproducers, right up there with gorillas and elephants.

        The less conscious a mind, the less able it is to aid in survival.

        There are creatures lacking any mind whatever that have survived for eons.

        You are assuming that a “mind” can only inhere in a cerebral neural network. Bacteria, etc. do have a “mind,” but it is a genetic massmind which exchanges data through lateral gene transfer. It’s a very alien mind to our own.

        The problem with ex post facto just so stories is that they can justify any outcome whatsoever.

        It’s hardly a “just so” story — it’s a pattern that we see across all of Nature, and one which allows the generation of many testable predictions. Barring time travel, all we can do to understand how animals appeared and developed behavior patterns is to look at fossils and at how living animals with similar anatomies or lifeways behave and reason from there.

        • Comment by m_francis:

          But both consciousness and kindness are quite reasonable products of evolution.

          So are unconsciousness and carelessness.

          jordan:
          “Unconsciousness,” save as a means of riding out a hostile environment, is generally the precursor to either “death” or “recovery.”

          m_frank:
          “Unconsciousness” is the opposite of “consciousness” in the context of Mr. Wright’s review of BLINDSIGHT, a book which apparently extols the virtues of the animal soul over the human soul. To understand the difference, consider some time when you were engaged in a routine task and your mind wandered off into thinking of something else. Say you were walking up the block from the corner drycleaner’s. Then, at your door, you came to yourself with no conscious memory of having walked the block. Take away the part that was “thinking about something else” and you have the unconscious life.

          Most of life has done quite well without consciousness.

          jordan:
          As for “carelessness,” what you’re talking about is the r-centered reproduction strategy, which looks “stupid” to us because we are the ultimate k-centered reproducers, right up there with gorillas and elephants.

          m_frank:
          No, I was talking about the opposite of “kindness.” My point was that if kindness is a result of evolution, so is the lack of kindness; if consciousness is the result of evolution, so is the lack of consciousness. In each case, you can tell a story about how it was evolutionarily advantageous to develop (or not develop) kindness, or to develop (or not develop) consciousness.
          + + +

          The less conscious a mind, the less able it is to aid in survival.

          There are creatures lacking any mind whatever that have survived for eons.

          jordan:
          You are assuming that a “mind” can only inhere in a cerebral neural network. Bacteria, etc. do have a “mind,” but it is a genetic massmind which exchanges data through lateral gene transfer. It’s a very alien mind to our own.

          m_frank
          My water glass has a mind, if I can redefine “mind” to include whatever it is that my water glass has.

          It is really very hard for conscious beings like ourselves to imagine unconscious life a la BLINDSIGHT. Methinks many of Mr. Wright’s complaints are due to the book’s failure to remain consistent with the “unconscious” premise.

  2. Comment by sursumcordasong:

    In C. S. Lewis’ That Hideous Strength, the main character’s training in “objectivity” involves the repetition of apparently pointless actions and meaningless gestures in a room in which the objects — indeed the very walls and ceiling — are irregular, incongruous, and without pattern, driving the mind to attempt to find meaning where there is none — and eventually to accept and embrace the very irrationality as reality.

    Meaninglessness is the meaning. But Mark is saved through a simple vision of the “normal” — fried eggs and rooks cawing and sunlight — just as, perhaps, a reader may be saved through a simple desire for a well-told story.

    • Comment by persephone_kore:

      In C. S. Lewis’ That Hideous Strength, the main character’s training in “objectivity” involves the repetition of apparently pointless actions and meaningless gestures in a room in which the objects — indeed the very walls and ceiling — are irregular, incongruous, and without pattern, driving the mind to attempt to find meaning where there is none — and eventually to accept and embrace the very irrationality as reality.

      This description reminds me of “The Yellow Room,” which I recently had occasion to read (reread? I think it wasn’t my first encounter, but am not sure).

  3. Comment by lordbrand:

    I read Blindsight and had the same reaction. I also read another book by him and was similarly confused: misfits in the deep sea accidentally unleashing some kind of RNA virus.

  4. Comment by dirigibletrance:

    Ahah!

    So I guessed right! I suspected from the minute I read that hostile, anti-theist rant on Peter Watts’s blog that any book written by him would pull a Pullman, so to speak.

    From every indication you’ve given, my suspicion was justified and I’ve done right in ignoring every person who insisted that I give Blindsight a chance.

    There are few things that make me madder than feeling like I’ve wasted my time reading a book. You have to understand how much I love reading to realize that for me to hate a book, or to want back the time I spent reading it, it has to really suck.

    Anyways, I won’t be checking out any of Watts’s books. Thanks for the vicarious learning experience, John!

  5. Comment by vitruvian23:

    “the argument is that if Europe rejects Christianity in the name of tolerant equality extended to all races,”

    Does one need to reject Christianity to treat all races equally? I wasn’t aware of that…

  6. Comment by randallsquared:

    Let me say, first, that I didn’t particularly enjoy _Blindsight_. However, I did find the concepts interesting, and I thought that the novel was a powerful statement of those concepts. The plotting was lacking, I’ll agree, but this was not a plot-driven book; it was driven entirely by the central idea, and both the plot and the other details of most scenes were fluff. Just my opinion, of course — Watts might not have intended it that way.

    You complain that events happen that don’t seem to have any relevance to the plot, and that characters who are more intelligent than humans don’t seem to have explicable motives, but these things I just chalked up to excessive realism. This is especially the case because we have an unreliable narrator — he makes statements (as you note) which don’t jibe with his own account of events. Does this make the novel less fun? It did for me. Did it serve some other purpose? Well, maybe the excessive realism was intended to focus the reader’s mind on the central idea of the novel, that consciousness is an evolutionary dead end. It did have that effect for me, to some degree.

    “The decline and fall of the human race might have been an interesting book, or even an interesting trilogy: but it cannot possibly be an interesting sentence tacked without craft or passion onto a pointless ending of the plotless book.”

    It was interesting, I thought, as a reconfirmation of the central idea: (1) we learned that vampires are more intelligent than other humans with approximately the same amount of brain, due to not wasting neurons on consciousness; (2) we learned that the first expansive aliens that humans meet aren’t conscious; (3) we learned that consciousness is ultimately leading to the death of the human race as people prefer Heaven to the meatspace world; and then, capping it off, (4) we learn that the vampires have shed their bonds, leading inevitably to their win over the rest of the humans. Each of these is a separate arrow pointing at the conclusion of the novel, so even though they’re not all directly causal of each other (though some are: 1 => 4, it seems), they’re all caused by the relative ineffectiveness of conscious beings if computing power is held equal, in the universe of the novel.

    “A man who says ‘I have no free will’, if he is telling the truth, was compelled or programmed to say those words, and therefore he is not, in any meaningful sense, telling the truth. On the other hand, if he is lying, he is also not telling the truth. Therefore the statement is false.”

    This confuses “telling the truth” with “emitting a true statement”. For a man X, you’ve asserted that if X was compelled to say “I have no free will”, he is not making a free choice to tell the truth. However, that doesn’t make the statement *false*.

    • Comment by deiseach:

      Why do they care?

      If the vampires are not conscious, why do they care about being enslaved? If the individual does not possess a sense of “I-ness” (pardon the clumsy expression!) then why or indeed how can he feel that he is being maltreated? Humiliated?

      If there is no “I” but only a “we”, then are we talking about some kind of alien and/or vampire ‘hive-mind’ notion? Individuals are only ‘cells’ and the ‘organism’ resents being fettered by puny inferior humans!

      Then again, if the non-conscious aliens won’t attack humans, despite being attacked, why should the non-conscious vampires lift a finger? Let the humans all retreat into the electronic dream-worlds, then pull the plug.

      It sounds like an interesting novel, but with just enough “Argh!” moments to not make it worth the bother of reading, since beating one’s head against a desk or other hard surface can be painful :-)

      • Comment by randallsquared:

        Re: Why do they care?

        “If the individual does not possess a sense of “I-ness” (pardon the clumsy expression!) then why or indeed how can he feel that he is being maltreated? Humiliated?”

        We only actually see one vampire on-screen, and I don’t remember him portrayed as maltreated or humiliated. My cat isn’t conscious (I assume), but if I tried to hold her in one spot too long she’d try to escape my grasp.

        “Then again, if the non-conscious aliens won’t attack humans, despite being attacked, why should the non-conscious vampires lift a finger? Let the humans all retreat into the electronic dream-worlds, then pull the plug.”

        The aliens, the vampires, and the other humans are all expansionist; it’s only a matter of time before there’s conflict over resources (sooner for vampires vs other humans, since the closest resources are already in use by non-vampire humans). So while the aliens don’t hate or want to kill humans “just because”, they’ll eventually take the humans’ resources, because they’re more intelligent for the same amount of computing power, since in the novel’s universe, consciousness itself uses up brainpower that could otherwise be used for problem-solving. The vampires don’t have this problem, since there’s no “I” there; only intelligence.

        It is an interesting novel, and Watts is pretty easy to read, but I’ll agree that it’s not interesting because we care about the characters.

        • Comment by deiseach:

          *head-scratching*

          Well, I suppose I’m just not sophitimicated enough in my tastes, since I don’t get it.

          If individual vampires don’t have a sense of self to feel personal injury at being enslaved (even in a kindly, not maltreated servitude), then the only reason for a rebellion is the Overmind wanting to wipe out puny inferior humans.

          Which don’t make sense to me, since if the vampires live on human blood, then that’s killing off their food source. Wouldn’t it make more sense to do a Matrix-style keeping the humans alive in battery-chicken fashion, their minds lost in the electronic dream-worlds and their bodies as handy dinner portions for the vampires?

          As a side-note: “Vampires…have an epileptic fit at the sight of right angles.”? Lamest ever hand-wavium for ‘explaining’ why vampires don’t like crosses. You can have supernatural vampires unaffected by religious symbols (Anne Rice managed this); you can have materialist vampires unaffected by religious symbols or only affected psychosomatically (both Terry Pratchett in “Carpe Jugulum” and “Buffy the Vampire Slayer” managed this – remember the sub-plot with the demon golem teaching the other demons to overcome the conditioning regarding religious symbols?); you cannot have, and expect to be treated with anything less than eye-rolling, pointing and jeering, materialist vampires affected by religious symbols for reasons of extreme and utter lameness.

          Right angles give them the pip. Suuure… so in the future world, where the genetically-retrieved vampires are serfs, there are no right angles? Completely eliminated? Good old R’lyehian non-Euclidean geometry rules in architecture? Otherwise, every time they so much as saw the page of a book (right angles!) they’d keel over. Not much use if your serfs keep toppling over in fits every five minutes, is it?

          Anyways, back to the plot(!). The only thing that makes sense to me is that (1) this is actually a ‘had I but known then what I know now’ (the book being the main character’s diary as he floats in cryostasis or whatever) (2) the mission was set up by the superior vampire over-mind (this is why the crew complains about being ‘suckered by mission control’) (3) the vampires knew their rebellion would be successful and they wanted the humans out of the way (4) which was achieved by crashing the electronic dream-worlds and forcing humans into the real world (5) which furthermore meant the humans would now be expanding off-world by going into space and leaving the earth to the vampires (6) which meant that the humans had to be reassured that the aliens were no threat to them, otherwise they’d continue to hang around earth (7) which is why the vampire got the main character to be in touch with his emotions and all the rest of the goings-on to ensure the success of the mission against the aliens (first contact my backside! it was a military mission to frighten off the aliens) and that the main character would return to tell the humans everything’s tickety-boo, the aliens are no threat, come on out in your spaceships and leave earth to the vampires.

          Yeah. Bit convoluted, but possibly workable. Still think I’ll give it a miss (if I want my brains melted, I’ll stick to Gene Wolfe).

          • Comment by randallsquared:

            Re: *head-scratching*

            If you’re not getting it, perhaps some of the problem is that you are only reading my poor explanations, rather than the book itself, which I did find interesting, if not really entertaining in the well-plotted-novel sense. It’s a novel about an idea, and everything in the book is either about the idea, or is filler which need only be skimmed to get to the next bit about the idea. Or perhaps I’m being unfair. I didn’t find any of the characters particularly likable, but it may be that some others did (not John Wright, either, it would seem…).

            “Which don’t make sense to me, since if the vampires live on human blood, then that’s killing off their food source.”

            1) whatever proteins it is that they can’t make for themselves (don’t remember if there was an elaborate explanation) can be synthesized.

            2) I don’t remember any mention of completely wiping out humans. Presumably it’s the humans that have more of a problem with vampires than vampires with humans. But this is all off-screen.

            “Lamest ever hand-wavium for ‘explaining’ why vampires don’t like crosses.”

            To be clear, vampires had mostly died out well before crosses were common (as someone else corrected me about in another post).

            “Suuure… so in the future world, where the genetically-retrieved vampires are serfs, there are no right angles?”

            There’s a drug or something that suppresses the response for a short time.

            Your projected plot doesn’t really bear much resemblance to the actual plot. The book is free, and it din’t mel my braaahhahhhhhhh

            • Comment by deiseach:

              Okay, from what this sounds like, the humans deserve it

              Being stupid enough to bring back a predator race smarter than us, give that predator race the cure for their vulnerabilities, and then keep them around in (presumably) sufficient numbers to enable a rising-up-and-taking-over on their part – that is so dumb that, if that’s indeed Mr. Watts’s view of us, we deserve to be crushed between the Scylla and Charybdis of the aliens and the vampires.

              I can believe that humans would be stupid enough to try this, since we’ve proven we can do really stupid things, but it’s a bit too much to have the double threat going on – either the vampire revolt plot or the alien contact plot would do, but both together is over-egging the pudding.

              Perhaps the trouble is, as you say, it’s a novel of A Big Idea and everything else gets thrown away to serve that idea.

              I have to admit, though, that I’m laughing out loud at the notion of the vampires being done in by architecture. Huzzah for the unsung hero of the plumb-bob!

      • Comment by John C Wright:

        Re: Why do they care?

        “I-ness” (pardon the clumsy expression!)

        This proves you are not a Chinese Room! No algorithm could have meaningized the expression I-ness!

    • Comment by John C Wright:

      “A man who says ‘I have no free will’, if he is telling the truth, was compelled or programmed to say those words, and therefore he is not, in any meaningful sense, telling the truth. On the other hand, if he is lying, he is also not telling the truth. Therefore the statement is false.”

      This confuses “telling the truth” with “emitting a true statement”. For a man X, you’ve asserted that if X was compelled to say “I have no free will”, he is not making a free choice to tell the truth. However, that doesn’t make the statement *false*.

      Forgive me if I am unclear. The phrase I used was “He is not, in any meaningful sense, telling the truth.” I did not say, “He was uttering a lie.” What I mean by “in any meaningful sense” is exactly what you have said. The statement he is compelled to utter might just so happen to correspond to reality, but it is not meant (by him) to correspond to reality, since (by his own admission) he is not a being capable of communication.

      To be precise, what I should have said was, “A man who says ‘I have no free will’, if he is telling the truth, was compelled or programmed to say those words, and therefore his statement has no truth value. It is not, in any meaningful sense, either true or false. It is merely an air-vibration to which no meaning can be attached. In which case, not only is it not a true statement, it is not a “statement” properly so called, at all.”

      So, while you are correct, you miss my point. A meaningless statement is neither true nor false. A true statement conveys a picture of reality, more or less accurately. A false statement conveys a picture of reality that is false in its essential features: reality is not the way the statement says it is. A meaningless statement conveys no picture at all.

    • Comment by hootuckeye:

      “(1) we learned that vampires are more intelligent than other humans with approximately the same amount of brain, due to not wasting neurons on consciousness; (2) we learned that the first expansive aliens that humans meet aren’t conscious; (3) we learned that consciousness is ultimately leading to the death of the human race as people prefer Heaven to the meatspace world; and then, capping it off, (4) we learn that the vampires have shed their bonds, leading inevitably to their win over the rest of the humans”

      I have not read the book, but while reading Mr. Wright’s review I suspected that this line of reasoning was, in fact, the author’s intention. However, I have a question for those of you who did read it: if these vampires are the paragons of Darwinian evolution, then how does Watts explain why they died out to begin with, necessitating their “Jurassic Park-style” genetic rejuvenation? If human consciousness were an evolutionary dead end, then surely the vampires would have completely killed homo sapiens off back in the day.

      • Comment by randallsquared:

        The explanation for vampires dying out was architecture. As a side effect of really fast and good pattern recognition, vampires have a problem with seeing certain patterns (like epileptics), and one of those patterns is “anything with a clear right angle”. So when humans started building walls and doors and windows with pane supports in them, vampires couldn’t really come inside, and when humans developed Christianity, with that cross everywhere… It’s vaguely plausible. I liked that part. :)

        • Comment by bdunbar:

          when humans developed Christianity, with that cross everywhere.

          I believe that in that novel vampires died out long before Christians started using the cross – 20 or 30,000 years B.C.?

          Unless you were making a funzie …

          • Comment by randallsquared:

            Maybe I misremember, or maybe it was in the FizerPharm presentation ( http://rifters.com/blindsight/vampires.htm ). :)

            I think it only makes sense that a few survived (through hibernation and/or living in out-of-the-way places) until recently, given the myths about vampires. But, as I say, perhaps that was just what I thought should have been in the universe of the novel, rather than what was. It’s been a while since I read it.

            • Comment by deiseach:

              Still sounds unconvincing to me

              If we are to the vampires as cattle are to us, then when was the last time we were blindsided by bovine architecture?

              By which I mean, surely the vampires could have maintained humans as a livestock source, and severely curtailed any attempts at doing things with right angles.

              I suppose the sleeping-by-day bit didn’t help them with the mobs of torch-and-pitchfork wielding peasants, but by the same token, if they’ve been around since both our species were grunting hominids (so to speak) and they’re ever so much smarter than us, then why aren’t we in the position of the Neanderthals as far as the vampires are concerned?

              Undone by Og and Throg Associates, designers to the discerning and architects of the new design craze – right-angled door jambs?

              • Comment by razorsmile:

                Wattsian Vampirology 101.

                It sounds unconvincing because you don’t have all the facts. I won’t exhort you to read the book; if the premise interested you, you’d be reading it already (It’s available online and free on the author’s website).

                As you already know, these are not supernatural vampires. The only reason the sun bothers them is that it hurts their hypersensitive eyes. Garlic does nothing; stakes kill ‘em for the same reasons they kill us: shock, exsanguination, internal bleeding, etc. They feed on human flesh, not necessarily blood, because homo sapiens can synthesize protocadherin-Y and they can’t. The right-angle thing is inextricably tied to the neurological reasons for their superintelligence.

                Now, as the rest of the animal kingdom shows us, the population ratio of predators-to-prey must always be drastically in favour of the prey or neither species will survive very long. Vampires, like homo sapiens, are mammals: taller on average, longer-limbed, but looking pretty much just like us. More importantly, they breed at the same pace and have, if anything, greater metabolic demands. How do they compensate for this? They hibernate. Insert coffin-myth here.

                • Comment by razorsmile:

                  Re: Wattsian Vampirology 101.

                  Sorry, I posted before I was done.

                  Re: sleeping by day, they didn’t. They hibernate for years, even decades at a time. This allows human populations to grow and gives them time to become campfire tales told by the grandparents instead of an immediate threat.

                  Human architecture took them by surprise because they had no society and, you know, were sleeping most of the time. They are solitary predators who tended to compete rather than cooperate. Humans do the same but in larger groups.

                  I hope this clarifies matters somewhat. If not, you can always … wait, no, I said I wouldn’t say that.

                  • Comment by deiseach:

                    Re: Wattsian Vampirology 101.

                    No, no, go ahead and say it.

                    My quibble is that it’s so darn silly. I have no beef with non-supernatural vampires (or, more like ghouls, if flesh-eaters) but to account for their susceptibility to the Cross symbol by explaining it away as not because of any deity but that right-angles give them the screaming ab-dabs…

                    … duh. Sorry, but it’s so flamin’ silly. Why bother dragging in the cross? Simply ignore any religious connotations and give them some better kind of vulnerability. Why not Kryptonite? Why not the influence of the Greater Fortune? Why not the colour octarine?

                    Let’s face it: Friday night down the Vampire Social Club, these guys walk in and sit at the same table as Empusa, Count Dracula, Countess Bathory, Lord Ruthven, and Prince Mamuwalde (a.k.a. Blacula), and when discussing their great victories and defeats, Mr. Watts’s vampires have to hold their hands up to ‘being taken by surprise by architecture’.

                    Dude, that’s just *embarrassing* :-)

        • Comment by John C Wright:

          “It’s vaguely plausible. I liked that part. “

          I liked it too. Space vampires are up there with space ninjas and space pirates as laved in awesomesauce in my book.

          The idea that there is an undiscovered subspecies of Neanderthal which forms the ancestral source of all our racial nightmares I thought was one of the way-coolest things in the book.

          • Comment by deiseach:

            To be serious

            It’d be interesting to see what happened to the vampires after a century or two, if they did manage to drive away most/all of the humans.

            If they have no society, I imagine they’d better develop one dang fast. After all, they need someone to maintain and build the machines to synthesise the artificial proteins they require (no more tasty human bar-b-q) and the drug to repress the “Ahhhh! 90 degree nastiness! Swoon!” reflex.

            If they are solitary predators that have a habit of prolonged hibernation, who is going to work out the shifts that say “Right, No. 29, No. 156 and No. 8872, you are on repair duty while the rest of us have a nice long kip. See you in fifty years or so”? The ‘lights going out’ demonstrates that relying on machines to keep everything going while the vampires hibernate is not that dependable an option.

            It’s one thing to be a bunch of (can’t say lions preying on zebras since that’s a pack) solitary predators, each with a defined territory, preying on the herd within. When the herd is gone, what are they going to do?

            Unless they turn the tables and enslave a human population to tend to their needs.

            Which is my point: they could have done this in the first place, so…

            I’m harping on this, because vampires in fiction/vampire fiction (which I realise this is not a sample of) are my thing. I suppose I object most vehemently to the ‘Cross = right angles’ rationalisation because it just strikes me as lame.

            Supernatural vampires who fear the Cross because of God and the Devil – fine. Non-supernatural vampires who are unaffected by the cross or any other religious symbol because it’s as much use against them as against our vampire bat mammalian cousins – also fine. Non-supernatural vampires who avoid the Cross because of right angles – come on now, matey.

            So the terror whispered of around the campfire topples over in a fit as soon as he claps eyes on the picture frame on your wall? Oh, yeah, I’m really scared of that superior being who will supplant us dead-end consciousness bearers!

  7. Comment by jimhenry:

    It’s been too long since I’ve read it to comment on your review in detail, since I’ve forgotten many of the details of the plot. One thing I remember pretty clearly, though, was the reasoning behind the aliens construing human communication as an attack.

    Human communication is meaningful to conscious beings, but would be ultimately meaningless to nonconscious beings. However, it looks complex and regular enough that they might “think” (in some sense) that it’s meaningful, and spend a large amount of computing/”thinking” resources on deciphering it, all of which would ultimately be wasted. When they finally figure out that it’s meaningless, since they have no conception of consciousness, the only conclusion they can come to is that other nonconscious beings are attacking them by tricking them into wasting much of their computational resources on a useless task.

    Although I agree with many of your points, and find Watts’ nihilistic worldview repellent, I still enjoyed Blindsight and ranked it pretty high on my Hugo ballot last year. The revelations at the ending, and some of the apparently pointless events you mention, don’t connect with the main plot of the book, but they do contribute to the development of the themes.

    • Comment by arhyalon:

      >Human communication is meaningful to conscious beings, but would be ultimately meaningless to nonconscious beings.

      What does he consider “conscious”? Is a dog conscious? Human communication is meaningful to dogs…not words per se, though some border collies can respond to as many as 1,000 word-commands.

      Can we communicate with trees? Well, that depends upon your definition of the word, but we can get them to respond in ways that we desire, planting, pruning, feeding, watering, breeding, etc. But no one claims trees are intelligent.

      So far as I can tell, this author defined certain things in ways that we have no evidence are, or even could be, the case and then drew logical conclusions from his false to fact associations. An interesting exercise in logic, perhaps, but not related to the real world in a meaningful way.

      What does intelligence mean without the ability to communicate? In the animal world, we see that the more intelligent the animal, the more we are able to communicate with them…dolphins, dogs, and apes are all creatures we can communicate with rather efficiently. By what does one measure intelligence, if not by the ability to comprehend what is in one’s environment?

      • Comment by jordan179:

        So far as I can tell, this author defined certain things in ways that we have no evidence are, or even could be, the case and then drew logical conclusions from his false to fact associations. An interesting exercise in logic, perhaps, but not related to the real world in a meaningful way.

        From the report, the author in fact defined things in an internally-contradictory manner. If the aliens were non-conscious, they could not be conscious of attempts to trick them, either (though they might have an evolved response to “cracking” or “phishing” analogous to a coded entry system shutting you out if too many tries were made). Furthermore, if a system behaves consciously then it is conscious: the system of Chinese Box, occupant and translation cards would be conscious even if none of the components were, just as we are conscious even though none of our individual neurons are. For the aliens to go “We are non-conscious” postulates that they have a consciousness to arrive at this conclusion.

        By what does one measure intelligence, if not by the ability to comprehend what is in one’s environment?

        Exactly. A system which comprehends its environment sufficiently well is conscious (since its own mind and those of its fellows are part of its “environment”) — if it claims to be non-conscious then it is lying, or deluded.

        • Comment by arhyalon:

          >Furthermore, if a system behaves consciously then it is conscious: the system of Chinese Box, occupant and translation cards would be conscious even if none of the components were,

          That was my thought exactly!

    • Comment by dirigibletrance:

      Words like “conception” and “figure out” and “tricking” shouldn’t be used to describe beings that don’t have any thoughts.

      The idea of non-conscious intelligent beings is self-refuting.

    • Comment by John C Wright:

      “Human communication is meaningful to conscious beings, but would be ultimately meaningless to nonconscious beings. However, it looks complex and regular enough that they might “think” (in some sense) that it’s meaningful, and spend a large amount of computing/”thinking” resources on deciphering it, all of which would ultimately be wasted. When they finally figure out that it’s meaningless, since they have no conception of consciousness, the only conclusion they can come to is that other nonconscious beings are attacking them by tricking them into wasting much of their computational resources on a useless task.”

      It still made no real sense to me. As a bit of handwavium, it seemed likely enough, and I thought the idea of creatures who would interpret even attempts to be friendly as hostile a sufficiently creepy little bit of business not to be too critical.

      But, honestly, I could not buy it. In the exact same paragraph where Mr. Watts gives this explanation, he mentions that the aliens cooperate with other species and find allies. Do these allies, one and all, every single one in the galaxy, use the exact same language and format? I cannot even get my Apple computer to talk to my Microsoft PC without a glitch.

      In any communication, there is a chance of garble, of misunderstanding, of sunspots drowning out your radio message. The aliens surely have a “concept” (if they have concepts) or at least a protocol for dealing with meaningless communication. They would ignore it.

      Interpreting it as hostile is not an efficient means of dealing with the “virus” of self-aware speech. In this story, for example, the aliens lost one humnormous terraforming plant, thousands of crewmen, endless time and resources. THAT is supposed to be more efficient than, say, generating one self-aware alien to act as a spokesman, warning intruders away, or offering either a contract or a treaty?

      At a really basic level of how the world works, it does not make sense. These superintelligent creatures operated entirely by rote. By instinct. How or where could they have developed the routine or the instinct to attack intelligent, self-aware creatures? I am not even going to ask about whether such things could evolve naturally.

  8. Comment by jeffr23:

    My own take on the aliens’ use of language was that they had no problem with orders and instructions; language used to manipulate other units into taking particular actions. It’s only where language is used to talk about abstract concepts or other words, or used to entertain (==distract), that they view it as a hostile effort to spread a damaging virus (such as, but possibly not limited to, self-awareness) into their systems.

    • Comment by deiseach:

      Didn’t Samuel Delany do this one already?

      “language is used…as a hostile effort to spread a damaging virus” in “Babel-17″?

      Another book that fried my brain, but in a much more satisfying way than “Blindsight” sounds :-)

    • Comment by razorsmile:

      If I recall correctly, the eponymous language in Babel-17 pretty much gave Rydra Wong and The Butcher superpowers or at least supersmarts. Hardly a damaging virus. The rest was an epiphenomenon :)

      • Comment by deiseach:

        That was the sugar-coating on the pill

        The side-effects were so seductive that you weren’t supposed to notice (or at least not until it was too late) what it was really doing to you.

        I enjoyed Delany’s playing with language in that book, and the way he represented how someone unable to use “I” while speaking of himself (because it was impossible for him to apply the concept to his own being, if I recollect correctly) could forge a form of mutually intelligible communication with another. I don’t mind writers who are smarter than me showing it, so long as they don’t slap me in the face with something really lame.

        So sometimes I know exactly how Watson felt while tagging after Holmes :-)

  9. Comment by mrmandias:

    Saved! From reading a bad book.

    I hate books that start out strong and then end with total lameness.

  10. Comment by timrmortiss:

    Mr. Wright wrote: “Turing (and his supporters) say that the room “as a whole” (whatever that means) “understands” (whatever that means) the Chinese language, and that it means nothing in particular that the man himself does not understand Chinese. Does one braincell in the brain of an English speaker understand English? Both are missing an obvious point. Both are arguing about whether a letter understands what is written in the letter. Whoever filled the filing cabinets and wrote the grammar rules for the Chinese Room understands Chinese. The letter-writer understands the letter, not the piece of paper.”

    The Chinese room is basically a Turing machine, a computer executing a program (the fact that there is a person inside is essentially irrelevant). Let’s consider these two scenarios:

    1. The person who wrote the rules for the Chinese room doesn’t know Chinese himself, but encodes within the rules a program that allows the Chinese room to learn new languages.

    2. The person who wrote the rules doesn’t know Chinese. In fact, he doesn’t even know how brains work. He just uses the Chinese room to execute a sophisticated genetic algorithm whose end product is an artificial intelligence who is able to learn and speak Chinese.

    In both cases, we can’t say that the “letter writer” understands everything that is written in the “letter”.

    (Myself, I think the “systems reply” to the Chinese room argument is pretty convincing. The system as a whole understands Chinese, even if the person inside does not.)
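
    To make scenario 2 concrete, here is a minimal sketch (the phrase, the alphabet, and the scoring “oracle” are arbitrary stand-ins of my own, not anything from Searle or from the novel): a toy genetic algorithm recovers a phrase known only to an outside scorer, even though the search rules themselves never mention it.

    ```python
    import random

    # Illustrative sketch only: the "oracle" stands in for the Chinese speakers
    # the Room interacts with. SECRET is known only to the oracle; nothing about
    # it appears in the search rules in mutate() or evolve() below.
    SECRET = "ni hao ma"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def oracle_score(guess: str) -> int:
        """Report how many positions of the guess agree with the secret phrase."""
        return sum(a == b for a, b in zip(guess, SECRET))

    def mutate(s: str, rate: float = 0.05) -> str:
        """Randomly change a few characters of a candidate guess."""
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    def evolve(pop_size: int = 100, generations: int = 2000) -> str:
        """Keep the best-scoring guess each generation and breed mutated copies of it."""
        population = ["".join(random.choice(ALPHABET) for _ in SECRET)
                      for _ in range(pop_size)]
        for _ in range(generations):
            best = max(population, key=oracle_score)
            if oracle_score(best) == len(SECRET):
                return best
            population = [best] + [mutate(best) for _ in range(pop_size - 1)]
        return max(population, key=oracle_score)

    print(evolve())  # converges on the oracle's phrase, which evolve() never encodes
    ```

    The author of evolve() needs to know how genetic algorithms work, not what the phrase is; by analogy, the point of scenario 2 is that the rule-writer needs to know programming, not Chinese.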

    • Comment by bibliophile112:

      Neither the room nor the person understands Chinese. Turing machines simply act according to physical processes so arranged that they represent ideas being manipulated. The Moon doesn’t understand Newtonian physics, does it?

    • Comment by John C Wright:

      “The person who wrote the rules doesn’t know Chinese.”

      Tell me, please, even in theory, how you get a single piece of paper, much less a room full of filing cabinets full of files of paper, covered with Chinese ideograms able to make intelligent sentences in Chinese, if no one who speaks Chinese at any time wrote them out and put them in the room.

      “He just uses the Chinese room to execute a sophisticated genetic algorithm whose end product is an artificial intelligence who is able to learn and speak Chinese.”

      Who wrote the algorithm? If the intelligence is artificial, who is the artificer?

      No, I am sorry: when, in order to say “the system” understands Chinese, you have to do complicated mental backflips to avoid the glaringly obvious point that a Chinaman who speaks and understands Chinese has to set up the system, you are straining at gnats and swallowing camels.

      In my model of the universe, a letter written in Chinese is written by a Chinaman who speaks and understands what he wrote. I am talking about everyday, ordinary things, things like men writing letters, that everybody understands.

      In your model of the universe a letter written in Chinese understands Chinese because the paper is part of a “system” of understanding, where the word “system” and the word “understanding” have no meaning. You are straining to talk about something that makes no sense, and using obscure and ear-dazzling words to do it, in order to support a patently absurd conclusion, either that machines think or that men do not, or both.

      The problem is one of epistemology. Once we reject wisdom and understanding as sources of knowledge, and only count sense-impressions as a source of knowledge, any object not open to the sense impressions (such as, for example, an object like understanding) leaves us with no sensible account of the object or its properties.

      As if a musician insisted on talking about harpstrings, their length and weight and vibration characteristics, but never once mentioned notes, chords, harmonies, melodies, rhythm, or any other non-physical aspect of playing the harp.

      • Comment by m_francis:

        Once we reject wisdom and understanding as sources of knowledge, and only count sense-impressions as a source of knowledge, any object not open to the sense impressions (such as, for example, an object like understanding) leaves us with no sensible account of the object or its properties.

        Pretty much by definition!

      • Comment by timrmortiss:

        * Tell me, please, even in theory, how you get a single piece of paper, much less a room full of filing cabinets full of files of paper, covered with Chinese ideograms able to make intelligent sentences in Chinese, if no one who speaks Chinese at any time wrote them out and put them in the room. *

        But, I thought that’s what I did earlier! The programmer, even if he doesn’t speak Chinese, may write a program that is able to learn the language by interacting with other Chinese speakers. How can the children of non-Chinese-speaking parents ever come to learn Chinese?

        * Who wrote the algorithm? If the intelligence is artificial, who is the artificer? *

        The point is that writing the genetic algorithm has nothing to do with knowing Chinese.

        * In my model of the universe, a letter written in Chinese is written by a Chinaman who speaks and understands what he wrote. I am talking about everyday, ordinary things, things like men writing letter, that everybody understands. *

        The letter analogy is misleading, because letters can’t change by themselves once they are written. Computer programs (and the Chinese room is really a computer running a program) can do that, which opens the possibility of a program learning things that its programmer never bothered to know.

        * In your model of the universe a letter written in Chinese understands Chinese because the paper is part of a “system” of understanding, where the word “system” and the word “understanding” have no meaning. *

        I use “system” in the sense of “One of my neurons, considered individually, is not conscious. But the whole system, the assemblage of neurons, is”. I use “understanding” in the same sense as in “I understand English”.

        * You are straining to talk about something that makes no sense, and using obscure and ear-dazzling words to do it, in order to support a patently absurd conclusion, either that machines think or that men do not, or both. *

        I am of the opinion that there can be machines who think and are conscious, at least in principle. Note that I’m not saying that every machine thinks. Or that building such a machine is possible in practice. The human brain is orders of magnitude more complex and intricate than the most powerful computer currently available. But neurons are not qualitatively different from logic gates.

        (While writing this comment, I have mentioned this discussion to my girlfriend. She totally agrees with you.)

        • Comment by John C Wright:

          me: Tell me, please, even in theory, how you get a single piece of paper, much less a room full of filing cabinets full of files of paper, covered with Chinese ideograms able to make intelligent sentences in Chinese, if no one who speaks Chinese at any time wrote them out and put them in the room.

          you: But, I thought that’s what I did earlier! The programmer, even if he doesn’t speak Chinese, may write a program that is able to learn the language by interacting with other Chinese speakers. How can the children of non-Chinese-speaking parents ever come to learn Chinese?

          Friend, a child learns through the faculty of understanding. I hope you understand what understanding means. You are using the word “interacting” here to cover up a blind spot. A child does not “interact” nor is a child a “system”. A child is a child. He understands as he learns. A Chinese Room is inanimate. A Chinese Room can interact, but it cannot understand.

          Now we are merely arguing in a circle. You are saying that since a child can learn Chinese, an empty room filled with filing cabinets can learn Chinese, even if the filing cabinets have no Chinese writing in them to start with. Your only evidence to support this strange assertion is to say that an empty room is like a child, and so since a child can learn Chinese, an empty room can learn Chinese.

          And in any case, this only puts my question at one remove. Instead of a Chinese speaker writing out instructions on how to answer in Chinese so as to pass for a Chinese speaker, we now have a Magician who speaks the Universal Syntax, writing out instructions on how to learn Chinese so well that even an empty room can pass for a Chinese Speaker.

          The programmer who sets up the Chinese Room, who writes out the rules and instructions, now not only understands Chinese, but understands the Universal Syntax and the Philosopher’s Language, that mythical set of rules that underpins all other languages, so that that man, whoever he is, not only speaks and reads all languages, but can teach a room full of filing cabinets how to pass the Turing Test in Chinese.

          This is nonsense. No such ur-language or meta-language exists. If it did exist, that is not what the argument is about.

          In any case, the Turing Test was not about how to pass for a baby who had not yet learned to speak; the Searle argument concerned a man set up in a Chinese Room where the rules for Chinese were given him in sufficient complexity that, without knowing Chinese himself, he could pass for a Chinese speaker. No one proposed a hypothetical where neither the man in the room, nor anyone stocking and supplying the room, spoke Chinese. To have the rules of Chinese grammar written out by someone who does not speak Chinese is unimaginable, and has nothing to do with the hypothetical.

          “(While writing this comment, I have mentioned this discussion to my girlfriend. She totally agrees with you.)”

          Then I need say nothing more! As a happily married man, I can assure you that when you are arguing with the female of the species, THE WOMAN IS ALWAYS RIGHT!

          Even Adam listened to his wife.

          • Comment by timrmortiss:

            You wrote: “Friend, a child learns through the faculty of understanding. I hope you understand what understanding means. You are using the word ‘interacting’ here to cover up a blind spot. A child does not ‘interact’ nor is a child a ‘system’. A child is a child. He understands as he learns. A Chinese Room is inanimate. A Chinese Room can interact, but it cannot understand. “

            “The faculty of understanding” is a high-level description of the complex process of learning through interaction with the environment. It can be applied to children or to (hypothetical) sufficiently advanced machines.

            Analogously, “the faculty of seeing” is a high-level description of the complex processing of visual information taking place in our retinas and brains.

            I have just met the Chinese Room (he speaks English as well as Chinese) and he has told me that you are being unfairly dismissive of him. “I understand things as well as a person can!” he said, and continued “Mr. Wright has hurt my feelings. If you tickle my sensory inputs, do I not write ‘He, he’ in the perforated tape? If you shut me down, do I not die? True, my mental machinery (all those pencils, the pieces of paper with notes scribbled on them, the overworked clerk) is a bit more exposed to the prying eyes of skeptics than that of yours, decorously tucked inside your cranium. But ultimately this is not a significant distinction.”

            “I’m telling you I am conscious. Why Mr. Wright thinks a child is conscious and I’m not is anybody’s guess. My logic gates work as well as a child’s neurons at producing consciousness. Here, look at this poem I wrote: [poem omitted for the sake of brevity]”

            (By the way, the main baddies in R. A. Lafferty’s novel “Past Master” profess to lack consciousness, and are Satan-worshippers to boot. It’s an interesting novel, if a little weird.)

            • Comment by rlbell:

              The Chinese room, as classically described, is incapable of initiating a conversation. Therefore, it cannot be an agent in any interaction. Sure, the clerk can occasionally write symbols on a piece of paper and slip it through the OUT slot, but the responses it gets are not going to help it learn Chinese. The friendly people putting any message in the IN box are likely to limit themselves to “Please stop littering”. The Chinese room will not be able to apply its genetic learning algorithm to learn the language of the passersby unless it can convince someone to try teaching it.

              I am not sure that you can teach a language to the Chinese room unless it has windows. However, this violates the Chinese room paradigm, as it adds a context not included in the symbol exchange.

          • Comment by mindstalk:


            Friend, a child learns through the faculty of understanding

            That sounds like “a bird flies through the faculty of flight”.

            We know computers can learn how to do things the programmer doesn’t know how to do, since it’s been done. And Chomsky’s view of a Universal Grammar underneath human language hasn’t carried the day among linguists; statistical learning is still a competitive hypothesis.
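
            As a small illustration of that statistical-learning point (my own toy example, not mindstalk’s or Chomsky’s): a few lines of code can pick up the word-order habits of whatever text they are fed, even though none of those habits are written into the code. The corpus and the bigram scheme here are arbitrary choices.

            ```python
            import random
            from collections import defaultdict

            # Count, for each word in the corpus, which words follow it.
            # No facts about the corpus are hard-coded; they are all learned.
            def train(corpus: str) -> dict:
                words = corpus.split()
                followers = defaultdict(list)
                for a, b in zip(words, words[1:]):
                    followers[a].append(b)
                return followers

            # Generate text by repeatedly picking a word that was seen to
            # follow the previous one.
            def babble(followers: dict, start: str, length: int = 8) -> str:
                out = [start]
                for _ in range(length):
                    nxt = followers.get(out[-1])
                    if not nxt:
                        break
                    out.append(random.choice(nxt))
                return " ".join(out)

            corpus = "the cat sat on the mat and the cat saw the bird"
            print(babble(train(corpus), "the"))  # e.g. "the cat saw the mat and the cat"
            ```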

            • Comment by m_francis:

              1. Friend, a child learns through the faculty of understanding

              2. That sounds like “a bird flies through the faculty of flight”.

              No, because a great deal can be learned without understanding anything at all. Operant conditioning, for example. Rote memorization is another. Physical training. Many people can drive a car without understanding what is going on under the hood.

              • Comment by rlbell:

                While you do not need to understand how various systems of the car work, you do have to understand how the various pedals, levers, and wheel interact to control the vehicle’s speed and direction. You cannot drive a car if you do not understand the difference between P and D (or why that lever has to not be halfway between throws).

                • Comment by m_francis:

                  rlbell:
                  While you do not need to understand how various systems of the car work, you do have to understand how the various pedals, levers, and wheel interact to control the vehicle’s speed and direction. You cannot drive a car if you do not understand the difference between P and D (or why that lever has to not be halfway between throws).

                  m_frank
                  Hmm. Perhaps we ought to have a clear distinction between “knowing” something and “understanding” something. The wonderful dancing bear knows how to dance, but does not understand dancing. The child who memorizes the multiplication table knows the product of 5×3, but does not understand multiplication. Granted, knowing is a prerequisite for understanding – we can’t understand what we do not know – but I suspect the difference goes to the difference between perceiving and conceiving.

            • Comment by John C Wright:

              me:
              “Friend, a child learns through the faculty of understanding”

              you:
              That sounds like “a bird flies through the faculty of flight.

              me again:
              It only sounds that way to someone who has no understanding.

              You use your understanding to understand my words, in precisely the same way a solipsist understands that he is not the only person in the world, even though he says he does not know he is not the only person in the world.

              Not to make too elaborate an analogy, but one cannot argue with a solipsist if the solipsist is unwilling or unable to question his axiom of radical empiricism. Normal, non-nuts Empiricism says that our sense impressions tell us sensible information about the sensible world. Radical nutcase empiricism says that only our sense impressions give us information about the world, and that there is no world aside from the sensible world, and no knowledge aside from empirical knowledge. You then ask the nutcase by what sense impression he comes to know this universal positive assertion of epistemology (which is, by the bye, a non-empirical branch of learning), and he makes nonsense noises.

              Likewise, a solipsist, trapped deep in the dead end of radical empiricism, reasons as follows: 1. Only sense-impression information is true or verifiable. 2. That other men have minds or souls or points of view, awareness or self-awareness, is not open to the sense impressions. 3. Therefore I do not know, and it cannot be known, whether or not other men have minds or souls or points of view, awareness or self-awareness. You then ask the solipsist to whom he is talking or from whom he learned this doctrine, and all he can do is sputter and utter nonsense. Our understanding that other people are real is an axiom; it is a category of thinking we must tacitly assume before we talk to another person.

              If you talk to another person and find out it is a manikin, or if you talk to a voice on the phone for a moment before you realize you are talking to a recording, it should not create an insurmountable epistemological crisis in you. You do not doubt that the category “real people” exist, you only doubt whether, due to incomplete information, you parked the entity “guy on the phone” into the wrong category, “real person” rather than “recording of a real person” or “machine built by real person.”

              Now, likewise here. Someone can pretend that he does not understand what is meant by the word “understanding” and someone can steadfastly deny that the human understanding exists or that it can obtain true knowledge on which we humans act. But the cost for so doing is that one must reject that which is patently obvious and metaphysically inescapable (such as a basic category of human thought, “I think therefore I am”) in order to embrace that which is obscure and paradoxical, if not downright self-contradictory (“All thought is unconscious matter in motion, therefore I do not think, therefore I do not exist.”)

              If I were talking to someone who had never learned anything, who had no understanding, who did not have any faculty in his mind of awareness or self-awareness — in other words, if I were talking to a Chinese Room — I might have an insurmountable difficulty in describing or explaining what the faculty of understanding was, or how it is that humans understand things. But since anyone who understands the comment I write to him must a fortiori have an ability to understand, he himself is a better witness of what understanding is than any words I say.

              • Comment by timrmortiss:

                * Someone can pretend that he does not understand what is meant by the word “understanding” and someone can steadfastly deny that the human understanding exists *

                We are lucky that no such person posted in this thread!

                * or that it can obtain true knowledge on which we humans act. *

                When I go out for a walk, before I decide to carry my umbrella along I must have a 100% assured analytical certainty that it is going to rain. I mean a positively Platonic one; it is worthless to act on anything less than that.

                * “All thought is unconscious matter in motion, therefore I do not think, therefore I do not exist.” *

                If a thing doesn’t have the privileged ontological status you want it to have, then that thing doesn’t exist. You’re taking your ontological Scattergories, and going home!

  11. Comment by bibliophile112:

    I have not read this book, but, on the assumption that I may read it in the future, have not read your review.

    The world view that starts by rejecting the supernatural as superstition ends by rejecting human nature, human feeling, human reason and all human matters as immaterial. There is no room for God — but there is also no room for Man. In the Chinese Room, there is no room for charity.

    I would agree that denying free will would tend to lead to nihilism and, to a lesser extent, hedonism, but where I disagree is that atheism inherently denies free will or self-awareness. I think physical law is distinct from moral law. It’s a cold, ugly Darwinian universe, but that doesn’t mean society has to be.

  12. Comment by 0star:

    I sometimes think how awful it must be for those who hold an ideology or world-view (like Socialism or Materialism) that can only be justified through half-truths. Deep down inside, many of them must realize, dimly at least, the contradictions and logical errors in their beliefs. No wonder they must attack and attempt to denigrate things like Capitalism and Christianity/most other religions, whether it be in novels, the media or private conversations.

    • Comment by randallsquared:

      Ah, yes. How sad it must be for my opponent, who actually knows I’m right, and has some bizarre reason of his own for arguing against me.

      Good luck with that. :)

    • Comment by 0star:

      Let me clarify – this was not meant to be sarcastic, snarky or aimed at anyone specific. Of course this type of behavior is not limited to Socialists or Materialists.

    • Comment by dirigibletrance:

      You know, it’s often said that those who hate others are often just picking out what they hate, and deny, about themselves. I wonder how many radical atheists are actually closet theists in denial.

      Incidentally, I wonder how many televangelists are closet atheists.

      It always seems to me those who are really secure in what they think don’t get stirred up, or get their cage rattled, by those of opposing views.

      • Comment by m_francis:

        Hawking once said that those who are really intelligent do not go around talking about how intelligent they are. “Only losers boast about their intelligence.”

      • Comment by mindstalk:

        Until those of opposing views compete for public policy decisions. Abortion, gay marriage, blue laws, birth control, assisted suicide, genetic engineering, sex education content, cloning, biology education, official school prayers, official endorsements of Christianity or theism… there’s lots of legal and judicial conflict rooted in the differences between materialism and theism, or Christianity and other religions.

        Lots of radical atheists hate Christianity because they were raised Christian, and resent what they perceive it as having done to their minds, e.g. warped attitudes about sex, Catholic guilt, whatever cult issues the ex-Jehovah’s Witness has to deal with. They’re not denying something about themselves, but they do have a grudge.

        I think atheists raised that way tend to be more laid back, until the aforementioned policy conflicts come up.

  13. Comment by lordbrand:

    I think this is all a bunch of sleight of hand.

    The vampire was not self-aware or conscious? I vaguely recall this but assumed I misread. Ignoring the idea of him being controlled by Captain CPU, how does he react to new situations? Who filed his Chinese cabinets?

    So he was a machine? Is Watts arguing that the only difference between human consciousness and vampire processing is merely the awareness of one’s mental processing? So a human is merely a computer with a GUI and monitor, and a vampire is the same computer, but running faster, because it needs no video card, monitor, or GUI software?

    The analogy fails because the only reason to have a monitor is to observe. If there is no one to observe, obviously there is no use for a monitor.

    There is also no use for essentially anything. One could argue that, without consciousness, humans – and all creatures – are merely a flesh and blood DNA virus, mutating and propagating, chunks of matter that move and react to stimuli with no more meaning than a leaf reacting to a gust of wind.

    Pleasure, pain, love, hate would all be gone. Meaning would be gone. Morality would be gone. Rights would be gone. Torture and depravity mean nothing. That victim is only bleating because its pain receptors are triggering rote reaction. It’s no different than it not bleating, not being in pain, just merely a different burst of motion and noise, signifying nothing.

    Close your eyes, sink into the abyss, for it is no different than this world. You won’t – you can’t – know the difference.

    I don’t think of consciousness as 1/0, either or, binary. I think it’s likely more similar to how our consciousness operates – it’s variable. Drugged or drunk or dreaming, I am less conscious, but may react and flail. A cat is probably a reduction of that state. A bug, consciousness collapsed to near nothing with very little range or flexibility. But I can’t assert that it *is* nothing. I do not know whether the bug CPU comes with an inward-facing GUI and monitor – a Mach 64 to my GeForce 8800. Who knows? But it matters.

    • Comment by dirigibletrance:

      Re: I think this is all a bunch of sleight of hand.

      What I’m puzzled by is that guys like this, Eliminative Materialists who want us to believe that consciousness and thought are somehow illusory and epiphenomenal, are simply playing word games and redefining what consciousness and thought are. They’re not really saying or accomplishing anything.

      Nothing changes with how I act, or think, no matter how they define or don’t define it. I’m still the same person. They’re just wasting their time.

      • Comment by lordbrand:

        Re: I think this is all a bunch of sleight of hand.

        Exactly. I think that was my original point when I titled the post, but the post got away from me.

        This is where Wright’s books shine – he gets this. It seems a small point to get, but boy does it ever matter! Especially when dealing with transhumanism and identity dilemmas!

        “Eliminative Materialists” I like that. Beware of those who would define you away! I think, actually, I fear the consciously evil man far less. He might be reasoned with. He might repent. He may have goals I understand. He might be shot with a reasoned missile. The Definitional Villain expunges with a thought, yet is thoughtless! To shoot him, I must load a bullet backwards and hand him his gun. My consciousness to him a mere sabot!

    • Comment by razorsmile:

      Re: I think this is all a bunch of sleight of hand.

      Pleasure, pain, love, hate would all be gone. Meaning would be gone. Morality would be gone. Rights would be gone. Torture and depravity mean nothing. That victim is only bleating because its pain receptors are triggering rote reaction. It’s no different than it not bleating, not being in pain, just merely a different burst of motion and noise, signifying nothing.

      A flower blooming with no one to look at it is a flower blooming. A flower blooming with someone to look at it becomes a poem or a painting — or a food source, if you’re a bee. Meaning exists because we do.

  14. Comment by razorsmile:

    Warning: If you are not John C. Wright and intend to read the book, Here There Be Spoilers.

    I will preface this by saying that I am a huge fan of both Peter Watts and yourself (The Golden Age trilogy is pure unadulterated wonder in textual form and the Children of Chaos sequence is tremendously fun – among other things).

    On the other hand, my icon is a direct quote from Blindsight.

    ————–

    So, Blindsight. Your personal objections to it all come down to a fundamental disagreement between your respective worldviews. There is no room for détente between materialist atheism and Christianity, as this lengthy and very interesting writeup amply proves. I won’t get into that. This, on the other hand:

    The basic rule of writing is known as the gunrack rule. If you show the readers that there is a gun in the gunrack in scene one, the gun must be fired by scene three. Otherwise, leave the gun out of the scene. It serves no point; it is distracting.

    I will oppose.

    Peter Watts violates this over and over again.

    Example: in the opening scene one of the characters announces that mission control betrayed them: they’ve been suckered. Nothing comes of this. Nothing at all. It is not as if the characters thought the mission was meant for one thing and found out it was another. So why put in the line where the mission crew are told they’ve been “suckered”?

    Ah. That was cleared up fairly early in the tale (Siri’s conversation with his dad – a flashback). Their original mission at the time they went to sleep was to go to the Kuiper Belt and investigate a weird object dubbed Burns-Caulfield. By the time they got close, the first two waves of observer-drones had come and gone and Burns-Caulfield had vanished. The ship, having access to that information much sooner than they, opted to change course and head directly for the location it had been narrow-casting to. They (the crew/cast) didn’t know about Burns-Caulfield vanishing until they woke up and looked it up in the ship’s records. Hence, suckered. They weren’t where they thought they were going to be; their plans had not survived contact with the enemy.

    Example: There is a scene where one character hallucinates seeing a bone in the ship’s wiring. Nothing comes of this.

    By the end of the book, we’re told and shown that Siri Keeton’s Synthesist talents had given him a rough extrapolation of what the Scramblers actually looked like long before anyone actually saw them. He didn’t consciously know this, his unconscious mind had done all the heavy lifting and was attempting to get that info front and center. What was the point? The point was that Siri had been underestimating and thus hamstringing himself the whole time. To paraphrase Sarasti the Vampire, he was more than just mass.

    Example: there is a scene where an intense magnetic field makes another character think she is dead, even though she is still able to talk and move. Creepy, no? But nothing comes of this.

    Nothing was needed to come of it. It’s creepy in and of itself (more so, when you realize this is a real-life albeit rare neurological condition) and makes a thematic point rather than a plot-related one.

    Example: the aliens threaten one particular crewmember with violent death. Nothing comes of it. Instead, another crewman dies. Nothing comes of it. Why put in the death-threat? What point did it serve?

    The Scramblers were threatening the person they were speaking to. They killed whoever was available when the time came. Part of the point of the book was that the Scramblers themselves were incredibly smart in some ways and incredibly dumb in others (remember the saccades invisibility scene? It only works on one observer. They also made the same mistake in space.)

    Example: the main character deduces that the soldier is going to mutiny. When the vampire does die, the soldier claims to be innocent of mutiny, and the main character (I think) believes it. So, what was all that false foreshadowing for? Unless the soldier was also unaware of her own decision to commit mutiny? In which case, who cares?

    Unreliable narrator speaking to unreliable actors. Maybe she’s lying, maybe she’s telling the truth, maybe he’s lying. We don’t know for certain and probably never will. What, it’s only okay when Gene Wolfe does it? :D

    Well, I’m done.

    • Comment by dirigibletrance:

      Re: Warning: If you are not John C. Wright and intend to read the book, Here There Be Spoilers.

      If you break the gunrack rule, you’ve left a dangling thread.

      Possibly, sometimes this is ok. If you’re planning on writing a sequel. Or if this is an episode of something and the gun’s going to be fired later in the season. Or whatever.

      But, still, dangling threads make for unhappy readers. Unless you very clearly imply that the story isn’t finished yet, even if it means actually writing “TO BE CONTINUED” on the last page, a dangling thread just sucks.

    • Comment by John C Wright:

      Let us Talk about Gunrack Rule

      Me: ” in the opening scene one of the characters announces that mission control betrayed them: they’ve been suckered. Nothing comes of this. “

      You: “The ship, having access to that information much sooner than they, opted to change course and head directly for ….”

      I understood what the story said happened. I simply do not see in what way this constitutes a betrayal by the high command, or by the ship. In a military situation, a commanding officer gives a command, and the crew obeys it. This is not a betrayal in any sense of the word.

      What was this leading to? It had no effect on anything the crew did later, as far as I could tell. If they mistrusted the ship, no decisions were made based on this.

      Me: “There is a scene where one character hallucinates seeing a bone in the ship’s wiring. Nothing comes of this.”

      You: ” Siri Keeton’s Synthesist talents had given him a rough extrapolation of what the Scramblers actually looked like long before anyone actually saw them. “

      Again, I understood this. What was this leading to? The precognitive subconscious idea of what the aliens looked like did not help Siri or hurt him. It had no plot function. No decision was made based on it. Why was it there? Atmosphere?

      In any case, the hallucination he had of the starship-shape was his precog vision of the scrambler. The bone stuck in the wiring was a different vision, and the aliens did not look like this.

      Me: “an intense magnetic field makes another character think she is dead. Creepy, no? But nothing comes of this.”

      You: “Nothing was needed to come of it… a thematic point rather than a plot-related one.”

      That is the center of my complaint. Too many thematic points, not enough plot points. If it is all theme and no plot, it is not a novel.

      Me: “Example: the aliens threaten one particular crewmember with violent death. Instead, another crewman dies…”

      You: “The Scramblers were threatening the person they were speaking to. They killed who was available when the time came. Part of the point of the book was that the Scramblers themselves were incredibly smart in some ways and incredibly dumb in others…”

      I do not understand how your comment excuses or explains the mistake in writing. I understood the plot point that the death threat was unintentional and therefore meaningless. I understood that the crewman they killed, they killed unintentionally, since no intentional action can be attributed to non-self-aware aliens. I did not understand why the reader was supposed to perk up instead of yawn when the autopilot aliens do one random act after another. The threat was random, hence meaningless; the death was random, hence meaningless.

      A story where one character gets a death threat, but another character is run over by an avalanche, would be much the same thing.

      Me: ” Unless the soldier was also unaware of her own decision to commit mutiny? In which case, who cares?”

      You: “Unreliable narrator speaking to unreliable actors. Maybe she’s lying, maybe she’s telling the truth, maybe he’s lying. We don’t know for certain and probably never will.”

      If we do not know what happened, then we cannot know what the story was. If your narrator is so unreliable that he cannot tell a story, there is no story.

    • Comment by John C Wright:

      Let us talk about the World View

      “Your personal objections to it all come down to a fundamental disagreement between your respective worldviews.”

      Whether this is true in the general case or not, it is simply not true in my case. I once was an atheist of very strict freethinking principles. One of the many forms of mysticism and woolly-headed thinking I rejected, back in my atheist days, was mysticism for materialism. I did not worship God, but I did not worship Matter either. The concept, as I said above, cannot hold the attention of the logical thinker for more than ten seconds.

      A radical materialist does not accept the concept that he himself, in any real sense of the word, exists. He thinks, “I think, therefore, I am not.” One need not be religious to reach the conclusion that self-contradictions are errant nonsense. One need only be reasonable.

      My personal objections to Mr. Watts’s world view are due to the conflict between logic and illogic, not due to a conflict between religion and atheism.

      Here is the point: it is certainly a creepy theme to have alien monsters so alien and so monstrous that they do not even possess consciousness. Fine idea. Wish I had written it. But, there are problems with the execution. You cannot have drama without plot, you cannot have plot without characters, you cannot have characters without free will, good and evil, moral decisions, all that stuff.

      Once you establish that the aliens are nothing more than wind-up clockwork set in motion by nothing and for no reason, then nothing the aliens do has any meaning. They kill one character and spare another. So what? They are not bad guys, any more than an avalanche is a bad guy. They cannot be reasoned with, any more than a wind-blown tree limb that breaks a window can be reasoned with. They are arbitrary matter in motion in an arbitrary universe.

      There is no drama in a nihilistic universe. To have drama, you need meaning.

      • Comment by dirigibletrance:

        Re: Let us talk about the World View

        It occurs to me that the story would have been more compelling had the antagonist been a different sort of non-conscious, unthinking process. Say, a storm, rather than some aliens.

        A story about some sailors trying to survive on a ship in a terrible storm – that could be some compelling stuff. Their antagonist is just the blind forces of nature. What about this makes it more dramatic and compelling than Mr. Watts’s story about some blind, unconscious aliens?

        Why would I rather read “The Perfect Storm” than “Blindsight”?

        • Comment by deiseach:

          That would be more interesting

          The aliens as a blind force of nature. You can’t talk to them, you can’t even establish that you and they are the same kind of thing, you can’t reason or bargain or make alliances with them or even pay danegeld.

          They’re out there, they’re smarter and better-equipped, and you can’t get around, over or under them. What are you going to do?

          As it stands, the aliens are some kind of looming threat, but it sounds like they don’t do anything in the end. Perhaps they are going to be competitors for resources, as a previous commenter suggested, and being superior competitors will be a threat to us – but that’s a long way off, it seems. Unless humans are going to be immediately squeezed for resources between the vampires and the aliens, and unless humans can’t figure out a way to avoid the aliens or go around them somehow, that’s not very threatening of a threat.

          Well, chacun à son goût.

          • Comment by ekbell:

            Re: That would be more interesting

            I remember reading a science fiction story a long time ago (and the story was older than I) where Our Heroes discover that the aliens they were trying to talk with were as self-aware as insects (the comparison used in the story) and that their programming, to use a modern term, was causing them to be hostile to Our Heroes.

            Our Heroes then figure out how to use the aliens’ programming in such a way as to neutralize the threat.

            There was also much heavy-handed propaganda on the dangers of communism (the aliens *obviously* devolved to the level of communal insects by adopting a *communist lifestyle*!). So not a great story, but the central problem was interesting and it was solved.

            • Comment by John C Wright:

              Re: That would be more interesting

              “Our Heroes then figure out how to use the aliens’ programming in such a way as to neutralize the threat.”

              That was what I was expecting from BLINDSIGHT.

              While it makes for an eerie idea that a man suffering hysterical blindness can actually see, the fact is I would much rather have a guy with normal, slow self-awareness, free will and 20/20 vision on my side in a sharpshooting contest than even the quickest clockwork vampire who only saw by instinct.

              I just don’t buy the premise that blindsight is somehow more efficient than sight-sight.

              Even if it were more efficient, why not use your neuro-tech (like they had in this book) to let one part of your brain see and work that way, all instinctive and vampire-like, and leave your reason intact to deal with novel situations, such as, let us say, First Contact?

              The aliens in the book lost one expensive terraforming station and numberless starfish-biomachines because they were too blind to talk to the humans and calm them down, make a deal, or say, “We come in peace, take us to your leader!” and too stupid to interpret humans talking among themselves as anything but an attack. Come on. These guys were supposed to be MORE advanced than us. They cannot make a simple cost-benefit assessment?

  15. Comment by annafirtree:

    Major Plot Spoilers Part I

    Mr. Wright,

    I’ve just finished reading the book, and I have to say I think there is more overall plot than you give the book credit for. But let me respond to some of your specific objections first so that I can make the point.

    in the opening scene one of the characters announces that mission control betrayed them: they’ve been suckered.

    The character didn’t say that mission control suckered them; he just said they’ve been suckered. And then it turns out that the character was right – they WERE suckered – by the aliens. Burns-Caulfield was a pose.

    There is a scene where one character hallucinates seeing a bone in the ship’s wiring. Nothing comes of this.

    This was Keeton, and I took it as a precursor to his full-fledged hallucination of a scrambler onboard. Whatever intuition of his is trying to break through, it starts right there.

    Example: there is a scene where an intense magnetic field makes another character think she is dead, even though she is still able to talk and move. Creepy, no? But nothing comes of this.

    I wouldn’t say nothing at all comes of this, although you might call it a theme point instead of a plot point? What comes of it is basically that it plays into Keeton’s attack-induced revelation about the nature of intelligence versus consciousness; part of that revelation is that “I” gets in the way, so that saying “I am dead” would be freedom for the underlying processes of the brain.

    the aliens threaten one particular crewmember with violent death. Nothing comes of it. Instead, another crewman dies. Nothing comes of it. Why put in the death-threat? What point did it serve?

    I think the death-threat was to show a conflict between the intellectual conclusion that the aliens were just a Chinese room and therefore didn’t mean the threat and the emotional acceptance of that conclusion; that is, it scares us even if we think we know they didn’t mean it. The death of the other crewman set up a situation that I think was a miniature of the overall plot: more on that later.

    It is not clear (to this reader, at least) why the humans continued to escalate their provocations against the aliens.

    Keeton (the main character) gives a bit of explanation for the escalating provocations: the aliens are clearly going to develop into something that Earth won’t be able to stop; now is the only chance to stop a potential threat; therefore they need to do what it takes to ensure Earth’s safety. Since the aliens won’t talk to them in a meaningful way, the only thing they can think to do is to take progressively aggressive actions to learn enough about them to stop them. The aliens struck back because that is what instinctive animals do when attacked.

    Ok, now for the plotline and character development. Keeton starts off being portrayed as little more than an automaton, incapable of feeling emotion or relating to others. He describes himself as a Chinese room. He is compared to the man who cannot actually feel his legs, and so must watch his legs in order to walk (in order to pretend to be normal). As the book progresses, we get more and more hints that this is a lie he tells himself; that in fact he has all sorts of emotions but simply refuses to interact with people in a meaningful way. Now he is on a ship with a bunch of strange folk, on a mission for First Contact, to determine the hostility or lack thereof of the aliens that took a picture of Earth. The aliens refuse to communicate in any meaningful way, so the Earth people take progressively aggressive actions to find out what they’re like, up to kidnapping some and torturing them to learn how they communicate. Among other things going on, Keeton thinks the soldier is planning to mutiny against the vampire. …

    • Comment by annafirtree:

      Major Plot Spoilers Part II

      With complete surprise to Keeton (and the reader), the vampire calls him up to his quarters and attacks him. This attack drives Keeton into himself and rips away his illusions, leaving him as helpless and scared as a woman who has just been raped. After he starts to recover, he is a changed man. Just as a man who learned how to walk by watching his legs will be disoriented and lose his balance if he suddenly begins to have feeling in his legs, so Keeton’s new awareness of himself and of feelings causes him to lose his ability to understand others in the disconnected way he used to. He begins to relate to people: he relates to the second biologist for the first time by having an honest (if angry) exchange with him; he volunteers to take the place of the scared linguist on an EVA when his mind is telling him not to get involved. The vampire at some point explains that Keeton needed to have the truth come on him all at once, because otherwise he would have built up more lying justifications to explain it away. The vampire also explains that Keeton had been projecting his own emotions onto others in order to avoid them; it was not the soldier, but Keeton, who wanted to mutiny against the vampire.

      In the meantime, the aliens have been provoked enough to try to destroy the Earth ship. The vampire tells Keeton that he is sending him home to warn Earth; Earth needs to know that consciousness is not an evolutionary survival trait, and that self-aware language will be perceived as a threat, so that it can prepare itself to deal with any future waves of non-self-aware aliens that should come to the Solar System. Keeton realizes that the vampire is planning to kamikaze the ship to destroy the immediate threat of the aliens already here.

      Everything goes haywire: Keeton realizes that the aliens have implanted a new personality that has taken over the linguist (quite an accomplishment for a species that doesn’t have conscious personality); the linguist, in addition to flying the ship within reach of the aliens, presumably was the one who spiked the drugs that allowed the vampire to look at right-angles. The vampire goes into an epileptic fit and then one of the soldier’s robots kills him. Keeton briefly thinks that the soldier must have been planning a mutiny after all, but when the soldier denies it, he believes her. Then the ship admits to having killed the vampire, because the epileptic fit caused the ship to lose control of him.

      Keeton escapes and the ship attacks the alien vessel with all it has. Keeton does not think the aliens will retaliate, because he thinks that the two ships successfully annihilated each other. Game-theory dictates that far-off aliens will be no more likely to retaliate for the death of this bunch than a dandelion is likely to retaliate for the death of its wind-blown seeds. But more seeds will come, from one source or another, so Earth needs to be warned of the Universe’s opposition to conscious self-awareness.

      However, on his way home, radio signals from Earth show that it is too late. First, Heaven – the bastion of consciousness for its own sake – is destroyed. This is not a random disaster, but the first step of the destruction of consciousness on Earth, from within. The sudden renewal of birth rates is another warning; in the far-off past, the restoration of the human population is what allowed the vampires to come back from their long sleeps. And then that disaster strikes; the not-quite-self-aware vampires, the opponents of human consciousness, take over.

      So, Keeton, as he has finally come to appreciate consciousness, especially his own, is faced with the destruction of consciousness in the universe. It is as if Romeo spent the whole book refusing to acknowledge his love for Juliet; and then, when he finally realizes his love for her, she gets hit by a car and dies. (The death of the first biologist was a miniature of this: Keeton did not interact with him in a meaningful way; only after the guy’s death did he finally realize that he was a friend).

  16. Comment by genesiscount:

    Consciousness as “evolutionary dead end”

    You know, it occurs to me that this might be a perfectly accurate description of consciousness as a phenomenon. Isn’t the entire point of consciousness that it enables us to transcend evolution? — i.e., we become capable of deliberately altering the environmental factors that would self-select for fitness by survival, to replace them with factors of our own creation that self-select for fitness by criteria we determine?

    To pick an immediate example, I’m very badly nearsighted, and have been since childhood. As a non-conscious animal member of a tribe I would most likely have gotten et by some beastie or other before spawning and perpetuating my poor-sight genes. As a conscious member of a civilization capable of creating contact lenses, I’m a valuable worker in an information economy and become more likely to marry and propagate than the perfectly-sighted but maladjusted jock who has no social skills or ambition beyond his high school football games.

    Consciousness is not a dead end in evolution; it’s the end of evolution — and that’s its entire point.
