The Parable of the Adding Machine

I suppose we science fiction writers are to blame for the modern phenomenon of people who think that computers think, that adding machines add, and so on. I have never seen a version of Pinocchio done where the puppet was never brought to life by the fairy, but Geppetto was merely convinced by BF Skinner or Karl Marx or Lucretius that the puppet was alive on the grounds that it moved when its strings were pulled.

Like this crazy version of Geppetto, there are some men these days who are convinced that since computers move numbers around, therefore they think, therefore humans (who think) are nothing but computers.

But even if the logic were sound, the premise is wrong. A computer does not literally move numbers around. That expression is merely a metaphor.

What the computer is literally doing is moving things around: gears, in an adding machine; electrons, in an electronic calculator. The numbers are symbols whose meaning we assign to them.

Suppose I were to make a simple adding machine that only performed one operation. If I write a straight line on one face or cog of a wheel, and do this again for a second wheel and for a third, and I moreover cunningly place a fourth wheel next to them, connected by linkages, so that turning the first three wheels to the straight line pulls the fourth wheel until the face or cog bearing a symbol that looks like a sideways trident is showing, why, then, I have a “calculator” that can perform one operation: 1+1+1=3.

But there is no number “one” anywhere in the wheels. That number is something I contemplate with my mind and which I (and everyone else who decides to use Arabic numerals) assign or attribute to the straight line. Again, the numeral III is one that I (and everyone else) assign to the trident-looking squiggle.

Please note that there is no addition sign nor equals sign in my example. The man using the simple adding machine assigns those things to the positions of the wheels, using ‘position’ as a symbol or sign the same way the squiggles are symbols or signs.
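
To make the point concrete, here is a minimal sketch in Python (the squiggles and both tables are made up, standing in for the wheel faces and linkages). The machine proper is nothing but a table from marks to a mark; the meanings live in a second table that the user, not the machine, supplies.

    # The machine proper: a mapping from wheel-face marks to a wheel-face mark.
    # Nothing in this table is a number; these are only squiggles.
    MACHINE = {("|", "|", "|"): "Ш"}  # three straight lines pull up the trident-looking face

    # The meanings, which the user assigns; the machine never consults this table.
    MEANING = {"|": 1, "Ш": 3}

    faces = ("|", "|", "|")
    shown = MACHINE[faces]  # pure mechanism: marks in, mark out
    # Only under the user's interpretation does this become "1 + 1 + 1 = 3":
    assert MEANING[faces[0]] + MEANING[faces[1]] + MEANING[faces[2]] == MEANING[shown]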

Now, again, suppose I take a second set of wheels, cunningly interconnected to turn another face or cog when the faces of the first set are moved into certain positions, and so perform a second operation; suppose again that I make a third set of wheels, or as many wheels as there are entries in a multiplication table. Provided my wheels do not slip any gears, I have an adding machine which can help me calculate.

If I am particularly ambitious, I can add hooks or pins or punch cards to the faces of my machine of many wheels, and have the wheels act like the tumblers of a lock, so that certain combinations of wheel turnings will set in motion other appliances connected to the machine, such as alarm clocks, photographs, typewriters, telegraphs, telephones, gramophones, whistles and bells and even the post box. But no matter how elaborate, the thing is still a clockwork. For reasons of saving space, I can do that exact same thing with a cellphone, using electrons rather than wheels and gears: but the nature of the machine is the same.
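
Continuing the sketch (again, every table here is hypothetical): more wheels only mean a larger lookup table, cut once by the craftsman the way gear teeth are cut once; the hooks and pins amount to a second table mapping certain combinations to side effects. Either way it remains clockwork.

    # The many-wheeled machine: a bigger table of face positions, nothing more.
    ADD_WHEELS = {(a, b): a + b for a in range(10) for b in range(10)}

    # Hooks and pins: tumbler-lock combinations wired to other appliances.
    TRIGGERS = {(9, 9): "ring the bell", (0, 0): "stamp the punch card"}

    def turn(a, b):
        face = ADD_WHEELS[(a, b)]      # mechanical lookup; no arithmetic is "understood"
        effect = TRIGGERS.get((a, b))  # a matching combination trips a linkage
        return face, effect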

Now before we wax poetical about how machines think and how it is only a matter of technical tactics to get them to have free will, let us contemplate that a computer is something more complex but otherwise no different than my simple calculation machine of four wheels. Let us not pretend the four wheels understand the abstractions we call numbers, or know their values, or add the values together to deduce the answer.

Nothing like that is going on. Having a dozen four-wheeled machines, or a million, or a googolplex will not change the nature of the process, which is a mechanical motion unrelated to thinking in any way.

66 Comments

  1. Comment by Ed:

    I have never seen a version of Pinocchio done where the puppet was never brought to life by the fairy, but Geppetto was merely convinced by BF Skinner or Karl Marx or Lucretius that the puppet was alive on the grounds that it moved when its strings were pulled.

    You must never have seen A.I. I envy you.

    Ed.

  2. Comment by Malcolm Smith:

    Of course, machines will never be able to think! They may imitate thinking. But the fact that a machine reaches the same conclusion as a man does not mean that it gets there by thinking, any more than the fact that it reaches the same destination as a man means that it necessarily got there by walking. Just stop and contemplate what thinking entails. Normally, if you are not focussed on anything in particular, your mind wanders around, following the trail of one idea to the other, until you make a decision to think about something specific. Would it ever be possible to write that sort of program into a computer? Would anyone want to?
    Apart from that, it is plain common sense that any brain which is constructed of nuts and bolts, silicon chips, or positronic interactors, must, in the final analysis, operate differently from one constructed of flesh and blood.

  3. Comment by Tyrrell McAllister:

    It’s true that you assign meanings to the states of your machine. But it’s important to note that you do not have complete freedom over which meanings you assign. The following allegory conveys the sense in which your freedom is constrained.

    Suppose that you stumble across a mechanical contraption one day. You turn the dials of the contraption in a manner to which you ascribe the meaning “What is the sum of two and two?” The contraption proceeds to grind through a physical process, which concludes with the display of a certain arrangement of strokes that looks like this: 4. You then ascribe meaning to this physical state — namely, “The answer to the question posed is four.”

    When you did this, you were using a particular method of ascribing meanings to (1) the physical arrangement of the contraption’s dials, and to (2) the strokes displayed following that arrangement of the dials. But does this method of ascribing meanings always work? After trying many other sums, you eventually conclude that, yes, this method of ascribing meaning to the contraption’s states actually works. That is, for any two integers m and n, if you turn the dials accordingly, then the strokes displayed mean (to you) that number which is in fact the sum of m and n.

    So, that particular method of ascribing meanings works. What about other methods? After some more experimentation, you discover that a few other methods also work. For example, if you interpret the input as “Which power of 2 is the product of the mth power of 2 and the nth power of 2?” then the meaning that you assign to the output always gives you the correct answer.

    But most methods of mapping meanings don’t work. For example, you try to interpret the input to mean “How many lines are in Act m of the nth play that Shakespeare wrote?” But the meanings that you ascribe to the outputs of the contraption never (or very rarely) match the true answer to your question.

    After many experiments of this kind, you conclude that, while you have some freedom over which meanings to assign, that freedom is highly constrained. Most meaning-assignments result in uninteresting interpretations of the machine’s behavior. Only a very few meaning-assignments are of any use (e.g., by helping you to use the machine to solve arithmetic problems). Only a very few meaning-assignments are workable.

    End of allegory.
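
    The allegory fits in a few lines of Python (all names hypothetical; plain addition stands in for whatever the contraption's gears do). The same physical mapping supports the first two meaning-assignments but not the third:

      # The contraption: all it physically does is map dial settings to strokes.
      def contraption(m, n):
          return m + n  # a bare state transition; no meaning attached yet

      # Interpretation 1 (workable): dials mean integers, strokes mean "their sum".
      assert contraption(2, 2) == 2 + 2

      # Interpretation 2 (workable): dials mean exponents, strokes mean "the k with
      # 2**m * 2**n == 2**k"; this works because 2**m * 2**n == 2**(m + n).
      m, n = 3, 5
      assert 2 ** contraption(m, n) == 2 ** m * 2 ** n

      # Interpretation 3 (not workable): dials mean "Act m of Shakespeare's n-th
      # play", strokes mean "that act's line count". The outputs almost never
      # match the true answers, so this meaning-assignment is useless.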

    When I say that computers may someday become intelligences who choose actions to achieve their goals, I am reasoning as follows. First, as the above allegory demonstrates, a computer’s behavior may be amenable to only a few workable interpretations. It is therefore conceivable that, someday, a computer will behave in such a way that essentially only one interpretation of its behavior works. Second, it is at least conceivable that this interpretation will amount to saying “This computer is the physical realization of an agent who is choosing actions A, B, and C to accomplish goals X, Y, and Z.”

    • Comment by Gigalith:

      There are an infinite number of possible interpretations of any such system, even one where all the possible inputs and matching outputs are known.

      Consider a machine that takes a previous state and produces a new state. It could be counting forward, counting backward, counting through the alphabet, counting seconds, or counting nothing. With compression, or a very strange but unknown symbol table, it could be simulating fluids, simulating a brain, simulating an entire universe or simulating nothing. A machine doing any of these could be doing the exact same physical operations, but there is no way to know which it is doing.
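
      For instance (a toy sketch, names hypothetical), one and the same transition function can be read as counting forward, counting backward, or counting through the alphabet, depending entirely on which symbol table the observer brings:

        # One physical process: the machine steps from state to state.
        def step(state):
            return (state + 1) % 26

        # Three symbol tables, each consistent with the same physical operations:
        read_forward = lambda s: s                    # counting up from 0
        read_backward = lambda s: 25 - s              # counting down from 25
        read_alphabet = lambda s: chr(ord("a") + s)   # counting through the alphabet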

      So does an entire universe pass in and out of existence every time I count to ten?

      • Comment by Tyrrell McAllister:

        Consider a machine that takes a previous state and produces a new state. It could be counting forward, counting backward, counting through the alphabet, counting seconds, or counting nothing. With compression, or a very strange but unknown symbol table, it could be simulating fluids, simulating a brain, simulating an entire universe or simulating nothing.

        It’s true that infinitely many interpretations exist as abstract mathematical maps between meanings and states. This is why I emphasized the workability and usefulness of the interpretations.

        The successive arrangements of the molecules in a bowl of soup on my desk over the next ten minutes may be in one-to-one correspondence with the states of the stock market over the next ten years. If I could only interpret those states in my bowl of soup as future stock prices, I could be rich! But in fact I can’t read those states in that way. Assuming for the moment that I could identify where the molecules in the soup are at each moment of time, I still wouldn’t know how to read true future stock prices off of them.

        So, while one might say that such an interpretation (soup states as true future stock prices) exists in an abstract mathematical sense, this interpretation is not in fact available to me in any useful or workable sense. Interpretations like this therefore aren’t relevant to when I would consider a computer to be implementing a mind.

        • Comment by Patrick:

          “This is why I emphasized the workability and usefulness of the interpretations.”

          Workability and usefulness are design features, though, not principles of calculation, still less facts about things. They are evaluations, thoughts about principles other than the given inputs and outputs.

          In fact, it does not matter how many gears and gadgets a given design requires. The operator is the one finding two to be the sum of one and one – as well as evaluating the efficiency of the box, the reliability of its output, etc.

          • Comment by Tyrrell McAllister:

            In fact, it does not matter how many gears and gadgets a given design requires. The operator is the one finding two to be the sum of one and one – as well as evaluating the efficiency of the box, the reliability of its output, etc.

            The operator may evaluate the efficiency, but he doesn’t have arbitrary power to choose what that efficiency is. (Just try to evaluate a bowl of soup as an efficient predictor of stock prices.) You could decline to make the evaluation altogether, but that may be impractical. Perhaps you find it very important, for your purposes, to know whether you can use some arrangement of atoms to solve arithmetic problems easily. Similarly, you may one day find it important to understand some computer as an agent intelligently pursuing its goals, goals which you cannot usefully understand as goals of the computer’s maker or operator.

            • Comment by The OFloinn:

              That is the behavioral error.

              The characteristic mistake in the study of consciousness is to ignore its essential subjectivity and to try to treat it as if it were an objective third person phenomenon. Instead of recognizing that consciousness is essentially a subjective, qualitative phenomenon, many people mistakenly suppose that its essence is that of a control mechanism or a certain kind of set of dispositions to behavior or a computer program. The two most common mistakes about consciousness are to suppose that it can be analysed behavioristically or computationally. The Turing test disposes us to make precisely these two mistakes, the mistake of behaviorism and the mistake of computationalism. It leads us to suppose that for a system to be conscious, it is both necessary and sufficient that it has the right computer program or set of programs with the right inputs and outputs. I think you have only to state this position clearly to enable you to see that it must be mistaken. A traditional objection to behaviorism was that behaviorism could not be right because a system could behave as if it were conscious without actually being conscious. There is no logical connection, no necessary connection between inner, subjective, qualitative mental states and external, publicly observable behavior. Of course, in actual fact, conscious states characteristically cause behavior. But the behavior that they cause has to be distinguished from the states themselves. The same mistake is repeated by computational accounts of consciousness. Just as behavior by itself is not sufficient for consciousness, so computational models of consciousness are not sufficient by themselves for consciousness. The computational model of consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modelled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.
              – John Searle, “The Problem of Consciousness”
              http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.prob.html

              • Comment by Sylvie D. Rousseau:

                Thanks a lot for that quote. I found this statement particularly useful: “The Turing test disposes us to make precisely these two mistakes, the mistake of behaviorism and the mistake of computationalism.”

              • Comment by Tyrrell McAllister:

                That is the behavioral error.

                I don’t think so.

                First, I’m not just talking about systems that behave as if they were conscious. I am talking about (the possibility of) systems that behave in such a way that I am unable to understand them intelligibly except by thinking of them as self-aware agents seeking goals. This is a stronger condition.

                Second, I’m not even saying that every system meeting this stronger condition must, by definition, be conscious. (And I’m certainly not saying that my interpretation of it as conscious is what makes it be conscious.) I admit the possibility of unconscious systems that behave in the way that I’ve described.

                What I am saying is that I will have little choice but to attribute consciousness to such a system, because, ex hypothesi, that will be the only workable interpretation of its behavior that will be available to me.

                • Comment by Patrick:

                  “..that will be the only workable interpretation of its behavior that will be available to me.”

                  This isn’t necessarily true. Personhood (and, inter alia, that prized intersubjectivity quotient that you propose some behavior is indicative of) isn’t, and can’t be, merely a product of behavior. There is no reason to assume that, even in theory, you have the ability to determine exhaustively what is and isn’t subjectively conscious among all the things that are. Or even that you can determine at what point (or at what rate) a given thing becomes conscious, without an appeal to normalizations from your own experience.

                  • Comment by Tyrrell McAllister:

                    There is no reason to assume that, even in theory, you have the ability to determine exhaustively what is and isn’t subjectively conscious among all the things that are. Or even that you can determine at what point (or at what rate) a given thing becomes conscious, without an appeal to normalizations from your own experience.

                    I’m pretty sure that I haven’t assumed this. At any rate, I certainly don’t have this ability in practice, and my argument has been all about what I can do in practice.

                    • Comment by Patrick:

                      Could you describe the process more, then? I agree that there IS a reason not to interpret soup as AI, but I don’t view that analysis as being based on an evaluation of the efficacy of its goal-seeking behavior, or its computational efficacy, etc.

    • Comment by docrampage:

      I don’t believe that you can come up with any non-subjective definition of “assignment that works”. Any assignment of values to the possible input and output states will have the mechanism implementing some finite function. And furthermore, any such finite function will be the finite restriction of an infinite number of other functions, and you can view the mechanism as partially implementing any one of that infinite number of functions, giving the mechanism an infinite number of interpretations.

      I don’t think that you can come up with an objective criterion for preferring any of those interpretations – or at least no such criterion that would be plausible as a component in a law of physics. Keep in mind that since you are claiming that some interpretation has real physical effects (namely that it produces the physical phenomena of consciousness), you have to come up with a way of picking out that interpretation that might plausibly have physical effects.

      As with most proponents of algorithmic theories of intelligence, you are simply confusing your perceptions of a machine with the physical properties of the machine.

      • Comment by Tyrrell McAllister:

        I don’t believe that you can come up with any non-subjective definition of “assignment that works”. Any assignment of values to the possible input and output states will have the mechanism implementing some finite function. And furthermore, any such finite function will be the finite restriction of an infinite number of other functions, and you can view the mechanism as partially implementing any one of that infinite number of functions, giving the mechanism an infinite number of interpretations.

        Not an infinite number of workable interpretations. Workable interpretations have to be finite. For example, I couldn’t use a calculator if it took me an infinite amount of time to decide how to interpret the strokes in the calculator’s display.

        I don’t think that you can come up with an objective criterion for preferring any of those interpretations – or at least no such criterion that would be plausible as a component in a law of physics.

        Something is keeping you from interpreting your desktop computer as an accurate stock-market predictor, even though its states might be in one-to-one correspondence with future states of the stock market. Do you not consider that “something” to be an “objective criterion”? Do you not consider the “something” to be a physical property of your computer, of how its atoms are arranged? If not, what’s stopping you from “interpreting” yourself rich?

        Keep in mind that since you are claiming that some interpretation has real physical effects (namely that it produces the physical phenomena of consciousness), you have to come up with a way of picking out that interpretation that might plausibly have physical effects.

        I should clarify something here. Take, for example, my interpretation of you as conscious. I’m not saying that my interpretation of your behavior as the actions of a conscious person is what makes you conscious. So I’m not “claiming that some interpretation has real physical effects (namely that it produces the physical phenomena of consciousness).” That, I think we agree, would be silly.

        I have the causation running the other way. One of the effects of the physical phenomena in your body is that I am unable, in practice, to understand them as anything other than the actions of a conscious agent. That is, the physical phenomena compel my interpretation (practically speaking), not the other way around. Meanwhile, those same physical phenomena also cause your own consciousness. That is my view.

        • Comment by docrampage:

          Something is keeping you from interpreting your desktop computer as an accurate stock-market predictor, even though its states might be in one-to-one correspondence with future states of the stock market. Do you not consider that “something” to be an “objective criterion”?

          Obviously not. In fact, its states are in one-to-one correspondence with future states of the stock market. What keeps me from interpreting it that way is that I don’t know the interpretation.

          Do you not consider the “something” to be a physical property of your computer, of how its atoms are arranged? If not, what’s stopping you from “interpreting” yourself rich?

          My lack of knowledge is a subjective fact about me. It is not a fact about the computer or about how its atoms are arranged. What’s stopping me from interpreting myself rich is that I don’t know the interpretation, obviously.

          • Comment by Tyrrell McAllister:

            Obviously not. In fact, its states are in one-to-one correspondence with future states of the stock market. What keeps me from interpreting it that way is that I don’t know the interpretation.

            Yes, but why don’t you know the interpretation?

            Suppose that your computer were replaced with a stock ticker that printed marks on a piece of paper that, when interpreted in the manner conventionally used by English-speakers, read, “The following are the prices of stocks over the next ten years…” Then there would be no barrier to your knowing the right interpretation.

            But we accomplished this by changing only physical facts about the situation. Hence, something about the original physical set-up constituted a barrier to your knowing the right interpretation.

            ETA: Let me try another tack. Do we agree that you (1) don’t know the right interpretation, (2) don’t know how to figure out the right interpretation, (3) don’t know how to figure out how to figure out the right interpretation, (4)… and so on ad infinitum?

            I don’t really care whether you want to call the conjunction of (1), (2), (3), etc. an “objective criterion”. Regardless, it is a constraint of a kind that compels you to interpret your computer as something other than a stock market predictor. That suffices for the purposes of my argument.

            • Comment by The OFloinn:

              when interpreted in the manner conventionally used by English-speakers

              It would seem that semantics can always be had from syntax if semantics is imposed on the syntax behind the curtain.

              • Comment by Tyrrell McAllister:

                It would seem that semantics can always be had from syntax if semantics is imposed on the syntax behind the curtain.

                Indeed. That is why I’ve always tried in this conversation to put the origin of the semantics squarely in front of the curtain — as in, for example, the phrase you quoted.

            • Comment by Patrick:

              “Then there would be no barrier to your knowing the right interpretation.”

              The difference between a computer, a weather pattern or a bowl of soup is a matter of subjective interpretation, if you grant me that ‘subjects’ are naturally what can know, and knowledge is at least more than a correspondence of verity re: status between objects.*

              Let’s say your soup bowl really will, of natural necessity and as a matter of unalloyed fact, output 10 years’ worth of stock prices, based solely on input from the microwave.

              Can we say that it’s a computer, a crystal ball, or lunch?

              Is your creamy tomato bisque calculating the future, determined by future states by some physical means undiscovered, or is it what it is?

              Let’s say we live in a post-economic world where there is only one stock, and it never changes value. However, you’ve discovered that your soup is uncanny – it’s always right about the price! Does that change what you think re: its predictive / computational / cognitive powers? If so, why? If not, might you be concluding this hastily?

              You brought up physical facts about the body being evidence of consciousness – does the following count as a counter-example? Anybody measuring your heartbeat wouldn’t fail to be amazed at its regularity, but nobody would attribute consciousness to you based on the fact alone. Actually, looking for things that resemble ourselves, we’d be DIS-inclined to include you among the conscious. Consciousness as we call it means finding something we expect – but merely getting what we expect doesn’t mean we’ve discovered consciousness in an object, and certainly, not getting what we expect doesn’t falsify the consciousness condition of something that may have it.

              • Comment by Tyrrell McAllister:

                Consciousness as we call it means finding something we expect – but merely getting what we expect doesn’t mean we’ve discovered consciousness in an object, and certainly, not getting what we expect doesn’t falsify the consciousness condition of something that may have it.

                I’m having trouble following you here. In what way does consciousness mean “finding something we expect”?

                (Of course I don’t think that every individual physical fact about the human body is indicative of consciousness. But I’m not sure what your point is beyond that.)

                • Comment by Patrick:

                  “finding something we expect” = trivially, I think we commonly suppose that something is conscious when it replies to some species of our inputs within expected bounds. But if you are listening for thought and hear a heartbeat, you haven’t falsified consciousness in an object. And to restate it another way, if reading your soup gives you the right stock prices every time, the canniness of its success won’t, all by itself, tell you if it is computer, clairvoyant or Campbell’s.

                  I may be wrong, but I wonder if in principle, we could ever grant an object’s claim on subjectivity merely by repeatedly applying that kind of test. When you simplify the inputs and outputs (the degree of canniness involved) to the degree I mentioned, I tend to think the criteria of consciousness as a band of expected responses doesn’t rise above the level of numerology.

                  So if efficacy or that heuristic method alluded to above or some back-of-the-envelope determination based on whether an interpretation of soup outputs are workable won’t reliably give you consciousness or falsify it, subjectivity, in order to be available to analysis at all (and maybe it isn’t, or if you’re a materialist, sure it is, but that’s a whole other set of problems), must be something else in principle.

                  You mentioned elsewhere that I may be misunderstanding you though. It’s likely. Could you give us more of what you think on this topic?

                  • Comment by Tyrrell McAllister:

                    … subjectivity, in order to be available to analysis at all (and maybe it isn’t, or if you’re a materialist, sure it is, but that’s a whole other set of problems), must be something else in principle.

                    There is a subtle, but important, distinction between (1) saying what subjectivity is, and (2) saying what it would take for me to conclude that you (or a computer) have subjectivity.

                    I don’t claim to be able to do (1). I accept that it is a problem. I don’t dismiss it as meaningless (on behaviorist grounds, say), but it’s not a question that I’m claiming to address directly.

                    Rather, I’m trying to see what is different about the way in which I experience those entities to whom I attribute subjectivity, versus those I don’t. The difference appears to be the way in which I am able to gain some degree of understanding of them.

                    For some entities, it never crosses my mind to think of them as goal-seeking agents. I get by just fine without thinking of them that way, and it doesn’t look like I’d get any additional mileage out of trying to think of them that way. Other entities behave in such a way that I am unable to understand them except as goal-seeking agents.

                    Then I ask myself, “Could it ever happen that the way in which I am able to understand some computer’s behavior also makes me attribute subjectivity to it?” As far as I can tell, the answer is “Yes, this is possible.” I don’t see anything that would rule it out.

    • Comment by John C Wright:

      “It’s true that you assign meanings to the states of your machine. But it’s important to note that you do not have complete freedom over which meanings you assign.”

      Forgive me, but I do not see any relevance to the argument. Even if the assignment were so constrained that there was one and only one MEANINGFUL assignment any observer could make to the machine states, nonetheless that assignment, and that meaning, would be a mental act and not a physical object.

      The argument is not whether human thought is unpredictable. The argument is whether “checkmate” or “twice two is four” or “all abstraction can be reduced to physical properties” is an abstract concept, and therefore not an object like a stone, with mass, extension, duration, temperature, and other physical properties.

      • Comment by SFAN:

        One might as well ask why minds need brains at all.

        A blessed Lent to all.

        (P.S.: oh, now there’s an editing feature… :) )

      • Comment by Patrick:

        Tangentially, I think it bears mentioning that it’s only minds that may give incorrigibly physical things their semiotic renditions to begin with. A tree is ‘just’ a tree until a tree may be a symbol of grandeur – then a tree is a tree, or a grand tree. I suppose that it’s only when a king-piece is ‘about’ a king that a checkmate can refer to something.

        I’m not saying that calling it something else would change the meaning of the game – I’m saying that the meaning elegantly obtains while the mind commodifies two things that are not related (little bits of wood, rules of engagement) with a symbolic scheme apt to describe every possible relationship, complete and deserving of a name itself. The nomenclature makes evident the potentialities. The premise of chess is assumed in order to apply the judgement ‘checkmate’ to a given arrangement. So ‘chess’ can be a portable analogy of arrangements or implications-about – a symbol, call it – we can find in places where there is no board, pieces, etc. But ‘chess’, played or alluded to, at any scale or in any mode of abstraction, has no substance; bits are not chess, and rules are not chess, players are not pieces, the names of chess-things are not the things themselves, the names of chess-things are not arbitrary, chess-calculations are not philosophy on chess, and so on.

      • Comment by Tyrrell McAllister:

        Forgive me, but I do not see any relevance to the argument.

        Perhaps the mistake is mine. I took this post to be an argument for this claim from the previous post:

        Despite the eagerness with which modern materialists confuse the objects on which symbols are inscribed, or to which symbolic meaning is attributed, with the material object itself, in their eagerness to pretend we are all Tin Woodmen, in reality even the most advanced of computers neither reflects nor cogitates nor acts of its own volition, no, not even so much as an amoeba acts.

        In that post, you used this claim to argue for the following thesis:

        As far as real science is concerned, we are as likely to create C3PO, or any other self-aware, talking, thinking and acting computer, as we are to create the Tin Woodman of Oz by the process of chopping off one body part at a time until all are replaced by the tinsmith (as described by L Frank Baum, the Royal Historian of Oz, in a grisliness odd for a children’s book.)

        I am trying to show why the argument in the present post does not suffice to establish the thesis from the previous post. Perhaps I should have replied to that post.

        Here is why my argument is relevant:

        First, once you call a thing a computer, you are already adding your own interpretation of its behavior to your conception of it. You are already, in its very name, understanding it not just as an arrangement of atoms, but as something that performs computations upon inputs to produce outputs. Thus, once you are thinking of a computer, you are thinking of a particular arrangement of physical stuff together with a particular way to give meaning to what it does.

        Second, although you have provided your interpretation of the behavior of the physical stuff, you were highly constrained in how you could interpret it. It could one day happen that you have essentially only one workable interpretation available to you.

        Third, this interpretation could amount to understanding the computer as “self-aware, talking, thinking and acting”, contrary to your previous post.

        Put more briefly: We can create computing computers. As you point out in this post, that way of talking presumes an assignment of meaning on our part. Very well, let us make this assignment of meaning. Now we can understand the computer as computing. And maybe someday we will be able to — or, for practical purposes have to — understand a computer as thinking and acting.

        Now, maybe you wouldn’t want to call this approach “materialism”. After all, I haven’t given a material account of where these “meanings” come from. But I’m not particularly interested in whether the label “materialism” applies to my view. I only want to explain why it’s reasonable to think that we may one day make thinking and acting computers.

        • Comment by Robert Mitchell Jr:

          Well, no. You sort of jump the shark there. A waterclock is a computer that we can use to determine the time. But that doesn’t imply that the water in a waterclock bowl is different from other water, and that one day it might be aware….

          • Comment by Tyrrell McAllister:

            A waterclock is a computer that we can use to determine the time. But that doesn’t imply that the water in a waterclock bowl is different from other water, and that one day it might be aware….

            Indeed. I entirely agree. But you will have to spell out for me why my agreement compels me to reject any claim in my previous comment.

            • Comment by Robert Mitchell Jr:

              Well, it’s sort of like the underwear gnomes; you seem to have overlooked step two. Step one. Computer. Step Three. Thinking and Acting! How does the water in the bowl change?

              • Comment by Tyrrell McAllister:

                How does the water in the bowl change?

                As I said, I agree that the water in the bowl doesn’t change…

                Here is how your argument appears to me right now. Step 1: The water doesn’t change. Step 3: Therefore, computers could never be conscious. Where is Step 2?

                • Comment by Robert Mitchell Jr:

                  Well, I’m not giving an argument, I’m just trying to figure out how you got from unchanging water to aware and acting. If it was my argument, two steps are all I need…..

                  • Comment by Tyrrell McAllister:

                    You can interpret the water in a water-clock as telling time without supposing that the water has changed somehow (right?).

                    Suppose that some giant water-computer behaved in a way that I could only interpret (usefully) as an agent making choices. Why would this interpretation commit me to supposing that the water has changed somehow?

                    I’m just not seeing why you’re focused on the condition of the water.

                    • Comment by Robert Mitchell Jr:

                      Because the “intelligence” is not about what you interpret, it’s about the “Computer” being able to interpret. The water clock is a useful example, because we all understand what it is about, even though it is a working (albeit very simple) computer for telling time. It makes it harder to get fooled by the “Chinese box”….

        • Comment by Daniel A. Duran:

          Let me decode what you’re saying and put it as starkly as possible:

          1-‘Mr. Wright is coming with biases and prejudices as to what computers are.

          2-It is possible one day his definition of what computers are is proven to be false.

          3- And then we will have to accept computers as sentient and perhaps even self-aware.’

          Is that the gist of your post?

          • Comment by Tyrrell McAllister:

            Let me decode what you’re saying and put it as starkly as possible:

            1-‘Mr. Wright is coming with biases and prejudices as to what computers are.

            “Bias” and “prejudice” seem like strange words to use in this context. I’m not accusing anyone of subjecting computers to bigoted oppression.

            2-It is possible one day his definition of what computers are is proven to be false.

            I wouldn’t be interested in trying to prove a definition false. I’m not even sure what that would mean here.

            I am agreeing with Mr Wright that we attribute meaning to the inputs and outputs of computers. But I also think that this attribution of meaning, which we provide, may one day have to be an attribution of willful goal-seeking action to a computer.

            • Comment by Daniel A. Duran:

              “Bias” and “prejudice” seem like strange words to use in this context. I’m not accusing anyone of subjecting computers to bigoted oppression.

              Bias and prejudice are not identical to bigotry. No one is saying that you called Mr. Wright a bigot.

              I wouldn’t be interested in trying to prove a definition false. I’m not even sure what that would mean here.

              Read again what I wrote; I never said you were trying to prove his definition to be wrong.

              I am agreeing with Mr. Wright that we attribute meaning to the inputs and outputs of computers. But I also think that this attribution of meaning, which we provide, may one day have to be an attribution of willful goal-seeking action to a computer.

              Now you are saying that his definition of what computers are and can be is wrong. Mr. Wright has said that seeking goals, assigning meaning, and consciousness are not and cannot be intrinsic properties of computers (since machines lack formal and final causes). Any “goal seeking” computers display is extrinsic to them; humans assign those properties from the outside. Mr. Wright is making a non-contingent proposition of what computers are and can be.

              • Comment by Tyrrell McAllister:

                Bias and prejudice is not identical to bigotry. No one is saying that you called Mr. Wright a bigot.

                Then I’m not sure what you mean by those words in this context, so I don’t know whether they accurately paraphrase my position.

                I wouldn’t be interested in trying to prove a definition false. I’m not even sure what that would mean here.

                Read again what I wrote; I never said you were trying to prove his definition to be wrong.

                As I said, it was unclear to me what you even meant to suggest by “proving a definition false”. What would it mean for such a thing to happen, whether I did or whether it simply happened in the fullness of time?

                Now you are saying that his definition of what computers are and can be is wrong. Mr. Wright has said that seeking goals, assigning meaning, and consciousness are not and cannot be intrinsic properties of computers (since machines lack formal and final causes). Any “goal seeking” computers display is extrinsic to them; humans assign those properties from the outside. Mr. Wright is making a non-contingent proposition of what computers are and can be.

                Not every case of saying something false about computers is a case of using a wrong definition of computers, even when you are falsely asserting something to be non-contingent. A false non-contingent claim may be an error of inference from a good definition, rather than a case of a bad definition.

                • Comment by Daniel A. Duran:

                  Then I’m not sure what you mean by those words in this context, so I don’t know whether they accurately paraphrase my position.

                  This is what I meant: “once you call a thing a computer, you are already adding your own interpretation of its behavior to your conception of it. You are already, in its very name, understanding it not just as an arrangement of atoms, but as something that performs computations upon inputs to produce outputs.”

                  Bottom line from the above, you’re coming with your own bias and preconceptions; Mr Wright (and all those who agree with him).

                  I would recommend you re-read your own post before you make me remind you what you said; this is not something I should be doing.

                  “As I said, it was unclear to me what you even meant to suggest by “proving a definition false”. What would it mean for such a thing to happen, whether I did or whether it simply happened in the fullness of time?”

                  Mr. Wright said over and over that computers are machines or tools. Tools, by definition, cannot have intrinsic meaning, purpose or goals. Since you believe computers can be sentient, then you think that we are wrong in defining computers as only tools or machines.

                  “Not every case of saying something false about computers is a case of using a wrong definition of computers, even when you are falsely asserting something to be non-contingent. A false non-contingent claim may be an error of inference from a good definition, rather than a case of a bad definition.”

                  Mitch, do not try to play games with us. The passage above does not address anything specific that Mr. Wright or I said; it is just you trying to pass off vague statements as a response.

                  Anybody can say, “Oh, an erroneous inference can be drawn from a good definition.”

                  Instead, step up to the plate like a big boy and say “The inference X that you derived from the definition is wrong for reason Y.” And if you don’t have anything useful to say, please, be quiet.

                  • Comment by Tyrrell McAllister:

                    I would recommend you re-read your own post before you make me remind you what you said; this is not something I should be doing.

                    Nonetheless, I appreciate that you have attempted to paraphrase my argument before criticizing it.

                    In this case, your attempts reveal that there is a fundamental failure of communication here. The following remarks may help you to triangulate on my intended meaning.

                    This is what I meant: “once you call a thing a computer, you are already adding your own interpretation of its behavior to your conception of it. You are already, in its very name, understanding it not just as an arrangement of atoms, but as something that performs computations upon inputs to produce outputs.”

                    Bottom line from the above, you’re coming with your own bias and preconceptions; Mr Wright (and all those who agree with him).

                    I do not recognize this “bottom line” as meaning anything like what I meant.

                    First, when I used the pronoun “you”, I did not mean any person in particular (such as Mr Wright), nor any particular school of thought. Rather, I was using the generic you. I could as well have written “one” or “anyone”.

                    Second, the “interpretation” I was talking about was not some “biased” or “prejudiced” definition of “computer”. Rather, I was referring to precisely the same unavoidable act of interpretation that Mr Wright was talking about when he wrote,

                    But there is no number “one” anywhere in the wheels. That number is something I contemplate with my mind and which I (and everyone else who decides to use Arabic numerals) assign or attribute to the straight line. Again, the numeral III is one that I (and everyone else) assign to the trident-looking squiggle.

                    That is, when we read the “trident-looking squiggle” on his machine as “three”, it is we who are providing this interpretation to the squiggle. This is the act of interpretation to which I was referring. I was acknowledging its occurrence. I was therefore, to that extent, agreeing with Mr Wright, not accusing him of a biased definition.

                    • Comment by Daniel A. Duran:

                      First, when I used the pronoun “you”, I did not mean any person in particular (such as Mr Wright), nor any particular school of thought. Rather, I was using the generic you. I could as well have written “one” or “anyone”.

                      Why is this relevant? Does it show I said some falsehood?
                      Perhaps you will waste our time next by teaching us the basics of modal logic.

                      Second, the “interpretation” I was talking about was not some “biased” or “prejudiced” definition of “computer”.

                      The passage I quoted in the previous post has nothing to do with assumptions about computers…right.

                      You’re not worth replying to since you keep playing games. :thumbsup:

                    • Comment by Tyrrell McAllister:

                      Perhaps you will waste our time next by teaching us the basics of modal logic.

                      You have succeeded in convincing me that I am unable to make myself understood by you. So that I may express my gratitude for this information, please allow me to solve your time-wasting problem: I hereby grant you permission to skip my comments.

                    • Comment by Daniel A. Duran:

                      You have succeeded in convincing me that I am unable to make myself understood by you. So that I may express my gratitude for this information, please allow me to solve your time-wasting problem: *I hereby grant you permission to skip my comments.*

                      Alright, that’s pretty childish even for you.

                      I understood what you said about “generic you” and “Mr. Wright and those who agree with him”, and you know what? It makes no difference whether I use one or the other. And more importantly it is irrelevant to anything being said here. You certainly haven’t given us any reason as to why it matters.

                      And yes, since you keep digressing into grammar classes you might as well throw in the basics of modal logic for good measure.

                      I thought you were saying something about computers being sentient, no? Why do you keep digressing into these silly, irrelevant matters?

        • Comment by John C Wright:

          You point out, as if it is a startling fact, that I interpret computers to be machines, no different in essence from clockworks. I do not, indeed, make the assumption that they have a particular way of being interpreted in their outputs. The point of my comment was that even to call them outputs assumes that there is an observer, an operator, a programmer, who gives a meaning (I say it is an arbitrary meaning) to their machine-motions.

          You next say that I am ‘constrained’ in my interpretation of the mechanical behavior, and then make the leap to say it one day could maybe perhaps somehow gosh I dunno it’s not impossible mayhap my interpretation would only allow for one interpretation, that the machine was self-aware, talking, thinking and acting.

          Well, gosh, when my wife has a baby, she takes a single cell, grows it into a cell cluster, which pops out of her womb and starts crying almost immediately. Is that what you are talking about? If so, I agree. But remember I am the one in the conversation who is not an eliminative materialist, who says all mental actions and forms can be reduced to mechanical actions.

          As far as I am concerned, I think a self-aware form of life made of electronic engrams in a computer matrix can be grown. I just don’t think it can be made. To design a thing with free will is a contradiction in terms, because design implies determining the behavior, whereas free will implies self-determined behavior. I do believe you can educate the Tin Man, once you give birth to him. But to design him to be educated is a self contradiction.

          And such things may not be possible at all. I remain skeptical, partly because I am an SF writer, and we made the idea up, so we know the idea is make believe.

          We here in Virginia have a saying: I am from Missouri. Show me.

          • Comment by Sylvie D. Rousseau:

            I do believe you can educate the Tin Man, once you give birth to him. But to design him to be educated is a self contradiction.

            If you admit, hypothetically, that such a sophisticated robot could be designed and “educated”, shouldn’t it be programmed to be educated? Isn’t it essential that the programs and ranges of responses might be expanded, or better, expand themselves, in a fashion compatible with what has been previously programmed?

            In SF, the conceit plays on man’s desire to be god. But, in the end, the creature turns against its creator because it does not want to be reprogrammed, eliminated, or simply unplugged, as if it had free will and the will to survive. The will to survive is what bugs me the most, even if it is thought of only as a program. I see it as a metaphor of the will to live as long as possible in those who do not believe in the immortality of the soul.

          • Comment by Tyrrell McAllister:

            As far as I am concerned, I think a self-aware form of life made of electronic engrams in a computer matrix can be grown. I just don’t think it can be made. To design a thing with free will is a contradiction in terms, because design implies determining the behavior, whereas free will implies self-determined behavior. I do believe you can educate the Tin Man, once you give birth to him. But to design him to be educated is a self contradiction.

            Okay. I don’t think that any of my objections above apply to this formulation of your views.

  4. Comment by Gian:

    I could never understand why people who swear by empiricism and Bayes’ theorem refuse to accept a very reasonable prior that
    “The world is divided into Agents and Objects.”
    They stick instead to the dogma that Agents do not exist.

    PS Agents include animals as well.

    • Comment by Tyrrell McAllister:

      They rather stick to the dogma that Agents do not exist.

      Who are these Bayesian empiricists who deny the existence of agents? They tend to think that the future will contain more agents, not that the present contains none.

      • Comment by Gian:

        Perhaps I did not make the concept of Agent clear.
        An Agent is a being that contains its own principle of motion. It acts and does not merely react. That is, it is living (bacteria, plant, animal, man) and is fundamentally distinct from non-living matter.

        Materialists believe that it is all atoms (or quantum fields) in motion, and that living and non-living are distinguished only by complexity. But this is nowise supported by empirical evidence and is just materialist dogma.

    • Comment by Sylvie D. Rousseau:

      “The world is divided into Agents and Objects”

      The problem there is that every agent can be an object (at least an object of thought), and many objects can be agents, but not all. Agent and object are states of being in motion. Beings that can move, think, act by themselves are agents, either active or passive in any particular action. But beings that cannot move, think or act by themselves, can only react following their nature to the laws of physics. In the case of animals and other organisms, they can move by themselves but only following the sensitive appetite, like the E. coli in Nostreculsus’ example below.

      Computers, however complex and powerful, fall in the category of beings that can only react following the nature of their components to the laws of physics, and following their programs to the laws of logic embedded in them by their programmers.

  5. Comment by joetexx:

    “But there is no number “one” anywhere in the wheels. That number is something I contemplate with my mind and which I (and everyone else who decides to use Arabic numerals) assign or attribute to the straight line…Please note that there is no addition sign nor equals sign in my example. The man using the simple adding machine assigns those things to the positions of the wheels, using ‘position’ as a symbol or sign the same way the squiggles are symbols or signs.”

    Tersely put insight.

    Been reading The Metaphysical Principles of the Infinitesimal Calculus (Collected Works of Rene Guenon) which has some good insights on the meaning of number.

    You might find this very brief essay interesting.

    Is Mathematics Constitutional?

    “…a ringing defense of mathematical Platonism against the pretensions of cognitive neuroscience. Remember, if everything you know is a product of brain structure, then neuroscience is just in your head, too”

    http://www.johnreilly.info/imc.htm

  6. Comment by Nostreculsus:

    Can bacteria think? Take some dirty water, swimming with E. coli, drop in a sugar cube and the E. coli swim over next to the sugar where they can feast on the higher concentration of sugar. There are two ways to describe this.

    Mechanistic description. The germs have a tail consisting of several spiral-shaped flagella. When these turn counterclockwise, the E. coli swims straight ahead. When they turn clockwise, the flagella splay out and the germ tumbles about in place, picking a random new direction.

    Now, as he swims about, the E. coli keeps track of how much sugar he is eating. The more sugar, the more methyl groups he adds to certain proteins. As long as the number of methyl groups is increasing, the motor that drives the flagella turns counterclockwise and the E. coli swims straight forward. If the number of methyl markers starts to drop, the rotor turns clockwise, and the bacterium tumbles in place.

    Teleological description. E. coli enjoy sugar. They have a goal: to swim about in the most sugar-rich water they can find. They have a memory: the number of methyl groups is a symbolic representation of how sweet their environment was a few seconds ago. They make choices: when they realize things are going swimmingly (methyl groups increasing), they know the sugar gradient is increasing and they keep to the same direction; when they realize they are moving away from the quarry, they choose a new direction at random. They will keep trying until sugar levels again start to rise.

    In short, their algorithm resembles the behaviour of voters in a democracy. Keep changing administrations until economic growth resumes.

    Now, if you find the second description of the E. coli useful, if you believe bacteria can think, why not extend the metaphor to simpler and simpler systems? A bit of computer code can guide red dots (SimBacteria) on a computer screen to find and eat randomly diffusing blue dots (SimSugar). So, can computer code think?
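
    Something like the following sketch (names hypothetical, a flat cartoon of the flagellar mechanism) satisfies both descriptions at once; read the comments mechanistically or teleologically, as you please.

      import random

      def chemotaxis_step(heading, sugar_before, sugar_now):
          # Mechanistic reading: rising sugar (more methyl groups) keeps the
          # motor turning counterclockwise, so the germ runs straight ahead;
          # falling sugar flips it clockwise, so the germ tumbles in place.
          # Teleological reading: if things are going swimmingly, keep going;
          # otherwise choose a new direction at random and try again.
          if sugar_now >= sugar_before:
              new_heading = heading                     # run
          else:
              new_heading = random.uniform(0.0, 360.0)  # tumble to a random heading
          return new_heading, sugar_now                 # sugar_now becomes the new "memory"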

    A thermostat senses its environment’s temperature and then chooses a response. An atom of silver in a Stern-Gerlach experiment senses its magnetic environment and then chooses its response. Can atoms think?
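
    The thermostat makes the point even more starkly. Written out as code (a hypothetical sketch; the names are arbitrary), its “sensing” and “choosing” amount to a single comparison:

        def thermostat(temperature_reading, setpoint=20.0):
            # the whole act of "sensing and choosing" is one comparison
            return "furnace on" if temperature_reading < setpoint else "furnace off"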

    We should make very precise just what we accept as “thinking” before we state whether or not computers can ever think. If we define thinking as an internal symbolic representation which guides choices, then it seems to apply to a great deal of nature.

    • Comment by Tom Simon:

      The trouble with your argument in re E. coli is that the terms used do not match the premises. Methyl groups are not a symbolic representation of the bacterium’s environment; they are not symbolic in any sense. You might as well say that a suntan is a symbol of yesterday’s sunny weather, or that my not being hungry is a symbol of the burrito I had for dinner. But these things are not symbols; they are merely physical effects.

      The essential property of symbols is that they are arbitrary; they have no intrinsic relationship to the things they symbolize. This is the fundamental principle of semiotics, first (as far as I know) formulated by St. Augustine over 1500 years ago, and I think it fair to call it the pons asinorum of that subject. If you do not grasp that the relationship between symbol and referent is essentially arbitrary, you will never be able to make any correct inferences about the nature or uses of symbols.

      • Comment by The OFloinn:

        But what is a symbol? A symbol does not direct our attention to something else, as a sign does. It does not direct at all. It “means” something else. It somehow comes to contain within itself the thing it means. The word ball is a sign to my dog and a symbol to you. If I say ball to my dog, he will respond like a good Pavlovian organism and look under the sofa and fetch it. But if I say ball to you, you will simply look at me and, if you are patient, finally say, “What about it?” The dog responds to the word by looking for the thing: you conceive the ball through the word ball.

        IOW, you expect a meaningful statement about balls or a ball (a coupling), not simply the sound “ball.”
        – Walker Percy, The Message in the Bottle, p.153

        • Comment by Gian:

          Edward Feser writes in “Aquinas” (p. 137) that the causal processes that take place when a snake digests a meal are not reducible to the kinds of causal processes that take place when a snake crawls over the earth, or to the causal processes of clouds and rain, for example.

          This is quite a claim, and one that is absolutely discordant with materialistic assumptions and expectations.

          The digestive processes are perfective: they fulfill some final end of the snake, and not of the meal (a rat, for example).

    • Comment by Patrick:

      “they choose a new direction at random.”

      They are not making rational choices – or at least, not assessing options and choosing among them. Their motility is on the order of a response, not a reason.

  8. Comment by Tom Simon:

    Oddly enough, Edward Feser, via his blog, has just called my attention to a paper of his that sheds light on exactly the same point. It’s called ‘Hayek, Popper, and the Causal Theory of the Mind’, and it was published in a book called Hayek in Mind: Hayek’s Philosophical Psychology — pp. 73 ff. (I have a link to the paper on Google Books, but it’s only by sheer luck that the entire paper happens to be available in the sample now offered there. As I understand it, that could change at any moment, so I shan’t trouble to reproduce the link.)

    Here is the relevant bit:

    Consider also that we are able not only to have individual meaningful thought episodes but also to infer to further thoughts, to go from one thought to another in a rational way. This is not merely a matter of one thought causing another; a lunatic might be caused to conclude that mobsters are trying to kill him every time he judges that it is sunny outside, but such a thought process would not be rational. Rather, we are able to go from one thought to another in accordance with the laws of logic. Now, it might seem that the robot of our example, and computers generally, can do the same thing insofar as we can program them to carry out mathematical operations and the like. But of course, we have had to program them to do this. We have had to assign a certain interpretation to the otherwise meaningless symbolic representations we have decided to count as the “premises” and “conclusion” of a given inference the machine is to carry out, and we have had to design its internal processes in such a way that there is an isomorphism between them and the patterns of reasoning studied by logicians. But no one has to assign meaning to our mental processes for them to count as logical. So, what accounts for the difference? How are we able to go from one thought to another in accordance, not just with physical causal laws, but in accordance with the laws of logic? That is the problem of rationality.

  9. Comment by The OFloinn:

    John Searle
    But if all computation is observer-relative (with the small exception of the limited number of actual conscious computations carried out by human beings), then you cannot discover computation in nature. It is not a natural feature of the world. This means that the question, “Is the brain a digital computer?” is not a meaningful question. If it asks, “Is the brain intrinsically observer-independently a digital computer?” the answer is, nothing is intrinsically a digital computer. Something is a digital computer only relative to interpretation. If the question asks, “Can we give a computational interpretation to the brain?” the answer is, we can give a computational interpretation to anything. So the problem with Strong AI is not that it is false; it doesn’t get up to the level of being false. It is incoherent.

    This is part of a longer interview that starts here: http://machineslikeus.com/interviews/machines-us-interviews-john-searle-0
    and which, on page 7, addresses some of the issues being discussed here.

  10. Comment by Daniel A. Duran:

    “Then I’m not sure what you mean by those words in this context, so I don’t know whether they accurately paraphrase my position.”

    This is what I meant: “once you call a thing a computer, you are already adding your own interpretation of its behavior to your conception of it. You are already, in its very name, understanding it not just as an arrangement of atoms, but as something that performs computations upon inputs to produce outputs.”

    Bottom line from the above: you are coming in with your own biases and preconceptions, Mr. Wright.

    I would recommend you re-read your own post before you make me remind you of what you said; this is not something I should have to do.

    “As I said, it was unclear to me what you even meant to suggest by “proving a definition false”. What would it mean for such a thing to happen, whether I did or whether it simply happened in the fullness of time?”

    Mr. Wright has said over and over that computers are machines or tools. Tools, by definition, cannot have intrinsic meaning, purposes, or goals. Since you believe computers can be sentient, you must think that we are wrong in defining computers as only tools or machines.

    “Not every case of saying something false about computers is a case of using a wrong definition of computers, even when you are falsely asserting something to be non-contingent. A false non-contingent claim may be an error of inference from a good definition, rather than a case of a bad definition.”

    Mitch, do not try to play games with us. The passage above does not address anything specific that Mr. Wright or I said; it is just you trying to pass off vague statements as a response.

    Anybody can say, “Oh, an erroneous inference can be drawn from a good definition.”

    Instead, step up to the plate like a big boy and say “your inference X derived from the definition is wrong because B and Y.” And if you don’t have anything useful to say, please, be quiet.

  11. Comment by DonJindra:

    “A computer does not literally move numbers around.”

    And neither does the brain. So what?

    “But no matter how elaborate, the thing is still a clockwork. For reasons of saving space, I can do that exact same thing with a cellphone, using electrons rather than wheels and gears: but the nature of the machine is the same.”

    Well, no, they are not the same. The nature of your mechanical gears is a different sort of thing from the computer in that cellphone. And the brain also just moves electrons around.

    You might “contemplate that a computer is something more complex but otherwise no different than my simple calculation machine of four wheels,” but you might as well contemplate that a wheelbarrow is of the same nature as a computer, because it moves dirt around as computers move electrons. Now, wheelbarrows and piles of dirt may not be able to comprehend themselves, and neither can computers at this point in time. But until you can say exactly what “comprehension” or “understanding” are, you cannot begin to claim that computers will not someday be able to do the same.

    • Comment by John C Wright:

      Comprehension is the mind contemplating ideas, including that particular set of ideas called sense impressions. As you say, since the brain, a physical object, does not literally move a physical object called “a number” through physical space, we can conclude that when I add twice two and get four, I am thinking. When an adding machine is designed so that one wheel turns another wheel when certain keys are depressed, it is not thinking: when I press the key labeled 2 twice and the wheel turns so that the face labeled with a 4 is uppermost, this is a mechanical motion, not a thought process.
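
      To put the point in my critics’ own idiom, here is a hypothetical sketch, in Python, of my four-wheel machine. Notice that the program shuffles squiggles, not numbers; the reading “1+1+1=3” is an assignment the user makes, not anything the machine does.

          # The four-wheel "calculator" as pure position-shuffling. The strings
          # are squiggles painted on cog faces; nothing here is intrinsically
          # a number.
          def linkage(first, second, third):
              # the gears mesh for exactly one setting of the first three wheels
              if (first, second, third) == ("I", "I", "I"):
                  return "III"  # the sideways-trident face comes uppermost
              raise ValueError("the gears do not mesh for that setting")

          print(linkage("I", "I", "I"))  # prints III; that this "means" 1+1+1=3
                                         # is our attribution, not the machine's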

      Your argument tacitly admits what you are arguing against, which is that mechanics is not ratiocination, that matter in motion is not the same as thought. The distinction you make between cellphones and adding machines is meaningless: electrons do not have spiritual or mental properties of self-awareness that Babbage machines lack.

      And the argument that I must define comprehension in order to understand that a wheelbarrow or block of wood or any other inanimate object lacks the property of comprehension is an argument from ignorance. I am the skeptic here, not you. YOU have to convince ME that the Tin Man from Oz is real, and that metal things can be taught to speak and sing and dance and think. I don’t have to convince you that it is impossible.

      I am not even claiming it is impossible. I am saying that the arguments given me so far to convince me that it is possible are circular, or contain other logical flaws.
