Quote of the Day

Today's quote of the day comes from the eminently quotable Tom Simon:

Then there are the things that are supposed to manifest some kind of ‘artificial intelligence’. Many of these are based on ChatGPT or another large language model. Superstitious folk, such as tech journalists, have a belief that LLMs can actually understand language; very superstitious folk, such as professional linguists, even believe that language can be ‘understood’ merely by knowing how to manipulate words. This is an error, as Searle showed with his ‘Chinese room’ thought-experiment. A chat bot only ‘understands’, for instance, that the symbol rose frequently occurs in the neighbourhood of such other symbols as red, white, bouquet, thorn, or even love; and these are frequently juxtaposed with still other symbols, and by exploiting these apparent coincidences, the bot can squirt out sentences that will appear to a human to make sense.

But it is only an appearance. We are all, I hope, familiar with the cases where superstitious lawyers have relied on bots to generate legal briefs; and the bots, which could easily imitate the form of a brief, knew nothing of its content, and their output was larded with bogus citations of nonexistent cases. For the bot does not know what a law is, or what a case is, or what a brief is actually for. You will, I am afraid, best understand what an LLM is, if you think of it as an enormously well-read parrot. Most of the wisdom of the ages, and all of the stupidity, is to be found in the texts on which the parrot was trained; and some of it is bound to come up in the composite texts which the parrot faithfully regurgitates. But none of it has any meaning to the parrot, and it is to your credit, not the parrot’s, if any of it happens to have meaning to you. At any rate the parrot is not to be trusted.

From his essay Uninteresting Things, which should be read.
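As an aside, the co-occurrence picture Simon sketches — "the symbol rose frequently occurs in the neighbourhood of such other symbols as red, white, bouquet" — can be illustrated with a toy counter. This is only a sketch over a hypothetical three-sentence corpus, nothing like a real language model:

```python
from collections import Counter

# A tiny made-up corpus, standing in for the parrot's training texts.
corpus = [
    "the red rose had a sharp thorn",
    "a white rose in the bouquet",
    "a rose is a symbol of love",
]

window = 2  # count pairs of words within two positions of each other
cooccur = Counter()
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + 1 + window, len(words))):
            cooccur[(w, words[j])] += 1
            cooccur[(words[j], w)] += 1

# The symbols most often found near "rose" in this corpus:
neighbours = Counter(
    {w2: c for (w1, w2), c in cooccur.items() if w1 == "rose"}
)
print(neighbours.most_common(3))
```

The counter "knows" that rose sits near red and white, but nothing about flowers; scaled up by many orders of magnitude, with counting replaced by learned statistics, that is the kind of juxtaposition-exploiting machinery the quote describes.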