Friday, April 08, 2016

Can Computers Think?


Many of my worked-up correspondents are outraged because, they claim, I deny the possibility that computers can think. But I have never denied that possibility, and I have explicitly said as much in the past. In fact, as that post notes, I keep denying it, and yet my critics simply ignore my repeated denials and accuse me of a bias in favor of "meat machines" over "silicon machines." Let me say it again: maybe Deep Blue actually realizes it is playing chess, and actually knows that it has an opponent. Maybe it actually feels triumphant when it beats a grandmaster!

Can thermostats think?

Maybe. Maybe a thermostat knows when it is cold in the living room, and knows that the furnace must be kicked on.

Can electrons think?

Maybe. Maybe an electron knows it ought to orbit a nucleus in a certain orbital.

What has puzzled me throughout this discussion is why AI enthusiasts want to deny thought to simple physical mechanisms, but at some (seemingly arbitrary) point award the moniker "thinking" to more complex physical mechanisms. Why is a simple circuit like a thermostat or a NOR gate not thinking, but some built-up complex of them suddenly supposed to be thinking?! This is the magical aspect of AI advocacy that I have been critiquing, and the aspect not one of my critics has even tried to address.
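The "built-up complex" here is not a metaphor: the NOR gate is functionally complete, so every Boolean function, and hence the logic of any digital computer, can be assembled from NOR gates alone. A minimal sketch of that assembly (the function names and the half-adder example are my own illustration, not anything from the post):

```python
# A NOR gate: outputs 1 only when both inputs are 0.
def nor(a, b):
    return int(not (a or b))

# NOT, OR, and AND built purely out of NOR gates.
def not_(a):
    return nor(a, a)

def or_(a, b):
    return nor(nor(a, b), nor(a, b))

def and_(a, b):
    return nor(nor(a, a), nor(b, b))

# A half adder -- the seed of binary arithmetic -- from the same gates:
# sum is (a OR b) AND NOT (a AND b), i.e. XOR; carry is a AND b.
def half_adder(a, b):
    carry = and_(a, b)
    total = and_(or_(a, b), not_(carry))
    return total, carry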


  1. I was recently thinking about the arbitrary threshold after which some claim a machine would be intelligent. The Turing test seems somewhat relevant here.

    If a machine can fool a few dull-witted people into thinking it is human, most people would agree that the machine is not actually intelligent.

    It's a bit trickier if the machine can fool EVERYONE into thinking it is human. I think many of us would still claim the machine is not actually intelligent if we understand the algorithm used to produce its behavior.

    But if the machine fools everyone, and the algorithm used to produce its behavior is beyond our comprehension (say, a multilayer perceptron with a trillion nodes), I'm not sure what basis we have left to claim it doesn't actually possess intelligence, other than faith that there is some spirit in us that can't possibly be present in a machine we created out of inorganic material. This quandary is of course a theme in many works of science fiction.

  2. "Why is a simple circuit like a thermostat or a NOR gate not thinking, but some built-up complex of them suddenly supposed to be thinking?!"

    When AI people pose the question "can we build machines that think," they are actually asking a much more tightly defined question: "Can we build machines that manifest the same kind of intelligent behavior as humans?" (Within the AI framework they would probably attempt to answer this question in relation to the machine's ability to pass a Turing-type test for intelligent behavior, but that is irrelevant here.)

    So to answer your question: I am actually not aware that many people in the AI world use the word "thinking" in the way you suggest. But if they did then as long as they use it consistently to mean "manifesting intelligent behavior indistinguishable from humans under test conditions" then this provides a well-defined and non-arbitrary cutoff point between thinking and non-thinking.

    Neither a thermostat nor a modern chess machine qualifies as "thinking" based on this definition.

  3. I suppose if a NOR gate said "I'm not really a NOR gate; that's just a logical construct. I am really a NAND gate, and you need to pay to have my output reversed," I might believe it was thinking. I would also assume someone wasn't giving it enough attention at the factory. Now if you could arrange a large number of these malfunctioning circuits together, you might get some interesting results, but you wouldn't believe it was thinking.



"If your approach to mathematics is mechanical not mystical, you're not going to go anywhere." -- Nassim Nicholas Taleb