A stunning misinterpretation of a point I have posted again and again...

In response to posts like this one, I have been asked again and again questions like "Why do you deny the possibility that machines might be able to think?"

This is rather stunning to me, as I have posted numerous times, on this very same blog, my affidavit that I do not now deny, and never have denied, this possibility for even a second! I have posted the example of a thermostat turning up the heat because it "feels cold," not because I deny the possibility that the thermostat feels cold, but because I want to see if my AI enthusiast correspondents are willing to be consistent, and admit that their thermostat might truly feel cold, just as their Turing-test-passing machine might truly be conversing.
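
For concreteness, a thermostat's entire "inner life" amounts to something like the following rule (a minimal Python sketch; the setpoint value and the read_temperature/set_heater functions are hypothetical stand-ins for the actual sensor and relay):

    SETPOINT = 20.0  # degrees Celsius; an arbitrary example value

    def thermostat_step(read_temperature, set_heater):
        """Turn the heat on when the room is below the setpoint, off otherwise."""
        if read_temperature() < SETPOINT:
            set_heater(True)   # the thermostat "feels cold" and turns up the heat
        else:
            set_heater(False)  # the thermostat is "comfortable"

Behaviorally, that single comparison is all the evidence the thermostat ever offers us; the question on the table is whether executing it counts as feeling cold.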

I am, in fact, perfectly willing to contemplate the idea that atoms are held together because electrons "feel attracted" to protons: such geniuses as Gottfried Leibniz, Alfred North Whitehead, C.S. Peirce, and, more recently, my friend David Chalmers, have put forward positions that broadly fall under the heading of panpsychism, a position I take very seriously indeed.

So I do not deny the possibility that, say, when I type "ls" at a Unix system prompt, my Unix computer "knows" that I want it to list the files in my current directory. What I do deny is that a more involved program than "ls," simply because what it does is more complicated and harder to follow, should suddenly be deemed to be thinking, while the "ls" command is not so deemed. It seems to me that such people are simply punting: once a program gets too complicated for them to understand why it does what it does, they revert to cargo-cultism and declare, "Ooh, magic!"
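
To make vivid how little is going on inside "ls": here is a toy directory lister in Python (not the real "ls" source, just an illustrative sketch of the kind of loop at its core):

    #!/usr/bin/env python3
    # A toy directory lister: an illustration of the kind of procedure
    # a command like "ls" executes (not the actual "ls" source).
    import os
    import sys

    def list_directory(path="."):
        """Print the names of the entries in the given directory, sorted."""
        for name in sorted(os.listdir(path)):
            if not name.startswith("."):  # like "ls", skip dotfiles by default
                print(name)

    if __name__ == "__main__":
        list_directory(sys.argv[1] if len(sys.argv) > 1 else ".")

A chess engine differs from this loop in scale and opacity, not in kind: both are procedures being executed, whatever else we may wish to say about them.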

So, people who have objected to my recent Turing posts: let me put forward a (hypothetical) metaphysical position in a series of assertions:

1) Electrons orbit around atomic nuclei because they feel very attracted to protons.
2) Hydrogen atoms unite with oxygen atoms because they know they will achieve a lower energy state by doing so.
3) Microorganisms understand that if they move away from toxins they will survive better.
4) The Unix "ls" program knows that I want it to list the files in my current directory.
5) The latest IBM chess program understands how to checkmate its opponents.

I have never denied that 5) might be true. But what I do deny is that any of my critics can formulate any reason to reject 1-4 while asserting 5. And the fact that they keep asserting that I think 5 is "impossible," I suggest, means that they have an ideological attachment to asserting proposition 5, even while denying propositions 1-4.



Comments

  1. I have never denied that 5) might be true. But what I do deny is that any of my critics can formulate any reason to reject 1-4 while asserting 5.

    Assume evolution is true. Then, thinking animals must have evolved from non-thinking animals. At an advanced enough level, machines must be able to think. Mr. Data was a "sentient being" -- though that was sometimes challenged (perhaps by crotchety bloggers).

    Replies
    1. 1) "Assume evolution is true."

      Granted.

      2) "Then, thinking animals must have evolved from non-thinking animals."

      Completely unwarranted assumption. Perhaps, as panpsychists assert, thinking pervades the cosmos.

      3) "At an advanced enough level, machines must be able to think."

      And this does not follow at all from 1 and 2! Perhaps there is something about organic evolution that "produces" thinking that simply is not replicated by mechanical tinkering.

    2. 2 is woo.
      3 is exactly the point at issue in the Turing test. This is the "calling the bluff" aspect someone mentioned. In the case where people finally cannot distinguish, what do you say? Four legs good, two plugs bad?

    3. If you want to say microorganisms understand this, fine.

      Why is 2 "woo"? They *behave* as if they understand this! And that is all Turing allows us to look at.

  2. You could define "knows," "feels," and "understands" in such a way that 1-5 become true.

    Even if you stick to more conventional definitions you could never really tell if 1-5 were true or not.

    But this seems beside the point as far as AI and the Turing test are concerned. The Turing test according to Wikipedia is "a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human."

    It sets the bar quite high, and none of the entities in 1-5 meets that criterion (though of course they may all have rich inner lives, but not be very good at communicating).

    Replies
    1. Your evading of the central issue is becoming just a wee bit off-putting here, rob. And the fact that you think the problem might be that I don't know what the Turing test is is even more so.

      The fact that you think my 1-5 are "beside the point" is a symptom of the evasion. The issue is, "If X *appears* to be doing Y, do we have to concede it is doing Y?"

      Turing's claim is "If a machine *gives the appearance* of thinking, then one MUST admit it IS thinking."

      I am asking if you are willing to be consistent here. If a thermostat *appears* to be chilly (it turned the heat up!) are you willing to say that we MUST admit it IS chilly? Your bringing up that the thermostat did not pass the Turing test is a complete red herring and another evasion: I am not asking if it is as intelligent as a human being, just whether it feels chilly!

    2. "Even if you stick to more conventional definitions you could never really tell if 1-5 were true or not."

      Right: so why don't you take this same agnostic attitude with the Turing test: even if a machine passes it, we cannot really tell if it is thinking or not? That, in fact, would be correct.

    3. And rob, you have missed the fact that my 5 IS the Turing test in the context of chess!

      So my question, which you are evading: why would you assert that 5 must be true while denying that 1-4 must be true?

    4. "why would you assert that five must be true while denying 1-4 must be true?"

      I don't think I do. I said that the answer depends upon the definition of terms used.

      You paraphrase Turing's view as "If a machine *gives the appearance* of thinking, then one MUST admit it IS thinking."

      If you define "thinking" to be merely the act of running some sort of algorithm the way that a thermostat does, then yes, a thermostat is thinking. But I do not think that would be a very useful definition. AI is normally interested in higher levels of intelligence, which is why a definition of thinking as "running an algorithm that simulates human thinking" would (in my view) be more useful. And defined in this way, I would argue even 5 fails.

      You could define thinking as "thinking exactly like a human thinks" and then even passing the Turing test would not prove that a machine thinks, and you would have to take the agnostic view that you refer to.


      "If a thermostat *appears* to be chilly (it turned the heat up!) are you willing to say that we MUST admit it IS chilly?"

      Yes, I would. But my thermostat doesn't appear to me to be capable of feeling chilliness. My definition of "chilly" would be that it is a feeling only applicable to sentient beings, and I do not think mine would pass whatever the equivalent of the Turing test is for sentience.

      (BTW: I appear to have a writing style and tone that annoy you. It is rarely intentional, and certainly isn't here).

    5. Rob, it is not your writing style that is at issue! That is another evasion!

      " But my thermostat doesn't appear to me to be capable of feeling chilliness."

      But it passes the "Turing test" for chilliness: it turned up the heat!

      If we are only allowed to look at surface appearances, per Turing, then the thermostat is chilly! You are doing the sort of analysis that Turing forbids!

    6. 1. A machine running an algorithm that leads it to give responses that can persuade a human that it is truly thinking can be said to be really thinking.

      2. A machine running an algorithm that leads it to turn up the heat when the temperature drops to a certain configured value can be said to be capable of feeling chilly.

      Statement 1 seems to me to say something of importance about the nature of thinking and arguably helps us arrive at the best definition of "thinking" we can come up with.

      Statement 2 on the other hand defines "feeling chilly" in a way totally outside of the normal usage. Its main role seems to be to try and deflect attention away from the insights of statement 1.

    7. Good God, rob, statement 1 defines thinking in a way totally outside of the normal usage! It defines "thinking" as "appearing to think," not as, you know, actually thinking!

      Your last ten or so comments simply evade this issue again and again. So just stop, please.

  3. I will stop, but please can you publish this last comment?

    The main question from your post I was trying to address was:

    "If X *appears* to be doing Y, do we have to concede it is doing Y?"

    I answered this in 3 different ways:

    1. Yes, as long as we define our terms to be consistent with the answer being yes. You rejected this as an evasion (and didn't like my dragging in the Turing test stuff).

    2. Yes, but that doesn't mean my thermostat feels chilly, because in my subjective view (and using a conventional view of feeling chilly) my thermostat doesn't appear to be feeling chilly. You rejected this because my thermostat apparently passes the (undefined) Turing test for chilliness, and therefore my subjective views on its feelings of chilliness don't count.

    3. I suggested that the statement might make sense when applied to advanced algorithms but not when applied to more basic ones. You rejected this because it defines "thinking" as "appearing to think." So here we are having a discussion about under what criteria it may be valid to conclude that appearance is reality, but apparently it's against the rules to suggest that some things (such as human-style thinking) may actually meet these criteria.

    I'm just totally missing why you think my answers are evasions.

    Anyway I'm done now on this topic.

    Thanks for hosting this blog.

    Replies
    1. For instance, 1) is an obvious evasion: you are dragging in "well, we could define it that way" when that is not what we are talking about. The issue is, why does your "subjective" impression count against the thermostat, but my "subjective" impression isn't allowed with a machine that appears to converse? (The Turing test is DESIGNED to rule out this "subjective" belief. Why is it suddenly allowed as relevant for the thermostat?)

      And THAT you haven't answered anywhere.

    2. By the way, I am sure you are not consciously evading this issue: you shy away from it because if you looked it squarely in the eye, your view would collapse. So it is quite natural for you to be baffled as to what you are evading!

  4. I won't comment further, just ask a question:

    Isn't the Turing test, with its dependence upon the view of the judge, inherently subjective in nature?

