A problem with arguments by analogy...

Is that any analogy must differ in some ways from the situation to which it is analogous, or it would just be that situation. This problem is especially tricky when dealing with people in the grip of an ideology, because, inevitably, they will seize upon one of these differences and play it up as if the fact that there is some difference makes the analogy worthless. (Of course, if that were true, every analogy would be worthless, because, as I said, there is always some difference.)

And so it went with my first round of Turing Test analogies. The point of the whole exercise was to show that black-box tests don't tell you anything about where in a system the intelligence lies. If a computer passes the test, I would agree that is evidence that there is intelligence somewhere in the system! Furthermore, I can tell you just where that intelligence lies: it is with the programmers who built the program that enabled the machine to pass the test. Just like there is intelligence in the alarm clock that properly wakes you up at 6:30 AM. And that intelligence lies in the engineers who built the alarm clock. Just like there is intelligence in the rabbit trap that falls on a rabbit at just the right moment. And that intelligence lies in the hunter who built and set up the trap.

The point of my story with Emily and the historians is that if we can't look inside the "box," then we can't decide where the intelligence lies, and we will have to say that "Emily" is very good at history. Of course, AI ideologues seized on irrelevant points here. First of all, they insisted on taking the idea of a "black box" completely literally, and said that there was a problem with the historians being inside an actual, literal box. This dull-minded literalism is surprising on the part of people who believe that routing electricity around through elaborate circuits magically creates thought. But the "black box" that is important is the entire system from which the answers emerge, not a literal box! In any case, let us satisfy their "literal box" fetish: Emily communicates with the historians via text message. Now they are not literally in a box with her. Problem solved!

"No, no!" the AI acolytes will cry, "They are helping her once the test starts! Can't have that!"

My mistake here was that I thought my audience would understand how computers work, and would know that they contain something called "computer programs," which exist to allow programmers to set up, in advance, the responses of the machine they are programming. (This is why I like the rabbit trap analogy: the hunter sets up the trap in advance, so that it "knows" when the rabbit is there even though the hunter is at home relaxing in bed.) Thus, the exact moment at which the tested entity "gets help" from "outside" doesn't really matter.

But OK, let's handle this one as well: we'll say Emily's family has even more oodles of money. They hire this team of famous historians well in advance, and say to them, "You can surely guess the 10,000 most common history questions that someone might ask. Please prepare answers for each of these questions for Emily." The answers have been prepared and elaborately indexed, so that Emily can put her finger on the answer to any of these 10,000 questions fast enough for the test conditions. (If you think 10,000 is too few, give her parents even more money, and make it 100,000. Or a million.)
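
To make the setup concrete, here is a minimal sketch in Python of what such an index might look like (the entries, names, and matching rule are invented for illustration; the historians' real index would be vastly larger and more sophisticated):

```python
# All of the "intelligence" went in beforehand, when the historians wrote
# the entries. At test time, the lookup is purely mechanical: no historian
# is consulted live.

prepared_answers = {
    "when did the western roman empire fall?":
        "Conventionally in 476 AD, when Odoacer deposed Romulus Augustulus.",
    "who won the battle of hastings?":
        "William of Normandy defeated Harold Godwinson in 1066.",
    # ... thousands more entries, prepared well in advance of the test ...
}

def emily_answers(question: str) -> str:
    """Mechanical retrieval at test time."""
    key = question.strip().lower()
    return prepared_answers.get(key, "Hmm, let me think about that one.")

print(emily_answers("Who won the Battle of Hastings?"))
```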

Now, not only are the historians not literally in a box with her, they are also not helping her "once the test starts."

So, the commenters were picking on completely irrelevant points in my analogy in an attempt to defeat its import. The fact that the "black box" that is important is the system as a whole, and not a literal box on a stage, should have been obvious to them. And so should the fact that any real-time help we might give the entity being tested can easily be shifted backwards in time.

So why did they miss these points? Because AI is part of a new religion, one that they desperately wish to believe in.

Comments

  1. I spent less than five minutes perusing the comments over there. It isn't worth your time. Bob seems to enjoy the echo chamber. Just leave him be.

    Replies
    1. "I spent less than five minutes perusing the comments over there. It isn't worth your time. Bob seems to enjoy the echo chamber. Just leave him be."

      (1) I spent most of the time in the comments defending Gene.

      (2) You think I *enjoy* dealing with those comments?!?!?

  2. I would put this not as establishing intelligence, but as showing that we cannot tell it is not intelligent. There might be some rather small chance that it is intelligent, or some rather large chance that we have not yet devised the means to tell that it is not. Since humans often fail these tests and computers often pass them, our understanding of intelligence is likely lacking; we will have difficulty distinguishing the two, and may never be able to, but we may learn more about what intelligence is.

  3. Oh Gene, what am I going to do with you? You have a great and unique perspective on this stuff, but you are determined to make sure your opponents don't see it. Two quick points because I actually have a lot of work to do:

    (1) I also was complaining about one "irrelevant" part of your analogy. But I am on your side. Indeed you converted me away from materialism. So you have to know it's wrong when you attribute your critics' reactions to blind ideology on their part.

    (2) I understand your perspective (you drove it home with the "Go" commentary--and that was very interesting), but you are mistaken if you think you can salvage the Turing Test by letting the thing being tested communicate in real-time with outsiders. If you think that *is* a trivial detail, then try this one:

    A person takes IBM's "Deep Blue" and Steve Harvey and puts them into a big black box. A studio audience types questions into the box, and reads the output on the screen. Soon everybody is cracking up.

    In this situation, would you say Turing is forced to conclude that IBM's computer is intelligent, and indeed funny? Of course not. If someone wrote that on an Intro to Philosophy exam he'd get it marked wrong and the professor would call his wife in to laugh at this kid.

    (Please Gene, believe me when I say that I understand why *you* think the timing is irrelevant, whether Deep Blue gets help in real-time from Steve Harvey versus relying on instructions previously put into Deep Blue by its programmers. But that is a very minority position. You can't just say "You guys are idiots!" when you're challenging something that fundamental.)

    Replies
    1. You've been the picture of calm reason in arguing with the dogmatic atheists at Free Advice. How is that working out? Most of them have converted?

    2. Gene, Bob isn't criticizing you for having righteous indignation, he's criticizing you for having misplaced righteous indignation. You are taking a controversial step for granted, and criticizing your opponents for not seeing something so obvious, when you are not explaining why it should be obvious to them. (The controversial step being that it doesn't matter whether a computer is getting real-time help from a human being or whether a computer has been programmed in advance by a human being.)

    3. See, Keshav, it doesn't really matter. Arguments aren't what count here anyway. I will explain why in a post later.

  4. Gene, until your opponents see that there is not much to recommend the reductionist-materialist account of the mind they (perhaps implicitly) take as an article of faith, no analogy, no matter how clever, will ever convince them.

  5. Gene wrote:

    "Arguments aren't what count here anyway. I will explain why in a post later."

    I assume when you "explain why" you're going to rely on abstract art or poetry, right? There would be no point in you trying to rationally explain to us your views...

    Replies
    1. I think my new post should clear this up for you.

  6. I don't expect you to publish this (I actually added it as a comment on Bob's blog).

    According to Wikipedia, the Turing test is designed to "test a machine's ability to exhibit intelligent behavior".

    Gene's objection to the test is that AI advocates (whom he, apparently literally, sees as part of a new religion) go further and believe the test shows real, as opposed to merely exhibited, intelligence.

    It is not clear what Gene thinks constitutes real intelligence. He wavers from accepting that machines may think (see http://gene-callahan.blogspot.com/2015/02/a-stunning-misinterpreatation-of-point.html) to apparently stating, as in his most recent posts, that all intelligence in machines is derived from the humans who designed them (and that the machines therefore don't really think?).

    I am not sure how we can bridge the gap between "exhibiting intelligent behavior" and "really being intelligent". Our fellow humans may appear intelligent but might actually be very sophisticated, though not conscious, robots.

    Whether we attribute "real intelligence" or not to things that exhibit intelligent behavior comes down to entirely subjective definitions and classifications.

    Gene seems on the verge of categorizing people who don't accept his subjective definitions and classifications as bad people. His next posts on the topic should be interesting.

    Replies
    1. See, rob, it is good to read more than the first sentence of these articles:

      "The power and appeal of the Turing test derives from its simplicity. The philosophy of mind, psychology, and modern neuroscience have been unable to provide definitions of "intelligence" and "thinking" that are sufficiently precise and general to be applied to machines. Without such definitions, the central questions of the philosophy of artificial intelligence cannot be answered. The Turing test, even if imperfect, at least provides something that can actually be measured. As such, it is a pragmatic attempt to answer a difficult philosophical question."

    2. "Gene seems on the verge of categorizing people who don't accept bis subjective definitions and classifications as bad people."

      rob seems on the verge of just making crap up. Should be interesting to see where this goes.

    3. I certainly agree with those additional lines you quote from the wikipedia article, especially "it is a pragmatic attempt to answer a difficult philosophical question."

  7. So, in case you hadn't seen my reply to you and this follow-up on Bob's blog, the key points are:

    1) The problem wasn't that you used an analogy, but that you left out key parts of the Turing Test that break your specific strategy. Failing to address them shows a poor understanding of the TT, not a good argument.

    2) I actually agree that a reasonable measure of intelligence would indeed have to look at more than black-box I/O, but your post didn't give good grounds for that conclusion.

    3) Some reductionists have indeed argued in favor of 2) above, for example by citing the relevance of scaling -- whether a subject can pass the test with access to only a polynomially large amount of resources rather than an exponential one (with respect to the length of the conversation).

    4) Your updated strategy, of memorizing 10k possible questions for historians, still fails to appreciate the wisdom of the Turing Test, which is that the judge can *adaptively* change the questions. That requires the ability to answer a number of question sequences that grows exponentially in the number of queries allowed, which places a high bar on passing without real-time access to a human, and thus makes the test a legitimate challenge. Memorizing a constant number of questions, even a large one, doesn't help. (A rough calculation is sketched at the end of this comment.)

    5) Your extreme emphasis on how much the programmer did to the computer beforehand still falls prey to the "Einstein's mother" argument: is Einstein's intelligence just a reflection of his mother's? Yes, he could interact with other fonts of wisdom, but so could AlphaGo. Yes, the way he makes decisions is opaque to his mother, but so is AlphaGo with respect to its programmers.

    So if you have a point to make, would it hurt to be a little less obscure about it in the future?
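
    To put rough numbers on point 4 (a toy Python sketch; the branching factor and conversation length are illustrative assumptions, not measurements):

    ```python
    # Rough calculation behind point 4: under adaptive questioning, the number
    # of distinct conversation paths a canned-answer table must cover grows
    # exponentially in the number of turns.

    branching = 50  # assumed: plausible distinct follow-ups per judge question
    turns = 5       # assumed: length of the interrogation, in judge questions

    paths = branching ** turns
    print(f"{paths:,} conversation paths to prepare for")  # 312,500,000

    # A flat table of single questions, however large, covers only the first
    # turn; each further turn multiplies the required entries by the branching
    # factor, so no constant-size table keeps up.
    ```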

    Replies
    1. "but you left out key parts of the Turing Test that that break your specific strategy."

      And you are wrong about that. Live help was irrelevant, since I can program in help. Your response shows a poor understanding of computers.

      "Your updated strategy, of memorizing 10k possible questions for historians, still fails to appreciate the wisdom of the Turing Test, which is that the judge can *adaptively* change the questions..."

      I already handled this, Silas: just write out enough answers that any likely question is covered. Sigh.

    2. "Your extreme emphasis on how much the programmer did to the computer beforehand still falls prey to the "Einstein's mother" argument: is Einstein's intelligence just a reflection of his mother's?"

      God, that is a stupid objection. If you have a point to make, could it not be dumb?
