The Turing Test and Google Translate

You have to hire a Croatian translator. Two candidates for the job show up to interview: me, who knows not a word of Croatian, and my friend Neil, who is Croatian. You begin to interview us, but I have a trump card: I have brought along my attorney, Elen Touring, who says, "Wait a second, this is not fair to my client: Mr. Ganic here simply looks much more Croatian than Dr. Callahan, and so you must interview them blind. In addition, since Mr. Ganic sounds more Croatian, and you only need someone to translate written material, it is only fair that you can't hear them either. You must test them both, behind curtains, without knowing which is which."

So we are both concealed in curtained rooms. In Neil's, he translates Croatian text to English "by hand." But in mine, I just type it into Google Translate on my phone and write down the English output. When you judge my output superior, Attorney Touring insists you must hire me.

Someone may wish to point out, "Well, that means Google Translate is doing a great job!" So it does: but it also means that I am completely superfluous. By not allowing you to look behind the curtain, Attorney Touring has seriously hampered your ability to evaluate your potential translators (including Google). And we regularly want to "peek behind the curtain" in doing such evaluations: at students taking tests, at job applicants, at witnesses testifying in court. The demand for the curtain is a demand that we look only at the surface of phenomena, and forgo any analysis of how that surface emerged. If a mechanical horse bucks like a horse, it is a horse. If Viola in disguise convinces Olivia she is a man, then she is a man.

A great theme in Shakespeare was appearance versus reality. But Alan Turing gives us the metaphysics of appearance as reality.

NOTE: Obviously the above story involving Google Translate resembles John Searle's Chinese room thought experiment. But I think it is sufficiently different to be interesting in its own right.

Comments

  1. The curtain would be useful if I really wanted to determine whether Google Translate was better than a real live Croatian, and wanted to eliminate any prejudices I might have against software translators.

    Similarly, if something behind a curtain appears to be conscious and fails none of the tests I set for it to disprove its consciousness, and then I look behind the curtain and discover a robot - what grounds would there be to change my mind about its consciousness?

    Replies
    1. I don't know, rob: if Viola appears to be a man, what grounds would there possibly be to declare her a woman when you take off her pantaloons?

      Other than a dull insistence that if something appears to be like X, it IS X?

    2. Or alternately: as someone who actually writes computer programs, I know with great certainty that every single little bit of intelligence in any program I write came from me, or the language writer, or the OS writer, etc. If *I* write the wrong code, never once has the computer said, "Oh, Gene didn't mean that: let me fill in the right value here."

      So just as I know a rabbit trap I set doesn't fall because it *knows* there is a rabbit there, but because that is what I built it to do, so I know my option trading system did not display fair values because it *understands* that is what options are worth, but because that is what I programmed it to do.

    3. Perhaps this comes down to definitions.

      The definition of "woman" probably consists of physical attributes that could not be ascertained from behind a curtain.

      The definition of "conscious being" is trickier and this is probably why Turing came up with his test.

      I am sure you can imagine that a program might be written one day that could act as though it were an intelligent, conscious being. Unless you believe that "consciousness" and "intelligence" are defined by physical attributes, like gender, how would you conclude it is not "conscious" or "intelligent"?

    4. Well, rob, you could note that it is a machine, *built* by an intelligent critter, for the purpose of acting *as if* it is intelligent.

      Thermostats act *as if* they get chilly, and then turn the heat on. Do you think they really do, or do you note that they are simply a mechanism built to serve our purposes when we get chilly? (A minimal sketch of such a mechanism appears at the end of this thread.)

    5. Well, if my thermostat claimed to be conscious and passed the Turing test, why would the fact that it was a machine built by humans mean that it could not in fact have consciousness? What definition of consciousness do you have that excludes that possibility?

    6. Rob, ok:
      1) When my cat moves near the fire, I think it is because it is chilly. But it has never said to me "I am conscious," nor could it pass a Turing test. So why should I think the cat is chilly, but not my thermostat?

      2) Where did I ever say it was impossible my computer, or even my thermostat, is conscious? Nowhere!

      All I have ever said is that the fact a machine does exactly what I constructed it to do cannot possibly be evidence that it is conscious.

    7. Well OK, but when you talk about the importance of looking beyond the surfaces of things and say:

      'But Alan Turing gives us the metaphysics of appearance as reality.'

      I thought you were suggesting that when an entity acts as if it has the attributes of intelligence and/or consciousness, there are some additional objective tests that could be done to see if it really had these attributes, if only one were allowed to see behind the curtain (as would be the case with Viola's claim to maleness).

    8. Of course there are additional "objective" tests! The thing is, you would apply them only to the thermostat, and not to the "talking" computer!

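    A minimal, hypothetical sketch of the thermostat point above, in Python. The variable names, threshold, and readings are illustrative assumptions, not anything from the discussion; the only point is that the device's "acting as if it gets chilly" is exhausted by a comparison its builder wrote in.

      # A thermostat "behaving as if" it feels chilly.
      # Every bit of its apparent preference was put there by whoever set it up.
      DESIRED_TEMP_F = 68.0  # the builder's comfort level, not the device's

      def thermostat_step(current_temp_f, heater_on):
          """Return whether the heater should be on after this reading.

          The device does not get chilly; it compares a number to a
          threshold chosen by whoever installed it.
          """
          if current_temp_f < DESIRED_TEMP_F - 1.0:   # below the band: heat on
              return True
          if current_temp_f > DESIRED_TEMP_F + 1.0:   # above the band: heat off
              return False
          return heater_on                            # inside the band: no change

      # The "as if" behavior is exhausted by this loop.
      heater = False
      for reading in [66.0, 67.5, 69.5, 70.5]:
          heater = thermostat_step(reading, heater)
          print(reading, "->", "heater on" if heater else "heater off")

    Nothing in the loop addresses whether anything here feels anything; it simply does what the comparison says.
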
  2. You always have to stop at some curtain. How do I know you are not a cyborg, short of cutting you open? Even that's not enough; perhaps your controller uses something other than the pineal gland, something of which I am unaware.

    So why is this impossible: when HAL 3000 passes the room test, the next test wraps him in a flesh simulacrum that you can observe through glass, and when HAL 4000 passes that...?

    Replies
    1. Wow, Ken, but Turing says "Stop at the very first curtain you are presented with"! That doesn't sound very "scientific" to me. But furthermore, your insistence that all of the curtains are merely physical is a metaphysical assumption. Why can't we try other forms of analysis to penetrate past your merely physical curtains... unless we already assume (like Turing) the metaphysics of materialism?
  3. Having now read the link about the Chinese room, I probably should have said "robot, or man with sufficient paper, pencils, erasers, and filing cabinets."

  4. (1) Very often, we don't care about the implementation of the thing, but its functionality.

    (2) Our definition of function may be complicated. It may or may not include running time, error rate, how often the thing will break down, etc. The Turing Test forces us to focus on whether the thing we care about can be expressed in terms of functionality. If the Google Translate team takes 10 years to come back with their answer, then we may not want to hire them.

    (3) Functionality is subjective. A racist may care whether his translator has black skin. A non-racist would not.

    (4) I think that if we're going to study reasoning, then functionality is paramount. Whether the thinking happened with neurons or silicon or neurons with an embedded chip is interesting only if it makes a functional difference in the thoughts and reasoning that arise. I *also* think that philosophers who study reasoning without focusing on functionality tend to get lost in the weeds, rather than discover "new insights". Searle is a good example of this type of failure. See the discussion of Searle's Chinese Room at http://www.scottaaronson.com/democritus/lec4.html

    (5) There are an *awful lot* of cases where the quality of the translation is going to matter more to the interviewer than the process of translation. This is not true for every case. For example, if we were hiring the translator as a tour guide in our tour company, then we would want someone our customers are comfortable with.

    Replies
    1. 1) OK: so what?

      2) Turing did not ask, "Does the machine carry out the function of conversation?" He was asking "Is this machine intelligent?"

      His test provides an answer to 1); it fails totally to provide an answer to the question it purports to answer.

      3) Red herring. Turing was trying to offer an *objective* test for whether a machine thinks.

      4) Sheer assertion.

      And thinking does not happen with neurons or with chips. Thought is immaterial. This has been demonstrated 100 times.

      5) Now I am completely lost as to the relevance of your point.
