A more-than-somewhat oblique look at the issues of trying to define machine ‘intelligence’
Two hobbits are sat in a tavern in a quiet backwater of The Shire. After a prolonged silence devoted to the appreciation of their mugs of Bucktooth’s Old Tawny, one turns lazily to the other and mumbles …
“I see the Rollyberry Magic Man’s been caught out once and for all!”
“How’s that?” asks the other.
“Those acorns he was making appear out of thin air … Turned out he had them stuffed up his sleeves all along.”
“So, he’s a fake then?”
“S’pose so. Doesn’t surprise me though. I always had my suspicions about that bloke. Old Bill Gamwise reckons, when he made that fire appear from nowhere, he had a contraption made out of flints in his pocket.”
“Hmm, so probably all the magic he’s done has been phoney then?”
“Yep, reckon so … “
A further period of ‘reflection’ follows … Eventually …
“So, do you think there really IS any magic then?”
So, a computer has passed the ‘Turing Test’ for ‘intelligence’, has it? No, not really; in fact, no, not at all. But, boy, has it stirred up some public interest in the subject! That alone has to be good. More than that, it’s got senior computer scientists debating anew about how the test should be implemented … and even what it actually means.
The usual bite-sized version of the Turing Test (TT) for public consumption is this … Put a human in one room, in communication with both another human and a computer in a different room. In modern terms, the communication would take the form of something like a text message conversation with each. If the first human couldn’t tell which correspondent was the human and which was the computer (or guessed wrongly), then the computer would be deemed intelligent. Last week, there was widespread coverage in the press that a computer – well, a computer program – had passed the test.
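To make the shape of that protocol concrete, here is a minimal toy sketch in Python. Everything in it is a hypothetical stand-in: the two canned respondents and the judge’s one-line heuristic are illustrations of the *structure* of the imitation game, not of any real chatbot, real judge, or the program from the press coverage.

```python
import random

# Hypothetical stand-in for the hidden human respondent.
def human_respondent(prompt):
    return "Honestly, I'd have to think about that over a pint."

# Hypothetical stand-in for the hidden computer respondent.
def computer_respondent(prompt):
    return "I am processing your query about: " + prompt

def naive_judge(transcript):
    """Toy judging heuristic: guess 'computer' if the replies look
    templated, otherwise guess 'human'."""
    if any("processing your query" in reply for reply in transcript):
        return "computer"
    return "human"

def run_test(prompts):
    """Run one round of the imitation game: hide the two respondents
    behind anonymous labels, collect a transcript from each, and have
    the judge deliver a verdict per label."""
    contestants = [human_respondent, computer_respondent]
    random.shuffle(contestants)  # the judge must not know which is which
    respondents = {"A": contestants[0], "B": contestants[1]}
    verdicts = {}
    for label, respond in respondents.items():
        transcript = [respond(p) for p in prompts]
        verdicts[label] = naive_judge(transcript)
    return verdicts

print(run_test(["Where do acorns come from?"]))
```

The computer “passes” only if the judge labels it “human” (or guesses wrongly); with this deliberately crude judge and respondent it never does, which is rather the point: the interesting arguments are all about what a fair judge, a fair conversation, and a fair pass criterion would actually look like.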
Well, it’s hard to know where to begin with what’s wrong with this …