
The Lovelace Test

As you probably know, Turing predicted in his famous "Computing Machinery and Intelligence" (1950) that by the turn of the century computers would be so smart that when talking to them from a distance (via email, if you will) we would not be able to tell them from humans: they would be able to pass what is now known as the Turing Test (TT). Well, New Year's Eve of 1999 has come and gone, all the celebratory pyrotechnics have died down, and the fact is: AI hasn't managed to produce a computer with the conversational punch of a toddler.

But the really depressing thing is that what progress is being made toward Turing's dream comes only on the strength of clever but shallow trickery. For example, the human creators of artificial agents that compete in present-day versions of the TT know all too well that they have merely tried to fool the people who interact with their agents into believing that these agents really have minds. In such scenarios it's really the human creators against the human judges; the intervening computation is in many ways simply along for the ride.

It seems to me that a better test is one that insists on a certain restrictive epistemic relation between an artificial agent A, its output o, and A's human architect H -- a relation which, roughly speaking, obtains when H cannot account for how A produced o. I call this test the "Lovelace Test" in honor of Lady Lovelace, who believed that only when computers originate things should they be believed to have minds.
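
To fix ideas, here is a rough schematic of this relation (the predicates O and Explains are only my shorthand; the careful statement of the test comes in the next section). Let O(A, o) say that A outputs o, and let Explains(H, A, o) say that H can account for A's producing o by appeal to A's architecture, knowledge base, and core functions. Then, roughly:

   A passes the Lovelace Test relative to H iff, for some output o, O(A, o) and not Explains(H, A, o).

Note that the negated conjunct is epistemic: the test turns on what H (and those who know what H knows) can explain, not on any intrinsic property of the output o itself.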



 