“This is unfortunate. I’m a huge fan of Turing, but his test is indeed inadequate,” Selmer Bringsjord, one of the designers of the Lovelace Test, a more rigorous AI detector, told me in an interview.

The nature of the Turing Test is that it pits a human interlocutor against a computer program. The machine basically has to trick the human into thinking it’s a person, which makes the test essentially a human-to-human match of wits: the programmer only has to build a program that can fool the judge into thinking it’s intelligent. In Goostman’s case, giving the bot a young age and foreign nationality played into the manipulation.

What’s more, to be effective, chatbots designed to pass the Turing Test only have to mimic basic language skills rather than demonstrate genuine machine intelligence: they order words and phrases in a convincing way without knowing what they mean.
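To see how shallow that kind of mimicry can be, consider a minimal, hypothetical sketch in the ELIZA tradition (this is purely illustrative and not the code behind Goostman or any real chatbot): a handful of pattern-matching rules that recycle the user’s own words can already produce superficially fluent replies.

```python
import re
import random

# Toy, ELIZA-style responder. It rearranges the user's own words with
# canned templates; nothing here models meaning or understanding.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), ["How long have you been {0}?",
                                        "Why do you say you are {0}?"]),
    (re.compile(r"\bI (?:like|love) (.+)", re.I), ["What do you like most about {0}?"]),
    (re.compile(r"\?$"), ["That's a good question. What do you think?"]),
]
FALLBACKS = ["Interesting. Tell me more.", "Why do you say that?", "I see. Go on."]

def reply(message: str) -> str:
    # Return the first matching canned template, reusing the user's phrasing.
    for pattern, templates in RULES:
        match = pattern.search(message)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(reply("I am worried about exams"))   # e.g. "Why do you say you are worried about exams?"
print(reply("I like astronomy"))           # e.g. "What do you like most about astronomy?"
print(reply("The weather is odd today"))   # falls back to a generic prompt
```

A judge chatting with something like this can be strung along for a while, even though the program has no idea what any of the words refer to.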
The Lovelace Test is designed to be more rigorous, testing for true machine cognition. Bringsjord devised it in the early 2000s with a team of computer scientists that included David Ferrucci, who later went on to develop the Jeopardy-winning Watson computer for IBM. They named it after Ada Lovelace, often described as the world’s first computer programmer.

The Lovelace Test removes the potential for manipulation on the part of the program or its designers and instead tests for genuine autonomous intelligence: human-like creativity and origination, not mere manipulation of syntax. Until a machine can originate an idea it wasn’t designed to produce, Lovelace argued, it can’t be considered intelligent in the same way humans are.