They say it’s not the winning that counts but the taking part. Watson managed to do both, but perhaps the winning actually counted for less. All the indications are that Watson’s win was as much about his (or its) speed on the buzzer as about his knowledge.
As competitor Brad Rutter said, “On any given night nearly all the contestants know nearly all the answers, so it’s just a matter of who masters buzzer rhythm the best.”
So the impressive thing is that Watson had sufficient knowledge to be a credible contestant; the winning was secondary.
While of course the sheer power of the computers involved and the man-decades of development count for a lot, the real key to Watson’s success was its use of semantic markup. Most chatbots (of which Watson is a specialized type) rely on matching an input pattern against a database of possible patterns, and then returning the response programmed for that pattern. This is how Pandorabots (much used in Second Life) work, and to an extent our own Discourse system. The problem is that this approach does not scale, and the bot “knows” nothing: it just matches characters.
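To make the pattern-matching approach concrete, here is a minimal sketch in Python. The patterns and responses are invented for illustration (this is not Pandorabots’ actual AIML engine); the point is that anything outside the pattern list draws a blank, because no knowledge sits behind the strings.

```python
import re

# Each entry pairs an input pattern with a canned response.
# These patterns are hypothetical examples, not a real bot's database.
PATTERNS = [
    (re.compile(r"what colou?r is the sea", re.I), "The sea is blue."),
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How can I help?"),
]

def respond(text):
    """Return the canned response for the first matching pattern."""
    for pattern, response in PATTERNS:
        if pattern.search(text):
            return response
    # No pattern matched: the bot "knows" nothing, it only matches characters.
    return "I don't understand."

print(respond("What colour is the sea?"))  # The sea is blue.
print(respond("Why is the sea blue?"))     # I don't understand.
```

A slight rephrasing of a question the bot “answers” correctly defeats it entirely, which is exactly the scaling problem described above.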
With a semantic approach the bot has a web of knowledge. The purest way is to use triples: very simple relationships between a subject (“sea”), a predicate (“has colour”), and an object (“blue”). When these elements are then linked into an ontology of things (blue is a colour, sea is made of water, sea is a habitat, sea is a geographic feature), the bot can begin to make deductions and links between the words in the input text.
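A toy version of such a triple store, using the article’s own examples, might look like this. The data and helper functions are illustrative only; real systems use RDF stores and query languages such as SPARQL, but the principle of following links between entities is the same.

```python
# Toy triple store: (subject, predicate, object) facts from the article.
TRIPLES = {
    ("sea", "has colour", "blue"),
    ("blue", "is a", "colour"),
    ("sea", "is made of", "water"),
    ("sea", "is a", "habitat"),
    ("sea", "is a", "geographic feature"),
}

def facts_about(entity):
    """Collect every triple mentioning the entity, as subject or object."""
    return {t for t in TRIPLES if t[0] == entity or t[2] == entity}

def entities_linked_to(entity):
    """Entities one hop away: the 'web of knowledge' the bot can traverse."""
    linked = set()
    for subj, _pred, obj in facts_about(entity):
        linked.update({subj, obj} - {entity})
    return linked

print(sorted(entities_linked_to("sea")))
# ['blue', 'geographic feature', 'habitat', 'water']
```

Unlike the pattern matcher, this representation lets the bot connect “sea” to “water”, “habitat”, and “colour” even when the question never uses those words, which is the basis for the deductions described above.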
This means it can deduce the things that the question might be alluding to, and try to map them to an answer. Watson is not the first chatbot to use semantic markup (we’ve been doing similar things since 2008, just not with IBM’s resources; see our Halo bot talking about ants and stars), but we are in no doubt that semantic representation will be key to chatbots in the future.
As to whether Watson is the future and represents a great leap for artificial intelligence, it all depends on what you are looking for. If you see AI as being about god-like bots, Terminator’s Skynet, or, more prosaically, expert systems, then yes this is a step forward. Wolfram Alpha was one step towards the “global answer engine” (but Google and Wikipedia more so), and Watson is another step. But these are brute-force solutions to brute-force problems.
One of the truisms of machine intelligence is that as long as a machine is unable to do a particular task, people say that the task requires intelligence. But once the machine can do it, they say that the task didn’t need intelligence after all. Chess is a good example. Once seen as almost the best measure of an “intelligent” person, chess came to be seen as more of a brute-force problem after Deep Blue beat Kasparov in 1997, and no one claimed that Deep Blue (or the chess app on your smartphone) was intelligent.
The issue is that Watson is an “answer engine,” not a “conversation engine,” and definitely not an “artificial general intelligence.”
Trying to mimic the flow of human conversation is something that still appears to be beyond our reach, and for this Watson is actually heading in the wrong direction. Human conversation is often very imprecise, even inaccurate, but it flows. If I asked Watson what 16,353 times 9,543 was, he’d probably know; most humans wouldn’t. In creating more “human” bots we actually find we have to try to dumb the computer down.
For example, in a virtual world we don’t want our bot to say “you are at 123.2, 31.42, 78.12,” but rather “you are standing on my foot” or “you’re a couple of feet from the door.” While giving IBM credit for their achievement with Watson, I’ll personally be more impressed when IBM creates a believable four-year-old rather than a Jeopardy champion.
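The “dumbing down” above can be sketched as a small translation layer. The landmark names, positions, and distance thresholds below are invented for illustration; the idea is simply that the bot reports its position relative to the nearest landmark instead of reciting coordinates.

```python
import math

# Hypothetical virtual-world landmarks, each at an (x, y, z) position.
LANDMARKS = {
    "the door": (121.0, 30.0, 78.0),
    "the window": (140.0, 45.0, 78.0),
}

def describe_position(pos):
    """Describe a position relative to the nearest landmark, not as numbers."""
    name, landmark = min(LANDMARKS.items(),
                         key=lambda item: math.dist(pos, item[1]))
    d = math.dist(pos, landmark)
    if d < 1.0:
        return f"you're right at {name}"
    if d < 3.0:
        return f"you're a couple of feet from {name}"
    return f"you're about {round(d)} feet from {name}"

print(describe_position((123.2, 31.42, 78.12)))
# you're a couple of feet from the door
```

The precise coordinates from the article come out as the vaguer but far more human “a couple of feet from the door,” which is exactly the direction a conversation engine has to move in.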
These two paths, towards a global answer engine and towards an analogue of a “real” human, are both valid, and to an extent connected, but they are different paths. And I feel that humanity will ultimately be more challenged by the possibilities of the latter than by Watson and his descendants.