You may have seen or heard about IBM’s Watson computer when it competed on “Jeopardy”. Of course, it easily defeated its human counterparts – having instant access to 200 million pages (four terabytes) of stored data.
That was impressive, to be sure…but there’s one task Watson can’t perform very well: comprehension. It’s one thing to have a lot of facts, but quite another to use them to form a conclusion.
Ever since the advent of computers, science fiction writers have envisioned futuristic worlds in which thinking machines have outpaced humans in terms of intelligence.
But scientists who work with artificial intelligence (AI) have learned over the years that simply increasing the number of processor cycles won’t get them to that point any time soon. The brain is far more complex than anything our best efforts to rival it have produced. Progress is being made, however, and given what we’ve seen so far, there’s reason to be optimistic.
But what happens when you take a state-of-the-art AI device and try to measure its intelligence in human terms – using a standard IQ test administered to children? A research team with members from the University of Illinois and a research group in Hungary recently did exactly that with ConceptNet 4, an AI system developed at the Massachusetts Institute of Technology.
ConceptNet is a semantic network that contains a large store of information used to teach the system to understand concepts. For example, ConceptNet knows that a saxophone is a musical instrument, just as any conventional database might – but it goes further, processing the relationships between different things. It learns that a saxophone is used a great deal in jazz, and it can answer questions that draw on that fact.
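To make the idea of a semantic network concrete, here is a minimal sketch in Python. It stores knowledge as (subject, relation, object) triples; the relation names `IsA` and `UsedFor` mirror the style ConceptNet uses, but the class itself is a hypothetical toy structure, not ConceptNet’s actual implementation or API.

```python
# A toy semantic network: facts are stored as (subject, relation, object)
# triples. Relation names like "IsA" and "UsedFor" echo ConceptNet's
# vocabulary, but this class is purely illustrative.
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        # relation -> subject -> set of related objects
        self.edges = defaultdict(lambda: defaultdict(set))

    def add(self, subject, relation, obj):
        """Record one fact, e.g. ('saxophone', 'IsA', 'musical instrument')."""
        self.edges[relation][subject].add(obj)

    def query(self, subject, relation):
        """Return every object linked to `subject` by `relation`."""
        return self.edges[relation].get(subject, set())

net = SemanticNetwork()
net.add("saxophone", "IsA", "musical instrument")
net.add("saxophone", "UsedFor", "jazz")

print(net.query("saxophone", "IsA"))      # {'musical instrument'}
print(net.query("saxophone", "UsedFor"))  # {'jazz'}
```

The point of the structure is that a question like “what is a saxophone used for?” becomes a simple lookup along a named relation, rather than a keyword match against raw text.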
So how did it do on the test?
In their report, the researchers state, “We found that the WPPSI-III VIQ psychometric test gives a WPPSI-III VIQ to ConceptNet 4 that is equivalent to that of an average four-year old. The performance of the system fell when compared to older children, and it compared poorly to seven year olds.”
So despite all the impressive abilities demonstrated by AI in the realms of fact retention, mathematics, pattern recognition and so forth, in real terms it still remains – for now – the digital version of a slightly odd preschooler.