I have great respect for Ray Kurzweil as an inventor. After all, his inventions are the basis of our work: he built the first scanner, invented OCR, and contributed a great deal to artificial intelligence. But in his philosophical and scientific writing he sometimes takes things too far, both in his older books and now in his new book, “How to Create a Mind”.
In this book Kurzweil presents a pattern recognition theory of mind (PRTM). There is some truth to it: pattern recognition does play an important role in human cognitive capabilities, and we have built many algorithms on this assumption. But we know today that the more complex cognitive functions rely on rule-based frameworks.
A very good overview and critical review by Gary Marcus, a professor of psychology at N.Y.U., can be found in the New Yorker:
“Kurzweil illustrates this thesis[PRTM] in the context of a system for reading words. At the lowest level, a set of pattern recognizers search for properties like horizontal lines, diagonal lines, curves, and so forth; at the next level up, a set of pattern recognizers hunt for letters (A, B, C, and so forth) that are built out of conjunctions of lines and curves; and at still a higher level, individual pattern recognizers look for particular words (like APPLE, PEAR, and so on that are built out of conjunctions of letters).”
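The reading hierarchy described in that passage can be sketched in a few lines of code. This is a toy illustration of the idea, not Kurzweil's implementation: the stroke features, letter templates, and word list below are all hypothetical, chosen only to make the three levels (strokes, letters, words) concrete.

```python
# Toy three-level hierarchical pattern recognizer in the spirit of the
# reading example: strokes -> letters -> words. All templates are
# illustrative assumptions, not a real OCR feature set.

# Level 2: each letter is defined by a set of low-level stroke features.
LETTER_TEMPLATES = {
    "A": frozenset({"diag_left", "diag_right", "horiz_mid"}),
    "P": frozenset({"vertical", "curve_top"}),
    "E": frozenset({"vertical", "horiz_top", "horiz_mid", "horiz_bot"}),
    "L": frozenset({"vertical", "horiz_bot"}),
    "R": frozenset({"vertical", "curve_top", "diag_right"}),
}

# Level 3: words are conjunctions of letters.
KNOWN_WORDS = {"APPLE", "PEAR"}

def recognize_letter(strokes):
    """Match a set of stroke features against the letter templates."""
    for letter, template in LETTER_TEMPLATES.items():
        if strokes == template:
            return letter
    return None  # no template matched

def recognize_word(stroke_sets):
    """Recognize letters per position, then look up the letter sequence."""
    letters = [recognize_letter(s) for s in stroke_sets]
    if None in letters:
        return None
    word = "".join(letters)
    return word if word in KNOWN_WORDS else None

# Feed the recognizer the stroke sets for P, E, A, R.
pear_strokes = [LETTER_TEMPLATES[c] for c in "PEAR"]
print(recognize_word(pear_strokes))  # PEAR
```

Note how brittle this is: any stroke set or letter sequence outside the fixed templates yields nothing, which is essentially the narrow-domain limitation Marcus goes on to describe.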
Gary Marcus comes to the same conclusion that we reached in our work with statistical classifiers over the last decade. And he gives a telling example that points directly to the need for real language understanding and semantic systems:
“What Kurzweil doesn’t seem to realize is that a whole slew of machines have been programmed to be hierarchical-pattern recognizers, and none of them works all that well, save for very narrow domains like postal computers that recognize digits in handwritten zip codes. This summer, Google built the largest pattern recognizer of them all, a system running on sixteen thousand processor cores that analyzed ten million YouTube videos and managed to learn, all by itself, to recognize cats and faces—which initially sounds impressive, but only until you realize that in a larger sample (of twenty thousand categories), the system’s overall score fell to a dismal 15.8 per cent.
The real lesson from Google’s “cat detector” is that, even with the vast expanses of data and computing power available to Google, hierarchical-pattern recognizers still stink. They cannot come close to actually understanding natural language, or anything else for which complex inference is required.”
A good point, and an encouragement to continue working on semantic analysis.