Are We There Yet?
Nello Cristianini – University of Bristol
[NOTE: this article is currently submitted for publication, and is based on my keynote speeches at ICANN 2008 and ECML/PKDD 2009]
Statistical approaches to Artificial Intelligence are behind most success stories of the field in the past decade. The idea of generating non-trivial behaviour by analysing vast amounts of data has enabled recommendation systems, search engines, spam filters, optical character recognition, machine translation and speech recognition, among other things. As we celebrate the spectacular achievements of this line of research, we need to assess its full potential and its limitations. What are the next steps to take towards machine intelligence?
Machine Intelligence, AD 1958
On November 23rd, 1958, a diverse group of scientists from around the world and from many disciplines gathered near London for a conference that lasted four days and involved about 200 people. The topic was: can machines think?
The conference was called “On the Mechanisation of Thought Processes”, and its proceedings encapsulate the zeitgeist of those days while giving us a chance to reflect on the achievements and directions of research in Machine Intelligence.
That group of engineers, biologists, and mathematicians represented both the early ideas of Cybernetics and the newly emerging ideas of Artificial Intelligence. They were brought together by the common vision that mental processes can be created in machines. Their conviction was that natural intelligence could be understood in the light of the laws of science, a position spelled out in Alan Turing’s 1947 paper “On Intelligent Machinery”. They also believed that it could be reproduced in artefacts.
Their common goals were clearly stated: understanding intelligent behaviour in natural systems and creating it in machines. The key challenges were identified and named in the Preface of the proceedings: “This symposium was held to bring together scientists studying artificial thinking, character and pattern recognition, learning, mechanical language translation, biology, automatic programming, industrial planning and clerical mechanisation. It was felt that a common theme in all these fields was ‘the mechanisation of thought processes’ and that an interchange of ideas between these specialists would be very valuable”.
A further look at the two volumes of the Proceedings reveals a general organisation that is still found in modern meetings in this area. Sessions were devoted to: General Principles; Automatic Programming; Mechanical Language Translation; Speech Recognition; Learning in Machines; Implications for Biology; Implications for Industry.
The list of participants included both members of the Cybernetics movement (from both the UK Ratio Club and the US Macy Conferences) and exponents of the newly emerging AI movement. It included Frank Rosenblatt (inventor of the Perceptron); Arthur Samuel (inventor of the first learning algorithm); Marvin Minsky (one of the founding fathers of AI); Oliver Selfridge (inventor of the Pandemonium architecture, a paradigm for modern agent systems); John McCarthy (inventor of LISP, and of the name Artificial Intelligence); Donald MacKay (cyberneticist); Warren McCulloch (co-inventor of the neural network model still used today); Ross Ashby (builder of the Homeostat); Grey Walter (roboticist).