Back in the ’50s, the general belief was that machines would not be able to behave intelligently. Almost seven decades later, machines achieving human-level intelligence seems to be only a matter of time. Ray Kurzweil, a futurist well known for his record of accurate predictions, claims that by 2029 Artificial Intelligence will pass a valid Turing test and thereby reach human-level intelligence. What’s more, Kurzweil seems quite convinced that the Singularity, the moment we multiply our effective intelligence a billionfold by merging with the intelligence we have created, will occur in 2045.
Time will tell, but there are two ways to gauge the future of AI. One is to consider its current successes and failures alongside the ever-increasing power of technology. The other is to look back and see how far it has come in the span of roughly 60 years, which is what we’ll do in this article. We will walk through its boom-and-bust cycles, from its inception to the age of autonomous cars, virtual assistants and AI robots.
The 1956 Dartmouth Conference
In the early ’50s, the study of “thinking machines” went by various names such as cybernetics, automata theory and information processing. By 1956, brilliant scientists such as Alan Turing, Norbert Wiener, Claude Shannon and Warren McCulloch had already been working independently on cybernetics, mathematics, algorithms and network theories.
However, it was computer and cognitive scientist John McCarthy who came up with the idea of joining these separate research efforts into a single field that would study a new frontier for the human imagination – Artificial Intelligence. He coined the term and later helped found the AI labs at MIT and Stanford.
In 1956, McCarthy organized the Dartmouth Conference in Hanover, New Hampshire. Leading researchers in complexity theory, language simulation, neural networks and the relationship between randomness and creative thinking were invited. The purpose of the newly created research field was to develop machines that could simulate every aspect of intelligence. That is why the 1956 Dartmouth Conference is considered the birth of Artificial Intelligence.
Ever since, Artificial Intelligence has lived through alternating periods of glory and scorn, widely known as AI summers and winters. Its summers were characterized by optimism and generous funding, whereas its winters brought funding cuts, disbelief and pessimism.
If you want to browse through the AI topics discussed during the Dartmouth Conference, feel free to delve into this reading. Meanwhile here’s the proposal advanced at the time by John McCarthy (Dartmouth College) together with M. L. Minsky (Harvard University), N. Rochester (IBM Corporation) and C.E. Shannon (Bell Telephone Laboratories).
“We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”
AI Summer 1 [1956-1973]
The Dartmouth Conference was followed by 17 years of incredible progress. Research projects carried out at MIT, Stanford, Carnegie Mellon and the University of Edinburgh received massive funding, which eventually paid off.
It was during those years that computers were first programmed to solve algebra problems, prove geometric theorems, and understand and use English syntax and grammar. In spite of the abandonment of connectionism and the failure of machine translation, which put Natural Language Processing (NLP) research on hold for many years, many accomplishments from that era made history. Here are a few of them:
- Machine learning pioneer Ray Solomonoff laid the foundations of the mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction;
- Thomas Evans created the heuristic ANALOGY program, which allowed computers to solve geometric-analogy problems;
- Unimation, the first robotics company in the world, created the industrial robot Unimate, which worked on a General Motors automobile assembly line;
- Joseph Weizenbaum built ELIZA – an interactive program that could carry on conversations in English on any topic (see the toy sketch after this list);
- Ross Quillian demonstrated semantic nets, whereas Jaime Carbonell (Sr.) developed Scholar – an interactive program for computer-assisted instruction based on semantic nets;
- Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles on AI.
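To give a flavour of how an ELIZA-style program works, here is a minimal pattern-substitution sketch in Python. The patterns and responses below are invented for illustration only; Weizenbaum’s original relied on much richer scripts, most famously DOCTOR, which imitated a psychotherapist.

```python
import re

# A few invented ELIZA-style rules: match a pattern in the user's sentence
# and echo part of it back inside a canned template.
RULES = [
    (re.compile(r"\bI need ([^.?!]+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am ([^.?!]+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."  # fallback when nothing matches

print(respond("I am feeling stuck on this project."))
# -> How long have you been feeling stuck on this project?
```

Simple as it is, this captures the core trick: ELIZA created an illusion of understanding by reflecting the user’s own words back, with no model of meaning behind them.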
Encouraged by these impressive first results, researchers gained hope that by 1985 they would be able to build the first truly thinking machine capable of doing any work a man could do.
AI Winter 1 [1974-1980]
By 1974, the general perception was that researchers had over-promised and under-delivered. Computers simply could not keep pace with the complexity of the problems researchers were tackling: the more complicated the problem, the more computing power it required. For instance, an AI system that analyzed the English language could only handle a 20-word vocabulary, because that was all the computer’s memory could hold.
Back in the ’60s, the Defense Advanced Research Projects Agency (DARPA) had invested millions of dollars in AI research without pressuring researchers to achieve particular results. However, the 1969 Mansfield Amendment required DARPA to fund only mission-oriented direct research. In other words, researchers would only receive funding if their work could produce useful military technology, such as autonomous tanks or battle management systems. Some of it eventually did: the Dynamic Analysis and Replanning Tool, a battle management system, proved its worth during the first Gulf War. In the eyes of the funders, however, that wasn’t enough.
DARPA was disappointed by the failure of the autonomous tank project. The allegedly poor results of the Speech Understanding Research (SUR) program carried out at Carnegie Mellon University did not help either. DARPA had expected a system that could respond to voice commands from a pilot; the system built by the SUR team could indeed recognize spoken English, but only if the words were uttered in a specific order. The disappointment led to massive grant cuts, which put a lot of research on hold.
In spite of the early optimistic projections, funders began to lose trust and interest in the field. Eventually, the limitless spending and vast scope of research were replaced with a “work smarter, not harder” approach.
AI Summer 2 [1981-1987]
This period was governed by what is now known as Weak Artificial Intelligence (Weak AI). The term refers to AI in a narrow sense: systems that accomplish only specific problem-solving tasks, such as configuration, rather than covering the full range of human cognitive abilities.
This new chapter in the history of AI brought to life the “expert systems”: AI programs that could automate highly specific decisions by applying logical rules derived from the knowledge of human experts. It was the first time AI solved real-world problems and saved corporations a lot of money.
For example, before expert systems were invented, people had to order each component of their computer systems separately and often ended up with the wrong cables and drivers, because salespeople at hardware companies put together erroneous, incomplete and impractical configurations. XCON, an expert system developed by Professor John McDermott for the Digital Equipment Corporation (DEC), was a computer configurator that delivered valid, fully configured systems. XCON is reported to have earned DEC approximately $40 million per year.
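To make the idea of an expert system more concrete, here is a minimal sketch of a forward-chaining rule engine in Python. It is only a toy under made-up assumptions; the component names and rules are invented, and the real XCON was written in OPS5 and eventually contained thousands of rules.

```python
# A toy forward-chaining rule engine in the spirit of 1980s expert systems.
# The component names and rules below are invented for illustration; the real
# XCON was written in OPS5 and grew to thousands of rules.

order = {"cpu": "VAX-780", "disks": 2}  # a hypothetical customer order

# Each rule pairs a condition on the order with an action that completes it.
rules = [
    (lambda o: o["disks"] > 0 and "controller" not in o,
     lambda o: o.update(controller="disk-controller-A")),
    (lambda o: o["cpu"] == "VAX-780" and "cabinet" not in o,
     lambda o: o.update(cabinet="standard-cabinet")),
    (lambda o: "cabinet" in o and "power" not in o,
     lambda o: o.update(power="30A-power-supply")),
]

# Forward chaining: keep firing applicable rules until nothing changes.
changed = True
while changed:
    changed = False
    for condition, action in rules:
        if condition(order):
            action(order)
            changed = True

print(order)  # a valid, fully configured order, in miniature
```

The knowledge lives entirely in the rules, which is also why real expert systems were so expensive to maintain: every change in the product line meant hand-editing the rule base.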
At the same time, expert systems gave birth to a new industry – specialized AI hardware. Lisp machines, among the most successful specialized computers of the time, were optimized to run Lisp, then the most popular programming language for AI. The golden age of Lisp machines was short, however. By 1987, Apple and IBM were building desktop computers that were both cheaper and more powerful than Lisp machines, and soon the era of expert systems would come to an end.
Japan, on the other hand, dedicated $850 million to the Fifth Generation Computer initiative – a 10-year project that aimed to write programs and build machines that could carry on conversations, translate languages, interpret images and think like humans. DARPA did not want to lose the AI race to the Japanese government, so it continued to pour money into AI research through the Strategic Computing Initiative, which focused on supercomputing and microelectronics.
AI Winter 2 [1988-2011]
In the late ’80s, AI’s glory began to fade yet again. In spite of their usefulness, expert systems proved too expensive to maintain: they had to be updated by hand, could not really handle unusual inputs and, above all, could not learn. Meanwhile, the advanced desktop computers from Apple and IBM buried the specialized AI hardware industry, which was worth half a billion dollars at the time.
DARPA concluded that AI researchers had under-delivered, so it killed the Strategic Computing Initiative. Japan had not fared much better: none of the goals of the Fifth Generation Computer project had been met, in spite of $400 million in spending. Once again, over-inflated expectations were followed by curbed enthusiasm.
In spite of the funding cuts and the stigma, Artificial Intelligence did make progress across all of its subfields: machine learning, intelligent tutoring, case-based reasoning, uncertain reasoning, data mining, natural language understanding and translation, vision, multi-agent planning and more. Here are some of the highlights:
- In 1997, IBM’s supercomputer Deep Blue beat world chess champion Garry Kasparov in a six-game match; Deep Blue used tree search to look as many as 20 moves ahead in some positions (see the sketch after this list);
- Also in 1997, the Nomad robot built by Carnegie Mellon University navigated more than 200 km of the Atacama Desert in northern Chile, in an attempt to prove that it could also operate on terrain like that of Mars and the Moon;
- In the late ’90s, Cynthia Breazeal from MIT published her dissertation on Sociable Machines and introduced Kismet, a robot with a face that could express emotions (an experiment in affective computing);
- In 2005, Stanley, an autonomous vehicle built at Stanford, won the DARPA Grand Challenge race.
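For readers wondering what “tree search” means in the Deep Blue bullet above, here is a minimal minimax sketch in Python. It only illustrates the idea of looking ahead and backing up scores; the tiny game tree and its values are invented, and Deep Blue’s actual search combined alpha-beta pruning with custom chess hardware and handcrafted evaluation functions.

```python
# A minimal minimax search over a hand-made game tree. A node is either a
# number (a leaf evaluation) or a list of child nodes reached by legal moves.
# The tree and its values are invented purely for illustration.
toy_tree = [
    [3, 5],        # scores reachable after our first candidate move
    [2, [9, 1]],   # the second candidate move leads one level deeper
]

def minimax(node, maximizing=True):
    """Best achievable score assuming both sides play optimally."""
    if isinstance(node, (int, float)):  # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Pick the first move whose subtree (where the opponent moves next) scores best.
best_move = max(range(len(toy_tree)),
                key=lambda i: minimax(toy_tree[i], maximizing=False))
print("best first move:", best_move, "value:", minimax(toy_tree))
# -> best first move: 0 value: 3
```

The deeper such a search goes, the more positions it must evaluate, which is why chess programs of that era needed enormous amounts of specialized computing power.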
Since 2010, Artificial Intelligence has been moving at a fast pace, feeding the hype apparently more than ever. It’s hard to forget that only last year the social humanoid robot Sophia was granted citizenship by the Kingdom of Saudi Arabia. What’s more, this year British filmmaker Tony Kaye announced that his next movie, Second Born, would star an actual robot trained in different acting methods and techniques.
Without a doubt, there’s more to come, since the technologies developed by AI researchers over the past two decades have proven successful in so many fields – machine translation, Google’s search engine, data mining, robotics, speech recognition, medical diagnosis and more. Check back for our next article on Artificial Intelligence, where we’ll be discussing its biggest successes and failures over the past 10 years.