Artificial intelligence (AI) is both the intelligence of machines and the branch of computer science which aims to create it.
Major AI textbooks define artificial intelligence as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. AI can be seen as a realization of an abstract intelligent agent (AIA) which exhibits the functional essence of intelligence. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."
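The textbook definition above can be made concrete with a toy sketch: an agent that repeatedly perceives its environment and chooses whichever action brings it closest to a goal state. All names here (`GridEnvironment`, `ReflexAgent`) are illustrative inventions for this example, not from any textbook or library.

```python
class GridEnvironment:
    """A one-dimensional world: the agent starts at 0 and succeeds at `goal`."""
    def __init__(self, goal=3):
        self.position = 0
        self.goal = goal

    def percept(self):
        # What the agent can observe about its environment.
        return self.position

    def apply(self, action):
        # Actions are steps of -1 or +1 along the line.
        self.position += action


class ReflexAgent:
    """Chooses the action that maximizes its chance of success,
    here simplified to minimizing distance to the goal."""
    def __init__(self, goal):
        self.goal = goal

    def choose(self, percept):
        return min((-1, +1), key=lambda a: abs(self.goal - (percept + a)))


env = GridEnvironment(goal=3)
agent = ReflexAgent(goal=env.goal)

# The perceive-act loop: observe, decide, act, until successful.
for _ in range(10):
    if env.percept() == env.goal:
        break
    env.apply(agent.choose(env.percept()))

print(env.position)  # → 3
```

Real agents differ mainly in scale: the environment is richer, the percepts are partial, and "success" is an expected utility rather than a simple distance, but the perceive-act structure is the same.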
Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, perception, and the ability to move and manipulate objects. General intelligence (or "strong AI") has not yet been achieved and is a long-term goal of AI research.
AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, ontology, operations research, economics, control theory, probability, optimization and logic. AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others. Other names for the field have been proposed, such as computational intelligence, synthetic intelligence, intelligent systems, or computational rationality.
In the middle of the 20th century, a handful of scientists began a new approach to building intelligent machines, based on recent discoveries in neurology, a new mathematical theory of information, an understanding of control and stability called cybernetics, and above all, by the invention of the digital computer, a machine based on the abstract essence of mathematical reasoning.
The field of modern AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English. By the mid-1960s their research was heavily funded by the U.S. Department of Defense, and they were optimistic about the future of the new field:
1965, H. A. Simon: "[M]achines will be capable, within twenty years, of doing any work a man can do"
1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."
These predictions, and many like them, would not come true. Researchers had failed to recognize the difficulty of some of the problems they faced. In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, the U.S. and British governments cut off funding for undirected, exploratory research in AI. This was the first AI Winter.
In the early 80s, AI research was revived by the commercial success of expert systems (a form of AI program that simulated the knowledge and analytical skills of one or more human experts) and by 1985 the market for AI had reached more than a billion dollars. Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow. Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, more lasting AI Winter began.
In the 90s and early 21st century AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas. The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.
Can the brain be simulated by a digital computer? If it can, then would the simulation have a mind in the same sense that people do?
In a classic 1950 paper, Alan Turing posed the question "Can Machines Think?" In the years since, the philosophy of artificial intelligence has attempted to answer it.
1- Poole, Mackworth & Goebel 1998, p. 1 (who use the term "computational intelligence" as a synonym for artificial intelligence). Other textbooks that define AI this way include Nilsson (1998) and Russell & Norvig (2003), who prefer the term "rational agent" and write "The whole-agent view is now widely accepted in the field" (Russell & Norvig 2003, p. 55).
2- This definition, in terms of goals, actions, perception and environment, is due to Russell & Norvig (2003). Other definitions also include knowledge and learning as additional components.
3- Abstract Intelligent Agents: Paradigms, Foundations and Conceptualization Problems, A.M. Gadomski, J.M. Zytkow, in "Abstract Intelligent Agent, 2". Printed by ENEA, Rome 1995, ISSN 1120-558X
4- Although there is some controversy on this point (see Crevier 1993, p. 50), McCarthy states unequivocally "I came up with the term" in a c|net interview. (See Getting Machines to Think Like Us.)
5- See John McCarthy, What is Artificial Intelligence?
6- This list of intelligent traits is based on the topics covered by the major AI textbooks, including: Russell & Norvig 2003, Luger & Stubblefield 2004, Poole, Mackworth & Goebel 1998 and Nilsson 1998.
7-General intelligence (strong AI) is discussed by popular introductions to AI, such as: Kurzweil 1999, Kurzweil 2005, Hawkins & Blakeslee 2004
8-Russell & Norvig 2003, pp. 5-16
9-See AI Topics: applications
10-Poole, Mackworth & Goebel 1998, p. 1
11- The name of the journal Intelligent Systems
12- Russell & Norvig 2003, p. 17
13- McCorduck 2004, p. 5, Russell & Norvig 2003, p. 939
14-The Egyptian statue of Amun is discussed by Crevier (1993, p. 1). McCorduck (2004, pp. 6-9) discusses Greek statues. Hermes Trismegistus expressed the common belief that with these statues, craftsman had reproduced "the true nature of the gods", their sensus and spiritus. McCorduck makes the connection between sacred automatons and Mosaic law (developed around the same time), which expressly forbids the worship of robots.
15- McCorduck 2004, pp. 13-14 (Paracelsus)
16-Needham 1986, p. 53
17-McCorduck 2004, p. 6
18-A Thirteenth Century Programmable Robot
19-McCorduck 2004, p. 17
20-McCorduck 2004, p. xviii
21- McCorduck (2004, pp. 190-25) discusses Frankenstein and identifies the key ethical issues as scientific hubris and the suffering of the monster, e.g. robot rights.
22-Robots could demand legal rights
23- See the Times Online, Human rights for robots? We're getting carried away
24- Robot rights: Russell & Norvig 2003, p. 964
25-Russell & Norvig (2003, p. 960-961)
27- Joseph Weizenbaum (the AI researcher who developed the first chatterbot program, ELIZA) argued in 1976 that the misuse of artificial intelligence has the potential to devalue human life. Weizenbaum: Crevier 1993, pp. 132-144, McCorduck 2004, pp. 356-373, Russell & Norvig 2003, p. 961 and Weizenbaum 1976
28-Singularity, transhumanism: Kurzweil 2005, Russell & Norvig 2003, p. 963
29-Quoted in McCorduck (2004, p. 401)
30- Among the researchers who laid the foundations of the theory of computation, cybernetics, information theory and neural networks were Claude Shannon, Norbert Wiener, Warren McCulloch, Walter Pitts, Donald Hebb, Donald MacKay, Alan Turing and John von Neumann. McCorduck 2004, pp. 51-107, Crevier 1993, pp. 27-32, Russell & Norvig 2003, pp. 15, 940, Moravec 1988, p. 3.
31- Crevier 1993, pp. 47-49, Russell & Norvig 2003, p. 17
32-Russell and Norvig write "it was astonishing whenever a computer did anything kind of smartish." Russell & Norvig 2003, p. 18
33- Crevier 1993, pp. 52-107, Moravec 1988, p. 9 and Russell & Norvig 2003, pp. 18-21. The programs described are Daniel Bobrow's STUDENT, Newell and Simon's Logic Theorist and Terry Winograd's SHRDLU.
34-Crevier 1993, pp. 64-65
35-Simon 1965, p. 96 quoted in Crevier 1993, p. 109
36-Minsky 1967, p. 2 quoted in Crevier 1993, p. 109
37- See History of artificial intelligence: the problems.
38- Crevier 1993, pp. 115-117, Russell & Norvig 2003, p. 22, NRC 1999 under "Shift to Applied Research Increases Investment." and also see Howe, J. "Artificial Intelligence at Edinburgh University: a Perspective"
39- Crevier 1993, pp. 161-162, 197-203 and Russell & Norvig 2003, p. 24
40-Crevier 1993, p. 203
41-Crevier 1993, pp. 209-210
42- Russell & Norvig 2003, p. 28; NRC 1999 under "Artificial Intelligence in the 90s"
43- Russell & Norvig 2003, pp. 25-26
44-All of these positions are mentioned in standard discussions of the subject, such as Russell & Norvig 2003, pp. 947-960 and Fearn 2007, pp. 38-55
45-Turing 1950, Haugeland 1985, pp. 6-9, Crevier 1993, p. 24, Russell & Norvig 2003, pp. 2-3 and 948
46-McCarthy et al. 1955 See also Crevier 1993, p. 28
47- Newell & Simon 1963 and Russell & Norvig 2003, p. 18
48- Dreyfus criticized a version of the physical symbol system hypothesis that he called the "psychological assumption": "The mind can be viewed as a device operating on bits of information according to formal rules". Dreyfus 1992, p. 156. See also Dreyfus & Dreyfus 1986, Russell & Norvig 2003, pp. 950-952, Crevier 1993, pp. 120-132 and Fearn 2007, pp. 50-51
49- This is a paraphrase of the most important implication of Gödel's theorems, according to Hofstadter (1979). See also Russell & Norvig 2003, p. 949, Gödel 1931, Church 1936, Kleene 1935, Turing 1937, and Turing 1950 under "(2) The Mathematical Objection"
50-Searle 1980. See also Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis," although Searle's arguments, such as the Chinese Room, apply only to physical symbol systems, not to machines in general (he would consider the brain a machine). Also, notice that the positions as Searle states them don't make any commitment to how much intelligence the system has: it is one thing to say a machine can act intelligently, it is another to say it can act as intelligently as a human being.
51- Moravec 1988 and Kurzweil 2005, p. 262. Also see Russell & Norvig 2003, p. 957 and Crevier 1993, pp. 271 and 279. The most extreme form of this argument (the brain replacement scenario) was put forward by Clark Glymour in the mid-70s and was touched on by Zenon Pylyshyn and John Searle in 1980.