1. LT was specifically designed to prove theorems from Russell and Whitehead's Principia Mathematica. Upon learning of LT's accomplishments, Russell was apparently delighted. Viewed from today, the theorems in question seem stunningly simple. For example, LT proved the law of contraposition in the propositional calculus (from p → q one can infer ¬q → ¬p). Contemporary counterparts to LT include powerful theorem provers like Vampire (Voronkov 1995). But however powerful these ATPs may be when compared to LT, and to each other in the CADE ATP System Competition, one of the remarkable things about the state of automated theorem proving at present is that many logic puzzles that are routinely solved by the best undergraduates in logic courses can't be solved by these ATPs. This can be verified by simply inspecting some of the problems on which ATPs, today, falter.
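To give a concrete sense of how simple LT's theorems look today, here is the contraposition theorem mentioned above, as one might check it in the Lean proof assistant; the rendering is mine, offered purely for illustration, and is of course not LT's original derivation:

    -- Contraposition: from p → q, one can infer ¬q → ¬p.
    -- An illustrative modern rendering in Lean 4, not LT's 1956 proof.
    theorem contraposition (p q : Prop) (h : p → q) : ¬q → ¬p :=
      fun hnq hp => hnq (h hp)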
2. It's interesting to note that in playing chess by operating as a computer himself, Turing behaved in a fashion that accords perfectly with how he conceived, and mathematized, a computer: see (Turing 1936), wherein Turing machines are introduced as a formalization of the concept of a computist, a human being carrying out simple calculations.
3. One of the interesting things about Descartes' position is that he seems to anticipate a distinct mode of reasoning identified by contemporary psychologists and cognitive scientists: so-called System 2 reasoning. The hallmark of System 2 reasoning is that it is efficaciously applicable to diverse domains, presumably by understanding underlying structure at a very deep level. System 1 cognition, on the other hand, is chained inflexibly to concrete situations. Stanovich and West (2000) provide an excellent treatment of System 1 and 2 cognition, and related matters (such as the fact that the symbol-driven marketplace of the modern civilized world appears to place a premium on System 2 cognition).
4. Actually, Descartes proposed a test that is much more demanding than TT, but I don't explain and defend this herein. In a nutshell, if you read the passage very carefully, you'll see that Descartes' test is passed only if the computer has the capacity to answer arbitrary questions. A machine which has a set of stored chunks of text that happen to perfectly fit the queries given it during a Turing test would not pass Descartes' test -- even though it would pass Turing's.
5. There are some obvious objections that come to mind once Russell's position is understood. For example, bounded optimality seems to be at odds with carrying out research now that lays a foundation for future work -- work that will inevitably be based on machines that are much, much more powerful than the ones we have today. This is an objection Russell anticipates; it leads him to present an account of asymptotic bounded optimality. Informally put, this account says that a program is along the right lines iff, given speedup (or more space), its worst-case performance is as good as that of any other program in all environments. Details are available in (Russell 1997).
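In symbols, and in my own rough paraphrase (the notation is mine, not Russell's), the idea is that a program l running on machine M is asymptotically bounded optimal across a class E of environments just in case

    ∃k ∀l′ ∀e ∈ E: V(l, kM, e) ≥ V(l′, M, e)

where kM is a machine k times faster (or larger) than M, and V measures worst-case performance. Again, the official details are in (Russell 1997).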
6. In no way do I mean to suggest that AI research is now exclusively hybrid. A recent treatment of the symbolic approach makes this clear: (Brachman & Levesque 2004). B&L explain that their book is based on what they say is a daring hypothesis, viz., that a top-down approach which ignores neurological details in favor of abstract models of cognition pays great dividends. In addition, a recent argument against a connectionist approach to simulating human literary creativity can be found in (Bringsjord & Ferrucci 2000).
7. Sometimes casual students of AI, logic, and philosophy come to believe that uncertainty has been the phenomenon causing a departure from logicist/symbolic approaches. It's important, especially given the nature of the present venue, to realize that the topic of uncertainty has long been a staple in logic, logicist AI, and epistemology. In fact, alert readers will have noted that (Charniak and McDermott 1985) contains a chapter devoted to the topic.
The uncertainty challenge can be expressed by considering difficulties that arise when an attempt is made to capture what philosophers often call practical reasoning: Suppose that I would like to take a bit of a break from working on this entry for SEP, and would specifically like to fetch today's mail from my mailbox. What does it take for me to accomplish this goal? Well, in order to get the mail, I will need to exit my house, walk approximately halfway down my driveway, cut across the grass under my three Chinese elms, reach my mailbox, open it, reach in, and so on; you get the idea; I omit the remainder. Suppose that this plan consists in the successive execution of actions, starting from some initial state s1, a constant in FOL (my being in my study, before deciding to take the postal break). Suppose that a function does can be applied to a constant ai (which denotes some action) in a situation sj to produce a new situation does(ai, sj). Given this scheme, we can think of what I plan to do as performing a sequence of actions that will result in a situation in which I have today's mail:
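    does(an, does(an−1, … , does(a1, s1) … ))

(This displayed term is my reconstruction in the notation just introduced: the ai are my successive actions, and the goal is that a predicate along the lines of HaveMail -- the label is mine -- holds of the resulting situation.)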
Given this, if I were a robot idling in my study, how would I retrieve a plan that would allow me to reach the goal of having mail? One answer, which eases exposition, is Green's Method: I would simply attempt to prove a formula of the following sort; if it's provable, the witnesses to its existential quantifiers are the actions I need to successively perform.
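In the same notation, the formula would run along these lines (again a reconstruction on my part, with HaveMail as before my label for the goal predicate):

    ∃a1 … ∃an HaveMail(does(an, does(an−1, … , does(a1, s1) … )))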
Unfortunately, this approach will not work in many, if not most, cases: Yesterday I went to retrieve my mail, and found to my surprise that my usual route to my mailbox, which runs beneath my beloved elms, was cordoned off, because one of these massive trees was being cut down by a crew -- without prior authorization from me. My plan was shot; I needed a new one -- one with a rather elaborate detour, given the topography of my land. (I of course also needed, on the spot, a plan to deal with the fact that this tree, my tree, was unaccountably targeted for death.) I had made it to the point just before passing beneath the elms, so I now needed a sequence of actions that, if performed from this situation, would eventuate in my having my mail. But this complication is one from among an infinite class: lots of other things could have derailed my original plan. The bottom line is that the world is uncertain, and using straight logic to deduce plans in advance will therefore work at best in only a few cases.
Notice that I say straight logic. The postal example I have given is a counter-example to only one approach (situation calculus and Green's Method) within one logical system (FOL). It hardly follows that, in general, a logicist approach can't deal with the uncertainty challenge.
I would be remiss if I did not point out that the uncertainty challenge is a core problem in epistemology. It has long been realized that an adequate theory of knowledge must take account of the fact that, while some of what we know may be self-evident, and while some of what we know may be derived deductively from the self-evident, most of what we know is far from certain. Moreover, there are well-known arguments in the philosophical literature purporting to show that much of what we know cannot be the result of inductive reasoning over that which is certain. (A classic treatment of these issues can be found in Roderick Chisholm's (1977) Theory of Knowledge.) The connection between this literature, and the uncertainty challenge in AI, is easy to see: The AI researcher is concerned with modeling and computationally simulating (if not outright replicating) intelligent behavior in the face of uncertainty, and the epistemologist seeks a theory of how our intelligent behavior in the face of uncertainty can be analyzed.
8. Obviously, there are other excellent textbooks that serve to introduce and, at least to some degree, canvass AI. For example, there is the commendable trio: (Ginsberg 1993), (Nilsson 1998), and (Winston 1992). (Winston's book is the third edition. In Nilsson's case, this is his second intro book; the first was (Nilsson 1987).) The reader should rest assured that in each case, whether from this trio or from AIMA, the coverage is basically the same; the core topics don't vary. In fact, Nilsson's (1998) book, as he states in his preface, takes an explicitly agent-based approach, and in fact the book, like AIMA, is written as a progression from the simplest agent through the most capable. (I say a bit about this in the main text, later.) Clearly, then, my reliance on AIMA in no way makes the present SEP entry idiosyncratic. Finally, arguably the attribute most important to an entry such as the present one is encyclopedic coverage of AI -- and AIMA delivers in this regard like no other extant text. This situation may change in the future, and if it does, the present entry would of course be updated.
9. Of all the agencies within the United States, the Information Processing Technology Office (IPTO) at the Defense Advanced Research Projects Agency (DARPA) occupies a unique position. IPTO has supported AI since its inception, and at present it continues to guide AI forward through visionary programs. It is therefore interesting to note that, at a recent celebration of IPTO and its steadfast sponsorship of research and development in the area of intelligent systems, a number of scientists and engineers whose careers go back to the dawn of AI in the 1950s complained that contemporary machine learning has come to be identified with function-based learning. They pointed out that most clever adults predominantly learn by reading, and called for an attack on this problem in the future. In a sign that the concerns voiced here have gained some traction, there is a 2007 American Association for Artificial Intelligence (AAAI) Spring Symposium on Machine Reading.
10. For confirmation in the case of cognitive psychology, see (Ashcraft 1994). The field of computational cognitive modeling seeks to uncover the nature of human cognition by capturing that cognition in standard computation, and is therefore obviously intimately related to AI. For an excellent overview of computational cognitive modeling that nonetheless reveals the field's failure to confront subjective consciousness, see (Anderson & Lebiere 2003). Ron Sun (1994, 2002) is perhaps unique among computational cognitive modelers in that he considers topics of traditional interest to philosophers.
11. Sometimes AI that puts an emphasis on declarative knowledge and reasoning over that knowledge is referred to not as logic-based or logicist AI, but instead as knowledge-based AI. For example, see (Brachman and Levesque 2004). However, any and all formalisms and techniques constitutive of knowledge-based AI are fundamentally logic-based; their underlying formal structure may simply be concealed in the interests of making it easier for practitioners without extensive training in mathematical and philosophical logic to grasp these formalisms and deploy these techniques.
12. I point out for cognoscenti that I here expand the traditional concept of a logical system as deployed, e.g., in Lindström's Theorems, which are elegantly presented in (Ebbinghaus et al. 1984).
13. For a seminal discussion of intensional logic and intentionality, see (Zalta 1988).
14. The broader category, of which neural nets may soon enough be just a small part, is that of statistical learning algorithms. Chapter 20 of AIMA2e provides a very nice discussion of this category. It's important to realize that artificial neural networks are just that: artificial. They don't correspond to what happens in real human brains (Reeke & Edelman 1988).
15. The literature on hypercomputation has exploded recently. As I have mentioned, one of the earliest hypercomputational devices is the so-called trial-and-error machine (Putnam 1965; Gold 1965), but much has happened since then. Volume 317 (2004) of Theoretical Computer Science is devoted entirely to hypercomputation. Before this special issue, TCS also featured an interesting kind of hypercomputational machine: so-called analog chaotic neural networks (Siegelmann and Sontag 1994). These machines, and others, are discussed in (Siegelmann 1999). For more on hypercomputation, with hooks to philosophy, see: (Copeland 1998; Bringsjord 1998, 2002).
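For readers new to the idea, a trial-and-error machine is permitted to emit a sequence of revisable verdicts, and what counts is the verdict it eventually settles on; in symbols (my paraphrase of the Putnam/Gold idea):

    f(x) = lim t→∞ M(x, t)

where M(x, t) is the machine's verdict on input x at time t, and the limit is required to exist, i.e., the verdict may change only finitely often. Machines of this sort can "compute" functions, such as the halting function, that no standard Turing machine can.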
16. It must be said here that, for a while at least, OSCAR was also distinguished by being quite a fast automated theorem prover (ATP). I say was because there has of late been a new wave of faster and faster first-order provers. Speed in machine reasoning is an exceedingly relative concept. What's fast today is inevitably slow tomorrow; it all hinges on what the competition is doing. In OSCAR's case, the competition now includes not just the likes of Otter (see e.g. Wos et al. 1992; and go to http://www-unix.mcs.anl.gov/AR/ for a wonderful set of resources related to Otter), but also Vampire (Voronkov 1995). (As far as I can tell, Vampire's core algorithm coincides with Otter's, but increased speed can come from many sources, including how propositions are indexed and organized.) It seems to me that some of OSCAR's speed derives from the fact that in searching for proofs OSCAR approximates some form of goal analysis as a technique for finding proofs in a natural deduction format. Goal analysis will be familiar to those philosophers who have taught natural deduction. The performance of OSCAR and other systems can be found at the TPTP site.
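For those who haven't taught natural deduction: goal analysis is the backward-working strategy of letting the shape of the current goal dictate the next proof step. A toy illustration in Lean follows; the example is mine, and OSCAR is of course not implemented in Lean:

    -- Goal analysis on a toy theorem, in Lean 4 (illustration only).
    theorem toy (p q : Prop) : p → (q → p) := by
      intro hp    -- goal is a conditional, so assume p; goal becomes q → p
      intro _hq   -- again a conditional, so assume q; goal becomes p
      exact hp    -- the goal now matches an assumption, closing the proof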
17. Searle has given various more general forms of the argument. For example, he summarizes the argument on page 39 of (Searle 1984) as one in which, from
2. Syntax is not sufficient for semantics.
3. Computer programs are entirely defined by their formal, or syntactical, structure.
4. Minds have mental contents; specifically, they have semantic contents.
it's supposed to follow that
No computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.
18. The dialectic appeared in 1996, in volume 2 of Psyche. Psyche can be located using any standard search engine.
19. It seems reasonable to say, about at least most of these predictions, that they presuppose a direct connection between the storage capacity and processing speed of computers, and human-level intelligence. Specifically, the assumption seems to be that if computers process information at a certain speed, and can store it in sufficiently large quantities, human-level mentation will be enabled. This is actually a remarkable assumption, when you think about it. Standard Turing machines as defined in the textbooks (e.g., as they are defined in Lewis and Papadimitriou 1981) have arbitrarily large storage capacity, and perform at arbitrarily fast speeds (each step can be assumed to take any finite amount of time). And yet programming these Turing machines to accomplish particular tasks can be fiendishly difficult. The truly challenging part of building a computer to perform at the level of a human is devising the representations and algorithms that enable it to do so.
20. A sustained discussion of the nature of AI in connection specifically with the distinction between mere animals and persons can be found in (Bringsjord 2000).
21. The 1956 Dartmouth conference grew out of a proposal claiming that a large part of human thought is based on declarative knowledge, and logic-based reasoning over that knowledge. See Ron Brachman's "A Large Part of Human Thought," presented July 13, 2006 at AI@50, Dartmouth College.
22. Joy's paper, "Why the Future Doesn't Need Us," can be found online by typing its title, together with Bill Joy's name, into any passable search engine.
23. The pattern runs as follows: If science policy allows science and engineering in area X to continue, then it's possible that state of affairs P will result; if P results, then disastrous state of affairs Q will possibly ensue; therefore we ought not to allow X. Of course, this is a deductively invalid inference schema. If the schema were accepted, with a modicum of imagination you could prohibit any science and engineering effort whatsoever. You would simply begin by enlisting the help of a creative writer to dream up an imaginative but dangerous state of affairs P that is possible given X. You would then have the writer continue the story so that disastrous consequences of P arrive in the narrative, and lo and behold you have established that X must be banned.
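Schematically, with ◇ read as "it is possible that" (the formalization is mine):

    Allow(X) → ◇P
    P → ◇Q (Q disastrous)
    ∴ X ought not be allowed

No standard logic validates this form; that X makes a disaster merely possible does not, by itself, entail that X ought to be prohibited.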
24. Here's the relevant quote from Joy's paper: "I had missed Ray's talk and the subsequent panel that Ray and John had been on, and they now picked right up where they'd left off, with Ray saying that the rate of improvement of technology was going to accelerate and that we were going to become robots or fuse with robots or something like that, and John countering that this couldn't happen, because the robots couldn't be conscious."