As the century turns, all of AI has been to an astonishing degree unified around the conception of an intelligent agent. The unification has in large part come courtesy of a comprehensive textbook intended to cover literally all of AI: Russell and Norvig's (1994) Artificial Intelligence: A Modern Approach (AIMA), the cover of which also displays the phrase ``The Intelligent Agent Book.'' The overall, informal architecture for an intelligent agent is shown in Figure 3; this is taken directly from the AIMA text. According to this architecture, agents take percepts from the environment, process them in some way that prescribes actions, perform these actions, take in new percepts, and continue in this cycle.6
In AIMA, intelligent agents fall on a spectrum from least intelligent to more intelligent to most intelligent. The least intelligent artificial agent is a ``TABLE-DRIVEN-AGENT,'' the program (in pseudo-code) for which is shown in Figure 4. Suppose that we have a set of actions, each of which is the utterance of a color name (``Green,'' ``Red,'' etc.); and suppose that percepts are digital expressions of the color of an object taken in by the sensor of a table-driven agent. Then, given Table 1, our simple intelligent agent, running the program in Figure 4, will utter (through a voice synthesizer, assume) ``Blue'' if its sensor detects 100. Of course, this is a stunningly dim agent. What are smarter ones like?
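The behavior of such an agent can be sketched in a few lines of Python. The table contents below are hypothetical stand-ins for Table 1 (only the 100 → ``Blue'' entry is given in the text); the structure follows AIMA's TABLE-DRIVEN-AGENT, which indexes a fixed table by the entire percept sequence seen so far:

```python
def make_table_driven_agent(table):
    """A minimal sketch of AIMA's TABLE-DRIVEN-AGENT (cf. Figure 4)."""
    percepts = []                           # the agent's whole percept history

    def agent(percept):
        percepts.append(percept)            # append the newest percept
        return table.get(tuple(percepts))   # look up an action for the sequence
    return agent

# Hypothetical fragment of Table 1: digital color codes -> color utterances.
# Keys are percept *sequences*, which is why even this toy table grows fast.
color_table = {
    (100,): "Blue",
    (100, 101): "Green",
}

agent = make_table_driven_agent(color_table)
```

Running `agent(100)` yields the utterance "Blue", exactly as the text describes; the exponential growth of such a table as percept histories lengthen is precisely why this agent sits at the bottom of the spectrum.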
In AIMA we reach artificial agents that might strike some as rather smart when we arrive at the level of a ``knowledge-based'' agent. The program for such an agent is shown in Figure 5. This program presupposes an agent that has a knowledge base (KB) in which what the agent knows is stored as formulae in the propositional calculus, and the functions which give the agent the capacity to manipulate information in accordance with the propositional calculus. (One step up from such an agent would be a knowledge-based agent able to represent and reason over information expressed in full first-order logic.)
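The core loop of AIMA's knowledge-based agent (Figure 5) is a TELL/ASK cycle: TELL the KB a sentence encoding the current percept, ASK it what action follows, TELL it the action taken, and advance the time counter. The sketch below mirrors that loop; the `ToyKB` is an illustrative stand-in whose ASK does a simple rule lookup rather than genuine propositional inference, and all names here are assumptions, not AIMA's code:

```python
class ToyKB:
    """Stand-in knowledge base: facts are strings, ASK is a rule lookup."""
    def __init__(self, rules):
        self.facts = []
        self.rules = rules    # toy rules: percept name -> action name

    def tell(self, sentence):
        self.facts.append(sentence)

    def ask(self, query):
        # Toy "inference": act on the percept told at the queried time step.
        # query looks like "Action(?,t)"; a real ASK would prove entailment.
        t = query.split(",")[1].rstrip(")")
        for fact in reversed(self.facts):
            if fact.startswith("Percept(") and fact.endswith(f",{t})"):
                percept = fact[len("Percept("):].split(",")[0]
                return self.rules.get(percept, "NoOp")
        return "NoOp"

def make_kb_agent(kb):
    """The KB-AGENT loop: TELL percept, ASK for action, TELL action."""
    t = 0
    def agent(percept):
        nonlocal t
        kb.tell(f"Percept({percept},{t})")   # TELL(KB, percept sentence)
        action = kb.ask(f"Action(?,{t})")    # ASK(KB, action query)
        kb.tell(f"Did({action},{t})")        # TELL(KB, action sentence)
        t += 1
        return action
    return agent

kb = ToyKB({"stench": "Retreat"})
agent = make_kb_agent(kb)
```

The point of the architecture is that intelligence lives in the KB and the inference behind ASK, not in the loop itself; swapping the toy lookup for a propositional theorem prover changes the agent's competence without changing a line of the loop.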
A colorful example of such an agent is one clever enough to negotiate the so-called ``wumpus world.'' An example of such a world is shown in Figure 6. The objective of the agent that finds itself in this world is to find the gold and bring it back without getting killed. As Figure 6 indicates, pits are always surrounded on three sides by breezes, the wumpus is always surrounded on three sides by a stench, and the gold glitters in the square in which it's positioned. The agent dies if it enters a square with a pit in it (interpreted as falling into a pit) or a wumpus in it (interpreted as succumbing to an attack by the wumpus). The percepts for the agent can be given in the form of quadruples.
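The kind of propositional reasoning this demands can be sketched with a toy entailment check. Since pits announce themselves by breezes in surrounding squares, an agent that perceives no breeze in its current square can prove an adjacent square pit-free. The encoding below is illustrative (truth-table enumeration over three hand-picked symbols), not AIMA's own code:

```python
from itertools import product

def tt_entails(kb, query, symbols):
    """KB entails query iff query holds in every model of the KB
    (truth-table enumeration over the given proposition symbols)."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False        # a model of KB in which query fails
    return True

# Symbols: P12 = "pit in [1,2]", P21 = "pit in [2,1]", B11 = "breeze in [1,1]".
symbols = ["P12", "P21", "B11"]

# KB: a breeze in [1,1] iff a pit is adjacent, and no breeze is perceived there.
kb = lambda m: (m["B11"] == (m["P12"] or m["P21"])) and not m["B11"]
```

Here `tt_entails(kb, lambda m: not m["P12"], symbols)` holds: the agent can safely step into [1,2]. Full-scale wumpus agents do exactly this kind of deduction, only over a KB covering the whole grid.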
In my lab, a number of students have built actual wumpus-world-winning robots; for a picture of one toiling in this world see Figure 7.
Now I have no problem believing that the techniques and formalisms that constitute the agent-based approach preached in AIMA are sufficient to allow for the construction of characters that operate at the level of animals. But when we reach the level of personhood, all bets, by my lights, are off.