
Intelligent Agents

As the century turns, all of AI has been to an astonishing degree unified around the conception of an intelligent agent. The unification has in large part come courtesy of a comprehensive textbook intended to cover literally all of AI: Russell and Norvig's (1995) Artificial Intelligence: A Modern Approach (AIMA), the cover of which also displays the phrase ``The Intelligent Agent Book." The overall, informal architecture for an intelligent agent is shown in Figure 3; it is taken directly from the AIMA text. According to this architecture, agents take percepts from the environment, process them in some way that prescribes actions, perform those actions, take in new percepts, and continue the cycle.6


  
Figure 3: The Architecture of an Intelligent Agent
\includegraphics[width=3in]{/home/58/brings/locker/fig02.01.ps}

In AIMA, intelligent agents fall on a spectrum from least to most intelligent. The least intelligent artificial agent is a ``TABLE-DRIVEN-AGENT," the program (in pseudo-code) for which is shown in Figure 4. Suppose that we have a set of actions, each of which is the utterance of a color name (``Green," ``Red," etc.); and suppose that percepts are digital expressions of the color of an object taken in by the sensor of a table-driven agent. Then, given Table 1, our simple intelligent agent, running the program in Figure 4, will utter (through a voice synthesizer, assume) ``Blue" if its sensor detects 100. Of course, this is a stunningly dim agent. What are smarter ones like?


  
Figure 4: The Least Intelligent Artificial Agent
\includegraphics[width=5in]
{/home/62/faheyj2/public_html/SB/SELPAP/ZOMBANIMALS/fig02.05.ps}



 
Table 1: Lookup Table for TABLE-DRIVEN-AGENT
Percept Action
001 ``Red"
010 ``Green"
100 ``Blue"
011 ``Yellow"
111 ``Black"
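The table-driven behavior just described can be sketched in a few lines of Python. This is an illustrative sketch of the idea, not AIMA's pseudo-code itself: AIMA's TABLE-DRIVEN-AGENT indexes its table by the agent's entire percept sequence, whereas this minimal version, which suffices for the single-percept color example, looks up only the latest percept.

```python
# Lookup table from Table 1: percept (a digital color code) -> action
# (the color name the agent utters).
LOOKUP_TABLE = {
    "001": "Red",
    "010": "Green",
    "100": "Blue",
    "011": "Yellow",
    "111": "Black",
}

def table_driven_agent(percept, table=LOOKUP_TABLE):
    """Choose an action by pure table lookup on the latest percept.

    Returns None when the percept is not in the table -- the agent
    has literally nothing to say about unanticipated input, which is
    exactly why such an agent is so dim.
    """
    return table.get(percept)

# The agent "utters" Blue when its sensor delivers 100.
print(table_driven_agent("100"))  # Blue
```

The agent's entire competence is exhausted by its table: every behavior must have been anticipated, entry by entry, by the designer.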


  
Figure 5: Program for a Generic Knowledge-Based Agent
\includegraphics[width=5.5in]
{/home/62/faheyj2/public_html/SB/COURSES/INTAI/FIGS/fig06.01.ps}

In AIMA we reach artificial agents that might strike some as rather smart when we arrive at the level of a ``knowledge-based" agent. The program for such an agent is shown in Figure 5. This program presupposes an agent that has a knowledge-base (KB), in which what the agent knows is stored as formulae of the propositional calculus, and functions


  
Figure 6: A Typical Wumpus World
\includegraphics[width=2.5in]
{/home/62/faheyj2/public_html/SB/COURSES/INTAI/FIGS/fig06.02.ps}

that give the agent the capacity to manipulate information in accordance with the propositional calculus. (One step up from such an agent would be a knowledge-based agent able to represent and reason over information expressed in full first-order logic.) A colorful example of such an agent is one clever enough to negotiate the so-called ``wumpus world." An example of such a world is shown in Figure 6. The objective of the agent that finds itself in this world is to find the gold and bring it back without getting killed. As Figure 6 indicates, squares adjacent to a pit contain breezes, squares adjacent to the wumpus contain a stench, and the gold glitters in the square in which it's positioned. The agent dies if it enters a square containing a pit (interpreted as falling into the pit) or the wumpus (interpreted as succumbing to an attack by the wumpus). The percepts for the agent can be given in the form of quadruples. For example,

\begin{displaymath}\mbox{(Stench,Breeze,Glitter,None)}\end{displaymath}

means that the agent, in the square in which it's located, perceives a stench, a breeze, and a glitter, and hears no scream. (A scream occurs when the agent shoots an arrow that kills the wumpus.) There are a number of other details involved, but this is enough to demonstrate how command of the propositional calculus can give an agent a level of intelligence that allows it to succeed in the wumpus world. For the demonstration, let $S_{i,j}$ represent the fact that there is a stench in column $i$, row $j$; let $B_{i,j}$ denote that there is a breeze in column $i$, row $j$; and let $W_{i,j}$ denote that there is a wumpus in column $i$, row $j$. Suppose now that an agent has the following five facts in its KB.

1.
$\neg S_{1,1} \wedge \neg S_{2,1} \wedge S_{1,2} \wedge \neg B_{1,1} \wedge
B_{2,1} \wedge \neg B_{1,2}$
2.
$\neg S_{1,1} \rightarrow (\neg W_{1,1} \wedge \neg W_{1,2} \wedge \neg
W_{2,1}) $
3.
$\neg S_{2,1} \rightarrow (\neg W_{1,1} \wedge \neg W_{2,1} \wedge \neg
W_{2,2} \wedge \neg W_{3,1}) $
4.
$\neg S_{1,2} \rightarrow (\neg W_{1,1} \wedge \neg W_{1,2} \wedge \neg
W_{2,2} \wedge \neg W_{1,3}) $
5.
$S_{1,2} \rightarrow (W_{1,3} \vee W_{1,2} \vee
W_{2,2} \vee W_{1,1}) $

Then in light of the fact that

\begin{displaymath}\{1, \ldots, 5\} \vdash W_{1,3}\end{displaymath}

in the propositional calculus,7 the agent can come to know (= come to include in its KB) that the wumpus is at location column 1 row 3 -- and this sort of knowledge should directly contribute to the agent's success.
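That the five facts yield $W_{1,3}$ can be checked mechanically. The Python sketch below is illustrative (the atom names and function names are mine, not AIMA's); it uses model checking rather than proof construction, which by the soundness and completeness of the propositional calculus agrees with derivability. It enumerates all truth assignments to the relevant atoms and confirms that every model of the KB makes $W_{1,3}$ true.

```python
from itertools import product

# Atoms appearing in facts 1-5 (S: stench, B: breeze, W: wumpus;
# "S12" abbreviates S_{1,2}, i.e. column 1, row 2).
ATOMS = ["S11", "S21", "S12", "B11", "B21", "B12",
         "W11", "W12", "W13", "W21", "W22", "W31"]

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

def kb(m):
    """True iff the truth assignment m satisfies facts 1-5."""
    return (
        # 1. The agent's direct observations.
        (not m["S11"]) and (not m["S21"]) and m["S12"]
        and (not m["B11"]) and m["B21"] and (not m["B12"])
        # 2-4. No stench in a square -> no wumpus in or adjacent to it.
        and implies(not m["S11"],
                    not m["W11"] and not m["W12"] and not m["W21"])
        and implies(not m["S21"],
                    not m["W11"] and not m["W21"]
                    and not m["W22"] and not m["W31"])
        and implies(not m["S12"],
                    not m["W11"] and not m["W12"]
                    and not m["W22"] and not m["W13"])
        # 5. Stench in (1,2) -> wumpus in or adjacent to that square.
        and implies(m["S12"],
                    m["W13"] or m["W12"] or m["W22"] or m["W11"])
    )

def entails(query):
    """KB entails query iff query holds in every model of the KB."""
    return all(m[query]
               for vals in product([True, False], repeat=len(ATOMS))
               for m in [dict(zip(ATOMS, vals))]
               if kb(m))

print(entails("W13"))  # True
```

Informally the derivation runs: fact 2 rules out a wumpus at (1,1) and (1,2), fact 3 rules one out at (2,2), so the disjunction delivered by fact 5 collapses to $W_{1,3}$.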

In my lab, a number of students have built actual wumpus-world-winning robots; for a picture of one toiling in this world see Figure 7.

Now I have no problem believing that the techniques and formalisms that constitute the agent-based approach preached in AIMA are sufficient to allow for the construction of characters that operate at the level of animals. But when we reach the level of personhood, all bets, by my lights, are off.


  
Figure 7: A Real-Life Wumpus-World-Winning Robot in the Minds & Machines Laboratory (Observant readers may note that the wumpus here is represented by a figurine upon which appears the (modified) face of the Director of the M&M Lab: Bringsjord.)
\includegraphics[width=3in]{ww1.ps}


Selmer Bringsjord
2001-06-27