SCRIPTS, PLANS, AND KNOWLEDGE
Roger C. Schank and Robert P. Abelson†
Yale University
New Haven, Connecticut USA

† The work of the second author was facilitated by National Science Foundation Grant GS-35768.

“Of what a strange nature is knowledge! It clings to the mind, when it has once seized on it, like a lichen on the rock.”
– Frankenstein’s Monster (M. Shelley, Frankenstein or the Modern Prometheus, 1818)

Abstract

We describe a theoretical system intended to facilitate the use of knowledge in an understanding system. The notion of script is introduced to account for knowledge about mundane situations. A program, SAM, is capable of using scripts to understand. The notion of plans is introduced to account for general knowledge about novel situations.

I. Preface

In an attempt to provide theory where there have been mostly unrelated systems, Minsky (1974) recently described the work of Schank (1973a), Abelson (1973), Charniak (1972), and Norman (1972) as fitting into the notion of “frames.” Minsky attempted to relate this work, which is essentially language processing, to areas of vision research that conform to the same notion.

Minsky’s frames paper has created quite a stir in AI and some immediate spinoff research along the lines of developing frames manipulators (e.g. Bobrow, 1975; Winograd, 1975). We find that we agree with much of what Minsky said about frames and with his characterization of our own work. The frames idea is so general, however, that it does not lend itself to applications without further specialization. This paper is an attempt to develop further the lines of thought set out in Schank (1975a) and Abelson (1973; 1975a). The ideas presented here can be viewed as a specialization of the frame idea. We shall refer to our central constructs as “scripts.”

II. The Problem

Researchers in natural language understanding have felt for some time that the eventual limit on the solution of our problem will be our ability to characterize world knowledge. Various researchers have approached world knowledge in various ways. Winograd (1972) dealt with the problem by severely restricting the world. This approach had the positive effect of producing a working system and the negative effect of producing one that was only minimally extendable. Charniak (1972) approached the problem from the other end entirely and has made some interesting first steps, but because his work is not grounded in any representational system or any working computational system the restriction of world knowledge need not critically concern him.

Our feeling is that an effective characterization of knowledge can result in a real understanding system in the not too distant future. We expect that programs based on the theory we outline here and on our previous work on conceptual dependency and belief systems will combine with the MARGIE system (Schank et al., 1973; Riesbeck, 1975; Rieger, 1975) to produce a working understander. We see understanding as the fitting of new information into a previously organized view of the world. We have therefore extended our work on language analysis (Schank, 1973a; Riesbeck, 1975) to understanding. An understander, like an analyzer, should be “bottom up” until it gets enough information to make predictions and become “top down.” Earlier work has found various ways in which a word in a single sentence sets up expectations about what is likely to be found in the rest of the sentence. A single sentence and its corresponding conceptualizations set up expectations about what is to follow in the rest of a discourse or story. These expectations characterize the world knowledge that bears on a given situation, and it is these expectations that we wish to explore.

III. Scripts

A script, as we use it, is a structure that describes an appropriate sequence of events in a particular context. A script is made up of slots and requirements about what can fill those slots. The structure is an interconnected whole, and what is in one slot affects what can be in another. Scripts handle stylized everyday situations. They are not subject to much change, nor do they provide the apparatus for handling novel situations, as plans do (see section V).

For our purposes, a script is a predetermined, stereotyped sequence of actions that define a well-known situation. A script is, in effect, a very boring little story. Scripts allow for new references to objects within them just as if these objects had been previously mentioned; objects within a script may take “the” without explicit introduction because the script itself has already implicitly introduced them. (This can be found below, in the reference to “the waitress” in a restaurant, for example.)

Stories can invoke scripts in various ways. Usually a story is a script with one or more interesting deviations.

I. John went into the restaurant.
He ordered a hamburger and a coke.
He asked the waitress for the check and left.

II. John went to a restaurant.
He ordered a hamburger.
It was cold when the waitress brought it.
He left her a very small tip.

III. Harriet went to a birthday party.
She put on a green paper hat.
Just when they sat down to eat the cake, a piece of plaster fell from the ceiling onto the table.
She was lucky, because the dust didn’t get
all over her hair.
IV. Harriet went to Jack’s birthday party.
The cake tasted awful.
Harriet left Jack’s mother a very small tip.
Paragraph I is an unmodified script. It is dull. It would be even duller if all the events in the standard restaurant script (see below) were included.
Paragraph II is a restaurant script with a stock variation, a customer’s typical reaction
when things go wrong.
Paragraph III invokes the birthday party script, but something wholly outside the range of normal birthday parties occurs: the plaster falls from the ceiling. This deviation from the script takes over the initiative in the narrative until the problem it raises is resolved, but the birthday script is still available in the indirect reference to the party hat and in the possibility that normal party activities be resumed later in the narrative. It seems natural for reference to be made to dust in the hair following the plaster’s falling, which implies that there is a kind of script for falling plaster too. (This kind of script we call a vignette (Abelson, 1975a).) Notice that “the ceiling” refers to an uninteresting “room” script, which can be used for references to doors and windows that may occur. Thus it is possible to be in more than one script at a time.

Paragraph IV illustrates the kind of absurdity that arises when an action from one script is arbitrarily inserted into another. That one feels the absurdity is an indication that scripts are in inadmissible competition. It is conceivable that with adequate introduction the absurdity in paragraph IV could be eliminated.
With these examples, a number of issues have been raised. Let us at this point give a more extensive description of scripts. We have discussed previously (Schank, 1975b) how paragraphs are represented in memory as causal chains. This work implies that, for a story to be understood, inferences must connect each input conceptualization to all the others in the story that relate to it. This connection process is facilitated tremendously by the use of scripts.
Scripts are extremely numerous. There is a restaurant script, a birthday party script, a football game script, a classroom script, and so on. Each script has players who assume roles in the action. A script takes the point of view of one of these players, and it often changes when it is viewed from another player’s point of view.
The following is a sketch of a script for a restaurant from the point of view of the customer. Actions are specified in terms of the primitive ACTs of conceptual dependency theory (Schank, 1973b).
script: restaurant
roles: customer, waitress, chef, cashier
reason: to get food so as to go up in pleasure and down in hunger

scene 1: entering
PTRANS self into restaurant
ATTEND eyes to where empty tables are
MBUILD where to sit
PTRANS self to table
MOVE sit down

scene 2: ordering
ATRANS receive menu
MTRANS read menu
MBUILD decide what self wants
MTRANS order to waitress

scene 3: eating
ATRANS receive food
INGEST food

scene 4: exiting
MTRANS ask for check
ATRANS receive check
ATRANS tip to waitress
PTRANS self to cashier
ATRANS money to cashier
PTRANS self out of restaurant
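For concreteness, the restaurant sketch above can be written down as a small data structure. The Python rendering below is an editorial illustration only; the class names Script, Scene, and Act, and their fields, are our assumptions and not the authors’ notation. It simply records the roles, the reason, and the four scenes with their primitive ACTs.

from dataclasses import dataclass, field

@dataclass
class Act:
    primitive: str            # conceptual dependency primitive, e.g. "PTRANS"
    description: str          # what the act does within this scene

@dataclass
class Scene:
    name: str
    acts: list[Act] = field(default_factory=list)

@dataclass
class Script:
    name: str
    roles: list[str]
    reason: str
    scenes: list[Scene]

restaurant = Script(
    name="restaurant",
    roles=["customer", "waitress", "chef", "cashier"],
    reason="to get food so as to go up in pleasure and down in hunger",
    scenes=[
        Scene("entering", [Act("PTRANS", "self into restaurant"),
                           Act("ATTEND", "eyes to where empty tables are"),
                           Act("MBUILD", "where to sit"),
                           Act("PTRANS", "self to table"),
                           Act("MOVE",   "sit down")]),
        Scene("ordering", [Act("ATRANS", "receive menu"),
                           Act("MTRANS", "read menu"),
                           Act("MBUILD", "decide what self wants"),
                           Act("MTRANS", "order to waitress")]),
        Scene("eating",   [Act("ATRANS", "receive food"),
                           Act("INGEST", "food")]),
        Scene("exiting",  [Act("MTRANS", "ask for check"),
                           Act("ATRANS", "receive check"),
                           Act("ATRANS", "tip to waitress"),
                           Act("PTRANS", "self to cashier"),
                           Act("ATRANS", "money to cashier"),
                           Act("PTRANS", "self out of restaurant")]),
    ],
)
print(f"{restaurant.name}: {len(restaurant.scenes)} scenes, roles {restaurant.roles}")

Nothing in this structure is executable knowledge yet; it only fixes the slots that the following discussion talks about filling.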
In this script, the instruments for performing an action might vary with circumstances. For example, in scene 2 the order might be spoken, or written down with predesignated numbers for each item, or even (in a foreign country with an unfamiliar language) indicated by pointing or gestures.
Each act sequence uses the principle of causal chaining (Schank, 1973b; Abelson, 1973). That is, each action results in conditions that enable the next to occur. To perform the next act in the sequence, the previous acts must be completed satisfactorily. If they cannot be, the hitches must be dealt with. Perhaps a new action not prescribed in the script will be generated in order to get things moving again. This “what-if” behavior, to be discussed later, is an important component of scripts. It is associated with many of the deviations in stories such as paragraph II.
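The causal-chaining idea can be sketched as a loop that attempts each act only when its enabling conditions hold and consults stored what-ifs when a condition is missing. This is a minimal sketch under our own assumptions (the condition names and the what_ifs table are invented for illustration), not the authors’ implementation.

def run_scene(acts, state, what_ifs):
    """acts: list of (act_name, enablements, results); state: a set of facts."""
    for name, enablements, results in acts:
        for cond in [c for c in enablements if c not in state]:
            # a hitch: look for a stored what-if that can restore the condition
            fix = what_ifs.get(cond)
            if fix is None:
                return f"stuck at {name!r}: {cond!r} not satisfied"
            state |= fix
        state |= set(results)        # the act's results enable later acts
    return "scene completed"

ordering = [
    ("receive menu", {"seated"},                         {"has menu"}),
    ("read menu",    {"has menu"},                       {"knows dishes"}),
    ("decide",       {"knows dishes"},                   {"has choice"}),
    ("give order",   {"has choice", "waitress present"}, {"order placed"}),
]
what_ifs = {"waitress present": {"waitress present"}}    # e.g. catch her eye
print(run_scene(ordering, {"seated"}, what_ifs))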
In a text, new script information is interpreted in terms of its place in one of the causal chains within the script. Thus in paragraph I the first sentence describes the first action in scene 1 of the restaurant script. Sentence 2 refers to the last action of scene 2, and sentence 3 to the first and last actions of scene 4. The final interpretation of paragraph I contains the entire restaurant script, with specific statements filled in and missing statements (that he sat down, for example) assumed.
In paragraph II, the first two sentences describe actions in scenes 1 and 2. Part of the third sentence is in the script as the first action of scene 3, but there is also the information that the hamburger is cold. The fourth sentence (“He left her a very small tip”) is a modification of the third action of scene 4. The modifier, “very small,” is presumably related to the unexpected information about the “cold hamburger.” Even a stupid processor, checking paragraph II against the standard restaurant script, could come up with the low-level hypothesis that the small size of the tip must have something to do with the temperature of the hamburger, since these two items of information are the only deviations from the script. They must be related deviations, because if they were unrelated the narrative would have no business ending with two such unexplained features.
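The “stupid processor” heuristic just described is easy to state procedurally: collect every statement that is not accounted for by the standard script and, if exactly two unexplained deviations remain at the end, guess that they are related. A hedged sketch, with invented event labels, follows.

standard_script = {"enter restaurant", "order hamburger", "food arrives",
                   "eat", "ask for check", "leave tip", "leave"}

story = ["enter restaurant", "order hamburger", "food arrives",
         "food is cold",            # deviation 1
         "leave very small tip",    # deviation 2 (modifies "leave tip")
         "leave"]

deviations = [s for s in story if s not in standard_script]
if len(deviations) == 2:
    print(f"hypothesis: {deviations[1]!r} is a reaction to {deviations[0]!r}")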
Of course we do not want our processor to be stupid. In slightly more complex examples, adequate understanding requires attention to the nature of deviations from the script. A smarter processor can infer from a cold hamburger that the INGEST in scene 3 will then violate the pleasure goal for going to a restaurant. The concept of a very small tip can be stored with the restaurant script as a what-if associated with violations of the pleasure goal.
The general form for a script, then, is a set of paths joined at certain crucial points that define the script. For restaurants the crucial points are the INGEST and the ATRANS of money. There are many normal ways to move from point to point. Ordering may be done by MTRANSing to a waiter or by selecting and taking what you like (in a cafeteria). Likewise the ATRANS of money may be done by going to the cashier, or paying the waitress, or saying, “Put it on my bill.” There are also paths to take when situations don’t go as planned. Paragraphs III and IV call up deviant paths in the birthday party script. All these variations indicate that a script is not a simple list of events but rather a linked causal chain; a script can branch into multiple possible paths that come together at crucial defining points.
To know when a script is appropriate, script headers are necessary. These headers define the circumstances under which a script is called into play. The headers for the restaurant script are concepts having to do with hunger, restaurants, and so on in the context of a plan of action for getting fed. Obviously contexts must be restricted to avoid calling up the restaurant script for sentences that use the word “restaurant” as a place (“Fuel oil was delivered to the restaurant”).
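A header, on this account, is a pairing of trigger concepts with a required context. The sketch below is our own toy rendering of that check (the HEADERS table and the invoke_script function are assumptions): it invokes the restaurant script for a hunger-plus-restaurant input in the context of a plan to get fed, but not for a sentence that mentions a restaurant only as a place.

HEADERS = {
    "restaurant": {"concepts": {"restaurant", "hungry", "diner"},
                   "required_context": "plan to get fed"},
}

def invoke_script(concepts, context):
    # a script is called up only when a header concept occurs in its context
    for name, header in HEADERS.items():
        if concepts & header["concepts"] and context == header["required_context"]:
            return name
    return None

print(invoke_script({"hungry", "restaurant"}, "plan to get fed"))   # restaurant
print(invoke_script({"restaurant", "fuel oil"}, "delivery"))        # None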
Scripts organize new inputs in terms of previously stored knowledge. In paragraph I, many items that are part of the restaurant script are added to the final interpretation of the story. We don’t need to say that a waitress took the customer’s order or that he ate the hamburger. These ideas are firmly a part of the story because the restaurant script requires them. In understanding a story that calls up a script, the script becomes part of the story even when it is not spelled out. The answer to the question “Who served John the hamburger?” seems obvious, because our world knowledge, as embodied in scripts, answers it.

What-Ifs

There are at least three major ways in which scripts can be thrown off normal course. One is distraction, interruption by another script, such as the plaster’s falling from the ceiling. We will not pursue here an analysis of the conditions and consequences of distraction. The other two ways, obstacle and error, are intimately connected with what-if behavior. An obstacle to the normal sequence occurs when someone or something prevents a normal action from occurring or some usual enabling condition for the action is absent. An error occurs when the action is completed in an inappropriate manner, so that the normal consequences of the action do not come about.

In principle, every simple ACT in a standard script has potential obstacles and errors. We assume that, every time an obstacle or error occurs in a script that is being learned, the methods used to remove the obstacle or redeem the error are stored with the script as what-ifs. The result of many repetitions is that most of the common what-ifs are attached to the script.

Every obstacle has one or more characteristic what-ifs. In scene 2 of the restaurant script, if the waitress ignores the customer, he will try to catch her eye or call to her when she passes nearby. If he can’t make out the menu or needs further information, he will ask the waitress. If she doesn’t speak his language, he will attempt her language, or make gestures, or seek another customer to translate, or accept her suggestion of what to order. In scene 3, if the waitress does not bring the food, he will again try to catch her eye. If the food is not fit, he will send it back.

Errors have a slightly different character from obstacles but follow the same general rules. Receiving the menu is errorful if the waitress ATRANSes a printed sheet to the customer but it is yesterday’s menu, or the breakfast instead of the dinner menu. Reading the menu may yield an error if the customer gets the wrong idea of what it says (say, he thinks filet mignon is a fish). Here it is up to the waitress to supply the what-if corrective. Deciding what to order may yield an error if the customer goes through the decision process but forgets the stored outcome. The what-if is to review the MBUILD (“Let’s see now, what did I decide?”). Giving the order to the waitress may be in error if she writes down something other than what the customer said, or omits a portion of it. The what-if is to repeat the order, asking the waitress whether she is sure she got all of it.
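One way to picture what-if behavior is as a table of remedies stored with the script and indexed by the scene and the kind of trouble. The sketch below reuses the obstacles and errors listed above; the table layout and the handle_trouble function are our assumptions rather than anything taken from SAM.

RESTAURANT_WHAT_IFS = {
    ("ordering", "waitress ignores customer"): "catch her eye or call to her",
    ("ordering", "cannot make out menu"):      "ask the waitress",
    ("eating",   "food does not come"):        "catch the waitress's eye",
    ("eating",   "food is not fit"):           "send it back",
    ("ordering", "order written down wrong"):  "repeat the order and ask her to confirm it",
}

def handle_trouble(scene, trouble):
    # look up the remedy stored with the script; improvise if nothing is stored
    remedy = RESTAURANT_WHAT_IFS.get((scene, trouble))
    return remedy or "no stored what-if: improvise a new plan"

print(handle_trouble("eating", "food is not fit"))
print(handle_trouble("entering", "no free table"))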
IV. The SAM Program

We have created a program that uses scripts to make inferences in domains it knows about and have added it to the MARGIE program reported at the 3IJCAI. SAM (Script Applier Mechanism) receives Conceptual Dependency structures as input (these are gotten from English by using Riesbeck’s parser (1975)). SAM identifies the script to be used (by using Riesbeck’s request format) and fills in the identified script with the new inputs as they are received. It makes the inferences that are necessary to complete a causal chain between input conceptualizations (see Schank (1973b) for a discussion of causal chains). The resultant complex structure can then be queried by input English questions or be used to make summaries and paraphrases of the initial story. SAM was written by Chris Riesbeck, Richard Cullingford and Wendy Lehnert. Goldman’s generation program (1975) is used to map the answers and paraphrases into English. It was modified for this purpose by Gerald DeJong and W.J. Stutzman. An example of SAM’s input and output is:
Input:
John went to a restaurant. The hostess seated John. The hostess gave John a menu. The waiter came to the table. John ordered lobster. John was served quickly. John left a large tip. John left the restaurant.
Questions and Answers:
Q: What did John eat?
A: LOBSTER.
Q: Who gave John the menu?
A: THE HOSTESS.
Q: Who gave John the lobster?
A: PROBABLY THE WAITER.
Q: Who paid the check?
A: PROBABLY JOHN.
Q: What happened when John went to the table?
A: THE HOSTESS GAVE HIM A MENU AND JOHN SAT DOWN.
Q: Why did John get a menu?
A: SO HE COULD ORDER.
Q: Why did John give the waiter a large tip?
A: BECAUSE HE WAS SERVED QUICKLY.
Paraphrase:
JOHN DECIDED HE WAS GOING TO GO TO A RESTAURANT. JOHN WENT TO A RESTAURANT. JOHN INDICATED TO THE HOST HE WOULD LIKE TO BE AT A TABLE. THE HOST INDICATED TO JOHN HE COULD GO TO A TABLE. JOHN WENT TO THE TABLE. THE HOST WENT TO THE TABLE. JOHN SAT DOWN IN THE CHAIR. JOHN GOT THE MENU FROM THE HOST. JOHN READ THE MENU. THE WAITER SAW JOHN WAS AT THE TABLE. THE WAITER WENT TO THE TABLE. JOHN ORDERED LOBSTER. THE CHEF PREPARED THE LOBSTER. THE WAITER GOT THE LOBSTER FROM THE CHEF. THE WAITER SERVED JOHN THE LOBSTER. JOHN ATE THE LOBSTER. JOHN INDICATED TO THE WAITER HE WOULD LIKE TO GET THE CHECK FROM HIM. JOHN GOT THE CHECK FROM THE WAITER. JOHN LEFT THE WAITER A LARGE TIP. JOHN PAID THE CHECK. JOHN LEFT THE RESTAURANT.
Summary:

JOHN WENT TO A RESTAURANT AND ATE LOBSTER.

This program runs on the PDP-10 at Yale. It currently has only a small amount of knowledge and a small vocabulary. But we feel encouraged that our script theory is workable because of the simplification in the inference process that has resulted from the use of scripts.
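The description of SAM suggests a simple pipeline: identify the script, fill its slots with what the story states, assume the unstated script events, and answer questions from the filled-in structure, hedging answers that rest only on assumed events. The toy sketch below follows that outline under our own assumptions; it is not SAM’s code and it ignores Conceptual Dependency entirely.

SCRIPT_EVENTS = ["enter", "get menu", "order", "be served", "eat",
                 "get check", "tip", "pay", "leave"]

def apply_script(stated):
    """Return every script event, marked as stated in the story or merely assumed."""
    return [(e, "stated" if e in stated else "assumed") for e in SCRIPT_EVENTS]

def answer(filled, question_event):
    # answers resting only on assumed script events are hedged
    for event, status in filled:
        if event == question_event:
            return "YES" if status == "stated" else "PROBABLY"
    return "UNKNOWN"

story = {"enter", "get menu", "order", "be served", "tip", "leave"}
filled = apply_script(story)
print(answer(filled, "eat"))   # PROBABLY - inferred from the script
print(answer(filled, "tip"))   # YES - stated in the story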
V. Plans

Plans are responsible for the deliberate behavior that people exhibit. Plans describe the set of choices that a person has when he sets out to accomplish a goal. In listening to a discourse, people use plans to make sense of seemingly disconnected sentences. By finding a plan, an understander can make guesses about the intentions of an action in an unfolding story and use these guesses to make sense of the story.

Consider the following paragraph:

John knew that his wife’s operation would be very expensive. …
There was always Uncle Harry.
He reached for the suburban phone book.

How are we to make sense of such a paragraph? It makes no use of headers or the scripts they signal. It would be unreasonable to posit a “paying for an operation” script with all the necessary acts laid out as in our restaurant script. But, on the other hand, the situation is not entirely novel, either. The problem of understanding this paragraph would not be significantly different if “wife’s operation” were changed to “son’s education” or “down payment on the mortgage.” There is a general goal state in each case, raising a lot of money for a legitimate expense, and there is a generalized plan or group of plans that may lead to the goal state.

Plans start with one or more goals. A high-level goal is illustrated by the sequence:

John wanted to become king.
He went to get some arsenic.

A low-level goal is illustrated by:

John wanted to cut his steak.
He called to his wife in the kitchen.

A plan is a series of actions that will realize a goal. Often in order to realize one goal another must be decided on and a plan drawn up to achieve it. In the first example above, a goal to attain power is reduced to a goal to get arsenic. High-level goals are more interesting and we have concentrated on them first.

We define a “deltact” as an action or a group of actions that leads to a desired state. Deltacts constitute subplans that are pursued because of their intended effects. There are five deltacts in the present system:

ΔAGENCY – a change in obligation to do something for somebody
ΔCONT – a change in the control of an object
ΔKNOW – a change in what an actor knows
ΔPROX – a change in the proximity relations of objects and actors
ΔSOCCONT – a change in social control over a person or a situation
There is also a set of lower-level deltacts (Abelson, 1975a). Plans are made up of deltacts. When a collocation of deltacts is used often enough, it becomes a script.
A plan includes a set of planboxes, lists of actions that will yield state changes and the preconditions for these actions, along with a set of questions for choosing the appropriate planbox.
For instance, the TAKE plan has the goal of enabling the taker to do something with an object, whatever is generally done with it. To TAKE something you must be close to it, so either the object and the taker must be in the same location or the taker must use a subplan ΔPROX. Either no one else must have CONTROL of the object or at least there must be no bad consequences in the taker’s attempt to PTRANS the object to himself. The TAKE plan calls a PTRANS of the object if all the preconditions are positive.

But if, say, someone else CONTROLS the object, a plan for the taker’s gaining CONTROL must be called. This subplan is ΔCONT. ΔCONT has a set of planboxes attached to it. These planboxes define a deltact just as inferences define a primitive ACT. A planbox is a list of primitive ACTs that will achieve a goal. Associated with each ACT are its preconditions, and a planbox checks them. A set of positive conditions allows the desired ACT. Negative conditions call up new planboxes or deltacts that have as their goal the resolution of the negative state.

Preconditions fall into three classes. A controlled precondition can be fixed when it is negative by doing an ACT. A negative uncontrolled precondition cannot be fixed, and another planbox must be tried. Negative mediating preconditions can be altered but require plans of their own to change. Mediating preconditions usually refer to the willingness of other parties to participate in plans. Further details on planboxes appear in Schank (1975c).
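A planbox, then, bundles a primitive ACT with preconditions sorted into the three classes just named. The sketch below is our own illustration (the Planbox fields, the example “ask the owner” planbox, and the ordering of the checks are assumptions): an unmet uncontrolled precondition abandons the planbox, an unmet mediating one calls for a plan of its own, an unmet controlled one calls for a fixing ACT, and otherwise the ACT is performed.

from dataclasses import dataclass

@dataclass
class Planbox:
    act: str
    controlled: list[str]     # fixable by doing an ACT
    uncontrolled: list[str]   # not fixable: try another planbox
    mediating: list[str]      # need a plan of their own (others' willingness)

def try_planbox(box, state):
    for p in box.uncontrolled:
        if p not in state:
            return f"abandon planbox: {p} cannot be fixed"
    for p in box.mediating:
        if p not in state:
            return f"invoke a plan to change mediating precondition: {p}"
    for p in box.controlled:
        if p not in state:
            return f"do an ACT to fix: {p}"
    return f"perform {box.act}"

ask_for_object = Planbox(act="MTRANS request to owner",
                         controlled=["near owner"],
                         uncontrolled=["owner exists"],
                         mediating=["owner willing to give it up"])
print(try_planbox(ask_for_object, {"owner exists", "near owner"}))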
To see why an understander needs plans, consider the following sequence:

Willa was hungry.
She took out the Michelin Guide.

Most readers understand that Willa was using the
Michelin Guide to find a good restaurant. But if the first sentence were subjected to straightforward inference (a la Rieger, 1975), predicting that Willa is likely to do something to enable herself to INGEST food, the second sentence would seem to answer this prediction only in the weird interpretation that she will eat the Michelin Guide. An understander will reject this in favor of any better path that it can find. The first sentence will be analyzed for any goal that might generate a plan. “Hungry” is listed in the dictionary as indicating the need for a plan to do a ΔCONT of food. One means for gaining control of food is a restaurant. An enablement for this means is going to a restaurant, which requires ΔPROX. This in turn requires knowing where you are going, which may require ΔKNOW.

In the dictionary, all books are listed as means of satisfying ΔKNOWs and the Michelin Guide is listed as a book. To complete the processing of this sequence it would, of course, be necessary to have the information that the Michelin Guide lists restaurants. Without this information, the sequence might be as nonsensical as “Willa was hungry. She took out Introduction to Artificial Intelligence.”
With the information that the Michelin Guide is a source of knowledge about restaurants, we
know why the second action was done and can predict future actions. We have transformed a seemingly disconnected sequence into one that provides the expectations that are so vital to understanding. If the next sentence is “Willa got in her car,” we will know that the plan is being effected. By using what we know about cars (that they are instruments of PTRANS) and the script for restaurants (that it starts with a PTRANS), we can make the inference that Willa is on her way to a restaurant. Some restaurant header would still be required to initiate the restaurant script in its full glory.

The procedure of taking out the Michelin Guide when hungry, while seemingly novel, could conceivably be routine for a certain individual in a certain context. If we know that Willa is a gourmet tourist staying in Paris who enjoys going to a different restaurant every evening, then the procedure of looking in the Guide might become part of her restaurant script. For her there is a scene before scene 1 in which she ATTENDs to the Guide, MBUILDs a choice, and MTRANSes a reservation. A routinized plan can become a script, at least from the planner’s point of view.
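The Willa example amounts to chaining from a goal through enabling deltacts until the observed action can be attached to one of them. The sketch below is a hedged illustration of that chain (the dictionary entries and the explain function are our assumptions); it connects “hungry” to taking out the Michelin Guide via ΔCONT(food), ΔPROX(restaurant), and ΔKNOW(where to go).

GOALS = {"hungry": "ΔCONT(food)"}
ENABLEMENTS = {                     # deltact -> deltacts that can enable it
    "ΔCONT(food)": ["ΔPROX(restaurant)"],
    "ΔPROX(restaurant)": ["ΔKNOW(where to go)"],
    "ΔKNOW(where to go)": [],
}
MEANS = {"take out Michelin Guide": "ΔKNOW(where to go)"}   # books satisfy ΔKNOW

def explain(condition, action):
    """Walk the enablement chain from the goal until it reaches the
    deltact that the observed action serves."""
    chain = [GOALS[condition]]
    target = MEANS[action]
    while chain[-1] != target:
        nxt = ENABLEMENTS[chain[-1]]
        if not nxt:
            return None          # the action cannot be connected to the goal
        chain.append(nxt[0])
    return chain

print(explain("hungry", "take out Michelin Guide"))
# ['ΔCONT(food)', 'ΔPROX(restaurant)', 'ΔKNOW(where to go)']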
VI. Conclusion and Prognosis
It is clear that in order to understand one needs knowledge. Knowledge is a potentially unwieldy thing, so what we must do is determine the types of knowledge that there are and find out how to apply them. The SAM system is a first step at adding structured knowledge to the MARGIE system (Schank et al., 1973). We are currently building up our knowledge base by adding more scripts to SAM. In addition, we are adding a plan component, PAM. These two programs should bring us up to the level of understanding simple stories about a large range of known domains.
But what about complex stories? Is the kind of understanding that humans exhibit on real stories likely to resemble the mechanisms to be found in SAM?
When a person reads a 300-page novel he does not (unless he is very unusual) remember all the conceptualizations stated in the story in the form of a giant causal chain. Rather he remembers the gist of the book. Maybe 5 or 10 pages of summary could be extracted from him after reading the book. Previously we have said that Conceptual Dependency Theory will account for memory for the gist of sentences. But it cannot be seriously proposed that this is all that is needed for the gist of long and complex stories. Some other explanation must be given.
In a recent experiment, Abelson (1975b) showed that people remember stories better when they are asked to take some particular point of view (of one of the participants or of an observer in a particular place), and that what they remember is contingent on which point of view they had. The ramifications of this experiment for a theory of language understanding have to be that when people have a clue of what to forget they do better at remembering. In other words, good forgetting is the key to remembering. Likewise, if we want to build programs that remember, we had best teach them how to forget. One method of forgetting is simply not noticing levels of detail that are there. This can be done by treating the instruments for an action at a different level than the main ACTs that they explain. When looking at a story at one level of detail we would not see the level of detail underneath it unless specifically called upon to do so.
For example, consider the sentence, “John went to New York by bus.” We have previously represented this sentence by a simple ACT (PTRANS), and an instrumental act (PROPEL). But it must be realized that, as with any other script, questions could be answered about this sentence that were not specifically in it. Subjects all seem to agree that the answer to “Was there a bus driver?” is “Yes” and to “Did John pay money to get on the bus?” is “Probably.” This seems to indicate that the instrument of John’s PTRANS is, in actuality, the entire bus script.
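As the next paragraph argues, the natural move is to keep the bus script as an unexpanded instrument and open it only when a question demands it. The following sketch (our own, with invented fact names) illustrates that on-demand expansion.

BUS_SCRIPT = {"driver present": "YES", "fare paid": "PROBABLY"}

story = {"act": "PTRANS John to New York", "instrument": "bus script"}

def ask(story, question):
    # expand the instrumental script only when the question requires it
    if story["instrument"] == "bus script" and question in BUS_SCRIPT:
        return BUS_SCRIPT[question]
    return "UNKNOWN"

print(ask(story, "driver present"))   # YES
print(ask(story, "fare paid"))        # PROBABLY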
Should we, as understanders, go so far as to place the entire bus script in what we obtain from understanding the above sentence? The answer seems obvious. You don’t want to do all that unless you need to, but you want to have quick access to it in case you need to. Consider the following story:
John wanted some cheesecake. He decided to go to New York. He went to New York by bus. On the bus he met a nice old lady who he talked to about the prices in the supermarket. When he left the bus he thanked the driver for the ride and found the subway to go to Lindy’s. On the subway he was reading the ads when suddenly he was robbed. He wasn’t hurt though and he got off the train and entered Lindy’s and had his cheesecake. When the check came, he said he couldn’t pay and was told he would have to wash dishes. Later he went back to New Haven.
Ideally, our representation of this story should account for the fact that hearers of this story invariably forget the sentences about talking to the old lady and the bus driver, but always remember the mugging, its consequence of dishwashing, and the main goal of going to New York to eat cheesecake.
We propose to represent stories therefore in the following way: There will be a causal chain connecting the main events of the story. (Here the PTRANS to the restaurant, the INGEST, and the PTRANS back home.) Underneath each of these main events will be the instrumental script that underlies each of them. (The bus script, the subway script, and the restaurant script.) These scripts will be “forgotten” to be reconstructed later, with the exception that any event that occurred within them that was not predicted by them will be placed on a “weird list” to be specially remembered.

The final representation of a story will consist of the events connected directly to the goals and plans to realize those goals made by the participants. These goals will be tied to the events that actually occurred and to the weird events and their consequences. Thus four lists replace our original (and growing) causal chain: an event list (with script events left out), a goal list, a plan list, and a weird list. What these lists do is help us forget. And of course forgetting helps us remember.
There are two ways in which this occurs: by omission and by prototyping. Events which enter none of the four lists (such as the conversation with the old lady) are dropped entirely. (More precisely, they are retained only until the constructed final representation is transferred from working memory to long-term memory. Anything not in this final representation is lost.) Also, the event list and plan list are condensed by using pointers to prototypes. The details are thus “normalized” (Bartlett, 1932); what is remembered is that a normal plan for satisfying such-and-such goal was used, including normal enactments of appropriate scripts. The function of the weird list is to mark the interesting departures from these normalities.
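The four-list proposal can be illustrated on the cheesecake story. In the sketch below the event labels and their classification are our own assumptions; script-predicted details fall through and are forgotten, while main events, goals, plans, and weird events are retained on their respective lists.

story_events = [
    ("wants cheesecake",            "goal"),
    ("decides to go to New York",   "plan"),
    ("takes bus to New York",       "main event"),
    ("talks to old lady",           "script detail"),   # predicted by the bus script
    ("thanks bus driver",           "script detail"),
    ("is robbed on subway",         "weird"),            # not predicted by the subway script
    ("eats cheesecake at Lindy's",  "main event"),
    ("cannot pay, washes dishes",   "weird"),
    ("returns to New Haven",        "main event"),
]

lists = {"events": [], "goals": [], "plans": [], "weird": []}
for event, kind in story_events:
    if kind == "main event":
        lists["events"].append(event)
    elif kind == "goal":
        lists["goals"].append(event)
    elif kind == "plan":
        lists["plans"].append(event)
    elif kind == "weird":
        lists["weird"].append(event)
    # "script detail" falls through: it is forgotten (reconstructible later)

for name, items in lists.items():
    print(name, items)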
What we are saying then is that one of the major issues in Artificial Intelligence research must be the creation of a theory of forgetting. It simply is not possible to assume that people do, or that machines should, remember everything they encounter. In listening to a speaker, reading a book, or engaging in a conversation, people could not possibly remember everything they are told verbatim. In attempting to get the gist of a sequence, they must employ what we call forgetting heuristics. Among these forgetting heuristics are heuristics that search out items of major importance. The selection of these major items is the key to forgetting. We don’t really wish to assert that people couldn’t possibly remember everything they hear. Rather we wish to find a procedure that will let us see only the major items, yet also find, with some difficulty, the thoughts or statements that underlie them, and the ideas that underlie those, and so on.

Thus, the key to understanding must be, in order to facilitate search among what has been understood, an organization of the new information in such a fashion as to seem to forget the unimportant material and to highlight the important material. Forgetting heuristics must do this for us. So the first task before us is to establish what the most significant items in a text are likely to be, and then to establish the heuristics which will extract and remember exactly those items.
References
Abelson, R.P. (1973). The structure of belief systems. In R.C. Schank and K.M. Colby (eds.), Computer models of thought and language. San Francisco: Freeman.
Abelson, R.P. (1975a). Concepts for representing mundane reality in plans. In D. Bobrow and A. Collins (eds.), Representation and understanding: Studies in cognitive science. New York: Academic.
Abelson, R.P. (1975b). Does a story understander need a point of view? In R.C. Schank and B. Nash-Webber (eds.), Using Knowledge to Understand, Proceedings of the Conference on Theoretical Issues in Natural Language Processing.
Bartlett, F. (1932). Remembering. Oxford University Press.
Bobrow, D. (1975). Dimensions of representation. In D. Bobrow and A. Collins (eds.), Representation and understanding: Studies in cognitive science. New York: Academic.
Charniak, E. (1972). Towards a model of children’s story comprehension. AI TR-266, Mass. Institute of Technology, Cambridge, Mass.
Goldman, N. (1975). Conceptual generation. In R. Schank (ed.), Conceptual Information Processing. North Holland Publishing, Amsterdam.
Minsky, M. (1974). Frame-systems. AI Memo, Mass. Institute of Technology, Cambridge, Mass.
Norman, D. (1972). Memory, knowledge, and the an- swering of questions. Center for Human Infor- mation Processing Memo CHIP-25. Univ. of Cal- ifornia at San Diego.
Rieger, C. (1975). Conceptual memory. In R. Schank (ed.), Conceptual Information Processing. North Holland Publishing, Amsterdam.
Riesbeck, C. (1975). Conceptual analysis. In R. Schank, ed., Conceptual Information Processing. North Holland Publishing, Amsterdam.
Schank, R. (1973a). Identification of conceptualizations underlying natural language. In Schank and Colby (eds.), Computer Models of Thought and Language. W.H. Freeman Press.
Schank, R. (1973b). Causality and reasoning. Technical Report #1. Istituto per gli Studi Semantici e Cognitivi, Castagnola, Switzerland.
Schank, R. (1975a). The Role of Memory in Language Processing. To appear in C. Cofer and
R. Atkinson (eds.), The Nature of Human Memory. W.H. Freeman Press.
Schank, R. (1975b). The structure of episodes in memory. In D. Bobrow and A. Collins (eds.), Representation and understanding: Studies in cognitive science. New York: Academic.
Schank et al. (1973). R. Schank, N. Goldman, C. Rieger, and C. Riesbeck. MARGIE: Memory Analysis, Response Generation and Inference on English. Proceedings of the 3IJCAI.
Schank, R. (1975c). Using knowledge to understand. In R. Schank and B. Nash-Webber (eds.), Proceedings of the Conference on Theoretical Issues in Natural Language Processing.
Winograd, T. (1972). Understanding Natural Lan- guage. Academic Press.
Winograd, T. (1975). Frame Representations and the Declarative/Procedural Controversy. In D. Bob- row and A. Collins (eds.), Representation and Understanding: Studies in Cognitive Science. New York: Academic.