Structure and Interpretation
of Computer Programs
second edition
Harold Abelson and Gerald Jay Sussman
with Julie Sussman
foreword by Alan J. Perlis
The MIT Press
Cambridge, Massachusetts London, England
McGraw-Hill Book Company
New York St. Louis San Francisco Montreal Toronto
This book is one of a series of texts written by faculty of the Electrical Engineering and Computer
Science Department at the Massachusetts Institute of Technology. It was edited and produced by
The MIT Press under a joint production-distribution arrangement with the McGraw-Hill Book
Company.
Ordering Information:
North America
Text orders should be addressed to the McGraw-Hill Book Company.
All other orders should be addressed to The MIT Press.
Outside North America
All orders should be addressed to The MIT Press or its local distributor.
© 1996 by The Massachusetts Institute of Technology
Second edition
All rights reserved. No part of this book may be reproduced in any form or by any electronic or
mechanical means (including photocopying, recording, or information storage and retrieval)
without permission in writing from the publisher.
This book was set by the authors using the LaTeX typesetting system and was printed and bound
in the United States of America.
Library of Congress Cataloging-in-Publication Data
Abelson, Harold
Structure and interpretation of computer programs / Harold Abelson
and Gerald Jay Sussman, with Julie Sussman. — 2nd ed.
p. cm. — (Electrical engineering and computer science
series)
Includes bibliographical references and index.
ISBN 0-262-01153-0 (MIT Press hardcover)
ISBN 0-262-51087-1 (MIT Press paperback)
ISBN 0-07-000484-6 (McGraw-Hill hardcover)
1. Electronic digital computers — Programming. 2. LISP (Computer
program language) I. Sussman, Gerald Jay. II. Sussman, Julie.
III. Title. IV. Series: MIT electrical engineering and computer
science series.
QA76.6.A255 1996
005.13’3 — dc20 96-17756
Fourth printing, 1999
This book is dedicated, in respect and admiration, to the spirit that lives in the computer.
‘‘I think that it’s extraordinarily important that we in computer science keep fun in computing. When
it started out, it was an awful lot of fun. Of course, the paying customers got shafted every now and
then, and after a while we began to take their complaints seriously. We began to feel as if we really
were responsible for the successful, error-free perfect use of these machines. I don’t think we are. I
think we’re responsible for stretching them, setting them off in new directions, and keeping fun in the
house. I hope the field of computer science never loses its sense of fun. Above all, I hope we don’t
become missionaries. Don’t feel as if you’re Bible salesmen. The world has too many of those already.
What you know about computing other people will learn. Don’t feel as if the key to successful
computing is only in your hands. What’s in your hands, I think and hope, is intelligence: the ability to
see the machine as more than when you were first led up to it, that you can make it more.’’
Alan J. Perlis (April 1, 1922-February 7, 1990)
Contents
Foreword
Preface to the Second Edition
Preface to the First Edition
Acknowledgments
1 Building Abstractions with Procedures
1.1 The Elements of Programming
1.1.1 Expressions
1.1.2 Naming and the Environment
1.1.3 Evaluating Combinations
1.1.4 Compound Procedures
1.1.5 The Substitution Model for Procedure Application
1.1.6 Conditional Expressions and Predicates
1.1.7 Example: Square Roots by Newton’s Method
1.1.8 Procedures as Black-Box Abstractions
1.2 Procedures and the Processes They Generate
1.2.1 Linear Recursion and Iteration
1.2.2 Tree Recursion
1.2.3 Orders of Growth
1.2.4 Exponentiation
1.2.5 Greatest Common Divisors
1.2.6 Example: Testing for Primality
1.3 Formulating Abstractions with Higher-Order Procedures
1.3.1 Procedures as Arguments
1.3.2 Constructing Procedures Using Lambda
1.3.3 Procedures as General Methods
1.3.4 Procedures as Returned Values
2 Building Abstractions with Data
2.1 Introduction to Data Abstraction
2.1.1 Example: Arithmetic Operations for Rational Numbers
2.1.2 Abstraction Barriers
2.1.3 What Is Meant by Data?
2.1.4 Extended Exercise: Interval Arithmetic
2.2 Hierarchical Data and the Closure Property
2.2.1 Representing Sequences
2.2.2 Hierarchical Structures
2.2.3 Sequences as Conventional Interfaces
2.2.4 Example: A Picture Language
2.3 Symbolic Data
2.3.1 Quotation
2.3.2 Example: Symbolic Differentiation
2.3.3 Example: Representing Sets
2.3.4 Example: Huffman Encoding Trees
2.4 Multiple Representations for Abstract Data
2.4.1 Representations for Complex Numbers
2.4.2 Tagged data
2.4.3 Data-Directed Programming and Additivity
2.5 Systems with Generic Operations
2.5.1 Generic Arithmetic Operations
2.5.2 Combining Data of Different Types
2.5.3 Example: Symbolic Algebra
3 Modularity, Objects, and State
3.1 Assignment and Local State
3.1.1 Local State Variables
3.1.2 The Benefits of Introducing Assignment
3.1.3 The Costs of Introducing Assignment
3.2 The Environment Model of Evaluation
3.2.1 The Rules for Evaluation
3.2.2 Applying Simple Procedures
3.2.3 Frames as the Repository of Local State
3.2.4 Internal Definitions
3.3 Modeling with Mutable Data
3.3.1 Mutable List Structure
3.3.2 Representing Queues
3.3.3 Representing Tables
3.3.4 A Simulator for Digital Circuits
3.3.5 Propagation of Constraints
3.4 Concurrency: Time Is of the Essence
3.4.1 The Nature of Time in Concurrent Systems
3.4.2 Mechanisms for Controlling Concurrency
3.5 Streams
3.5.1 Streams Are Delayed Lists
3.5.2 Infinite Streams
3.5.3 Exploiting the Stream Paradigm
3.5.4 Streams and Delayed Evaluation
3.5.5 Modularity of Functional Programs and Modularity of Objects
4 Metalinguistic Abstraction
4.1 The Metacircular Evaluator
4.1.1 The Core of the Evaluator
4.1.2 Representing Expressions
4.1.3 Evaluator Data Structures
4.1.4 Running the Evaluator as a Program
4.1.5 Data as Programs
4.1.6 Internal Definitions
4.1.7 Separating Syntactic Analysis from Execution
4.2 Variations on a Scheme — Lazy Evaluation
4.2.1 Normal Order and Applicative Order
4.2.2 An Interpreter with Lazy Evaluation
4.2.3 Streams as Lazy Lists
4.3 Variations on a Scheme — Nondeterministic Computing
4.3.1 Amb and Search
4.3.2 Examples of Nondeterministic Programs
4.3.3 Implementing the Amb Evaluator
4.4 Logic Programming
4.4.1 Deductive Information Retrieval
4.4.2 How the Query System Works
4.4.3 Is Logic Programming Mathematical Logic?
4.4.4 Implementing the Query System
5 Computing with Register Machines
5.1 Designing Register Machines
5.1.1 A Language for Describing Register Machines
5.1.2 Abstraction in Machine Design
5.1.3 Subroutines
5.1.4 Using a Stack to Implement Recursion
5.1.5 Instruction Summary
5.2 A Register-Machine Simulator
5.2.1 The Machine Model
5.2.2 The Assembler
5.2.3 Generating Execution Procedures for Instructions
5.2.4 Monitoring Machine Performance
5.3 Storage Allocation and Garbage Collection
5.3.1 Memory as Vectors
5.3.2 Maintaining the Illusion of Infinite Memory
5.4 The Explicit-Control Evaluator
5.4.1 The Core of the Explicit-Control Evaluator
5.4.2 Sequence Evaluation and Tail Recursion
5.4.3 Conditionals, Assignments, and Definitions
5.4.4 Running the Evaluator
5.5 Compilation
5.5.1 Structure of the Compiler
5.5.2 Compiling Expressions
5.5.3 Compiling Combinations
5.5.4 Combining Instruction Sequences
5.5.5 An Example of Compiled Code
5.5.6 Lexical Addressing
5.5.7 Interfacing Compiled Code to the Evaluator
References
List of Exercises
Index
Foreword
Educators, generals, dieticians, psychologists, and parents program. Armies, students, and some
societies are programmed. An assault on large problems employs a succession of programs, most of
which spring into existence en route. These programs are rife with issues that appear to be particular to
the problem at hand. To appreciate programming as an intellectual activity in its own right you must
turn to computer programming; you must read and write computer programs — many of them. It
doesn’t matter much what the programs are about or what applications they serve. What does matter is
how well they perform and how smoothly they fit with other programs in the creation of still greater
programs. The programmer must seek both perfection of part and adequacy of collection. In this book
the use of ‘‘program’’ is focused on the creation, execution, and study of programs written in a dialect
of Lisp for execution on a digital computer. Using Lisp we restrict or limit not what we may program,
but only the notation for our program descriptions.
Our traffic with the subject matter of this book involves us with three foci of phenomena: the human
mind, collections of computer programs, and the computer. Every computer program is a model,
hatched in the mind, of a real or mental process. These processes, arising from human experience and
thought, are huge in number, intricate in detail, and at any time only partially understood. They are
modeled to our permanent satisfaction rarely by our computer programs. Thus even though our
programs are carefully handcrafted discrete collections of symbols, mosaics of interlocking functions,
they continually evolve: we change them as our perception of the model deepens, enlarges, generalizes
until the model ultimately attains a metastable place within still another model with which we struggle.
The source of the exhilaration associated with computer programming is the continual unfolding
within the mind and on the computer of mechanisms expressed as programs and the explosion of
perception they generate. If art interprets our dreams, the computer executes them in the guise of
programs!
For all its power, the computer is a harsh taskmaster. Its programs must be correct, and what we wish
to say must be said accurately in every detail. As in every other symbolic activity, we become
convinced of program truth through argument. Lisp itself can be assigned a semantics (another model,
by the way), and if a program’s function can be specified, say, in the predicate calculus, the proof
methods of logic can be used to make an acceptable correctness argument. Unfortunately, as programs
get large and complicated, as they almost always do, the adequacy, consistency, and correctness of the
specifications themselves become open to doubt, so that complete formal arguments of correctness
seldom accompany large programs. Since large programs grow from small ones, it is crucial that we
develop an arsenal of standard program structures of whose correctness we have become sure — we
call them idioms — and learn to combine them into larger structures using organizational techniques of
proven value. These techniques are treated at length in this book, and understanding them is essential
to participation in the Promethean enterprise called programming. More than anything else, the
uncovering and mastery of powerful organizational techniques accelerates our ability to create large,
significant programs. Conversely, since writing large programs is very taxing, we are stimulated to
invent new methods of reducing the mass of function and detail to be fitted into large programs.
Unlike programs, computers must obey the laws of physics. If they wish to perform rapidly — a few
nanoseconds per state change — they must transmit electrons only small distances (at most 1 1/2 feet).
The heat generated by the huge number of devices so concentrated in space has to be removed. An
exquisite engineering art has been developed balancing between multiplicity of function and density of
devices. In any event, hardware always operates at a level more primitive than that at which we care to
program. The processes that transform our Lisp programs to ‘‘machine’’ programs are themselves
abstract models which we program. Their study and creation give a great deal of insight into the
organizational programs associated with programming arbitrary models. Of course the computer itself
can be so modeled. Think of it: the behavior of the smallest physical switching element is modeled by
quantum mechanics described by differential equations whose detailed behavior is captured by
numerical approximations represented in computer programs executing on computers composed of
…!
It is not merely a matter of tactical convenience to separately identify the three foci. Even though, as
they say, it’s all in the head, this logical separation induces an acceleration of symbolic traffic between
these foci whose richness, vitality, and potential is exceeded in human experience only by the
evolution of life itself. At best, relationships between the foci are metastable. The computers are never
large enough or fast enough. Each breakthrough in hardware technology leads to more massive
programming enterprises, new organizational principles, and an enrichment of abstract models. Every
reader should ask himself periodically ‘‘Toward what end, toward what end?’’ — but do not ask it too
often lest you pass up the fun of programming for the constipation of bittersweet philosophy.
Among the programs we write, some (but never enough) perform a precise mathematical function such
as sorting or finding the maximum of a sequence of numbers, determining primality, or finding the
square root. We call such programs algorithms, and a great deal is known of their optimal behavior,
particularly with respect to the two important parameters of execution time and data storage
requirements. A programmer should acquire good algorithms and idioms. Even though some programs
resist precise specifications, it is the responsibility of the programmer to estimate, and always to
attempt to improve, their performance.
Lisp is a survivor, having been in use for about a quarter of a century. Among the active programming
languages only Fortran has had a longer life. Both languages have supported the programming needs
of important areas of application, Fortran for scientific and engineering computation and Lisp for
artificial intelligence. These two areas continue to be important, and their programmers are so devoted
to these two languages that Lisp and Fortran may well continue in active use for at least another
quarter-century.
Lisp changes. The Scheme dialect used in this text has evolved from the original Lisp and differs from
the latter in several important ways, including static scoping for variable binding and permitting
functions to yield functions as values. In its semantic structure Scheme is as closely akin to Algol 60
as to early Lisps. Algol 60, never to be an active language again, lives on in the genes of Scheme and
Pascal. It would be difficult to find two languages that are the communicating coin of two more
different cultures than those gathered around these two languages. Pascal is for building pyramids —
imposing, breathtaking, static structures built by armies pushing heavy blocks into place. Lisp is for
building organisms — imposing, breathtaking, dynamic structures built by squads fitting fluctuating
myriads of simpler organisms into place. The organizing principles used are the same in both cases,
except for one extraordinarily important difference: The discretionary exportable functionality
entrusted to the individual Lisp programmer is more than an order of magnitude greater than that to be
found within Pascal enterprises. Lisp programs inflate libraries with functions whose utility transcends
the application that produced them. The list, Lisp’s native data structure, is largely responsible for such
growth of utility. The simple structure and natural applicability of lists are reflected in functions that
are amazingly nonidiosyncratic. In Pascal the plethora of declarable data structures induces a
specialization within functions that inhibits and penalizes casual cooperation. It is better to have 100
functions operate on one data structure than to have 10 functions operate on 10 data structures. As a
result the pyramid must stand unchanged for a millennium; the organism must evolve or perish.
To illustrate this difference, compare the treatment of material and exercises within this book with that
in any first-course text using Pascal. Do not labor under the illusion that this is a text digestible at MIT
only, peculiar to the breed found there. It is precisely what a serious book on programming Lisp must
be, no matter who the student is or where it is used.
Note that this is a text about programming, unlike most Lisp books, which are used as a preparation for
work in artificial intelligence. After all, the critical programming concerns of software engineering and
artificial intelligence tend to coalesce as the systems under investigation become larger. This explains
why there is such growing interest in Lisp outside of artificial intelligence.
As one would expect from its goals, artificial intelligence research generates many significant
programming problems. In other programming cultures this spate of problems spawns new languages.
Indeed, in any very large programming task a useful organizing principle is to control and isolate
traffic within the task modules via the invention of language. These languages tend to become less
primitive as one approaches the boundaries of the system where we humans interact most often. As a
result, such systems contain complex language-processing functions replicated many times. Lisp has
such a simple syntax and semantics that parsing can be treated as an elementary task. Thus parsing
technology plays almost no role in Lisp programs, and the construction of language processors is
rarely an impediment to the rate of growth and change of large Lisp systems. Finally, it is this very
simplicity of syntax and semantics that is responsible for the burden and freedom borne by all Lisp
programmers. No Lisp program of any size beyond a few lines can be written without being saturated
with discretionary functions. Invent and fit; have fits and reinvent! We toast the Lisp programmer who
pens his thoughts within nests of parentheses.
Alan J. Perlis
New Haven, Connecticut
Preface to the Second Edition
Is it possible that software is not like anything else, that it
is meant to be discarded: that the whole point is to always
see it as a soap bubble?
Alan J. Perlis
The material in this book has been the basis of MIT’s entry-level computer science subject since 1980.
We had been teaching this material for four years when the first edition was published, and twelve
more years have elapsed until the appearance of this second edition. We are pleased that our work has
been widely adopted and incorporated into other texts. We have seen our students take the ideas and
programs in this book and build them in as the core of new computer systems and languages. In literal
realization of an ancient Talmudic pun, our students have become our builders. We are lucky to have
such capable students and such accomplished builders.
In preparing this edition, we have incorporated hundreds of clarifications suggested by our own
teaching experience and the comments of colleagues at MIT and elsewhere. We have redesigned most
of the major programming systems in the book, including the generic-arithmetic system, the
interpreters, the register-machine simulator, and the compiler; and we have rewritten all the program
examples to ensure that any Scheme implementation conforming to the IEEE Scheme standard (IEEE
1990) will be able to run the code.
This edition emphasizes several new themes. The most important of these is the central role played by
different approaches to dealing with time in computational models: objects with state, concurrent
programming, functional programming, lazy evaluation, and nondeterministic programming. We have
included new sections on concurrency and nondeterminism, and we have tried to integrate this theme
throughout the book.
The first edition of the book closely followed the syllabus of our MIT one-semester subject. With all
the new material in the second edition, it will not be possible to cover everything in a single semester,
so the instructor will have to pick and choose. In our own teaching, we sometimes skip the section on
logic programming (section 4.4), we have students use the register-machine simulator but we do not
cover its implementation (section 5.2), and we give only a cursory overview of the compiler
(section 5.5). Even so, this is still an intense course. Some instructors may wish to cover only the first
three or four chapters, leaving the other material for subsequent courses.
The World-Wide-Web site www-mitpress.mit.edu/sicp provides support for users of this
book. This includes programs from the book, sample programming assignments, supplementary
materials, and downloadable implementations of the Scheme dialect of Lisp.
Preface to the First Edition
A computer is like a violin. You can imagine a novice
trying first a phonograph and then a violin. The latter, he
says, sounds terrible. That is the argument we have heard
from our humanists and most of our computer scientists.
Computer programs are good, they say, for particular
purposes, but they aren’t flexible. Neither is a violin, or a
typewriter, until you learn how to use it.
Marvin Minsky, ‘‘Why Programming Is a Good
Medium for Expressing Poorly-Understood and
Sloppily-Formulated Ideas’’
‘‘The Structure and Interpretation of Computer Programs’’ is the entry-level subject in computer
science at the Massachusetts Institute of Technology. It is required of all students at MIT who major in
electrical engineering or in computer science, as one-fourth of the ‘‘common core curriculum,’’ which
also includes two subjects on circuits and linear systems and a subject on the design of digital systems.
We have been involved in the development of this subject since 1978, and we have taught this material
in its present form since the fall of 1980 to between 600 and 700 students each year. Most of these
students have had little or no prior formal training in computation, although many have played with
computers a bit and a few have had extensive programming or hardware-design experience.
Our design of this introductory computer-science subject reflects two major concerns. First, we want
to establish the idea that a computer language is not just a way of getting a computer to perform
operations but rather that it is a novel formal medium for expressing ideas about methodology. Thus,
programs must be written for people to read, and only incidentally for machines to execute. Second,
we believe that the essential material to be addressed by a subject at this level is not the syntax of
particular programming-language constructs, nor clever algorithms for computing particular functions
efficiently, nor even the mathematical analysis of algorithms and the foundations of computing, but
rather the techniques used to control the intellectual complexity of large software systems.
Our goal is that students who complete this subject should have a good feel for the elements of style
and the aesthetics of programming. They should have command of the major techniques for
controlling complexity in a large system. They should be capable of reading a 50-page-long program,
if it is written in an exemplary style. They should know what not to read, and what they need not
understand at any moment. They should feel secure about modifying a program, retaining the spirit
and style of the original author.
These skills are by no means unique to computer programming. The techniques we teach and draw
upon are common to all of engineering design. We control complexity by building abstractions that
hide details when appropriate. We control complexity by establishing conventional interfaces that
enable us to construct systems by combining standard, well-understood pieces in a ‘‘mix and match’’
way. We control complexity by establishing new languages for describing a design, each of which
emphasizes particular aspects of the design and deemphasizes others.
Underlying our approach to this subject is our conviction that ‘‘computer science’’ is not a science and
that its significance has little to do with computers. The computer revolution is a revolution in the way
we think and in the way we express what we think. The essence of this change is the emergence of
what might best be called procedural epistemology — the study of the structure of knowledge from an
imperative point of view, as opposed to the more declarative point of view taken by classical
mathematical subjects. Mathematics provides a framework for dealing precisely with notions of ‘‘what
is.’’ Computation provides a framework for dealing precisely with notions of ‘‘how to.’’
In teaching our material we use a dialect of the programming language Lisp. We never formally teach
the language, because we don’t have to. We just use it, and students pick it up in a few days. This is
one great advantage of Lisp-like languages: They have very few ways of forming compound
expressions, and almost no syntactic structure. All of the formal properties can be covered in an hour,
like the rules of chess. After a short time we forget about syntactic details of the language (because
there are none) and get on with the real issues — figuring out what we want to compute, how we will
decompose problems into manageable parts, and how we will work on the parts. Another advantage of
Lisp is that it supports (but does not enforce) more of the large-scale strategies for modular
decomposition of programs than any other language we know. We can make procedural and data
abstractions, we can use higher-order functions to capture common patterns of usage, we can model
local state using assignment and data mutation, we can link parts of a program with streams and
delayed evaluation, and we can easily implement embedded languages. All of this is embedded in an
interactive environment with excellent support for incremental program design, construction, testing,
and debugging. We thank all the generations of Lisp wizards, starting with John McCarthy, who have
fashioned a fine tool of unprecedented power and elegance.
Scheme, the dialect of Lisp that we use, is an attempt to bring together the power and elegance of Lisp
and Algol. From Lisp we take the metalinguistic power that derives from the simple syntax, the
uniform representation of programs as data objects, and the garbage-collected heap-allocated data.
From Algol we take lexical scoping and block structure, which are gifts from the pioneers of
programming-language design who were on the Algol committee. We wish to cite John Reynolds and
Peter Landin for their insights into the relationship of Church’s lambda calculus to the structure of
programming languages. We also recognize our debt to the mathematicians who scouted out this
territory decades before computers appeared on the scene. These pioneers include Alonzo Church,
Barkley Rosser, Stephen Kleene, and Haskell Curry.
Acknowledgments
We would like to thank the many people who have helped us develop this book and this curriculum.
Our subject is a clear intellectual descendant of ‘‘6.231,’’ a wonderful subject on programming
linguistics and the lambda calculus taught at MIT in the late 1960s by Jack Wozencraft and Arthur
Evans, Jr.
We owe a great debt to Robert Fano, who reorganized MIT’s introductory curriculum in electrical
engineering and computer science to emphasize the principles of engineering design. He led us in
starting out on this enterprise and wrote the first set of subject notes from which this book evolved.
Much of the style and aesthetics of programming that we try to teach were developed in conjunction
with Guy Lewis Steele Jr., who collaborated with Gerald Jay Sussman in the initial development of the
Scheme language. In addition, David Turner, Peter Henderson, Dan Friedman, David Wise, and Will
Clinger have taught us many of the techniques of the functional programming community that appear
in this book.
Joel Moses taught us about structuring large systems. His experience with the Macsyma system for
symbolic computation provided the insight that one should avoid complexities of control and
concentrate on organizing the data to reflect the real structure of the world being modeled.
Marvin Minsky and Seymour Papert formed many of our attitudes about programming and its place in
our intellectual lives. To them we owe the understanding that computation provides a means of
expression for exploring ideas that would otherwise be too complex to deal with precisely. They
emphasize that a student’s ability to write and modify programs provides a powerful medium in which
exploring becomes a natural activity.
We also strongly agree with Alan Perlis that programming is lots of fun and we had better be careful to
support the joy of programming. Part of this joy derives from observing great masters at work. We are
fortunate to have been apprentice programmers at the feet of Bill Gosper and Richard Greenblatt.
It is difficult to identify all the people who have contributed to the development of our curriculum. We
thank all the lecturers, recitation instructors, and tutors who have worked with us over the past fifteen
years and put in many extra hours on our subject, especially Bill Siebert, Albert Meyer, Joe Stoy,
Randy Davis, Louis Braida, Eric Grimson, Rod Brooks, Lynn Stein, and Peter Szolovits. We would
like to specially acknowledge the outstanding teaching contributions of Franklyn Turbak, now at
Wellesley; his work in undergraduate instruction set a standard that we can all aspire to. We are
grateful to Jerry Saltzer and Jim Miller for helping us grapple with the mysteries of concurrency, and
to Peter Szolovits and David McAllester for their contributions to the exposition of nondeterministic
evaluation in chapter 4.
Many people have put in significant effort presenting this material at other universities. Some of the
people we have worked closely with are Jacob Katzenelson at the Technion, Hardy Mayer at the
University of California at Irvine, Joe Stoy at Oxford, Elisha Sacks at Purdue, and Jan Komorowski at
the Norwegian University of Science and Technology. We are exceptionally proud of our colleagues
who have received major teaching awards for their adaptations of this subject at other universities,
including Kenneth Yip at Yale, Brian Harvey at the University of California at Berkeley, and Dan
Huttenlocher at Cornell.
Al Moyé arranged for us to teach this material to engineers at Hewlett-Packard, and for the production
of videotapes of these lectures. We would like to thank the talented instructors — in particular Jim
Miller, Bill Siebert, and Mike Eisenberg — who have designed continuing education courses
incorporating these tapes and taught them at universities and industry all over the world.
Many educators in other countries have put in significant work translating the first edition. Michel
Briand, Pierre Chamard, and André Pic produced a French edition; Susanne Daniels-Herold produced
a German edition; and Fumio Motoyoshi produced a Japanese edition. We do not know who produced
the Chinese edition, but we consider it an honor to have been selected as the subject of an
‘‘unauthorized’’ translation.
It is hard to enumerate all the people who have made technical contributions to the development of the
Scheme systems we use for instructional purposes. In addition to Guy Steele, principal wizards have
included Chris Hanson, Joe Bowbeer, Jim Miller, Guillermo Rozas, and Stephen Adams. Others who
have put in significant time are Richard Stallman, Alan Bawden, Kent Pitman, Jon Taft, Neil Mayle,
John Lamping, Gwyn Osnos, Tracy Larrabee, George Carrette, Soma Chaudhuri, Bill Chiarchiaro,
Steven Kirsch, Leigh Klotz, Wayne Noss, Todd Cass, Patrick O’Donnell, Kevin Theobald, Daniel
Weise, Kenneth Sinclair, Anthony Courtemanche, Henry M. Wu, Andrew Berlin, and Ruth Shyu.
Beyond the MIT implementation, we would like to thank the many people who worked on the IEEE
Scheme standard, including William Clinger and Jonathan Rees, who edited the R4RS, and Chris
Haynes, David Bartley, Chris Hanson, and Jim Miller, who prepared the IEEE standard.
Dan Friedman has been a long-time leader of the Scheme community. The community’s broader work
goes beyond issues of language design to encompass significant educational innovations, such as the
high-school curriculum based on EdScheme by Schemer’s Inc., and the wonderful books by Mike
Eisenberg and by Brian Harvey and Matthew Wright.
We appreciate the work of those who contributed to making this a real book, especially Terry Ehling,
Larry Cohen, and Paul Bethge at the MIT Press. Ella Mazel found the wonderful cover image. For the
second edition we are particularly grateful to Bernard and Ella Mazel for help with the book design,
and to David Jones, TeX wizard extraordinaire. We also are indebted to those readers who made
penetrating comments on the new draft: Jacob Katzenelson, Hardy Mayer, Jim Miller, and especially
Brian Harvey, who did unto this book as Julie did unto his book Simply Scheme.
Finally, we would like to acknowledge the support of the organizations that have encouraged this work
over the years, including support from Hewlett-Packard, made possible by Ira Goldstein and Joel
Birnbaum, and support from DARPA, made possible by Bob Kahn.
Chapter 1
Building Abstractions with Procedures
The acts of the mind, wherein it exerts its power over
simple ideas, are chiefly these three: 1. Combining
several simple ideas into one compound one, and thus all
complex ideas are made. 2. The second is bringing two
ideas, whether simple or complex, together, and setting
them by one another so as to take a view of them at once,
without uniting them into one, by which it gets all its
ideas of relations. 3. The third is separating them from all
other ideas that accompany them in their real existence:
this is called abstraction, and thus all its general ideas are
made.
John Locke, An Essay Concerning Human Understanding
(1690)
We are about to study the idea of a computational process. Computational processes are abstract
beings that inhabit computers. As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules called a program. People create programs to
direct processes. In effect, we conjure the spirits of the computer with our spells.
A computational process is indeed much like a sorcerer’s idea of a spirit. It cannot be seen or touched.
It is not composed of matter at all. However, it is very real. It can perform intellectual work. It can
answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm
in a factory. The programs we use to conjure processes are like a sorcerer’s spells. They are carefully
composed from symbolic expressions in arcane and esoteric programming languages that prescribe the
tasks we want our processes to perform.
A computational process, in a correctly working computer, executes programs precisely and
accurately. Thus, like the sorcerer’s apprentice, novice programmers must learn to understand and to
anticipate the consequences of their conjuring. Even small errors (usually called bugs or glitches) in
programs can have complex and unanticipated consequences.
Fortunately, learning to program is considerably less dangerous than learning sorcery, because the
spirits we deal with are conveniently contained in a secure way. Real-world programming, however,
requires care, expertise, and wisdom. A small bug in a computer-aided design program, for example,
can lead to the catastrophic collapse of an airplane or a dam or the self-destruction of an industrial
robot.
Master software engineers have the ability to organize programs so that they can be reasonably sure
that the resulting processes will perform the tasks intended. They can visualize the behavior of their
systems in advance. They know how to structure programs so that unanticipated problems do not lead
to catastrophic consequences, and when problems do arise, they can debug their programs.
Well-designed computational systems, like well-designed automobiles or nuclear reactors, are
designed in a modular manner, so that the parts can be constructed, replaced, and debugged separately.
Programming in Lisp
We need an appropriate language for describing processes, and we will use for this purpose the
programming language Lisp. Just as our everyday thoughts are usually expressed in our natural
language (such as English, French, or Japanese), and descriptions of quantitative phenomena are
expressed with mathematical notations, our procedural thoughts will be expressed in Lisp. Lisp was
invented in the late 1950s as a formalism for reasoning about the use of certain kinds of logical
expressions, called recursion equations, as a model for computation. The language was conceived by
John McCarthy and is based on his paper ‘‘Recursive Functions of Symbolic Expressions and Their
Computation by Machine’’ (McCarthy 1960).
Despite its inception as a mathematical formalism, Lisp is a practical programming language. A Lisp
interpreter is a machine that carries out processes described in the Lisp language. The first Lisp
interpreter was implemented by McCarthy with the help of colleagues and students in the Artificial
Intelligence Group of the MIT Research Laboratory of Electronics and in the MIT Computation
Center.1 Lisp, whose name is an acronym for LISt Processing, was designed to provide
symbol-manipulating capabilities for attacking programming problems such as the symbolic
differentiation and integration of algebraic expressions. It included for this purpose new data objects
known as atoms and lists, which most strikingly set it apart from all other languages of the period.
Lisp was not the product of a concerted design effort. Instead, it evolved informally in an experimental
manner in response to users’ needs and to pragmatic implementation considerations. Lisp’s informal
evolution has continued through the years, and the community of Lisp users has traditionally resisted
attempts to promulgate any ‘‘official’’ definition of the language. This evolution, together with the
flexibility and elegance of the initial conception, has enabled Lisp, which is the second oldest language
in widespread use today (only Fortran is older), to continually adapt to encompass the most modern
ideas about program design. Thus, Lisp is by now a family of dialects, which, while sharing most of
the original features, may differ from one another in significant ways. The dialect of Lisp used in this
book is called Scheme.2
Because of its experimental character and its emphasis on symbol manipulation, Lisp was at first very
inefficient for numerical computations, at least in comparison with Fortran. Over the years, however,
Lisp compilers have been developed that translate programs into machine code that can perform
numerical computations reasonably efficiently. And for special applications, Lisp has been used with
great effectiveness.3 Although Lisp has not yet overcome its old reputation as hopelessly inefficient,
Lisp is now used in many applications where efficiency is not the central concern. For example, Lisp
has become a language of choice for operating-system shell languages and for extension languages for
editors and computer-aided design systems.
If Lisp is not a mainstream language, why are we using it as the framework for our discussion of
programming? Because the language possesses unique features that make it an excellent medium for
studying important programming constructs and data structures and for relating them to the linguistic
features that support them. The most significant of these features is the fact that Lisp descriptions of
processes, called procedures, can themselves be represented and manipulated as Lisp data. The
importance of this is that there are powerful program-design techniques that rely on the ability to blur
the traditional distinction between ‘‘passive’’ data and ‘‘active’’ processes. As we shall discover,
Lisp’s flexibility in handling procedures as data makes it one of the most convenient languages in
existence for exploring these techniques. The ability to represent procedures as data also makes Lisp
an excellent language for writing programs that must manipulate other programs as data, such as the
interpreters and compilers that support computer languages. Above and beyond these considerations,
programming in Lisp is great fun.
1 The Lisp 1 Programmer’s Manual appeared in 1960, and the Lisp 1.5 Programmer’s Manual
(McCarthy 1965) was published in 1962. The early history of Lisp is described in McCarthy 1978.
2 The two dialects in which most major Lisp programs of the 1970s were written are MacLisp (Moon
1978; Pitman 1983), developed at the MIT Project MAC, and Interlisp (Teitelman 1974), developed at
Bolt Beranek and Newman Inc. and the Xerox Palo Alto Research Center. Portable Standard Lisp
(Hearn 1969; Griss 1981) was a Lisp dialect designed to be easily portable between different
machines. MacLisp spawned a number of subdialects, such as Franz Lisp, which was developed at the
University of California at Berkeley, and Zetalisp (Moon 1981), which was based on a special-purpose
processor designed at the MIT Artificial Intelligence Laboratory to run Lisp very efficiently. The Lisp
dialect used in this book, called Scheme (Steele 1975), was invented in 1975 by Guy Lewis Steele Jr.
and Gerald Jay Sussman of the MIT Artificial Intelligence Laboratory and later reimplemented for
instructional use at MIT. Scheme became an IEEE standard in 1990 (IEEE 1990). The Common Lisp
dialect (Steele 1982, Steele 1990) was developed by the Lisp community to combine features from the
earlier Lisp dialects to make an industrial standard for Lisp. Common Lisp became an ANSI standard
in 1994 (ANSI 1994).
3 One such special application was a breakthrough computation of scientific importance — an
integration of the motion of the Solar System that extended previous results by nearly two orders of
magnitude, and demonstrated that the dynamics of the Solar System is chaotic. This computation was
made possible by new integration algorithms, a special-purpose compiler, and a special-purpose
computer all implemented with the aid of software tools written in Lisp (Abelson et al. 1992; Sussman
and Wisdom 1992).
1.1 The Elements of Programming
A powerful programming language is more than just a means for instructing a computer to perform
tasks. The language also serves as a framework within which we organize our ideas about processes.
Thus, when we describe a language, we should pay particular attention to the means that the language
provides for combining simple ideas to form more complex ideas. Every powerful language has three
mechanisms for accomplishing this:
primitive expressions, which represent the simplest entities the language is concerned with,
means of combination, by which compound elements are built from simpler ones, and
means of abstraction, by which compound elements can be named and manipulated as units.
In programming, we deal with two kinds of elements: procedures and data. (Later we will discover that
they are really not so distinct.) Informally, data is ‘‘stuff’’ that we want to manipulate, and procedures
are descriptions of the rules for manipulating the data. Thus, any powerful programming language
should be able to describe primitive data and primitive procedures and should have methods for
combining and abstracting procedures and data.
In this chapter we will deal only with simple numerical data so that we can focus on the rules for
building procedures.4 In later chapters we will see that these same rules allow us to build procedures
to manipulate compound data as well.
1.1.1 Expressions
One easy way to get started at programming is to examine some typical interactions with an interpreter
for the Scheme dialect of Lisp. Imagine that you are sitting at a computer terminal. You type an
expression, and the interpreter responds by displaying the result of its evaluating that expression.
One kind of primitive expression you might type is a number. (More precisely, the expression that you
type consists of the numerals that represent the number in base 10.) If you present Lisp with a number
486
the interpreter will respond by printing5
486
Expressions representing numbers may be combined with an expression representing a primitive
procedure (such as + or *) to form a compound expression that represents the application of the
procedure to those numbers. For example:
(+ 137 349)
486
(- 1000 334)
666
(* 5 99)
495
(/ 10 5)
2
(+ 2.7 10)
12.7
Expressions such as these, formed by delimiting a list of expressions within parentheses in order to
denote procedure application, are called combinations. The leftmost element in the list is called the
operator, and the other elements are called operands. The value of a combination is obtained by
applying the procedure specified by the operator to the arguments that are the values of the operands.
The convention of placing the operator to the left of the operands is known as prefix notation, and it
may be somewhat confusing at first because it departs significantly from the customary mathematical
convention. Prefix notation has several advantages, however. One of them is that it can accommodate
procedures that may take an arbitrary number of arguments, as in the following examples:
(+ 21 35 12 7)
75
(* 25 4 12)
1200
No ambiguity can arise, because the operator is always the leftmost element and the entire
combination is delimited by the parentheses.
A second advantage of prefix notation is that it extends in a straightforward way to allow combinations
to be nested, that is, to have combinations whose elements are themselves combinations:
(+ (* 3 5) (- 10 6))
19
There is no limit (in principle) to the depth of such nesting and to the overall complexity of the
expressions that the Lisp interpreter can evaluate. It is we humans who get confused by still relatively
simple expressions such as
(+ (* 3 (+ (* 2 4) (+ 3 5))) (+ (- 10 7) 6))
which the interpreter would readily evaluate to be 57. We can help ourselves by writing such an
expression in the form
(+ (* 3
(+ (* 2 4)
(+ 3 5)))
(+ (- 10 7)
6))
following a formatting convention known as pretty-printing, in which each long combination is
written so that the operands are aligned vertically. The resulting indentations display clearly the
structure of the expression.6
Even with complex expressions, the interpreter always operates in the same basic cycle: It reads an
expression from the terminal, evaluates the expression, and prints the result. This mode of operation is
often expressed by saying that the interpreter runs in a read-eval-print loop. Observe in particular that
it is not necessary to explicitly instruct the interpreter to print the value of the expression.7
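The driver loop itself can be pictured as a short Scheme procedure. The following is only an
illustrative sketch, not how any particular interpreter is actually written; my-eval stands for a
hypothetical evaluation procedure, while read, display, and newline are standard Scheme primitives:
(define (driver-loop)
  (display "> ")                     ; prompt for input
  (let ((value (my-eval (read))))    ; read an expression and evaluate it
    (display value)                  ; print the result automatically --
    (newline)                        ;   no explicit print request needed
    (driver-loop)))                  ; go round again for the next expression
Each trip around the loop performs exactly one read-eval-print cycle.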
1.1.2 Naming and the Environment
A critical aspect of a programming language is the means it provides for using names to refer to
computational objects. We say that the name identifies a variable whose value is the object.
In the Scheme dialect of Lisp, we name things with define. Typing
(define size 2)
causes the interpreter to associate the value 2 with the name size.8 Once the name size has been
associated with the number 2, we can refer to the value 2 by name:
size
2
(* 5 size)
10
Here are further examples of the use of define:
(define pi 3.14159)
(define radius 10)
(* pi (* radius radius))
314.159
(define circumference (* 2 pi radius))
circumference
62.8318
Define is our language’s simplest means of abstraction, for it allows us to use simple names to refer
to the results of compound operations, such as the circumference computed above. In general,
computational objects may have very complex structures, and it would be extremely inconvenient to
have to remember and repeat their details each time we want to use them. Indeed, complex programs
are constructed by building, step by step, computational objects of increasing complexity. The
interpreter makes this step-by-step program construction particularly convenient because name-object
associations can be created incrementally in successive interactions. This feature encourages the
incremental development and testing of programs and is largely responsible for the fact that a Lisp
program usually consists of a large number of relatively simple procedures.
It should be clear that the possibility of associating values with symbols and later retrieving them
means that the interpreter must maintain some sort of memory that keeps track of the name-object
pairs. This memory is called the environment (more precisely the global environment, since we will
see later that a computation may involve a number of different environments).9
1.1.3 Evaluating Combinations
One of our goals in this chapter is to isolate issues about thinking procedurally. As a case in point, let
us consider that, in evaluating combinations, the interpreter is itself following a procedure.
To evaluate a combination, do the following:
1. Evaluate the subexpressions of the combination.
2. Apply the procedure that is the value of the leftmost subexpression (the operator) to the
arguments that are the values of the other subexpressions (the operands).
Even this simple rule illustrates some important points about processes in general. First, observe that
the first step dictates that in order to accomplish the evaluation process for a combination we must first
perform the evaluation process on each element of the combination. Thus, the evaluation rule is
recursive in nature; that is, it includes, as one of its steps, the need to invoke the rule itself.10
Notice how succinctly the idea of recursion can be used to express what, in the case of a deeply nested
combination, would otherwise be viewed as a rather complicated process. For example, evaluating
(* (+ 2 (* 4 6))
(+ 3 5 7))
requires that the evaluation rule be applied to four different combinations. We can obtain a picture of
this process by representing the combination in the form of a tree, as shown in figure 1.1. Each
combination is represented by a node with branches corresponding to the operator and the operands of
the combination stemming from it. The terminal nodes (that is, nodes with no branches stemming from
them) represent either operators or numbers. Viewing evaluation in terms of the tree, we can imagine
that the values of the operands percolate upward, starting from the terminal nodes and then combining
at higher and higher levels. In general, we shall see that recursion is a very powerful technique for
dealing with hierarchical, treelike objects. In fact, the ‘‘percolate values upward’’ form of the
evaluation rule is an example of a general kind of process known as tree accumulation.
Figure 1.1: Tree representation, showing the value of each subcombination.
Next, observe that the repeated application of the first step brings us to the point where we need to
evaluate, not combinations, but primitive expressions such as numerals, built-in operators, or other
names. We take care of the primitive cases by stipulating that
the values of numerals are the numbers that they name,
the values of built-in operators are the machine instruction sequences that carry out the
corresponding operations, and
the values of other names are the objects associated with those names in the environment.
We may regard the second rule as a special case of the third one by stipulating that symbols such as +
and * are also included in the global environment, and are associated with the sequences of machine
instructions that are their ‘‘values.’’ The key point to notice is the role of the environment in
determining the meaning of the symbols in expressions. In an interactive language such as Lisp, it is
meaningless to speak of the value of an expression such as (+ x 1) without specifying any
information about the environment that would provide a meaning for the symbol x (or even for the
symbol +). As we shall see in chapter 3, the general notion of the environment as providing a context
in which evaluation takes place will play an important role in our understanding of program execution.
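Taken together, these rules can be captured in a few lines of Scheme. The sketch below merely
foreshadows the metacircular evaluator of chapter 4 and glosses over many details: lookup is a
hypothetical procedure that finds the value associated with a name in an environment, and apply is
Scheme's standard procedure for applying a procedure to a list of argument values.
(define (simple-eval exp env)
  (cond ((number? exp) exp)                        ; numerals name numbers
        ((symbol? exp) (lookup exp env))           ; names get their values from the environment
        (else                                      ; otherwise exp is a combination:
         (apply (simple-eval (car exp) env)        ;   evaluate the operator,
                (map (lambda (operand)             ;   evaluate each operand,
                       (simple-eval operand env))
                     (cdr exp))))))                ;   then apply procedure to arguments
Notice that the recursive nature of the evaluation rule appears directly as the recursive calls to
simple-eval.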
Notice that the evaluation rule given above does not handle definitions. For instance, evaluating
(define x 3) does not apply define to two arguments, one of which is the value of the symbol
x and the other of which is 3, since the purpose of the define is precisely to associate x with a value.
(That is, (define x 3) is not a combination.)
Such exceptions to the general evaluation rule are called special forms. Define is the only example
of a special form that we have seen so far, but we will meet others shortly. Each special form has its
own evaluation rule. The various kinds of expressions (each with its associated evaluation rule)
constitute the syntax of the programming language. In comparison with most other programming
languages, Lisp has a very simple syntax; that is, the evaluation rule for expressions can be described
by a simple general rule together with specialized rules for a small number of special forms.11
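In terms of the evaluator sketch above, a special form is a case that the evaluator checks for before
falling through to the general rule. The extension below is again only a sketch, handling just the
simple variable form of define; define-variable! is a helper assumed here (a name along the lines
of the operation that appears in chapter 4's evaluator):
(define (simple-eval exp env)
  (cond ((number? exp) exp)
        ((symbol? exp) (lookup exp env))
        ((eq? (car exp) 'define)                   ; special form: its own evaluation rule --
         (define-variable! (cadr exp)              ;   the name is NOT evaluated;
                           (simple-eval (caddr exp) env)  ; only the value expression is
                           env))
        (else
         (apply (simple-eval (car exp) env)
                (map (lambda (operand) (simple-eval operand env))
                     (cdr exp))))))
The essential point is that the dispatch on the symbol define happens before the general rule ever
evaluates the subexpressions.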
1.1.4 Compound Procedures
We have identified in Lisp some of the elements that must appear in any powerful programming
language:
Numbers and arithmetic operations are primitive data and procedures.
Nesting of combinations provides a means of combining operations.
Definitions that associate names with values provide a limited means of abstraction.
Now we will learn about procedure definitions, a much more powerful abstraction technique by which
a compound operation can be given a name and then referred to as a unit.
We begin by examining how to express the idea of ‘‘squaring.’’ We might say, ‘‘To square something,
multiply it by itself.’’ This is expressed in our language as
(define (square x) (* x x))
We can understand this in the following way:
(define (square    x)          (*      x       x))
   To   square something,   multiply   it   by itself.
We have here a compound procedure, which has been given the name square. The procedure
represents the operation of multiplying something by itself. The thing to be multiplied is given a local
name, x, which plays the same role that a pronoun plays in natural language. Evaluating the definition
creates this compound procedure and associates it with the name square.12
The general form of a procedure definition is
(define (<name> <formal parameters>) <body>)
The <name> is a symbol to be associated with the procedure definition in the environment.13 The
<formal parameters> are the names used within the body of the procedure to refer to the
corresponding arguments of the procedure. The <body> is an expression that will yield the value of the
procedure application when the formal parameters are replaced by the actual arguments to which the
procedure is applied.14 The <name> and the <formal parameters> are grouped within parentheses,
just as they would be in an actual call to the procedure being defined.
Having defined square, we can now use it:
(square 21)
441
(square (+ 2 5))
49
(square (square 3))
81
We can also use square as a building block in defining other procedures. For example, x² + y² can
be expressed as
(+ (square x) (square y))
We can easily define a procedure sum-of-squares that, given any two numbers as arguments,
produces the sum of their squares:
(define (sum-of-squares x y)
(+ (square x) (square y)))
(sum-of-squares 3 4)
25
Now we can use sum-of-squares as a building block in constructing further procedures:
(define (f a)
(sum-of-squares (+ a 1) (* a 2)))
(f 5)
136
Compound procedures are used in exactly the same way as primitive procedures. Indeed, one could
not tell by looking at the definition of sum-of-squares given above whether square was built
into the interpreter, like + and *, or defined as a compound procedure.
1.1.5 The Substitution Model for Procedure Application
To evaluate a combination whose operator names a compound procedure, the interpreter follows much
the same process as for combinations whose operators name primitive procedures, which we described
in section 1.1.3. That is, the interpreter evaluates the elements of the combination and applies the
procedure (which is the value of the operator of the combination) to the arguments (which are the
values of the operands of the combination).
We can assume that the mechanism for applying primitive procedures to arguments is built into the
interpreter. For compound procedures, the application process is as follows:
To apply a compound procedure to arguments, evaluate the body of the procedure with each
formal parameter replaced by the corresponding argument.
To illustrate this process, let’s evaluate the combination
(f 5)
where f is the procedure defined in section 1.1.4. We begin by retrieving the body of f:
(sum-of-squares (+ a 1) (* a 2))
Then we replace the formal parameter a by the argument 5:
(sum-of-squares (+ 5 1) (* 5 2))
Thus the problem reduces to the evaluation of a combination with two operands and an operator
sum-of-squares. Evaluating this combination involves three subproblems. We must evaluate the
operator to get the procedure to be applied, and we must evaluate the operands to get the arguments.
Now (+ 5 1) produces 6 and (* 5 2) produces 10, so we must apply the sum-of-squares
procedure to 6 and 10. These values are substituted for the formal parameters x and y in the body of
sum-of-squares, reducing the expression to
(+ (square 6) (square 10))
If we use the definition of square, this reduces to
(+ (* 6 6) (* 10 10))
which reduces by multiplication to
(+ 36 100)
and finally to
136
The process we have just described is called the substitution model for procedure application. It can be
taken as a model that determines the ‘‘meaning’’ of procedure application, insofar as the procedures in
this chapter are concerned. However, there are two points that should be stressed:
The purpose of the substitution is to help us think about procedure application, not to provide a
description of how the interpreter really works. Typical interpreters do not evaluate procedure
applications by manipulating the text of a procedure to substitute values for the formal
parameters. In practice, the ‘‘substitution’’ is accomplished by using a local environment for the
formal parameters. We will discuss this more fully in chapters 3 and 4 when we examine the
implementation of an interpreter in detail.
Over the course of this book, we will present a sequence of increasingly elaborate models of how
interpreters work, culminating with a complete implementation of an interpreter and compiler in
chapter 5. The substitution model is only the first of these models — a way to get started thinking
formally about the evaluation process. In general, when modeling phenomena in science and
engineering, we begin with simplified, incomplete models. As we examine things in greater
detail, these simple models become inadequate and must be replaced by more refined models.
The substitution model is no exception. In particular, when we address in chapter 3 the use of
procedures with ‘‘mutable data,’’ we will see that the substitution model breaks down and must
be replaced by a more complicated model of procedure application.15
Applicative order versus normal order
According to the description of evaluation given in section 1.1.3, the interpreter first evaluates the
operator and operands and then applies the resulting procedure to the resulting arguments. This is not
the only way to perform evaluation. An alternative evaluation model would not evaluate the operands
until their values were needed. Instead it would first substitute operand expressions for parameters
until it obtained an expression involving only primitive operators, and would then perform the
evaluation. If we used this method, the evaluation of
(f 5)
would proceed according to the sequence of expansions
(sum-of-squares (+ 5 1) (* 5 2))
(+ (square (+ 5 1)) (square (* 5 2)) )
(+ (* (+ 5 1) (+ 5 1)) (* (* 5 2) (* 5 2)))
followed by the reductions
(+ (* 6 6) (* 10 10))
(+ 36 100)
136
This gives the same answer as our previous evaluation model, but the process is different. In
particular, the evaluations of (+ 5 1) and (* 5 2) are each performed twice here, corresponding
to the reduction of the expression
(* x x)
with x replaced respectively by (+ 5 1) and (* 5 2).
This alternative ‘‘fully expand and then reduce’’ evaluation method is known as normal-order
evaluation, in contrast to the ‘‘evaluate the arguments and then apply’’ method that the interpreter
actually uses, which is called applicative-order evaluation. It can be shown that, for procedure
applications that can be modeled using substitution (including all the procedures in the first two
chapters of this book) and that yield legitimate values, normal-order and applicative-order evaluation
produce the same value. (See exercise 1.5 for an instance of an ‘‘illegitimate’’ value where
normal-order and applicative-order evaluation do not give the same result.)
Lisp uses applicative-order evaluation, partly because of the additional efficiency obtained from
avoiding multiple evaluations of expressions such as those illustrated with (+ 5 1) and (* 5 2)
above and, more significantly, because normal-order evaluation becomes much more complicated to
deal with when we leave the realm of procedures that can be modeled by substitution. On the other
hand, normal-order evaluation can be an extremely valuable tool, and we will investigate some of its
implications in chapters 3 and 4.16
1.1.6 Conditional Expressions and Predicates
The expressive power of the class of procedures that we can define at this point is very limited,
because we have no way to make tests and to perform different operations depending on the result of a
test. For instance, we cannot define a procedure that computes the absolute value of a number by
testing whether the number is positive, negative, or zero and taking different actions in the different
cases according to the rule

         x    if x > 0
|x| =    0    if x = 0
        -x    if x < 0
This construct is called a case analysis, and there is a special form in Lisp for notating such a case
analysis. It is called cond (which stands for ‘‘conditional’’), and it is used as follows:
(define (abs x)
(cond ((> x 0) x)
((= x 0) 0)
((< x 0) (- x))))
The general form of a conditional expression is

(cond (<p1> <e1>)
      (<p2> <e2>)
      ...
      (<pn> <en>))

consisting of the symbol cond followed by parenthesized pairs of expressions (<p> <e>) called
clauses. The first expression in each pair is a predicate -- that is, an expression whose value is
interpreted as either true or false.17
Conditional expressions are evaluated as follows. The predicate <p1> is evaluated first. If its value is
false, then <p2> is evaluated. If <p2>'s value is also false, then <p3> is evaluated. This process
continues until a predicate is found whose value is true, in which case the interpreter returns the value
of the corresponding consequent expression <e> of the clause as the value of the conditional
expression. If none of the <p>'s is found to be true, the value of the cond is undefined.
The word predicate is used for procedures that return true or false, as well as for expressions that
evaluate to true or false. The absolute-value procedure abs makes use of the primitive predicates >, <,
and =.18 These take two numbers as arguments and test whether the first number is, respectively,
greater than, less than, or equal to the second number, returning true or false accordingly.
Another way to write the absolute-value procedure is
(define (abs x)
(cond ((< x 0) (- x))
(else x)))
which could be expressed in English as ‘‘If x is less than zero return - x; otherwise return x.’’ Else is
a special symbol that can be used in place of the <p> in the final clause of a cond. This causes the
cond to return as its value the value of the corresponding <e> whenever all previous clauses have
been bypassed. In fact, any expression that always evaluates to a true value could be used as the <p>
here.
Here is yet another way to write the absolute-value procedure:
(define (abs x)
(if (< x 0)
(- x)
x))
This uses the special form if, a restricted type of conditional that can be used when there are
precisely two cases in the case analysis. The general form of an if expression is
(if <predicate> <consequent> <alternative>)
To evaluate an if expression, the interpreter starts by evaluating the <predicate> part of the
expression. If the <predicate> evaluates to a true value, the interpreter then evaluates the
<consequent> and returns its value. Otherwise it evaluates the <alternative> and returns its value.19
In addition to primitive predicates such as <, =, and >, there are logical composition operations, which
enable us to construct compound predicates. The three most frequently used are these:
(and <e1> ... <en>)
The interpreter evaluates the expressions <e> one at a time, in left-to-right order. If any <e>
evaluates to false, the value of the and expression is false, and the rest of the <e>'s are not
evaluated. If all <e>'s evaluate to true values, the value of the and expression is the value of the
last one.
(or <e1> ... <en>)
The interpreter evaluates the expressions <e> one at a time, in left-to-right order. If any <e>
evaluates to a true value, that value is returned as the value of the or expression, and the rest of
the <e>'s are not evaluated. If all <e>'s evaluate to false, the value of the or expression is false.
(not <e>)
The value of a not expression is true when the expression <e> evaluates to false, and false
otherwise.
Notice that and and or are special forms, not procedures, because the subexpressions are not
necessarily all evaluated. Not is an ordinary procedure.
As an example of how these are used, the condition that a number x be in the range 5 < x < 10 may be
expressed as
(and (> x 5) (< x 10))
As another example, we can define a predicate to test whether one number is greater than or equal to
another as
(define (>= x y)
(or (> x y) (= x y)))
or alternatively as
(define (>= x y)
(not (< x y)))
Exercise 1.1. Below is a sequence of expressions. What is the result printed by the interpreter in
response to each expression? Assume that the sequence is to be evaluated in the order in which it is
presented.
10
(+ 5 3 4)
(- 9 1)
(/ 6 2)
(+ (* 2 4) (- 4 6))
(define a 3)
(define b (+ a 1))
(+ a b (* a b))
(= a b)
(if (and (> b a) (< b (* a b)))
b
a)
(cond ((= a 4) 6)
((= b 4) (+ 6 7 a))
(else 25))
(+ 2 (if (> b a) b a))
(* (cond ((> a b) a)
((< a b) b)
(else -1))
(+ a 1))
Exercise 1.2. Translate the following expression into prefix form:

5 + 4 + (2 - (3 - (6 + 4/5)))
-----------------------------
       3(6 - 2)(2 - 7)
Exercise 1.3. Define a procedure that takes three numbers as arguments and returns the sum of the
squares of the two larger numbers.
Exercise 1.4. Observe that our model of evaluation allows for combinations whose operators are
compound expressions. Use this observation to describe the behavior of the following procedure:
(define (a-plus-abs-b a b)
((if (> b 0) + -) a b))
Exercise 1.5. Ben Bitdiddle has invented a test to determine whether the interpreter he is faced with is
using applicative-order evaluation or normal-order evaluation. He defines the following two
procedures:
(define (p) (p))
(define (test x y)
(if (= x 0)
0
y))
Then he evaluates the expression
(test 0 (p))
What behavior will Ben observe with an interpreter that uses applicative-order evaluation? What
behavior will he observe with an interpreter that uses normal-order evaluation? Explain your answer.
(Assume that the evaluation rule for the special form if is the same whether the interpreter is using
normal or applicative order: The predicate expression is evaluated first, and the result determines
whether to evaluate the consequent or the alternative expression.)
1.1.7 Example: Square Roots by Newton’s Method
Procedures, as introduced above, are much like ordinary mathematical functions. They specify a value
that is determined by one or more parameters. But there is an important difference between
mathematical functions and computer procedures. Procedures must be effective.
As a case in point, consider the problem of computing square roots. We can define the square-root
function as

√x = the y such that y ≥ 0 and y² = x
This describes a perfectly legitimate mathematical function. We could use it to recognize whether one
number is the square root of another, or to derive facts about square roots in general. On the other
hand, the definition does not describe a procedure. Indeed, it tells us almost nothing about how to
actually find the square root of a given number. It will not help matters to rephrase this definition in
pseudo-Lisp:
(define (sqrt x)
(the y (and (>= y 0)
(= (square y) x))))
This only begs the question.
The contrast between function and procedure is a reflection of the general distinction between
describing properties of things and describing how to do things, or, as it is sometimes referred to, the
distinction between declarative knowledge and imperative knowledge. In mathematics we are usually
concerned with declarative (what is) descriptions, whereas in computer science we are usually
concerned with imperative (how to) descriptions.20
How does one compute square roots? The most common way is to use Newton’s method of successive
approximations, which says that whenever we have a guess y for the value of the square root of a
number x, we can perform a simple manipulation to get a better guess (one closer to the actual square
root) by averaging y with x/y.21 For example, we can compute the square root of 2 as follows.
Suppose our initial guess is 1:
Guess      Quotient                Average

1          (2/1) = 2               ((2 + 1)/2) = 1.5
1.5        (2/1.5) = 1.3333        ((1.3333 + 1.5)/2) = 1.4167
1.4167     (2/1.4167) = 1.4118     ((1.4167 + 1.4118)/2) = 1.4142
1.4142     ...                     ...
Continuing this process, we obtain better and better approximations to the square root.
Now let’s formalize the process in terms of procedures. We start with a value for the radicand (the
number whose square root we are trying to compute) and a value for the guess. If the guess is good
enough for our purposes, we are done; if not, we must repeat the process with an improved guess. We
write this basic strategy as a procedure:
(define (sqrt-iter guess x)
(if (good-enough? guess x)
guess
(sqrt-iter (improve guess x)
x)))
A guess is improved by averaging it with the quotient of the radicand and the old guess:
(define (improve guess x)
(average guess (/ x guess)))
where
(define (average x y)
(/ (+ x y) 2))
We also have to say what we mean by ‘‘good enough.’’ The following will do for illustration, but it is
not really a very good test. (See exercise 1.7.) The idea is to improve the answer until it is close
enough so that its square differs from the radicand by less than a predetermined tolerance (here
0.001):22
(define (good-enough? guess x)
(< (abs (- (square guess) x)) 0.001))
Finally, we need a way to get started. For instance, we can always guess that the square root of any
number is 1:23
(define (sqrt x)
(sqrt-iter 1.0 x))
If we type these definitions to the interpreter, we can use sqrt just as we can use any procedure:
(sqrt 9)
3.00009155413138
(sqrt (+ 100 37))
11.704699917758145
(sqrt (+ (sqrt 2) (sqrt 3)))
1.7739279023207892
(square (sqrt 1000))
1000.000369924366
The sqrt program also illustrates that the simple procedural language we have introduced so far is
sufficient for writing any purely numerical program that one could write in, say, C or Pascal. This
might seem surprising, since we have not included in our language any iterative (looping) constructs
that direct the computer to do something over and over again. Sqrt-iter, on the other hand,
demonstrates how iteration can be accomplished using no special construct other than the ordinary
ability to call a procedure.24
Exercise 1.6. Alyssa P. Hacker doesn’t see why if needs to be provided as a special form. ‘‘Why
can’t I just define it as an ordinary procedure in terms of cond?’’ she asks. Alyssa’s friend Eva Lu
Ator claims this can indeed be done, and she defines a new version of if:
(define (new-if predicate then-clause else-clause)
(cond (predicate then-clause)
(else else-clause)))
Eva demonstrates the program for Alyssa:
(new-if (= 2 3) 0 5)
5
(new-if (= 1 1) 0 5)
0
Delighted, Alyssa uses new-if to rewrite the square-root program:
(define (sqrt-iter guess x)
(new-if (good-enough? guess x)
guess
(sqrt-iter (improve guess x)
x)))
What happens when Alyssa attempts to use this to compute square roots? Explain.
Exercise 1.7. The good-enough? test used in computing square roots will not be very effective for
finding the square roots of very small numbers. Also, in real computers, arithmetic operations are
almost always performed with limited precision. This makes our test inadequate for very large
numbers. Explain these statements, with examples showing how the test fails for small and large
numbers. An alternative strategy for implementing good-enough? is to watch how guess changes
from one iteration to the next and to stop when the change is a very small fraction of the guess. Design
a square-root procedure that uses this kind of end test. Does this work better for small and large
numbers?
Exercise 1.8. Newton’s method for cube roots is based on the fact that if y is an approximation to the
cube root of x, then a better approximation is given by the value

(x/y² + 2y)/3
Use this formula to implement a cube-root procedure analogous to the square-root procedure. (In
section 1.3.4 we will see how to implement Newton’s method in general as an abstraction of these
square-root and cube-root procedures.)
1.1.8 Procedures as Black-Box Abstractions
Sqrt is our first example of a process defined by a set of mutually defined procedures. Notice that the
definition of sqrt-iter is recursive; that is, the procedure is defined in terms of itself. The idea of
being able to define a procedure in terms of itself may be disturbing; it may seem unclear how such a
‘‘circular’’ definition could make sense at all, much less specify a well-defined process to be carried
out by a computer. This will be addressed more carefully in section 1.2. But first let’s consider some
other important points illustrated by the sqrt example.
Observe that the problem of computing square roots breaks up naturally into a number of
subproblems: how to tell whether a guess is good enough, how to improve a guess, and so on. Each of
these tasks is accomplished by a separate procedure. The entire sqrt program can be viewed as a
cluster of procedures (shown in figure 1.2) that mirrors the decomposition of the problem into
subproblems.
Figure 1.2: Procedural decomposition of the sqrt program.
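In outline, the figure shows sqrt at the top calling sqrt-iter, which relies on good-enough?
and improve; good-enough? is in turn built from square and abs, and improve from
average:

                  sqrt
                    |
                sqrt-iter
               /         \
      good-enough?      improve
        /       \            \
    square      abs        average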
The importance of this decomposition strategy is not simply that one is dividing the program into
parts. After all, we could take any large program and divide it into parts -- the first ten lines, the next
ten lines, the next ten lines, and so on. Rather, it is crucial that each procedure accomplishes an
identifiable task that can be used as a module in defining other procedures. For example, when we
define the good-enough? procedure in terms of square, we are able to regard the square
procedure as a ‘‘black box.’’ We are not at that moment concerned with how the procedure computes
its result, only with the fact that it computes the square. The details of how the square is computed can
be suppressed, to be considered at a later time. Indeed, as far as the good-enough? procedure is
concerned, square is not quite a procedure but rather an abstraction of a procedure, a so-called
procedural abstraction. At this level of abstraction, any procedure that computes the square is equally
good.
Thus, considering only the values they return, the following two procedures for squaring a number
should be indistinguishable. Each takes a numerical argument and produces the square of that number
as the value.25
(define (square x) (* x x))
(define (square x)
(exp (double (log x))))
(define (double x) (+ x x))
So a procedure definition should be able to suppress detail. The users of the procedure may not have
written the procedure themselves, but may have obtained it from another programmer as a black box.
A user should not need to know how the procedure is implemented in order to use it.
Local names
One detail of a procedure’s implementation that should not matter to the user of the procedure is the
implementer’s choice of names for the procedure’s formal parameters. Thus, the following procedures
should not be distinguishable:
(define (square x) (* x x))
(define (square y) (* y y))
This principle -- that the meaning of a procedure should be independent of the parameter names used
by its author -- seems on the surface to be self-evident, but its consequences are profound. The
simplest consequence is that the parameter names of a procedure must be local to the body of the
procedure. For example, we used square in the definition of good-enough? in our square-root
procedure:
(define (good-enough? guess x)
(< (abs (- (square guess) x)) 0.001))
The intention of the author of good-enough? is to determine if the square of the first argument is
within a given tolerance of the second argument. We see that the author of good-enough? used the
name guess to refer to the first argument and x to refer to the second argument. The argument of
square is guess. If the author of square used x (as above) to refer to that argument, we see that
the x in good-enough? must be a different x than the one in square. Running the procedure
square must not affect the value of x that is used by good-enough?, because that value of x may
be needed by good-enough? after square is done computing.
If the parameters were not local to the bodies of their respective procedures, then the parameter x in
square could be confused with the parameter x in good-enough?, and the behavior of
good-enough? would depend upon which version of square we used. Thus, square would not
be the black box we desired.
A formal parameter of a procedure has a very special role in the procedure definition, in that it doesn’t
matter what name the formal parameter has. Such a name is called a bound variable, and we say that
the procedure definition binds its formal parameters. The meaning of a procedure definition is
unchanged if a bound variable is consistently renamed throughout the definition.26 If a variable is not
bound, we say that it is free. The set of expressions for which a binding defines a name is called the
scope of that name. In a procedure definition, the bound variables declared as the formal parameters of
the procedure have the body of the procedure as their scope.
In the definition of good-enough? above, guess and x are bound variables but <, -, abs, and
square are free. The meaning of good-enough? should be independent of the names we choose
for guess and x so long as they are distinct and different from <, -, abs, and square. (If we
renamed guess to abs we would have introduced a bug by capturing the variable abs. It would
have changed from free to bound.) The meaning of good-enough? is not independent of the names
of its free variables, however. It surely depends upon the fact (external to this definition) that the
symbol abs names a procedure for computing the absolute value of a number. Good-enough? will
compute a different function if we substitute cos for abs in its definition.
Internal definitions and block structure
We have one kind of name isolation available to us so far: The formal parameters of a procedure are
local to the body of the procedure. The square-root program illustrates another way in which we would
like to control the use of names. The existing program consists of separate procedures:
(define (sqrt x)
(sqrt-iter 1.0 x))
(define (sqrt-iter guess x)
(if (good-enough? guess x)
guess
(sqrt-iter (improve guess x) x)))
(define (good-enough? guess x)
(< (abs (- (square guess) x)) 0.001))
(define (improve guess x)
(average guess (/ x guess)))
The problem with this program is that the only procedure that is important to users of sqrt is sqrt.
The other procedures (sqrt-iter, good-enough?, and improve) only clutter up their minds.
They may not define any other procedure called good-enough? as part of another program to work
together with the square-root program, because sqrt needs it. The problem is especially severe in the
construction of large systems by many separate programmers. For example, in the construction of a
large library of numerical procedures, many numerical functions are computed as successive
approximations and thus might have procedures named good-enough? and improve as auxiliary
procedures. We would like to localize the subprocedures, hiding them inside sqrt so that sqrt could
coexist with other successive approximations, each having its own private good-enough?
procedure. To make this possible, we allow a procedure to have internal definitions that are local to
that procedure. For example, in the square-root problem we can write
(define (sqrt x)
(define (good-enough? guess x)
(< (abs (- (square guess) x)) 0.001))
(define (improve guess x)
(average guess (/ x guess)))
(define (sqrt-iter guess x)
(if (good-enough? guess x)
guess
(sqrt-iter (improve guess x) x)))
(sqrt-iter 1.0 x))
Such nesting of definitions, called block structure, is basically the right solution to the simplest
name-packaging problem. But there is a better idea lurking here. In addition to internalizing the
definitions of the auxiliary procedures, we can simplify them. Since x is bound in the definition of
sqrt, the procedures good-enough?, improve, and sqrt-iter, which are defined internally
to sqrt, are in the scope of x. Thus, it is not necessary to pass x explicitly to each of these
procedures. Instead, we allow x to be a free variable in the internal definitions, as shown below. Then
x gets its value from the argument with which the enclosing procedure sqrt is called. This discipline
is called lexical scoping.27
(define (sqrt x)
(define (good-enough? guess)
(< (abs (- (square guess) x)) 0.001))
(define (improve guess)
(average guess (/ x guess)))
(define (sqrt-iter guess)
(if (good-enough? guess)
guess
(sqrt-iter (improve guess))))
(sqrt-iter 1.0))
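Since the internal procedures perform exactly the same arithmetic as before, this reorganization does
not change what sqrt computes; for instance, (sqrt 9) still yields 3.00009155413138.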
We will use block structure extensively to help us break up large programs into tractable pieces.28
The idea of block structure originated with the programming language Algol 60. It appears in most
advanced programming languages and is an important tool for helping to organize the construction of
large programs.
4 The characterization of numbers as ‘‘simple data’’ is a barefaced bluff. In fact, the treatment of
numbers is one of the trickiest and most confusing aspects of any programming language. Some
typical issues involved are these: Some computer systems distinguish integers, such as 2, from real
numbers, such as 2.71. Is the real number 2.00 different from the integer 2? Are the arithmetic
operations used for integers the same as the operations used for real numbers? Does 6 divided by 2
produce 3, or 3.0? How large a number can we represent? How many decimal places of accuracy can
we represent? Is the range of integers the same as the range of real numbers? Above and beyond these
questions, of course, lies a collection of issues concerning roundoff and truncation errors -- the entire
science of numerical analysis. Since our focus in this book is on large-scale program design rather than
on numerical techniques, we are going to ignore these problems. The numerical examples in this
chapter will exhibit the usual roundoff behavior that one observes when using arithmetic operations
that preserve a limited number of decimal places of accuracy in noninteger operations.
5 Throughout this book, when we wish to emphasize the distinction between the input typed by the
user and the response printed by the interpreter, we will show the latter in slanted characters.
6 Lisp systems typically provide features to aid the user in formatting expressions. Two especially
useful features are one that automatically indents to the proper pretty-print position whenever a new
line is started and one that highlights the matching left parenthesis whenever a right parenthesis is
typed.
7 Lisp obeys the convention that every expression has a value. This convention, together with the old
reputation of Lisp as an inefficient language, is the source of the quip by Alan Perlis (paraphrasing
Oscar Wilde) that ‘‘Lisp programmers know the value of everything but the cost of nothing.’’
8 In this book, we do not show the interpreter’s response to evaluating definitions, since this is highly
implementation-dependent.
9 Chapter 3 will show that this notion of environment is crucial, both for understanding how the
interpreter works and for implementing interpreters.
10 It may seem strange that the evaluation rule says, as part of the first step, that we should evaluate
the leftmost element of a combination, since at this point that can only be an operator such as + or *
representing a built-in primitive procedure such as addition or multiplication. We will see later that it
is useful to be able to work with combinations whose operators are themselves compound expressions.
11 Special syntactic forms that are simply convenient alternative surface structures for things that can
be written in more uniform ways are sometimes called syntactic sugar, to use a phrase coined by Peter
Landin. In comparison with users of other languages, Lisp programmers, as a rule, are less concerned
with matters of syntax. (By contrast, examine any Pascal manual and notice how much of it is devoted
to descriptions of syntax.) This disdain for syntax is due partly to the flexibility of Lisp, which makes
it easy to change surface syntax, and partly to the observation that many ‘‘convenient’’ syntactic
constructs, which make the language less uniform, end up causing more trouble than they are worth
when programs become large and complex. In the words of Alan Perlis, ‘‘Syntactic sugar causes
cancer of the semicolon.’’
12 Observe that there are two different operations being combined here: we are creating the procedure,
and we are giving it the name square. It is possible, indeed important, to be able to separate these
two notions -- to create procedures without naming them, and to give names to procedures that have
already been created. We will see how to do this in section 1.3.2.
13 Throughout this book, we will describe the general syntax of expressions by using italic symbols
delimited by angle brackets -- e.g., <name> -- to denote the ‘‘slots’’ in the expression to be filled in
when such an expression is actually used.
14 More generally, the body of the procedure can be a sequence of expressions. In this case, the
interpreter evaluates each expression in the sequence in turn and returns the value of the final
expression as the value of the procedure application.
15 Despite the simplicity of the substitution idea, it turns out to be surprisingly complicated to give a
rigorous mathematical definition of the substitution process. The problem arises from the possibility of
confusion between the names used for the formal parameters of a procedure and the (possibly
identical) names used in the expressions to which the procedure may be applied. Indeed, there is a long
history of erroneous definitions of substitution in the literature of logic and programming semantics.
See Stoy 1977 for a careful discussion of substitution.
16 In chapter 3 we will introduce stream processing, which is a way of handling apparently ‘‘infinite’’
data structures by incorporating a limited form of normal-order evaluation. In section 4.2 we will
modify the Scheme interpreter to produce a normal-order variant of Scheme.
17 ‘‘Interpreted as either true or false’’ means this: In Scheme, there are two distinguished values that
are denoted by the constants #t and #f. When the interpreter checks a predicate’s value, it interprets
#f as false. Any other value is treated as true. (Thus, providing #t is logically unnecessary, but it is
convenient.) In this book we will use names true and false, which are associated with the values
#t and #f respectively.
18 Abs also uses the ‘‘minus’’ operator -, which, when used with a single operand, as in (- x),
indicates negation.
19 A minor difference between if and cond is that the <e> part of each cond clause may be a
sequence of expressions. If the corresponding <p> is found to be true, the expressions <e> are
evaluated in sequence and the value of the final expression in the sequence is returned as the value of
the cond. In an if expression, however, the <consequent> and <alternative> must be single
expressions.
20 Declarative and imperative descriptions are intimately related, as indeed are mathematics and
computer science. For instance, to say that the answer produced by a program is ‘‘correct’’ is to make
a declarative statement about the program. There is a large amount of research aimed at establishing
techniques for proving that programs are correct, and much of the technical difficulty of this subject
has to do with negotiating the transition between imperative statements (from which programs are
constructed) and declarative statements (which can be used to deduce things). In a related vein, an
important current area in programming-language design is the exploration of so-called very high-level
languages, in which one actually programs in terms of declarative statements. The idea is to make
interpreters sophisticated enough so that, given ‘‘what is’’ knowledge specified by the programmer,
they can generate ‘‘how to’’ knowledge automatically. This cannot be done in general, but there are
important areas where progress has been made. We shall revisit this idea in chapter 4.
21 This square-root algorithm is actually a special case of Newton’s method, which is a general
technique for finding roots of equations. The square-root algorithm itself was developed by Heron of
Alexandria in the first century A.D. We will see how to express the general Newton’s method as a Lisp
procedure in section 1.3.4.
22 We will usually give predicates names ending with question marks, to help us remember that they
are predicates. This is just a stylistic convention. As far as the interpreter is concerned, the question
mark is just an ordinary character.
23 Observe that we express our initial guess as 1.0 rather than 1. This would not make any difference
in many Lisp implementations. MIT Scheme, however, distinguishes between exact integers and
decimal values, and dividing two integers produces a rational number rather than a decimal. For
example, dividing 10 by 6 yields 5/3, while dividing 10.0 by 6.0 yields 1.6666666666666667. (We
will learn how to implement arithmetic on rational numbers in section 2.1.1.) If we start with an initial
guess of 1 in our square-root program, and x is an exact integer, all subsequent values produced in the
square-root computation will be rational numbers rather than decimals. Mixed operations on rational
numbers and decimals always yield decimals, so starting with an initial guess of 1.0 forces all
subsequent values to be decimals.
24 Readers who are worried about the efficiency issues involved in using procedure calls to implement
iteration should note the remarks on ‘‘tail recursion’’ in section 1.2.1.
25 It is not even clear which of these procedures is a more efficient implementation. This depends
upon the hardware available. There are machines for which the ‘‘obvious’’ implementation is the less
efficient one. Consider a machine that has extensive tables of logarithms and antilogarithms stored in a
very efficient manner.
26 The concept of consistent renaming is actually subtle and difficult to define formally. Famous
logicians have made embarrassing errors here.
27 Lexical scoping dictates that free variables in a procedure are taken to refer to bindings made by
enclosing procedure definitions; that is, they are looked up in the environment in which the procedure
was defined. We will see how this works in detail in chapter 3 when we study environments and the
detailed behavior of the interpreter.
28 Embedded definitions must come first in a procedure body. The management is not responsible for
the consequences of running programs that intertwine definition and use.
1.2 Procedures and the Processes They Generate
We have now considered the elements of programming: We have used primitive arithmetic operations,
we have combined these operations, and we have abstracted these composite operations by defining
them as compound procedures. But that is not enough to enable us to say that we know how to
program. Our situation is analogous to that of someone who has learned the rules for how the pieces
move in chess but knows nothing of typical openings, tactics, or strategy. Like the novice chess player,
we don’t yet know the common patterns of usage in the domain. We lack the knowledge of which
moves are worth making (which procedures are worth defining). We lack the experience to predict the
consequences of making a move (executing a procedure).
The ability to visualize the consequences of the actions under consideration is crucial to becoming an
expert programmer, just as it is in any synthetic, creative activity. In becoming an expert photographer,
for example, one must learn how to look at a scene and know how dark each region will appear on a
print for each possible choice of exposure and development conditions. Only then can one reason
backward, planning framing, lighting, exposure, and development to obtain the desired effects. So it is
with programming, where we are planning the course of action to be taken by a process and where we
control the process by means of a program. To become experts, we must learn to visualize the
processes generated by various types of procedures. Only after we have developed such a skill can we
learn to reliably construct programs that exhibit the desired behavior.
A procedure is a pattern for the local evolution of a computational process. It specifies how each stage
of the process is built upon the previous stage. We would like to be able to make statements about the
overall, or global, behavior of a process whose local evolution has been specified by a procedure. This
is very difficult to do in general, but we can at least try to describe some typical patterns of process
evolution.
In this section we will examine some common ‘‘shapes’’ for processes generated by simple
procedures. We will also investigate the rates at which these processes consume the important
computational resources of time and space. The procedures we will consider are very simple. Their
role is like that played by test patterns in photography: as oversimplified prototypical patterns, rather
than practical examples in their own right.
1.2.1 Linear Recursion and Iteration
Figure 1.3: A linear recursive process for computing 6!.
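The chain of substitutions the figure depicts is:

(factorial 6)
(* 6 (factorial 5))
(* 6 (* 5 (factorial 4)))
(* 6 (* 5 (* 4 (factorial 3))))
(* 6 (* 5 (* 4 (* 3 (factorial 2)))))
(* 6 (* 5 (* 4 (* 3 (* 2 (factorial 1))))))
(* 6 (* 5 (* 4 (* 3 (* 2 1)))))
(* 6 (* 5 (* 4 (* 3 2))))
(* 6 (* 5 (* 4 6)))
(* 6 (* 5 24))
(* 6 120)
720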
We begin by considering the factorial function, defined by

n! = n · (n - 1) · (n - 2) ··· 3 · 2 · 1
There are many ways to compute factorials. One way is to make use of the observation that n! is equal
to n times (n - 1)! for any positive integer n:

n! = n · [(n - 1) · (n - 2) ··· 3 · 2 · 1] = n · (n - 1)!
Thus, we can compute n! by computing (n – 1)! and multiplying the result by n. If we add the
stipulation that 1! is equal to 1, this observation translates directly into a procedure:
(define (factorial n)
(if (= n 1)
1
(* n (factorial (- n 1)))))
We can use the substitution model of section 1.1.5 to watch this procedure in action computing 6!, as
shown in figure 1.3.
Now let’s take a different perspective on computing factorials. We could describe a rule for computing
n! by specifying that we first multiply 1 by 2, then multiply the result by 3, then by 4, and so on until
we reach n. More formally, we maintain a running product, together with a counter that counts from 1
up to n. We can describe the computation by saying that the counter and the product simultaneously
change from one step to the next according to the rule
product ← counter · product
counter ← counter + 1
and stipulating that n! is the value of the product when the counter exceeds n.
Figure 1.4: A linear iterative process for computing 6!.
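The sequence of states the figure depicts is:

(factorial 6)
(fact-iter   1 1 6)
(fact-iter   1 2 6)
(fact-iter   2 3 6)
(fact-iter   6 4 6)
(fact-iter  24 5 6)
(fact-iter 120 6 6)
(fact-iter 720 7 6)
720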
Once again, we can recast our description as a procedure for computing factorials:29
(define (factorial n)
(fact-iter 1 1 n))
(define (fact-iter product counter max-count)
(if (> counter max-count)
product
(fact-iter (* counter product)
(+ counter 1)
max-count)))
As before, we can use the substitution model to visualize the process of computing 6!, as shown in
figure 1.4.
Compare the two processes. From one point of view, they seem hardly different at all. Both compute
the same mathematical function on the same domain, and each requires a number of steps proportional
to n to compute n!. Indeed, both processes even carry out the same sequence of multiplications,
obtaining the same sequence of partial products. On the other hand, when we consider the ‘‘shapes’’ of
the two processes, we find that they evolve quite differently.
Consider the first process. The substitution model reveals a shape of expansion followed by
contraction, indicated by the arrow in figure 1.3. The expansion occurs as the process builds up a chain
of deferred operations (in this case, a chain of multiplications). The contraction occurs as the
operations are actually performed. This type of process, characterized by a chain of deferred
operations, is called a recursive process. Carrying out this process requires that the interpreter keep
track of the operations to be performed later on. In the computation of n!, the length of the chain of
deferred multiplications, and hence the amount of information needed to keep track of it, grows
linearly with n (is proportional to n), just like the number of steps. Such a process is called a linear
recursive process.
By contrast, the second process does not grow and shrink. At each step, all we need to keep track of,
for any n, are the current values of the variables product, counter, and max-count. We call this
an iterative process. In general, an iterative process is one whose state can be summarized by a fixed
number of state variables, together with a fixed rule that describes how the state variables should be
updated as the process moves from state to state and an (optional) end test that specifies conditions
under which the process should terminate. In computing n!, the number of steps required grows
linearly with n. Such a process is called a linear iterative process.
The contrast between the two processes can be seen in another way. In the iterative case, the program
variables provide a complete description of the state of the process at any point. If we stopped the
computation between steps, all we would need to do to resume the computation is to supply the
interpreter with the values of the three program variables. Not so with the recursive process. In this
case there is some additional ‘‘hidden’’ information, maintained by the interpreter and not contained in
the program variables, which indicates ‘‘where the process is’’ in negotiating the chain of deferred
operations. The longer the chain, the more information must be maintained.30
In contrasting iteration and recursion, we must be careful not to confuse the notion of a recursive
process with the notion of a recursive procedure. When we describe a procedure as recursive, we are
referring to the syntactic fact that the procedure definition refers (either directly or indirectly) to the
procedure itself. But when we describe a process as following a pattern that is, say, linearly recursive,
we are speaking about how the process evolves, not about the syntax of how a procedure is written. It
may seem disturbing that we refer to a recursive procedure such as fact-iter as generating an
iterative process. However, the process really is iterative: Its state is captured completely by its three
state variables, and an interpreter need keep track of only three variables in order to execute the
process.
One reason that the distinction between process and procedure may be confusing is that most
implementations of common languages (including Ada, Pascal, and C) are designed in such a way that
the interpretation of any recursive procedure consumes an amount of memory that grows with the
number of procedure calls, even when the process described is, in principle, iterative. As a
consequence, these languages can describe iterative processes only by resorting to special-purpose
‘‘looping constructs’’ such as do, repeat, until, for, and while. The implementation of
Scheme we shall consider in chapter 5 does not share this defect. It will execute an iterative process in
constant space, even if the iterative process is described by a recursive procedure. An implementation
with this property is called tail-recursive. With a tail-recursive implementation, iteration can be
expressed using the ordinary procedure call mechanism, so that special iteration constructs are useful
only as syntactic sugar.31
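For instance (a sketch of ours, not an example from the text), the following procedure is syntactically
recursive, yet a tail-recursive implementation executes it in constant space for any n, just as a loop
would, because the recursive call is the last action the procedure performs:

(define (countdown n)
  (if (= n 0)
      'done                   ; end test: nothing remains to be done
      (countdown (- n 1))))   ; tail call: reuses, rather than grows, the stack

(countdown 1000000)
done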
Exercise 1.9. Each of the following two procedures defines a method for adding two positive integers
in terms of the procedures inc, which increments its argument by 1, and dec, which decrements its
argument by 1.
(define (+ a b)
(if (= a 0)
b
(inc (+ (dec a) b))))
(define (+ a b)
(if (= a 0)
b
(+ (dec a) (inc b))))
Using the substitution model, illustrate the process generated by each procedure in evaluating (+ 4
5). Are these processes iterative or recursive?
Exercise 1.10. The following procedure computes a mathematical function called Ackermann’s
function.
(define (A x y)
(cond ((= y 0) 0)
((= x 0) (* 2 y))
((= y 1) 2)
(else (A (- x 1)
(A x (- y 1))))))
What are the values of the following expressions?
(A 1 10)
(A 2 4)
(A 3 3)
Consider the following procedures, where A is the procedure defined above:
(define (f n) (A 0 n))
(define (g n) (A 1 n))
(define (h n) (A 2 n))
(define (k n) (* 5 n n))
Give concise mathematical definitions for the functions computed by the procedures f, g, and h for
positive integer values of n. For example, (k n) computes 5n².
1.2.2 Tree Recursion
Another common pattern of computation is called tree recursion. As an example, consider computing
the sequence of Fibonacci numbers, in which each number is the sum of the preceding two:

0, 1, 1, 2, 3, 5, 8, 13, 21, ...

In general, the Fibonacci numbers can be defined by the rule

Fib(n) = 0                          if n = 0
Fib(n) = 1                          if n = 1
Fib(n) = Fib(n - 1) + Fib(n - 2)    otherwise
We can immediately translate this definition into a recursive procedure for computing Fibonacci
numbers:
(define (fib n)
(cond ((= n 0) 0)
((= n 1) 1)
(else (+ (fib (- n 1))
(fib (- n 2))))))
Figure 1.5: The tree-recursive process generated in computing (fib 5).
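The top of the tree the figure depicts looks like this (each node names a subproblem, and its two
children are the two recursive calls it spawns):

                              fib 5
                     __________/ \__________
                    /                       \
                fib 4                     fib 3
               /      \                  /      \
           fib 3     fib 2           fib 2     fib 1
          /     \    /    \         /     \
      fib 2  fib 1 fib 1 fib 0  fib 1  fib 0
      /    \
  fib 1  fib 0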
Consider the pattern of this computation. To compute (fib 5), we compute (fib 4) and (fib
3). To compute (fib 4), we compute (fib 3) and (fib 2). In general, the evolved process
looks like a tree, as shown in figure 1.5. Notice that the branches split into two at each level (except at
the bottom); this reflects the fact that the fib procedure calls itself twice each time it is invoked.
This procedure is instructive as a prototypical tree recursion, but it is a terrible way to compute
Fibonacci numbers because it does so much redundant computation. Notice in figure 1.5 that the entire
computation of (fib 3) — almost half the work — is duplicated. In fact, it is not hard to show that
the number of times the procedure will compute (fib 1) or (fib 0) (the number of leaves in the
above tree, in general) is precisely Fib(n + 1). To get an idea of how bad this is, one can show that the
value of Fib(n) grows exponentially with n. More precisely (see exercise 1.13), Fib(n) is the closest
integer to φⁿ/√5, where

φ = (1 + √5)/2 ≈ 1.6180

is the golden ratio, which satisfies the equation

φ² = φ + 1
Thus, the process uses a number of steps that grows exponentially with the input. On the other hand,
the space required grows only linearly with the input, because we need keep track only of which nodes
are above us in the tree at any point in the computation. In general, the number of steps required by a
tree-recursive process will be proportional to the number of nodes in the tree, while the space required
will be proportional to the maximum depth of the tree.
We can also formulate an iterative process for computing the Fibonacci numbers. The idea is to use a
pair of integers a and b, initialized to Fib(1) = 1 and Fib(0) = 0, and to repeatedly apply the
simultaneous transformations

a ← a + b
b ← a
It is not hard to show that, after applying this transformation n times, a and b will be equal,
respectively, to Fib(n + 1) and Fib(n). Thus, we can compute Fibonacci numbers iteratively using the
procedure
(define (fib n)
(fib-iter 1 0 n))
(define (fib-iter a b count)
(if (= count 0)
b
(fib-iter (+ a b) a (- count 1))))
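For example:

(fib 10)
55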
This second method for computing Fib(n) is a linear iteration. The difference in number of steps
required by the two methods — one linear in n, one growing as fast as Fib(n) itself — is enormous, even
for small inputs.
One should not conclude from this that tree-recursive processes are useless. When we consider
processes that operate on hierarchically structured data rather than numbers, we will find that tree
recursion is a natural and powerful tool.32 But even in numerical operations, tree-recursive processes
can be useful in helping us to understand and design programs. For instance, although the first fib
procedure is much less efficient than the second one, it is more straightforward, being little more than
a translation into Lisp of the definition of the Fibonacci sequence. To formulate the iterative algorithm
required noticing that the computation could be recast as an iteration with three state variables.
Example: Counting change
It takes only a bit of cleverness to come up with the iterative Fibonacci algorithm. In contrast, consider
the following problem: How many different ways can we make change of $ 1.00, given half-dollars,
quarters, dimes, nickels, and pennies? More generally, can we write a procedure to compute the
number of ways to change any given amount of money?
This problem has a simple solution as a recursive procedure. Suppose we think of the types of coins
available as arranged in some order. Then the following relation holds:
The number of ways to change amount a using n kinds of coins equals
the number of ways to change amount a using all but the first kind of coin, plus
the number of ways to change amount a – d using all n kinds of coins, where d is the
denomination of the first kind of coin.
To see why this is true, observe that the ways to make change can be divided into two groups: those
that do not use any of the first kind of coin, and those that do. Therefore, the total number of ways to
make change for some amount is equal to the number of ways to make change for the amount without
using any of the first kind of coin, plus the number of ways to make change assuming that we do use
the first kind of coin. But the latter number is equal to the number of ways to make change for the
amount that remains after using a coin of the first kind.
Thus, we can recursively reduce the problem of changing a given amount to the problem of changing
smaller amounts using fewer kinds of coins. Consider this reduction rule carefully, and convince
yourself that we can use it to describe an algorithm if we specify the following degenerate cases:33
If a is exactly 0, we should count that as 1 way to make change.
If a is less than 0, we should count that as 0 ways to make change.
If n is 0, we should count that as 0 ways to make change.
We can easily translate this description into a recursive procedure:
(define (count-change amount)
(cc amount 5))
(define (cc amount kinds-of-coins)
(cond ((= amount 0) 1)
((or (< amount 0) (= kinds-of-coins 0)) 0)
(else (+ (cc amount
(- kinds-of-coins 1))
(cc (- amount
(first-denomination kinds-of-coins))
kinds-of-coins)))))
(define (first-denomination kinds-of-coins)
(cond ((= kinds-of-coins 1) 1)
((= kinds-of-coins 2) 5)
((= kinds-of-coins 3) 10)
((= kinds-of-coins 4) 25)
((= kinds-of-coins 5) 50)))
(The first-denomination procedure takes as input the number of kinds of coins available and
returns the denomination of the first kind. Here we are thinking of the coins as arranged in order from
largest to smallest, but any order would do as well.) We can now answer our original question about
changing a dollar:
(count-change 100)
292
Count-change generates a tree-recursive process with redundancies similar to those in our first
implementation of fib. (It will take quite a while for that 292 to be computed.) On the other hand, it
is not obvious how to design a better algorithm for computing the result, and we leave this problem as
a challenge. The observation that a tree-recursive process may be highly inefficient but often easy to
specify and understand has led people to propose that one could get the best of both worlds by
designing a ‘‘smart compiler’’ that could transform tree-recursive procedures into more efficient
procedures that compute the same result.34
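One standard technique for getting much of this benefit by hand is memoization: record each value in
a table as it is computed, so that repeated subproblems are solved only once. The sketch below is ours,
not the text's; it uses assignment (set!) and association lists, which this book does not introduce
until chapter 3.

(define memo-fib
  (let ((table '()))                          ; list of (n . fib-n) pairs
    (lambda (n)
      (let ((cached (assv n table)))          ; have we seen n before?
        (if cached
            (cdr cached)                      ; yes: reuse the stored value
            (let ((result (if (< n 2)
                              n
                              (+ (memo-fib (- n 1))
                                 (memo-fib (- n 2))))))
              (set! table (cons (cons n result) table))
              result))))))

(memo-fib 100)
354224848179261915075

Each subproblem is now computed once, so memo-fib runs dramatically faster than the fib above,
which would need a number of steps proportional to Fib(100) itself.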
Exercise 1.11. A function f is defined by the rule that f(n) = n if n < 3 and f(n) = f(n - 1) + 2f(n - 2) +
3f(n - 3) if n ≥ 3. Write a procedure that computes f by means of a recursive process. Write a
procedure that computes f by means of an iterative process.
Exercise 1.12. The following pattern of numbers is called Pascal's triangle.

         1
        1 1
       1 2 1
      1 3 3 1
     1 4 6 4 1
       . . .
The numbers at the edge of the triangle are all 1, and each number inside the triangle is the sum of the
two numbers above it.35 Write a procedure that computes elements of Pascal’s triangle by means of a
recursive process.
Exercise 1.13. Prove that Fib(n) is the closest integer to φⁿ/√5, where φ = (1 + √5)/2. Hint: Let ψ =
(1 - √5)/2. Use induction and the definition of the Fibonacci numbers (see section 1.2.2) to prove that
Fib(n) = (φⁿ - ψⁿ)/√5.
1.2.3 Orders of Growth
The previous examples illustrate that processes can differ considerably in the rates at which they
consume computational resources. One convenient way to describe this difference is to use the notion
of order of growth to obtain a gross measure of the resources required by a process as the inputs
become larger.
Let n be a parameter that measures the size of the problem, and let R(n) be the amount of resources the
process requires for a problem of size n. In our previous examples we took n to be the number for
which a given function is to be computed, but there are other possibilities. For instance, if our goal is
to compute an approximation to the square root of a number, we might take n to be the number of
digits accuracy required. For matrix multiplication we might take n to be the number of rows in the
matrices. In general there are a number of properties of the problem with respect to which it will be
desirable to analyze a given process. Similarly, R(n) might measure the number of internal storage
registers used, the number of elementary machine operations performed, and so on. In computers that
do only a fixed number of operations at a time, the time required will be proportional to the number of
elementary machine operations performed.
We say that R(n) has order of growth Θ(f(n)), written R(n) = Θ(f(n)) (pronounced ‘‘theta of f(n)’’), if
there are positive constants k1 and k2 independent of n such that

k1 f(n) ≤ R(n) ≤ k2 f(n)

for any sufficiently large value of n. (In other words, for large n, the value R(n) is sandwiched between
k1 f(n) and k2 f(n).)
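For instance (a worked example, not from the text), if R(n) = 3n² + 10n + 17, we can take k1 = 3 and
k2 = 4: we have 3n² ≤ 3n² + 10n + 17 ≤ 4n² whenever n² ≥ 10n + 17, which holds for all n ≥ 12.
Hence R(n) = Θ(n²).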
For instance, with the linear recursive process for computing factorial described in section 1.2.1 the
number of steps grows proportionally to the input n. Thus, the steps required for this process grows as
Θ(n). We also saw that the space required grows as Θ(n). For the iterative factorial, the number of
steps is still Θ(n) but the space is Θ(1) -- that is, constant.36 The tree-recursive Fibonacci
computation requires Θ(φⁿ) steps and space Θ(n), where φ is the golden ratio described in
section 1.2.2.
Orders of growth provide only a crude description of the behavior of a process. For example, a process
requiring n² steps and a process requiring 1000n² steps and a process requiring 3n² + 10n + 17 steps
all have Θ(n²) order of growth. On the other hand, order of growth provides a useful indication of
how we may expect the behavior of the process to change as we change the size of the problem. For a
Θ(n) (linear) process, doubling the size will roughly double the amount of resources used. For an
exponential process, each increment in problem size will multiply the resource utilization by a
constant factor. In the remainder of section 1.2 we will examine two algorithms whose order of growth
is logarithmic, so that doubling the problem size increases the resource requirement by a constant
amount.
Exercise 1.14. Draw the tree illustrating the process generated by the count-change procedure of
section 1.2.2 in making change for 11 cents. What are the orders of growth of the space and number of
steps used by this process as the amount to be changed increases?
Exercise 1.15. The sine of an angle (specified in radians) can be computed by making use of the
approximation sin x ≈ x if x is sufficiently small, and the trigonometric identity

sin x = 3 sin(x/3) - 4 sin³(x/3)

to reduce the size of the argument of sin. (For purposes of this exercise an angle is considered
‘‘sufficiently small’’ if its magnitude is not greater than 0.1 radians.) These ideas are incorporated in
the following procedures:
(define (cube x) (* x x x))
(define (p x) (- (* 3 x) (* 4 (cube x))))
(define (sine angle)
(if (not (> (abs angle) 0.1))
angle
(p (sine (/ angle 3.0)))))
a. How many times is the procedure p applied when (sine 12.15) is evaluated?
b. What is the order of growth in space and number of steps (as a function of a) used by the process
generated by the sine procedure when (sine a) is evaluated?
1.2.4 Exponentiation
Consider the problem of computing the exponential of a given number. We would like a procedure that takes as arguments a base b and a positive integer exponent n and computes b^n. One way to do this is via the recursive definition

b^n = b · b^(n-1)
b^0 = 1

which translates readily into the procedure
(define (expt b n)
(if (= n 0)
1
(* b (expt b (- n 1)))))
This is a linear recursive process, which requires Θ(n) steps and Θ(n) space. Just as with factorial, we
can readily formulate an equivalent linear iteration:
(define (expt b n)
(expt-iter b n 1))
(define (expt-iter b counter product)
(if (= counter 0)
product
(expt-iter b
(- counter 1)
(* b product))))
This version requires Θ(n) steps and Θ(1) space.
We can compute exponentials in fewer steps by using successive squaring. For instance, rather than computing b^8 as

b · (b · (b · (b · (b · (b · (b · b))))))

we can compute it using three multiplications:

b^2 = b · b
b^4 = b^2 · b^2
b^8 = b^4 · b^4

This method works fine for exponents that are powers of 2. We can also take advantage of successive squaring in computing exponentials in general if we use the rule

b^n = (b^(n/2))^2    if n is even
b^n = b · b^(n-1)    if n is odd

We can express this method as a procedure:
We can express this method as a procedure:
(define (fast-expt b n)
(cond ((= n 0) 1)
((even? n) (square (fast-expt b (/ n 2))))
(else (* b (fast-expt b (- n 1))))))
where the predicate to test whether an integer is even is defined in terms of the primitive procedure
remainder by
(define (even? n)
(= (remainder n 2) 0))
The process evolved by fast-expt grows logarithmically with n in both space and number of steps. To see this, observe that computing b^(2n) using fast-expt requires only one more multiplication than computing b^n. The size of the exponent we can compute therefore doubles (approximately) with every new multiplication we are allowed. Thus, the number of multiplications required for an exponent of n grows about as fast as the logarithm of n to the base 2. The process has Θ(log n) growth.37
The difference between Θ(log n) growth and Θ(n) growth becomes striking as n becomes large. For
example, fast-expt for n = 1000 requires only 14 multiplications.38 It is also possible to use the
idea of successive squaring to devise an iterative algorithm that computes exponentials with a
logarithmic number of steps (see exercise 1.16), although, as is often the case with iterative
algorithms, this is not written down so straightforwardly as the recursive algorithm.39
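To make the logarithmic growth concrete, here is a sketch (ours, not the book’s) of the calls generated by (fast-expt 2 10); the annotations follow directly from the cond clauses above:

(fast-expt 2 10)    ; 10 is even: square the result of (fast-expt 2 5)
(fast-expt 2 5)     ; 5 is odd: multiply b by the result of (fast-expt 2 4)
(fast-expt 2 4)     ; 4 is even: square the result of (fast-expt 2 2)
(fast-expt 2 2)     ; 2 is even: square the result of (fast-expt 2 1)
(fast-expt 2 1)     ; 1 is odd: multiply b by the result of (fast-expt 2 0)
(fast-expt 2 0)     ; base case: 1

Only five multiplications are performed to obtain 2^10 = 1024, where the linear expt above would perform ten.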
Exercise 1.16. Design a procedure that evolves an iterative exponentiation process that uses
successive squaring and uses a logarithmic number of steps, as does fast-expt. (Hint: Using the
observation that (b^(n/2))^2 = (b^2)^(n/2), keep, along with the exponent n and the base b, an additional state variable a, and define the state transformation in such a way that the product a·b^n is unchanged from
state to state. At the beginning of the process a is taken to be 1, and the answer is given by the value of
a at the end of the process. In general, the technique of defining an invariant quantity that remains
unchanged from state to state is a powerful way to think about the design of iterative algorithms.)
Exercise 1.17. The exponentiation algorithms in this section are based on performing exponentiation
by means of repeated multiplication. In a similar way, one can perform integer multiplication by
means of repeated addition. The following multiplication procedure (in which it is assumed that our
language can only add, not multiply) is analogous to the expt procedure:
(define (* a b)
(if (= b 0)
0
(+ a (* a (- b 1)))))
This algorithm takes a number of steps that is linear in b. Now suppose we include, together with
addition, operations double, which doubles an integer, and halve, which divides an (even) integer
by 2. Using these, design a multiplication procedure analogous to fast-expt that uses a logarithmic
number of steps.
Exercise 1.18. Using the results of exercises 1.16 and 1.17, devise a procedure that generates an
iterative process for multiplying two integers in terms of adding, doubling, and halving and uses a
logarithmic number of steps.40
Exercise 1.19. There is a clever algorithm for computing the Fibonacci numbers in a logarithmic
number of steps. Recall the transformation of the state variables a and b in the fib-iter process of
section 1.2.2: a ← a + b and b ← a. Call this transformation T, and observe that applying T over and over again n times, starting with 1 and 0, produces the pair Fib(n + 1) and Fib(n). In other words, the Fibonacci numbers are produced by applying T^n, the nth power of the transformation T, starting with the pair (1,0). Now consider T to be the special case of p = 0 and q = 1 in a family of transformations T_pq, where T_pq transforms the pair (a,b) according to a ← bq + aq + ap and b ← bp + aq. Show that if we apply such a transformation T_pq twice, the effect is the same as using a single transformation T_p'q' of the same form, and compute p' and q' in terms of p and q. This gives us an explicit way to square these transformations, and thus we can compute T^n using successive squaring, as in the fast-expt procedure. Put this all together to complete the following procedure, which runs in a logarithmic number of steps:41
(define (fib n)
(fib-iter 1 0 0 1 n))
(define (fib-iter a b p q count)
(cond ((= count 0) b)
((even? count)
(fib-iter a
b
<??> ; compute p’
<??> ; compute q’
(/ count 2)))
(else (fib-iter (+ (* b q) (* a q) (* a p))
(+ (* b p) (* a q))
p
q
(- count 1)))))
1.2.5 Greatest Common Divisors
The greatest common divisor (GCD) of two integers a and b is defined to be the largest integer that
divides both a and b with no remainder. For example, the GCD of 16 and 28 is 4. In chapter 2, when
we investigate how to implement rational-number arithmetic, we will need to be able to compute
GCDs in order to reduce rational numbers to lowest terms. (To reduce a rational number to lowest
terms, we must divide both the numerator and the denominator by their GCD. For example, 16/28
reduces to 4/7.) One way to find the GCD of two integers is to factor them and search for common
factors, but there is a famous algorithm that is much more efficient.
The idea of the algorithm is based on the observation that, if r is the remainder when a is divided by b,
then the common divisors of a and b are precisely the same as the common divisors of b and r. Thus,
we can use the equation

GCD(a,b) = GCD(b,r)

to successively reduce the problem of computing a GCD to the problem of computing the GCD of smaller and smaller pairs of integers. For example,

GCD(206,40) = GCD(40,6)
            = GCD(6,4)
            = GCD(4,2)
            = GCD(2,0)
            = 2

reduces GCD(206,40) to GCD(2,0), which is 2. It is possible to show that starting with any two
positive integers and performing repeated reductions will always eventually produce a pair where the
second number is 0. Then the GCD is the other number in the pair. This method for computing the
GCD is known as Euclid’s Algorithm.42
It is easy to express Euclid’s Algorithm as a procedure:
(define (gcd a b)
(if (= b 0)
a
(gcd b (remainder a b))))
This generates an iterative process, whose number of steps grows as the logarithm of the numbers
involved.
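As a quick check (a trace of our own, not the book’s), the successive calls for the example above are

(gcd 206 40)        ; (remainder 206 40) = 6
(gcd 40 6)          ; (remainder 40 6)   = 4
(gcd 6 4)           ; (remainder 6 4)    = 2
(gcd 4 2)           ; (remainder 4 2)    = 0
(gcd 2 0)           ; b = 0, so the answer is a, namely 2
2

so only four remainder operations are performed, consistent with the logarithmic growth just claimed.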
The fact that the number of steps required by Euclid’s Algorithm has logarithmic growth bears an
interesting relation to the Fibonacci numbers:
Lamé’s Theorem: If Euclid’s Algorithm requires k steps to compute the GCD of some pair, then the
smaller number in the pair must be greater than or equal to the kth Fibonacci number.43
We can use this theorem to get an order-of-growth estimate for Euclid’s Algorithm. Let n be the smaller of the two inputs to the procedure. If the process takes k steps, then we must have n ≥ Fib(k) ≈ φ^k/√5. Therefore the number of steps k grows as the logarithm (to the base φ) of n. Hence, the order of growth is Θ(log n).
Exercise 1.20. The process that a procedure generates is of course dependent on the rules used by the
interpreter. As an example, consider the iterative gcd procedure given above. Suppose we were to
interpret this procedure using normal-order evaluation, as discussed in section 1.1.5. (The
normal-order-evaluation rule for if is described in exercise 1.5.) Using the substitution method (for
normal order), illustrate the process generated in evaluating (gcd 206 40) and indicate the
remainder operations that are actually performed. How many remainder operations are actually
performed in the normal-order evaluation of (gcd 206 40)? In the applicative-order evaluation?
1.2.6 Example: Testing for Primality
This section describes two methods for checking the primality of an integer n, one with order of growth Θ(√n), and a ‘‘probabilistic’’ algorithm with order of growth Θ(log n). The exercises at the
end of this section suggest programming projects based on these algorithms.
Searching for divisors
Since ancient times, mathematicians have been fascinated by problems concerning prime numbers, and
many people have worked on the problem of determining ways to test if numbers are prime. One way
to test if a number is prime is to find the number’s divisors. The following program finds the smallest
integral divisor (greater than 1) of a given number n. It does this in a straightforward way, by testing n
for divisibility by successive integers starting with 2.
(define (smallest-divisor n)
(find-divisor n 2))
(define (find-divisor n test-divisor)
(cond ((> (square test-divisor) n) n)
((divides? test-divisor n) test-divisor)
(else (find-divisor n (+ test-divisor 1)))))
(define (divides? a b)
(= (remainder b a) 0))
We can test whether a number is prime as follows: n is prime if and only if n is its own smallest
divisor.
(define (prime? n)
(= n (smallest-divisor n)))
The end test for find-divisor is based on the fact that if n is not prime it must have a divisor less than or equal to √n.44 This means that the algorithm need only test divisors between 1 and √n. Consequently, the number of steps required to identify n as prime will have order of growth Θ(√n).
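For example (test values of our own choosing; we print boolean results as #t and #f, as a typical Scheme would):

(smallest-divisor 95)
5
(prime? 199)
#t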
The Fermat test
The Θ(log n) primality test is based on a result from number theory known as Fermat’s Little
Theorem.45
Fermat’s Little Theorem: If n is a prime number and a is any positive integer less than n, then a
raised to the nth power is congruent to a modulo n.
(Two numbers are said to be congruent modulo n if they both have the same remainder when divided
by n. The remainder of a number a when divided by n is also referred to as the remainder of a modulo
n, or simply as a modulo n.)
If n is not prime, then, in general, most of the numbers a < n will not satisfy the above relation. This leads to the following algorithm for testing primality: Given a number n, pick a random number a < n and compute the remainder of a^n modulo n. If the result is not equal to a, then n is certainly not prime.
If it is a, then chances are good that n is prime. Now pick another random number a and test it with the
same method. If it also satisfies the equation, then we can be even more confident that n is prime. By
trying more and more values of a, we can increase our confidence in the result. This algorithm is
known as the Fermat test.
To implement the Fermat test, we need a procedure that computes the exponential of a number modulo
another number:
(define (expmod base exp m)
(cond ((= exp 0) 1)
((even? exp)
(remainder (square (expmod base (/ exp 2) m))
m))
(else
(remainder (* base (expmod base (- exp 1) m))
m))))
This is very similar to the fast-expt procedure of section 1.2.4. It uses successive squaring, so that
the number of steps grows logarithmically with the exponent.46
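A small numeric check of our own: since 2^10 = 1024 = 146 · 7 + 2, we should find

(expmod 2 10 7)
2

Notice that the intermediate results never grow much larger than the square of the modulus (see footnote 46), which is what makes the procedure practical for very large exponents.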
The Fermat test is performed by choosing at random a number a between 1 and n - 1 inclusive and
checking whether the remainder modulo n of the nth power of a is equal to a. The random number a is
chosen using the procedure random, which we assume is included as a primitive in Scheme. Random
returns a nonnegative integer less than its integer input. Hence, to obtain a random number between 1
and n - 1, we call random with an input of n - 1 and add 1 to the result:
(define (fermat-test n)
(define (try-it a)
(= (expmod a n n) a))
(try-it (+ 1 (random (- n 1)))))
The following procedure runs the test a given number of times, as specified by a parameter. Its value is
true if the test succeeds every time, and false otherwise.
(define (fast-prime? n times)
(cond ((= times 0) true)
((fermat-test n) (fast-prime? n (- times 1)))
(else false)))
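For instance (our own examples, assuming the random primitive described above):

(fast-prime? 19 10)
#t
(fast-prime? 21 10)
#f

since 19 is prime and 21 = 3 · 7. For a composite input the answer is only probabilistically correct, a point taken up below.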
Probabilistic methods
The Fermat test differs in character from most familiar algorithms, in which one computes an answer
that is guaranteed to be correct. Here, the answer obtained is only probably correct. More precisely, if
n ever fails the Fermat test, we can be certain that n is not prime. But the fact that n passes the test,
while an extremely strong indication, is still not a guarantee that n is prime. What we would like to say
is that for any number n, if we perform the test enough times and find that n always passes the test,
then the probability of error in our primality test can be made as small as we like.
Unfortunately, this assertion is not quite correct. There do exist numbers that fool the Fermat test:
numbers n that are not prime and yet have the property that a^n is congruent to a modulo n for all
integers a < n. Such numbers are extremely rare, so the Fermat test is quite reliable in practice.47
There are variations of the Fermat test that cannot be fooled. In these tests, as with the Fermat method, one tests the primality of an integer n by choosing a random integer a < n and checking some condition that depends upon n and a. On the other hand, in contrast to the Fermat test, one can prove that, for any n, the condition does not hold for most of the integers a < n unless n is prime. Thus, if n passes the test for some random choice of a, the chances are better than even that n is prime. If n passes the test for two random choices of a, the chances are better than 3 out of 4 that n is prime. By running the test with more and more randomly chosen values of a we can make the probability of error as small as we like. The existence of tests for which one can prove that the chance of error becomes arbitrarily small has sparked interest in algorithms of this type, which have come to be known as probabilistic algorithms. There is a great deal of research activity in this area, and probabilistic algorithms have been fruitfully applied to many fields.48

29 In a real program we would probably use the block structure introduced in the last section to hide the definition of fact-iter:

(define (factorial n)
  (define (iter product counter)
    (if (> counter n)
        product
        (iter (* counter product)
              (+ counter 1))))
  (iter 1 1))

We avoided doing this here so as to minimize the number of things to think about at once.
30 When we discuss the implementation of procedures on register machines in chapter 5, we will see
that any iterative process can be realized ‘‘in hardware’’ as a machine that has a fixed set of registers
and no auxiliary memory. In contrast, realizing a recursive process requires a machine that uses an
auxiliary data structure known as a stack.
31 Tail recursion has long been known as a compiler optimization trick. A coherent semantic basis for
tail recursion was provided by Carl Hewitt (1977), who explained it in terms of the
‘‘message-passing’’ model of computation that we shall discuss in chapter 3. Inspired by this, Gerald
Jay Sussman and Guy Lewis Steele Jr. (see Steele 1975) constructed a tail-recursive interpreter for
Scheme. Steele later showed how tail recursion is a consequence of the natural way to compile
procedure calls (Steele 1977). The IEEE standard for Scheme requires that Scheme implementations
be tail-recursive.
32 An example of this was hinted at in section 1.1.3: The interpreter itself evaluates expressions using
a tree-recursive process.
33 For example, work through in detail how the reduction rule applies to the problem of making
change for 10 cents using pennies and nickels.
34 One approach to coping with redundant computations is to arrange matters so that we automatically
construct a table of values as they are computed. Each time we are asked to apply the procedure to
some argument, we first look to see if the value is already stored in the table, in which case we avoid
performing the redundant computation. This strategy, known as tabulation or memoization, can be
implemented in a straightforward way. Tabulation can sometimes be used to transform processes that
require an exponential number of steps (such as count-change) into processes whose space and
time requirements grow linearly with the input. See exercise 3.27.
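As a hedged sketch of this idea (ours, not the book’s; it borrows let, assoc, and the assignment operator set!, none of which are introduced until later in the book), a memoized version of the tree-recursive Fibonacci computation might look like this:

(define (memo-fib n)
  (define table '())                        ; association list serving as the table
  (define (lookup key) (assoc key table))   ; #f if key is absent
  (define (insert! key value)
    (set! table (cons (list key value) table)))
  (define (fib k)
    (if (< k 2)
        k
        (let ((cached (lookup k)))
          (if cached
              (cadr cached)                 ; reuse the stored value
              (let ((result (+ (fib (- k 1)) (fib (- k 2)))))
                (insert! k result)
                result)))))
  (fib n))

Each Fib(k) is now computed at most once, so the number of additions grows linearly with n, as exercise 3.27 asks you to show.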
35 The elements of Pascal’s triangle are called the binomial coefficients, because the nth row consists of the coefficients of the terms in the expansion of (x + y)^n. This pattern for computing the coefficients
appeared in Blaise Pascal’s 1653 seminal work on probability theory, Traité du triangle arithmétique.
According to Knuth (1973), the same pattern appears in the Szu-yuen Yü-chien (‘‘The Precious Mirror
of the Four Elements’’), published by the Chinese mathematician Chu Shih-chieh in 1303, in the
works of the twelfth-century Persian poet and mathematician Omar Khayyam, and in the works of the
twelfth-century Hindu mathematician Bháscara Áchárya.
36 These statements mask a great deal of oversimplification. For instance, if we count process steps as
‘‘machine operations’’ we are making the assumption that the number of machine operations needed
to perform, say, a multiplication is independent of the size of the numbers to be multiplied, which is
false if the numbers are sufficiently large. Similar remarks hold for the estimates of space. Like the
design and description of a process, the analysis of a process can be carried out at various levels of
abstraction.
37 More precisely, the number of multiplications required is equal to 1 less than the log base 2 of n
plus the number of ones in the binary representation of n. This total is always less than twice the log
base 2 of n. The arbitrary constants k1 and k2 in the definition of order notation imply that, for a logarithmic process, the base to which logarithms are taken does not matter, so all such processes are described as Θ(log n).
38 You may wonder why anyone would care about raising numbers to the 1000th power. See
section 1.2.6.
39 This iterative algorithm is ancient. It appears in the Chandah-sutra by Áchárya Pingala, written
before 200 B.C. See Knuth 1981, section 4.6.3, for a full discussion and analysis of this and other
methods of exponentiation.
40 This algorithm, which is sometimes known as the ‘‘Russian peasant method’’ of multiplication, is
ancient. Examples of its use are found in the Rhind Papyrus, one of the two oldest mathematical
documents in existence, written about 1700 B.C. (and copied from an even older document) by an
Egyptian scribe named A’h-mose.
41 This exercise was suggested to us by Joe Stoy, based on an example in Kaldewaij 1990.
42 Euclid’s Algorithm is so called because it appears in Euclid’s Elements (Book 7, ca. 300 B.C.).
According to Knuth (1973), it can be considered the oldest known nontrivial algorithm. The ancient
Egyptian method of multiplication (exercise 1.18) is surely older, but, as Knuth explains, Euclid’s
algorithm is the oldest known to have been presented as a general algorithm, rather than as a set of
illustrative examples.
43 This theorem was proved in 1845 by Gabriel Lamé, a French mathematician and engineer known chiefly for his contributions to mathematical physics. To prove the theorem, we consider pairs (a_k, b_k), where a_k ≥ b_k, for which Euclid’s Algorithm terminates in k steps. The proof is based on the claim that, if (a_{k+1}, b_{k+1}) → (a_k, b_k) → (a_{k-1}, b_{k-1}) are three successive pairs in the reduction process, then we must have b_{k+1} ≥ b_k + b_{k-1}. To verify the claim, consider that a reduction step is defined by applying the transformation a_{k-1} = b_k, b_{k-1} = remainder of a_k divided by b_k. The second equation means that a_k = q·b_k + b_{k-1} for some positive integer q. And since q must be at least 1 we have a_k = q·b_k + b_{k-1} ≥ b_k + b_{k-1}. But in the previous reduction step we have b_{k+1} = a_k. Therefore, b_{k+1} = a_k ≥ b_k + b_{k-1}. This verifies the claim. Now we can prove the theorem by induction on k, the number of steps that the algorithm requires to terminate. The result is true for k = 1, since this merely requires that b be at least as large as Fib(1) = 1. Now, assume that the result is true for all integers less than or equal to k and establish the result for k + 1. Let (a_{k+1}, b_{k+1}) → (a_k, b_k) → (a_{k-1}, b_{k-1}) be successive pairs in the reduction process. By our induction hypotheses, we have b_{k-1} ≥ Fib(k - 1) and b_k ≥ Fib(k). Thus, applying the claim we just proved together with the definition of the Fibonacci numbers gives b_{k+1} = a_k ≥ b_k + b_{k-1} ≥ Fib(k) + Fib(k - 1) = Fib(k + 1), which completes the proof of Lamé’s Theorem.
44 If d is a divisor of n, then so is n/d. But d and n/d cannot both be greater than √n.
45 Pierre de Fermat (1601-1665) is considered to be the founder of modern number theory. He
obtained many important number-theoretic results, but he usually announced just the results, without
providing his proofs. Fermat’s Little Theorem was stated in a letter he wrote in 1640. The first
published proof was given by Euler in 1736 (and an earlier, identical proof was discovered in the
unpublished manuscripts of Leibniz). The most famous of Fermat’s results — known as Fermat’s Last
Theorem — was jotted down in 1637 in his copy of the book Arithmetic (by the third-century Greek
mathematician Diophantus) with the remark ‘‘I have discovered a truly remarkable proof, but this
margin is too small to contain it.’’ Finding a proof of Fermat’s Last Theorem became one of the most
famous challenges in number theory. A complete solution was finally given in 1995 by Andrew Wiles
of Princeton University.
46 The reduction steps in the cases where the exponent e is greater than 1 are based on the fact that,
for any integers x, y, and m, we can find the remainder of x times y modulo m by computing separately
the remainders of x modulo m and y modulo m, multiplying these, and then taking the remainder of the
result modulo m. For instance, in the case where e is even, we compute the remainder of b^(e/2) modulo m, square this, and take the remainder modulo m. This technique is useful because it means we can
perform our computation without ever having to deal with numbers much larger than m. (Compare
exercise 1.25.)
47 Numbers that fool the Fermat test are called Carmichael numbers, and little is known about them
other than that they are extremely rare. There are 255 Carmichael numbers below 100,000,000. The
smallest few are 561, 1105, 1729, 2465, 2821, and 6601. In testing primality of very large numbers
chosen at random, the chance of stumbling upon a value that fools the Fermat test is less than the
chance that cosmic radiation will cause the computer to make an error in carrying out a ‘‘correct’’
algorithm. Considering an algorithm to be inadequate for the first reason but not for the second
illustrates the difference between mathematics and engineering.
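To see a Carmichael number fool the test (an illustration of our own): 561 = 3 · 11 · 17 is composite, yet a^561 is congruent to a modulo 561 for every a < 561, so even a run with many trials is deceived:

(fast-prime? 561 100)
#t

No number of repetitions will expose it; only a different kind of test can.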
48 One of the most striking applications of probabilistic prime testing has been to the field of
cryptography. Although it is now computationally infeasible to factor an arbitrary 200-digit number,
the primality of such a number can be checked in a few seconds with the Fermat test. This fact forms
the basis of a technique for constructing ‘‘unbreakable codes’’ suggested by Rivest, Shamir, and
Adleman (1977). The resulting RSA algorithm has become a widely used technique for enhancing the
security of electronic communications. Because of this and related developments, the study of prime
numbers, once considered the epitome of a topic in ‘‘pure’’ mathematics to be studied only for its own
sake, now turns out to have important practical applications to cryptography, electronic funds transfer,
and information retrieval.
1.3 Formulating Abstractions with Higher-Order Procedures
We have seen that procedures are, in effect, abstractions that describe compound operations on
numbers independent of the particular numbers. For example, when we
(define (cube x) (* x x x))
we are not talking about the cube of a particular number, but rather about a method for obtaining the
cube of any number. Of course we could get along without ever defining this procedure, by always
writing expressions such as
(* 3 3 3)
(* x x x)
(* y y y)
and never mentioning cube explicitly. This would place us at a serious disadvantage, forcing us to
work always at the level of the particular operations that happen to be primitives in the language
(multiplication, in this case) rather than in terms of higher-level operations. Our programs would be
able to compute cubes, but our language would lack the ability to express the concept of cubing. One
of the things we should demand from a powerful programming language is the ability to build
abstractions by assigning names to common patterns and then to work in terms of the abstractions
directly. Procedures provide this ability. This is why all but the most primitive programming
languages include mechanisms for defining procedures.
Yet even in numerical processing we will be severely limited in our ability to create abstractions if we
are restricted to procedures whose parameters must be numbers. Often the same programming pattern
will be used with a number of different procedures. To express such patterns as concepts, we will need
to construct procedures that can accept procedures as arguments or return procedures as values.
Procedures that manipulate procedures are called higher-order procedures. This section shows how
higher-order procedures can serve as powerful abstraction mechanisms, vastly increasing the
expressive power of our language.
1.3.1 Procedures as Arguments
Consider the following three procedures. The first computes the sum of the integers from a through b:
(define (sum-integers a b)
(if (> a b)
0
(+ a (sum-integers (+ a 1) b))))
The second computes the sum of the cubes of the integers in the given range:
(define (sum-cubes a b)
(if (> a b)
0
(+ (cube a) (sum-cubes (+ a 1) b))))
The third computes the sum of a sequence of terms in the series

1/(1·3) + 1/(5·7) + 1/(9·11) + ···

which converges to π/8 (very slowly):49
(define (pi-sum a b)
(if (> a b)
0
(+ (/ 1.0 (* a (+ a 2))) (pi-sum (+ a 4) b))))
These three procedures clearly share a common underlying pattern. They are for the most part
identical, differing only in the name of the procedure, the function of a used to compute the term to be
added, and the function that provides the next value of a. We could generate each of the procedures by
filling in slots in the same template:
(define (<name> a b)
  (if (> a b)
      0
      (+ (<term> a)
         (<name> (<next> a) b))))
The presence of such a common pattern is strong evidence that there is a useful abstraction waiting to
be brought to the surface. Indeed, mathematicians long ago identified the abstraction of summation of
a series and invented ‘‘sigma notation,’’ for example

∑(n=a to b) f(n) = f(a) + ··· + f(b)

to express this concept. The power of sigma notation is that it allows mathematicians to deal with the
concept of summation itself rather than only with particular sums — for example, to formulate general
results about sums that are independent of the particular series being summed.
Similarly, as program designers, we would like our language to be powerful enough so that we can
write a procedure that expresses the concept of summation itself rather than only procedures that
compute particular sums. We can do so readily in our procedural language by taking the common
template shown above and transforming the ‘‘slots’’ into formal parameters:
(define (sum term a next b)
(if (> a b)
0
(+ (term a)
(sum term (next a) next b))))
Notice that sum takes as its arguments the lower and upper bounds a and b together with the
procedures term and next. We can use sum just as we would any procedure. For example, we can
use it (along with a procedure inc that increments its argument by 1) to define sum-cubes:
(define (inc n) (+ n 1))
(define (sum-cubes a b)
(sum cube a inc b))
Using this, we can compute the sum of the cubes of the integers from 1 to 10:
(sum-cubes 1 10)
3025
With the aid of an identity procedure to compute the term, we can define sum-integers in terms of
sum:
(define (identity x) x)
(define (sum-integers a b)
(sum identity a inc b))
Then we can add up the integers from 1 to 10:
(sum-integers 1 10)
55
We can also define pi-sum in the same way:50
(define (pi-sum a b)
(define (pi-term x)
(/ 1.0 (* x (+ x 2))))
(define (pi-next x)
(+ x 4))
(sum pi-term a pi-next b))
Using these procedures, we can compute an approximation to π:
(* 8 (pi-sum 1 1000))
3.139592655589783
Once we have sum, we can use it as a building block in formulating further concepts. For instance, the definite integral of a function f between the limits a and b can be approximated numerically using the formula

∫(a to b) f = [f(a + dx/2) + f(a + dx + dx/2) + f(a + 2dx + dx/2) + ···] dx

for small values of dx. We can express this directly as a procedure:
(define (integral f a b dx)
(define (add-dx x) (+ x dx))
(* (sum f (+ a (/ dx 2.0)) add-dx b)
dx))
(integral cube 0 1 0.01)
.24998750000000042
(integral cube 0 1 0.001)
.249999875000001
(The exact value of the integral of cube between 0 and 1 is 1/4.)
Exercise 1.29. Simpson’s Rule is a more accurate method of numerical integration than the method illustrated above. Using Simpson’s Rule, the integral of a function f between a and b is approximated as

(h/3)[y_0 + 4y_1 + 2y_2 + 4y_3 + 2y_4 + ··· + 2y_{n-2} + 4y_{n-1} + y_n]

where h = (b - a)/n, for some even integer n, and y_k = f(a + kh). (Increasing n increases the accuracy of
the approximation.) Define a procedure that takes as arguments f, a, b, and n and returns the value of
the integral, computed using Simpson’s Rule. Use your procedure to integrate cube between 0 and 1
(with n = 100 and n = 1000), and compare the results to those of the integral procedure shown
above.
Exercise 1.30. The sum procedure above generates a linear recursion. The procedure can be rewritten
so that the sum is performed iteratively. Show how to do this by filling in the missing expressions in
the following definition:
(define (sum term a next b)
(define (iter a result)
(if <??>
    <??>
    (iter <??> <??>)))
(iter <??> <??>))
Exercise 1.31.
a. The sum procedure is only the simplest of a vast number of similar abstractions that can be
captured as higher-order procedures.51 Write an analogous procedure called product that returns
the product of the values of a function at points over a given range. Show how to define factorial
in terms of product. Also use product to compute approximations to π using the formula52

π/4 = (2·4·4·6·6·8···)/(3·3·5·5·7·7···)
b. If your product procedure generates a recursive process, write one that generates an iterative
process. If it generates an iterative process, write one that generates a recursive process.
Exercise 1.32. a. Show that sum and product (exercise 1.31) are both special cases of a still more
general notion called accumulate that combines a collection of terms, using some general
accumulation function:
(accumulate combiner null-value term a next b)
Accumulate takes as arguments the same term and range specifications as sum and product,
together with a combiner procedure (of two arguments) that specifies how the current term is to be
combined with the accumulation of the preceding terms and a null-value that specifies what base
value to use when the terms run out. Write accumulate and show how sum and product can both
be defined as simple calls to accumulate.
b. If your accumulate procedure generates a recursive process, write one that generates an iterative
process. If it generates an iterative process, write one that generates a recursive process.
Exercise 1.33. You can obtain an even more general version of accumulate (exercise 1.32) by
introducing the notion of a filter on the terms to be combined. That is, combine only those terms
derived from values in the range that satisfy a specified condition. The resulting
filtered-accumulate abstraction takes the same arguments as accumulate, together with an
additional predicate of one argument that specifies the filter. Write filtered-accumulate as a
procedure. Show how to express the following using filtered-accumulate:
a. the sum of the squares of the prime numbers in the interval a to b (assuming that you have a
prime? predicate already written)
b. the product of all the positive integers less than n that are relatively prime to n (i.e., all positive
integers i < n such that GCD(i,n) = 1).
1.3.2 Constructing Procedures Using Lambda
In using sum as in section 1.3.1, it seems terribly awkward to have to define trivial procedures such as
pi-term and pi-next just so we can use them as arguments to our higher-order procedure. Rather
than define pi-next and pi-term, it would be more convenient to have a way to directly specify
‘‘the procedure that returns its input incremented by 4’’ and ‘‘the procedure that returns the reciprocal
of its input times its input plus 2.’’ We can do this by introducing the special form lambda, which
creates procedures. Using lambda we can describe what we want as
(lambda (x) (+ x 4))
and
(lambda (x) (/ 1.0 (* x (+ x 2))))
Then our pi-sum procedure can be expressed without defining any auxiliary procedures as
(define (pi-sum a b)
(sum (lambda (x) (/ 1.0 (* x (+ x 2))))
a
(lambda (x) (+ x 4))
b))
Again using lambda, we can write the integral procedure without having to define the auxiliary
procedure add-dx:
(define (integral f a b dx)
(* (sum f
(+ a (/ dx 2.0))
(lambda (x) (+ x dx))
b)
dx))
In general, lambda is used to create procedures in the same way as define, except that no name is
specified for the procedure:
(lambda (<formal-parameters>) <body>)
The resulting procedure is just as much a procedure as one that is created using define. The only
difference is that it has not been associated with any name in the environment. In fact,
(define (plus4 x) (+ x 4))
is equivalent to
(define plus4 (lambda (x) (+ x 4)))
We can read a lambda expression as follows:
(lambda (x) (+ x 4))
the procedure of an argument x that adds x and 4
Like any expression that has a procedure as its value, a lambda expression can be used as the
operator in a combination such as
((lambda (x y z) (+ x y (square z))) 1 2 3)
12
or, more generally, in any context where we would normally use a procedure name.53
Using let to create local variables
Another use of lambda is in creating local variables. We often need local variables in our procedures
other than those that have been bound as formal parameters. For example, suppose we wish to
compute the function

f(x,y) = x(1 + xy)^2 + y(1 - y) + (1 + xy)(1 - y)

which we could also express as

a = 1 + xy
b = 1 - y
f(x,y) = x·a^2 + y·b + a·b
In writing a procedure to compute f, we would like to include as local variables not only x and y but
also the names of intermediate quantities like a and b. One way to accomplish this is to use an
auxiliary procedure to bind the local variables:
(define (f x y)
(define (f-helper a b)
(+ (* x (square a))
(* y b)
(* a b)))
(f-helper (+ 1 (* x y))
(- 1 y)))
Of course, we could use a lambda expression to specify an anonymous procedure for binding our
local variables. The body of f then becomes a single call to that procedure:
(define (f x y)
((lambda (a b)
(+ (* x (square a))
(* y b)
(* a b)))
(+ 1 (* x y))
(- 1 y)))
This construct is so useful that there is a special form called let to make its use more convenient.
Using let, the f procedure could be written as
(define (f x y)
(let ((a (+ 1 (* x y)))
(b (- 1 y)))
(+ (* x (square a))
(* y b)
(* a b))))
The general form of a let expression is
(let ((<var1> <exp1>)
      (<var2> <exp2>)
      ...
      (<varn> <expn>))
   <body>)

which can be thought of as saying

let <var1> have the value <exp1> and
    <var2> have the value <exp2> and
    ...
    <varn> have the value <expn>
in <body>
The first part of the let expression is a list of name-expression pairs. When the let is evaluated,
each name is associated with the value of the corresponding expression. The body of the let is
evaluated with these names bound as local variables. The way this happens is that the let expression
is interpreted as an alternate syntax for
((lambda (<var1> ... <varn>)
    <body>)
 <exp1>
 ...
 <expn>)
No new mechanism is required in the interpreter in order to provide local variables. A let expression
is simply syntactic sugar for the underlying lambda application.
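For example (a desugaring of our own construction), the expression

(let ((a 1)
      (b 2))
  (+ a b))

is interpreted exactly as if we had written

((lambda (a b) (+ a b))
 1
 2)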
We can see from this equivalence that the scope of a variable specified by a let expression is the
body of the let. This implies that:
Let allows one to bind variables as locally as possible to where they are to be used. For example,
if the value of x is 5, the value of the expression
(+ (let ((x 3))
(+ x (* x 10)))
x)
is 38. Here, the x in the body of the let is 3, so the value of the let expression is 33. On the
other hand, the x that is the second argument to the outermost + is still 5.
The variables’ values are computed outside the let. This matters when the expressions that
provide the values for the local variables depend upon variables having the same names as the
local variables themselves. For example, if the value of x is 2, the expression
(let ((x 3)
(y (+ x 2)))
(* x y))
will have the value 12 because, inside the body of the let, x will be 3 and y will be 4 (which is
the outer x plus 2).
Sometimes we can use internal definitions to get the same effect as with let. For example, we could
have defined the procedure f above as
(define (f x y)
(define a (+ 1 (* x y)))
(define b (- 1 y))
(+ (* x (square a))
(* y b)
(* a b)))
We prefer, however, to use let in situations like this and to use internal define only for internal
procedures.54
Exercise 1.34. Suppose we define the procedure
(define (f g)
(g 2))
Then we have
(f square)
4
(f (lambda (z) (* z (+ z 1))))
6
What happens if we (perversely) ask the interpreter to evaluate the combination (f f)? Explain.
1.3.3 Procedures as General Methods
We introduced compound procedures in section 1.1.4 as a mechanism for abstracting patterns of
numerical operations so as to make them independent of the particular numbers involved. With
higher-order procedures, such as the integral procedure of section 1.3.1, we began to see a more
powerful kind of abstraction: procedures used to express general methods of computation, independent
of the particular functions involved. In this section we discuss two more elaborate examples — general
methods for finding zeros and fixed points of functions — and show how these methods can be
expressed directly as procedures.
Finding roots of equations by the half-interval method
The half-interval method is a simple but powerful technique for finding roots of an equation f(x) = 0,
where f is a continuous function. The idea is that, if we are given points a and b such that f(a) < 0 <
f(b), then f must have at least one zero between a and b. To locate a zero, let x be the average of a and
b and compute f(x). If f(x) > 0, then f must have a zero between a and x. If f(x) < 0, then f must have a
zero between x and b. Continuing in this way, we can identify smaller and smaller intervals on which f
must have a zero. When we reach a point where the interval is small enough, the process stops. Since
the interval of uncertainty is reduced by half at each step of the process, the number of steps required
grows as Θ(log(L/T)), where L is the length of the original interval and T is the error tolerance (that
is, the size of the interval we will consider ‘‘small enough’’). Here is a procedure that implements this
strategy:
(define (search f neg-point pos-point)
(let ((midpoint (average neg-point pos-point)))
(if (close-enough? neg-point pos-point)
midpoint
(let ((test-value (f midpoint)))
(cond ((positive? test-value)
(search f neg-point midpoint))
((negative? test-value)
(search f midpoint pos-point))
(else midpoint))))))
We assume that we are initially given the function f together with points at which its values are
negative and positive. We first compute the midpoint of the two given points. Next we check to see if
the given interval is small enough, and if so we simply return the midpoint as our answer. Otherwise,
we compute as a test value the value of f at the midpoint. If the test value is positive, then we continue
the process with a new interval running from the original negative point to the midpoint. If the test
value is negative, we continue with the interval from the midpoint to the positive point. Finally, there
is the possibility that the test value is 0, in which case the midpoint is itself the root we are searching
for.
To test whether the endpoints are ‘‘close enough’’ we can use a procedure similar to the one used in
section 1.1.7 for computing square roots:55
(define (close-enough? x y)
(< (abs (- x y)) 0.001))
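These procedures also rely on average from section 1.1.7, which for completeness is

(define (average x y)
  (/ (+ x y) 2))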
Search is awkward to use directly, because we can accidentally give it points at which f’s values do
not have the required sign, in which case we get a wrong answer. Instead we will use search via the
following procedure, which checks to see which of the endpoints has a negative function value and
which has a positive value, and calls the search procedure accordingly. If the function has the same
sign on the two given points, the half-interval method cannot be used, in which case the procedure
signals an error.56
(define (half-interval-method f a b)
(let ((a-value (f a))
(b-value (f b)))
(cond ((and (negative? a-value) (positive? b-value))
(search f a b))
((and (negative? b-value) (positive? a-value))
(search f b a))
(else
(error "Values are not of opposite sign" a b)))))
The following example uses the half-interval method to approximate π as the root between 2 and 4 of sin x = 0:
(half-interval-method sin 2.0 4.0)
3.14111328125
Here is another example, using the half-interval method to search for a root of the equation x^3 - 2x - 3 = 0 between 1 and 2:
(half-interval-method (lambda (x) (- (* x x x) (* 2 x) 3))
1.0
2.0)
1.89306640625
Finding fixed points of functions
A number x is called a fixed point of a function f if x satisfies the equation f(x) = x. For some functions
f we can locate a fixed point by beginning with an initial guess and applying f repeatedly,

f(x), f(f(x)), f(f(f(x))), ...

until the value does not change very much. Using this idea, we can devise a procedure
fixed-point that takes as inputs a function and an initial guess and produces an approximation to a
fixed point of the function. We apply the function repeatedly until we find two successive values
whose difference is less than some prescribed tolerance:
(define tolerance 0.00001)
(define (fixed-point f first-guess)
(define (close-enough? v1 v2)
(< (abs (- v1 v2)) tolerance))
(define (try guess)
(let ((next (f guess)))
(if (close-enough? guess next)
next
(try next))))
(try first-guess))
For example, we can use this method to approximate the fixed point of the cosine function, starting
with 1 as an initial approximation:57
(fixed-point cos 1.0)
.7390822985224023
Similarly, we can find a solution to the equation y = sin y + cos y:
(fixed-point (lambda (y) (+ (sin y) (cos y)))
1.0)
1.2587315962971173
The fixed-point process is reminiscent of the process we used for finding square roots in section 1.1.7.
Both are based on the idea of repeatedly improving a guess until the result satisfies some criterion. In
fact, we can readily formulate the square-root computation as a fixed-point search. Computing the
square root of some number x requires finding a y such that y^2 = x. Putting this equation into the equivalent form y = x/y, we recognize that we are looking for a fixed point of the function58 y ↦ x/y,
and we can therefore try to compute square roots as
(define (sqrt x)
(fixed-point (lambda (y) (/ x y))
1.0))
Unfortunately, this fixed-point search does not converge. Consider an initial guess y1. The next guess is y2 = x/y1 and the next guess is y3 = x/y2 = x/(x/y1) = y1. This results in an infinite loop in which the two guesses y1 and y2 repeat over and over, oscillating about the answer.
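Concretely (a trace of our own): with x = 2 and a first guess of 1.0,

(sqrt 2)
; guess 1: 1.0
; guess 2: 2/1.0 = 2.0
; guess 3: 2/2.0 = 1.0
; ... 1.0 and 2.0 alternate forever, so the call never returns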
One way to control such oscillations is to prevent the guesses from changing so much. Since the
answer is always between our guess y and x/y, we can make a new guess that is not as far from y as x/y
by averaging y with x/y, so that the next guess after y is (1/2)(y + x/y) instead of x/y. The process of
making such a sequence of guesses is simply the process of looking for a fixed point of y ↦ (1/2)(y + x/y):
(define (sqrt x)
(fixed-point (lambda (y) (average y (/ x y)))
1.0))
(Note that y = (1/2)(y + x/y) is a simple transformation of the equation y = x/y; to derive it, add y to
both sides of the equation and divide by 2.)
With this modification, the square-root procedure works. In fact, if we unravel the definitions, we can
see that the sequence of approximations to the square root generated here is precisely the same as the
one generated by our original square-root procedure of section 1.1.7. This approach of averaging
successive approximations to a solution, a technique that we call average damping, often aids the
convergence of fixed-point searches.
Exercise 1.35. Show that the golden ratio φ (section 1.2.2) is a fixed point of the transformation x ↦ 1 + 1/x, and use this fact to compute φ by means of the fixed-point procedure.
Exercise 1.36. Modify fixed-point so that it prints the sequence of approximations it generates,
using the newline and display primitives shown in exercise 1.22. Then find a solution to x^x = 1000 by finding a fixed point of x ↦ log(1000)/log(x). (Use Scheme’s primitive log procedure,
which computes natural logarithms.) Compare the number of steps this takes with and without average
damping. (Note that you cannot start fixed-point with a guess of 1, as this would cause division
by log(1) = 0.)
Exercise 1.37. a. An infinite continued fraction is an expression of the form

f = N1/(D1 + N2/(D2 + N3/(D3 + ···)))

As an example, one can show that the infinite continued fraction expansion with the N_i and the D_i all equal to 1 produces 1/φ, where φ is the golden ratio (described in section 1.2.2). One way to approximate an infinite continued fraction is to truncate the expansion after a given number of terms. Such a truncation -- a so-called k-term finite continued fraction -- has the form

N1/(D1 + N2/(··· + Nk/Dk))
Suppose that n and d are procedures of one argument (the term index i) that return the N_i and D_i of the terms of the continued fraction. Define a procedure cont-frac such that evaluating (cont-frac n d k) computes the value of the k-term finite continued fraction. Check your procedure by approximating 1/φ using
(cont-frac (lambda (i) 1.0)
(lambda (i) 1.0)
k)
for successive values of k. How large must you make k in order to get an approximation that is
accurate to 4 decimal places?
b. If your cont-frac procedure generates a recursive process, write one that generates an iterative
process. If it generates an iterative process, write one that generates a recursive process.
Exercise 1.38. In 1737, the Swiss mathematician Leonhard Euler published a memoir De
Fractionibus Continuis, which included a continued fraction expansion for e - 2, where e is the base of
the natural logarithms. In this fraction, the N_i are all 1, and the D_i are successively 1, 2, 1, 1, 4, 1, 1,
6, 1, 1, 8, .... Write a program that uses your cont-frac procedure from exercise 1.37 to
approximate e, based on Euler’s expansion.
Exercise 1.39. A continued fraction representation of the tangent function was published in 1770 by
the German mathematician J.H. Lambert:

tan x = x/(1 - x^2/(3 - x^2/(5 - ···)))

where x is in radians. Define a procedure (tan-cf x k) that computes an approximation to the tangent function based on Lambert’s formula. k specifies the number of terms to compute, as in
exercise 1.37.
1.3.4 Procedures as Returned Values
The above examples demonstrate how the ability to pass procedures as arguments significantly
enhances the expressive power of our programming language. We can achieve even more expressive
power by creating procedures whose returned values are themselves procedures.
We can illustrate this idea by looking again at the fixed-point example described at the end of
section 1.3.3. We formulated a new version of the square-root procedure as a fixed-point search,
starting with the observation that √x is a fixed point of the function y ↦ x/y. Then we used average
damping to make the approximations converge. Average damping is a useful general technique in
itself. Namely, given a function f, we consider the function whose value at x is equal to the average of
x and f(x).
We can express the idea of average damping by means of the following procedure:
(define (average-damp f)
(lambda (x) (average x (f x))))
Average-damp is a procedure that takes as its argument a procedure f and returns as its value a
procedure (produced by the lambda) that, when applied to a number x, produces the average of x and
(f x). For example, applying average-damp to the square procedure produces a procedure
whose value at some number x is the average of x and x^2. Applying this resulting procedure to 10
returns the average of 10 and 100, or 55:59
((average-damp square) 10)
55
Using average-damp, we can reformulate the square-root procedure as follows:
(define (sqrt x)
(fixed-point (average-damp (lambda (y) (/ x y)))
1.0))
Notice how this formulation makes explicit the three ideas in the method: fixed-point search, average
damping, and the function y ↦ x/y. It is instructive to compare this formulation of the square-root
method with the original version given in section 1.1.7. Bear in mind that these procedures express the
same process, and notice how much clearer the idea becomes when we express the process in terms of
these abstractions. In general, there are many ways to formulate a process as a procedure. Experienced
programmers know how to choose procedural formulations that are particularly perspicuous, and
where useful elements of the process are exposed as separate entities that can be reused in other
applications. As a simple example of reuse, notice that the cube root of x is a fixed point of the function y ↦ x/y^2, so we can immediately generalize our square-root procedure to one that extracts
cube roots:60
(define (cube-root x)
(fixed-point (average-damp (lambda (y) (/ x (square y))))
1.0))
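As a quick check of our own (we elide the printed digits, which depend on the tolerance in fixed-point):

(cube-root 27)
; returns a value very close to 3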
Newton’s method
When we first introduced the square-root procedure, in section 1.1.7, we mentioned that this was a
special case of Newton’s method. If x ↦ g(x) is a differentiable function, then a solution of the equation g(x) = 0 is a fixed point of the function x ↦ f(x) where

f(x) = x - g(x)/Dg(x)

and Dg(x) is the derivative of g evaluated at x. Newton’s method is the use of the fixed-point method
we saw above to approximate a solution of the equation by finding a fixed point of the function f.61
For many functions g and for sufficiently good initial guesses for x, Newton’s method converges very
rapidly to a solution of g(x) = 0.62
In order to implement Newton’s method as a procedure, we must first express the idea of derivative.
Note that ‘‘derivative,’’ like average damping, is something that transforms a function into another
function. For instance, the derivative of the function x ↦ x^3 is the function x ↦ 3x^2. In general, if g is a function and dx is a small number, then the derivative Dg of g is the function whose value at any number x is given (in the limit of small dx) by

Dg(x) = (g(x + dx) - g(x))/dx

Thus, we can express the idea of derivative (taking dx to be, say, 0.00001) as the procedure
(define (deriv g)
(lambda (x)
(/ (- (g (+ x dx)) (g x))
dx)))
along with the definition
(define dx 0.00001)
Like average-damp, deriv is a procedure that takes a procedure as argument and returns a
procedure as value. For example, to approximate the derivative of x ↦ x^3 at 5 (whose exact value is
75) we can evaluate
(define (cube x) (* x x x))
((deriv cube) 5)
75.00014999664018
With the aid of deriv, we can express Newton’s method as a fixed-point process:
(define (newton-transform g)
(lambda (x)
(- x (/ (g x) ((deriv g) x)))))
(define (newtons-method g guess)
(fixed-point (newton-transform g) guess))
The newton-transform procedure expresses the formula at the beginning of this section, and
newtons-method is readily defined in terms of this. It takes as arguments a procedure that
computes the function for which we want to find a zero, together with an initial guess. For instance, to
find the square root of x, we can use Newton’s method to find a zero of the function y ↦ y^2 - x starting
with an initial guess of 1.63 This provides yet another form of the square-root procedure:
(define (sqrt x)
(newtons-method (lambda (y) (- (square y) x))
1.0))
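For example (our own check; the digits printed depend on dx and the fixed-point tolerance):

(sqrt 2)
; returns a value very close to 1.4142135...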
Abstractions and first-class procedures
We’ve seen two ways to express the square-root computation as an instance of a more general method,
once as a fixed-point search and once using Newton’s method. Since Newton’s method was itself
expressed as a fixed-point process, we actually saw two ways to compute square roots as fixed points.
Each method begins with a function and finds a fixed point of some transformation of the function. We
can express this general idea itself as a procedure:
(define (fixed-point-of-transform g transform guess)
(fixed-point (transform g) guess))
This very general procedure takes as its arguments a procedure g that computes some function, a
procedure that transforms g, and an initial guess. The returned result is a fixed point of the
transformed function.
Using this abstraction, we can recast the first square-root computation from this section (where we
look for a fixed point of the average-damped version of y ↦ x/y) as an instance of this general method:
(define (sqrt x)
(fixed-point-of-transform (lambda (y) (/ x y))
average-damp
1.0))
Similarly, we can express the second square-root computation from this section (an instance of
Newton’s method that finds a fixed point of the Newton transform of y ↦ y^2 - x) as
(define (sqrt x)
(fixed-point-of-transform (lambda (y) (- (square y) x))
newton-transform
1.0))
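Following the same pattern, the cube-root procedure of the previous subsection can also be recast with this abstraction (a straightforward combination, of our own, of the pieces defined above):

(define (cube-root x)
  (fixed-point-of-transform (lambda (y) (/ x (square y)))
                            average-damp
                            1.0))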
We began section 1.3 with the observation that compound procedures are a crucial abstraction
mechanism, because they permit us to express general methods of computing as explicit elements in
our programming language. Now we’ve seen how higher-order procedures permit us to manipulate
these general methods to create further abstractions.
As programmers, we should be alert to opportunities to identify the underlying abstractions in our
programs and to build upon them and generalize them to create more powerful abstractions. This is not
to say that one should always write programs in the most abstract way possible; expert programmers
know how to choose the level of abstraction appropriate to their task. But it is important to be able to
think in terms of these abstractions, so that we can be ready to apply them in new contexts. The
significance of higher-order procedures is that they enable us to represent these abstractions explicitly
as elements in our programming language, so that they can be handled just like other computational
elements.
In general, programming languages impose restrictions on the ways in which computational elements
can be manipulated. Elements with the fewest restrictions are said to have first-class status. Some of
the ‘‘rights and privileges’’ of first-class elements are:64
They may be named by variables.
They may be passed as arguments to procedures.
They may be returned as the results of procedures.
They may be included in data structures.65
Lisp, unlike other common programming languages, awards procedures full first-class status. This
poses challenges for efficient implementation, but the resulting gain in expressive power is
enormous.66
Exercise 1.40. Define a procedure cubic that can be used together with the newtons-method
procedure in expressions of the form
(newtons-method (cubic a b c) 1)
to approximate zeros of the cubic x^3 + ax^2 + bx + c.
Exercise 1.41. Define a procedure double that takes a procedure of one argument as argument and
returns a procedure that applies the original procedure twice. For example, if inc is a procedure that
adds 1 to its argument, then (double inc) should be a procedure that adds 2. What value is
returned by
(((double (double double)) inc) 5)
Exercise 1.42. Let f and g be two one-argument functions. The composition f after g is defined to be
the function x ↦ f(g(x)). Define a procedure compose that implements composition. For example, if
inc is a procedure that adds 1 to its argument,
((compose square inc) 6)
49
Exercise 1.43. If f is a numerical function and n is a positive integer, then we can form the nth
repeated application of f, which is defined to be the function whose value at x is f(f(...(f(x))...)).
For example, if f is the function x ↦ x + 1, then the nth repeated application of f is the function x ↦ x + n. If f is the operation of squaring a number, then the nth repeated application of f is the function that raises its argument to the 2^n th power. Write a procedure that takes as inputs a procedure that computes
f and a positive integer n and returns the procedure that computes the nth repeated application of f.
Your procedure should be able to be used as follows:
((repeated square 2) 5)
625
Hint: You may find it convenient to use compose from exercise 1.42.
Exercise 1.44. The idea of smoothing a function is an important concept in signal processing. If f is a
function and dx is some small number, then the smoothed version of f is the function whose value at a
point x is the average of f(x - dx), f(x), and f(x + dx). Write a procedure smooth that takes as input a
procedure that computes f and returns a procedure that computes the smoothed f. It is sometimes
valuable to repeatedly smooth a function (that is, smooth the smoothed function, and so on) to obtain the n-fold smoothed function. Show how to generate the n-fold smoothed function of any
given function using smooth and repeated from exercise 1.43.
Exercise 1.45. We saw in section 1.3.3 that attempting to compute square roots by naively finding a fixed point of y ↦ x/y does not converge, and that this can be fixed by average damping. The same method works for finding cube roots as fixed points of the average-damped y ↦ x/y^2. Unfortunately, the process does not work for fourth roots -- a single average damp is not enough to make a fixed-point search for y ↦ x/y^3 converge. On the other hand, if we average damp twice (i.e., use the average damp of the average damp of y ↦ x/y^3) the fixed-point search does converge. Do some experiments to determine how many average damps are required to compute nth roots as a fixed-point search based upon repeated average damping of y ↦ x/y^(n-1). Use this to implement a simple procedure
for computing nth roots using fixed-point, average-damp, and the repeated procedure of
exercise 1.43. Assume that any arithmetic operations you need are available as primitives.
Exercise 1.46. Several of the numerical methods described in this chapter are instances of an
extremely general computational strategy known as iterative improvement. Iterative improvement says
that, to compute something, we start with an initial guess for the answer, test if the guess is good
enough, and otherwise improve the guess and continue the process using the improved guess as the
new guess. Write a procedure iterative-improve that takes two procedures as arguments: a
method for telling whether a guess is good enough and a method for improving a guess.
Iterative-improve should return as its value a procedure that takes a guess as argument and
keeps improving the guess until it is good enough. Rewrite the sqrt procedure of section 1.1.7 and
the fixed-point procedure of section 1.3.3 in terms of iterative-improve.
49 This series, usually written in the equivalent form (π/4) = 1 - (1/3) + (1/5) - (1/7) + ···, is due to
Leibniz. We’ll see how to use this as the basis for some fancy numerical tricks in section 3.5.3.
50 Notice that we have used block structure (section 1.1.8) to embed the definitions of pi-next and
pi-term within pi-sum, since these procedures are unlikely to be useful for any other purpose. We
will see how to get rid of them altogether in section 1.3.2.
51 The intent of exercises 1.31-1.33 is to demonstrate the expressive power that is attained by using an
appropriate abstraction to consolidate many seemingly disparate operations. However, though
accumulation and filtering are elegant ideas, our hands are somewhat tied in using them at this point
since we do not yet have data structures to provide suitable means of combination for these
abstractions. We will return to these ideas in section 2.2.3 when we show how to use sequences as
interfaces for combining filters and accumulators to build even more powerful abstractions. We will
see there how these methods really come into their own as a powerful and elegant approach to
designing programs.
52 This formula was discovered by the seventeenth-century English mathematician John Wallis.
53 It would be clearer and less intimidating to people learning Lisp if a name more obvious than
lambda, such as make-procedure, were used. But the convention is firmly entrenched. The
notation is adopted from the λ-calculus, a mathematical formalism introduced by the mathematical
logician Alonzo Church (1941). Church developed the λ-calculus to provide a rigorous foundation for
studying the notions of function and function application. The λ-calculus has become a basic tool for
mathematical investigations of the semantics of programming languages.
54 Understanding internal definitions well enough to be sure a program means what we intend it to
mean requires a more elaborate model of the evaluation process than we have presented in this
chapter. The subtleties do not arise with internal definitions of procedures, however. We will return to
this issue in section 4.1.6, after we learn more about evaluation.
55 We have used 0.001 as a representative ‘‘small’’ number to indicate a tolerance for the acceptable
error in a calculation. The appropriate tolerance for a real calculation depends upon the problem to be
solved and the limitations of the computer and the algorithm. This is often a very subtle consideration,
requiring help from a numerical analyst or some other kind of magician.
56 This can be accomplished using error, which takes as arguments a number of items that are
printed as error messages.
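For instance, the half-interval search of section 1.3.3 reports misuse of its endpoints with a call such as

(error "Values are not of opposite sign" a b)

which prints the message followed by the values of a and b.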
57 Try this during a boring lecture: Set your calculator to radians mode and then repeatedly press the
cos button until you obtain the fixed point.
58 ↦ (pronounced ‘‘maps to’’) is the mathematician’s way of writing lambda. y ↦ x/y means
(lambda (y) (/ x y)), that is, the function whose value at y is x/y.
59 Observe that this is a combination whose operator is itself a combination. Exercise 1.4 already
demonstrated the ability to form such combinations, but that was only a toy example. Here we begin to
see the real need for such combinations -- when applying a procedure that is obtained as the value
returned by a higher-order procedure.
60 See exercise 1.45 for a further generalization.
61 Elementary calculus books usually describe Newton’s method in terms of the sequence of
approximations xₙ₊₁ = xₙ - g(xₙ)/Dg(xₙ). Having language for talking about processes and using the
idea of fixed points simplifies the description of the method.
62 Newton’s method does not always converge to an answer, but it can be shown that in favorable
cases each iteration doubles the number-of-digits accuracy of the approximation to the solution. In
such cases, Newton’s method will converge much more rapidly than the half-interval method.
63 For finding square roots, Newton’s method converges rapidly to the correct solution from any
starting point.
64 The notion of first-class status of programming-language elements is due to the British computer
scientist Christopher Strachey (1916-1975).
65 We’ll see examples of this after we introduce data structures in chapter 2.
66 The major implementation cost of first-class procedures is that allowing procedures to be returned
as values requires reserving storage for a procedure’s free variables even while the procedure is not
executing. In the Scheme implementation we will study in section 4.1, these variables are stored in the
procedure’s environment.
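A minimal sketch of why this storage is needed (our example; make-scaler is an illustrative name):

(define (make-scaler factor)
  (lambda (x) (* x factor)))

(define triple (make-scaler 3))
(triple 10)
30

The procedure triple refers to the free variable factor long after the call to make-scaler has returned, so the binding of factor must be kept in the procedure’s environment.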
Chapter 2
Building Abstractions with Data
We now come to the decisive step of mathematical
abstraction: we forget about what the symbols stand for.
...[The mathematician] need not be idle; there are many
operations which he may carry out with these symbols,
without ever having to look at the things they stand for.
Hermann Weyl, The Mathematical Way of Thinking
We concentrated in chapter 1 on computational processes and on the role of procedures in program
design. We saw how to use primitive data (numbers) and primitive operations (arithmetic operations),
how to combine procedures to form compound procedures through composition, conditionals, and the
use of parameters, and how to abstract procedures by using define. We saw that a procedure can be
regarded as a pattern for the local evolution of a process, and we classified, reasoned about, and
performed simple algorithmic analyses of some common patterns for processes as embodied in
procedures. We also saw that higher-order procedures enhance the power of our language by enabling
us to manipulate, and thereby to reason in terms of, general methods of computation. This is much of
the essence of programming.
In this chapter we are going to look at more complex data. All the procedures in chapter 1 operate on
simple numerical data, and simple data are not sufficient for many of the problems we wish to address
using computation. Programs are typically designed to model complex phenomena, and more often
than not one must construct computational objects that have several parts in order to model real-world
phenomena that have several aspects. Thus, whereas our focus in chapter 1 was on building
abstractions by combining procedures to form compound procedures, we turn in this chapter to another
key aspect of any programming language: the means it provides for building abstractions by
combining data objects to form compound data.
Why do we want compound data in a programming language? For the same reasons that we want
compound procedures: to elevate the conceptual level at which we can design our programs, to
increase the modularity of our designs, and to enhance the expressive power of our language. Just as
the ability to define procedures enables us to deal with processes at a higher conceptual level than that
of the primitive operations of the language, the ability to construct compound data objects enables us
to deal with data at a higher conceptual level than that of the primitive data objects of the language.
Consider the task of designing a system to perform arithmetic with rational numbers. We could
imagine an operation add-rat that takes two rational numbers and produces their sum. In terms of
simple data, a rational number can be thought of as two integers: a numerator and a denominator.
Thus, we could design a program in which each rational number would be represented by two integers
(a numerator and a denominator) and where add-rat would be implemented by two procedures (one
producing the numerator of the sum and one producing the denominator). But this would be awkward,
because we would then need to explicitly keep track of which numerators corresponded to which
denominators. In a system intended to perform many operations on many rational numbers, such
bookkeeping details would clutter the programs substantially, to say nothing of what they would do to
our minds. It would be much better if we could ‘‘glue together’’ a numerator and denominator to form
a pair -- a compound data object -- that our programs could manipulate in a way that would be
consistent with regarding a rational number as a single conceptual unit.
The use of compound data also enables us to increase the modularity of our programs. If we can
manipulate rational numbers directly as objects in their own right, then we can separate the part of our
program that deals with rational numbers per se from the details of how rational numbers may be
represented as pairs of integers. The general technique of isolating the parts of a program that deal
with how data objects are represented from the parts of a program that deal with how data objects are
used is a powerful design methodology called data abstraction. We will see how data abstraction
makes programs much easier to design, maintain, and modify.
The use of compound data leads to a real increase in the expressive power of our programming
language. Consider the idea of forming a ‘‘linear combination’’ ax + by. We might like to write a
procedure that would accept a, b, x, and y as arguments and return the value of ax + by. This presents
no difficulty if the arguments are to be numbers, because we can readily define the procedure
(define (linear-combination a b x y)
(+ (* a x) (* b y)))
But suppose we are not concerned only with numbers. Suppose we would like to express, in
procedural terms, the idea that one can form linear combinations whenever addition and multiplication
are defined -- for rational numbers, complex numbers, polynomials, or whatever. We could express
this as a procedure of the form
(define (linear-combination a b x y)
(add (mul a x) (mul b y)))
where add and mul are not the primitive procedures + and * but rather more complex things that will
perform the appropriate operations for whatever kinds of data we pass in as the arguments a, b, x, and
y. The key point is that the only thing linear-combination should need to know about a, b, x,
and y is that the procedures add and mul will perform the appropriate manipulations. From the
perspective of the procedure linear-combination, it is irrelevant what a, b, x, and y are and
even more irrelevant how they might happen to be represented in terms of more primitive data. This
same example shows why it is important that our programming language provide the ability to
manipulate compound objects directly: Without this, there is no way for a procedure such as
linear-combination to pass its arguments along to add and mul without having to know their
detailed structure.1

We begin this chapter by implementing the rational-number arithmetic system
mentioned above. This will form the background for our discussion of compound data and data
abstraction. As with compound procedures, the main issue to be addressed is that of abstraction as a
technique for coping with complexity, and we will see how data abstraction enables us to erect suitable
abstraction barriers between different parts of a program.
We will see that the key to forming compound data is that a programming language should provide
some kind of ‘‘glue’’ so that data objects can be combined to form more complex data objects. There
are many possible kinds of glue. Indeed, we will discover how to form compound data using no
special ‘‘data’’ operations at all, only procedures. This will further blur the distinction between
‘‘procedure’’ and ‘‘data,’’ which was already becoming tenuous toward the end of chapter 1. We will
also explore some conventional techniques for representing sequences and trees. One key idea in
dealing with compound data is the notion of closure -- that the glue we use for combining data objects
should allow us to combine not only primitive data objects, but compound data objects as well.
Another key idea is that compound data objects can serve as conventional interfaces for combining
program modules in mix-and-match ways. We illustrate some of these ideas by presenting a simple
graphics language that exploits closure.
We will then augment the representational power of our language by introducing symbolic expressions
-- data whose elementary parts can be arbitrary symbols rather than only numbers. We explore various
alternatives for representing sets of objects. We will find that, just as a given numerical function can
be computed by many different computational processes, there are many ways in which a given data
structure can be represented in terms of simpler objects, and the choice of representation can have
significant impact on the time and space requirements of processes that manipulate the data. We will
investigate these ideas in the context of symbolic differentiation, the representation of sets, and the
encoding of information.
Next we will take up the problem of working with data that may be represented differently by different
parts of a program. This leads to the need to implement generic operations, which must handle many
different types of data. Maintaining modularity in the presence of generic operations requires more
powerful abstraction barriers than can be erected with simple data abstraction alone. In particular, we
introduce data-directed programming as a technique that allows individual data representations to be
designed in isolation and then combined additively (i.e., without modification). To illustrate the power
of this approach to system design, we close the chapter by applying what we have learned to the
implementation of a package for performing symbolic arithmetic on polynomials, in which the
coefficients of the polynomials can be integers, rational numbers, complex numbers, and even other
polynomials.
1 The ability to directly manipulate procedures provides an analogous increase in the expressive
power of a programming language. For example, in section 1.3.1 we introduced the sum procedure,
which takes a procedure term as an argument and computes the sum of the values of term over
some specified interval. In order to define sum, it is crucial that we be able to speak of a procedure
such as term as an entity in its own right, without regard for how term might be expressed with
more primitive operations. Indeed, if we did not have the notion of ‘‘a procedure,’’ it is doubtful that
we would ever even think of the possibility of defining an operation such as sum. Moreover, insofar as
performing the summation is concerned, the details of how term may be constructed from more
primitive operations are irrelevant.
2.1 Introduction to Data Abstraction
In section 1.1.8, we noted that a procedure used as an element in creating a more complex procedure
could be regarded not only as a collection of particular operations but also as a procedural abstraction.
That is, the details of how the procedure was implemented could be suppressed, and the particular
procedure itself could be replaced by any other procedure with the same overall behavior. In other
words, we could make an abstraction that would separate the way the procedure would be used from
the details of how the procedure would be implemented in terms of more primitive procedures. The
analogous notion for compound data is called data abstraction. Data abstraction is a methodology that
enables us to isolate how a compound data object is used from the details of how it is constructed from
more primitive data objects.
The basic idea of data abstraction is to structure the programs that are to use compound data objects so
that they operate on ‘‘abstract data.’’ That is, our programs should use data in such a way as to make
no assumptions about the data that are not strictly necessary for performing the task at hand. At the
same time, a ‘‘concrete’’ data representation is defined independent of the programs that use the data.
The interface between these two parts of our system will be a set of procedures, called selectors and
constructors, that implement the abstract data in terms of the concrete representation. To illustrate this
technique, we will consider how to design a set of procedures for manipulating rational numbers.
2.1.1 Example: Arithmetic Operations for Rational Numbers
Suppose we want to do arithmetic with rational numbers. We want to be able to add, subtract,
multiply, and divide them and to test whether two rational numbers are equal.
Let us begin by assuming that we already have a way of constructing a rational number from a
numerator and a denominator. We also assume that, given a rational number, we have a way of
extracting (or selecting) its numerator and its denominator. Let us further assume that the constructor
and selectors are available as procedures:
(make-rat <n> <d>) returns the rational number whose numerator is the integer <n> and
whose denominator is the integer <d>.
(numer <x>) returns the numerator of the rational number <x>.
(denom <x>) returns the denominator of the rational number <x>.
We are using here a powerful strategy of synthesis: wishful thinking. We haven’t yet said how a
rational number is represented, or how the procedures numer, denom, and make-rat should be
implemented. Even so, if we did have these three procedures, we could then add, subtract, multiply,
divide, and test equality by using the following relations:

n₁/d₁ + n₂/d₂ = (n₁d₂ + n₂d₁)/(d₁d₂)
n₁/d₁ - n₂/d₂ = (n₁d₂ - n₂d₁)/(d₁d₂)
(n₁/d₁) · (n₂/d₂) = (n₁n₂)/(d₁d₂)
(n₁/d₁)/(n₂/d₂) = (n₁d₂)/(d₁n₂)
n₁/d₁ = n₂/d₂ if and only if n₁d₂ = n₂d₁

We can express these rules as procedures:
(define (add-rat x y)
(make-rat (+ (* (numer x) (denom y))
(* (numer y) (denom x)))
(* (denom x) (denom y))))
(define (sub-rat x y)
(make-rat (- (* (numer x) (denom y))
(* (numer y) (denom x)))
(* (denom x) (denom y))))
(define (mul-rat x y)
(make-rat (* (numer x) (numer y))
(* (denom x) (denom y))))
(define (div-rat x y)
(make-rat (* (numer x) (denom y))
(* (denom x) (numer y))))
(define (equal-rat? x y)
(= (* (numer x) (denom y))
(* (numer y) (denom x))))
Now we have the operations on rational numbers defined in terms of the selector and constructor
procedures numer, denom, and make-rat. But we haven’t yet defined these. What we need is some
way to glue together a numerator and a denominator to form a rational number.
Pairs
To enable us to implement the concrete level of our data abstraction, our language provides a
compound structure called a pair, which can be constructed with the primitive procedure cons. This
procedure takes two arguments and returns a compound data object that contains the two arguments as
parts. Given a pair, we can extract the parts using the primitive procedures car and cdr.2 Thus, we
can use cons, car, and cdr as follows:
(define x (cons 1 2))
(car x)
1
(cdr x)
2
Notice that a pair is a data object that can be given a name and manipulated, just like a primitive data
object. Moreover, cons can be used to form pairs whose elements are pairs, and so on:
(define x (cons 1 2))
(define y (cons 3 4))
(define z (cons x y))
(car (car z))
1
(car (cdr z))
3
In section 2.2 we will see how this ability to combine pairs means that pairs can be used as
general-purpose building blocks to create all sorts of complex data structures. The single
compound-data primitive pair, implemented by the procedures cons, car, and cdr, is the only glue
we need. Data objects constructed from pairs are called list-structured data.
Representing rational numbers
Pairs offer a natural way to complete the rational-number system. Simply represent a rational number
as a pair of two integers: a numerator and a denominator. Then make-rat, numer, and denom are
readily implemented as follows:3
(define (make-rat n d) (cons n d))
(define (numer x) (car x))
(define (denom x) (cdr x))
Also, in order to display the results of our computations, we can print rational numbers by printing the
numerator, a slash, and the denominator:4
(define (print-rat x)
(newline)
(display (numer x))
(display "/")
(display (denom x)))
Now we can try our rational-number procedures:
(define one-half (make-rat 1 2))
(print-rat one-half)
1/2
(define one-third (make-rat 1 3))
(print-rat (add-rat one-half one-third))
5/6
(print-rat (mul-rat one-half one-third))
1/6
(print-rat (add-rat one-third one-third))
6/9
As the final example shows, our rational-number implementation does not reduce rational numbers to
lowest terms. We can remedy this by changing make-rat. If we have a gcd procedure like the one
in section 1.2.5 that produces the greatest common divisor of two integers, we can use gcd to reduce
the numerator and the denominator to lowest terms before constructing the pair:
(define (make-rat n d)
(let ((g (gcd n d)))
(cons (/ n g) (/ d g))))
Now we have
(print-rat (add-rat one-third one-third))
2/3
as desired. This modification was accomplished by changing the constructor make-rat without
changing any of the procedures (such as add-rat and mul-rat) that implement the actual
operations.
Exercise 2.1. Define a better version of make-rat that handles both positive and negative
arguments. Make-rat should normalize the sign so that if the rational number is positive, both the
numerator and denominator are positive, and if the rational number is negative, only the numerator is
negative.
2.1.2 Abstraction Barriers
Before continuing with more examples of compound data and data abstraction, let us consider some of
the issues raised by the rational-number example. We defined the rational-number operations in terms
of a constructor make-rat and selectors numer and denom. In general, the underlying idea of data
abstraction is to identify for each type of data object a basic set of operations in terms of which all
manipulations of data objects of that type will be expressed, and then to use only those operations in
manipulating the data.
We can envision the structure of the rational-number system as shown in figure 2.1. The horizontal
lines represent abstraction barriers that isolate different ‘‘levels’’ of the system. At each level, the
barrier separates the programs (above) that use the data abstraction from the programs (below) that
implement the data abstraction. Programs that use rational numbers manipulate them solely in terms of
the procedures supplied ‘‘for public use’’ by the rational-number package: add-rat, sub-rat,
mul-rat, div-rat, and equal-rat?. These, in turn, are implemented solely in terms of the
constructor and selectors make-rat, numer, and denom, which themselves are implemented in
terms of pairs. The details of how pairs are implemented are irrelevant to the rest of the
rational-number package so long as pairs can be manipulated by the use of cons, car, and cdr. In
effect, procedures at each level are the interfaces that define the abstraction barriers and connect the
different levels.
Figure 2.1: Data-abstraction barriers in the rational-number package.
This simple idea has many advantages. One advantage is that it makes programs much easier to
maintain and to modify. Any complex data structure can be represented in a variety of ways with the
primitive data structures provided by a programming language. Of course, the choice of representation
influences the programs that operate on it; thus, if the representation were to be changed at some later
time, all such programs might have to be modified accordingly. This task could be time-consuming
and expensive in the case of large programs unless the dependence on the representation were to be
confined by design to a very few program modules.
For example, an alternate way to address the problem of reducing rational numbers to lowest terms is
to perform the reduction whenever we access the parts of a rational number, rather than when we
construct it. This leads to different constructor and selector procedures:
(define (make-rat n d)
(cons n d))
(define (numer x)
(let ((g (gcd (car x) (cdr x))))
(/ (car x) g)))
(define (denom x)
(let ((g (gcd (car x) (cdr x))))
(/ (cdr x) g)))
The difference between this implementation and the previous one lies in when we compute the gcd. If
in our typical use of rational numbers we access the numerators and denominators of the same rational
numbers many times, it would be preferable to compute the gcd when the rational numbers are
constructed. If not, we may be better off waiting until access time to compute the gcd. In any case,
when we change from one representation to the other, the procedures add-rat, sub-rat, and so on
do not have to be modified at all.
Constraining the dependence on the representation to a few interface procedures helps us design
programs as well as modify them, because it allows us to maintain the flexibility to consider alternate
implementations. To continue with our simple example, suppose we are designing a rational-number
package and we can’t decide initially whether to perform the gcd at construction time or at selection
time. The data-abstraction methodology gives us a way to defer that decision without losing the ability
to make progress on the rest of the system.
Exercise 2.2. Consider the problem of representing line segments in a plane. Each segment is
represented as a pair of points: a starting point and an ending point. Define a constructor
make-segment and selectors start-segment and end-segment that define the representation
of segments in terms of points. Furthermore, a point can be represented as a pair of numbers: the x
coordinate and the y coordinate. Accordingly, specify a constructor make-point and selectors
x-point and y-point that define this representation. Finally, using your selectors and
constructors, define a procedure midpoint-segment that takes a line segment as argument and
returns its midpoint (the point whose coordinates are the average of the coordinates of the endpoints).
To try your procedures, you’ll need a way to print points:
(define (print-point p)
(newline)
(display "(")
(display (x-point p))
(display ",")
(display (y-point p))
(display ")"))
Exercise 2.3. Implement a representation for rectangles in a plane. (Hint: You may want to make use
of exercise 2.2.) In terms of your constructors and selectors, create procedures that compute the
perimeter and the area of a given rectangle. Now implement a different representation for rectangles.
Can you design your system with suitable abstraction barriers, so that the same perimeter and area
procedures will work using either representation?
2.1.3 What Is Meant by Data?
We began the rational-number implementation in section 2.1.1 by implementing the rational-number
operations add-rat, sub-rat, and so on in terms of three unspecified procedures: make-rat,
numer, and denom. At that point, we could think of the operations as being defined in terms of data
objects — numerators, denominators, and rational numbers — whose behavior was specified by the
latter three procedures.
But exactly what is meant by data? It is not enough to say ‘‘whatever is implemented by the given
selectors and constructors.’’ Clearly, not every arbitrary set of three procedures can serve as an
appropriate basis for the rational-number implementation. We need to guarantee that, if we construct a
rational number x from a pair of integers n and d, then extracting the numer and the denom of x and
dividing them should yield the same result as dividing n by d. In other words, make-rat, numer,
and denom must satisfy the condition that, for any integer n and any non-zero integer d, if x is
(make-rat n d), then

(numer x)/(denom x) = n/d
In fact, this is the only condition make-rat, numer, and denom must fulfill in order to form a
suitable basis for a rational-number representation. In general, we can think of data as defined by some
collection of selectors and constructors, together with specified conditions that these procedures must
fulfill in order to be a valid representation.5
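One way to make such conditions concrete is as a predicate against which a candidate implementation can be tested. Here is a sketch (ours; valid-rat? is an illustrative name) that checks the rational-number condition by cross-multiplying, so that no division is needed:

(define (valid-rat? n d)
  ;; the condition (numer x)/(denom x) = n/d,
  ;; checked as (numer x) * d = n * (denom x)
  (let ((x (make-rat n d)))
    (= (* (numer x) d) (* n (denom x)))))

(valid-rat? 3 4)  ; true for any correct implementation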
This point of view can serve to define not only ‘‘high-level’’ data objects, such as rational numbers,
but lower-level objects as well. Consider the notion of a pair, which we used in order to define our
rational numbers. We never actually said what a pair was, only that the language supplied procedures
cons, car, and cdr for operating on pairs. But the only thing we need to know about these three
operations is that if we glue two objects together using cons we can retrieve the objects using car
and cdr. That is, the operations satisfy the condition that, for any objects x and y, if z is (cons x
y) then (car z) is x and (cdr z) is y. Indeed, we mentioned that these three procedures are
included as primitives in our language. However, any triple of procedures that satisfies the above
condition can be used as the basis for implementing pairs. This point is illustrated strikingly by the fact
that we could implement cons, car, and cdr without using any data structures at all but only using
procedures. Here are the definitions:
(define (cons x y)
(define (dispatch m)
(cond ((= m 0) x)
((= m 1) y)
(else (error "Argument not 0 or 1 -- CONS" m))))
dispatch)
(define (car z) (z 0))
(define (cdr z) (z 1))
This use of procedures corresponds to nothing like our intuitive notion of what data should be.
Nevertheless, all we need to do to show that this is a valid way to represent pairs is to verify that these
procedures satisfy the condition given above.
The subtle point to notice is that the value returned by (cons x y) is a procedure — namely the
internally defined procedure dispatch, which takes one argument and returns either x or y
depending on whether the argument is 0 or 1. Correspondingly, (car z) is defined to apply z to 0.
Hence, if z is the procedure formed by (cons x y), then z applied to 0 will yield x. Thus, we have
shown that (car (cons x y)) yields x, as desired. Similarly, (cdr (cons x y)) applies the
procedure returned by (cons x y) to 1, which returns y. Therefore, this procedural implementation
of pairs is a valid implementation, and if we access pairs using only cons, car, and cdr we cannot
distinguish this implementation from one that uses ‘‘real’’ data structures.
The point of exhibiting the procedural representation of pairs is not that our language works this way
(Scheme, and Lisp systems in general, implement pairs directly, for efficiency reasons) but that it
could work this way. The procedural representation, although obscure, is a perfectly adequate way to
represent pairs, since it fulfills the only conditions that pairs need to fulfill. This example also
demonstrates that the ability to manipulate procedures as objects automatically provides the ability to
represent compound data. This may seem a curiosity now, but procedural representations of data will
play a central role in our programming repertoire. This style of programming is often called message
passing, and we will be using it as a basic tool in chapter 3 when we address the issues of modeling
and simulation.
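Tracing one case with the substitution model of section 1.1.5 (our annotation) makes the verification concrete; each line reduces to the one below it:

(car (cons 3 4))
(car dispatch)        ; dispatch retains x = 3 and y = 4
(dispatch 0)          ; car applies its argument to 0
3                     ; (= m 0) holds, so dispatch returns x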
Exercise 2.4. Here is an alternative procedural representation of pairs. For this representation, verify
that (car (cons x y)) yields x for any objects x and y.
(define (cons x y)
(lambda (m) (m x y)))
(define (car z)
(z (lambda (p q) p)))
What is the corresponding definition of cdr? (Hint: To verify that this works, make use of the
substitution model of section 1.1.5.)
Exercise 2.5. Show that we can represent pairs of nonnegative integers using only numbers and
arithmetic operations if we represent the pair a and b as the integer that is the product 2ᵃ3ᵇ. Give the
corresponding definitions of the procedures cons, car, and cdr.
Exercise 2.6. In case representing pairs as procedures wasn’t mind-boggling enough, consider that, in
a language that can manipulate procedures, we can get by without numbers (at least insofar as
nonnegative integers are concerned) by implementing 0 and the operation of adding 1 as
(define zero (lambda (f) (lambda (x) x)))
(define (add-1 n)
(lambda (f) (lambda (x) (f ((n f) x)))))
This representation is known as Church numerals, after its inventor, Alonzo Church, the logician who
invented the λ-calculus.
Define one and two directly (not in terms of zero and add-1). (Hint: Use substitution to evaluate
(add-1 zero)). Give a direct definition of the addition procedure + (not in terms of repeated
application of add-1).
2.1.4 Extended Exercise: Interval Arithmetic
Alyssa P. Hacker is designing a system to help people solve engineering problems. One feature she
wants to provide in her system is the ability to manipulate inexact quantities (such as measured
parameters of physical devices) with known precision, so that when computations are done with such
approximate quantities the results will be numbers of known precision.
Electrical engineers will be using Alyssa’s system to compute electrical quantities. It is sometimes
necessary for them to compute the value of a parallel equivalent resistance Rp of two resistors R1 and
R2 using the formula

Rp = 1/(1/R1 + 1/R2)
Resistance values are usually known only up to some tolerance guaranteed by the manufacturer of the
resistor. For example, if you buy a resistor labeled ‘‘6.8 ohms with 10% tolerance’’ you can only be
sure that the resistor has a resistance between 6.8 - 0.68 = 6.12 and 6.8 + 0.68 = 7.48 ohms. Thus, if
you have a 6.8-ohm 10% resistor in parallel with a 4.7-ohm 5% resistor, the resistance of the
combination can range from about 2.58 ohms (if the two resistors are at the lower bounds) to about
2.97 ohms (if the two resistors are at the upper bounds).
Alyssa’s idea is to implement ‘‘interval arithmetic’’ as a set of arithmetic operations for combining
‘‘intervals’’ (objects that represent the range of possible values of an inexact quantity). The result of
adding, subtracting, multiplying, or dividing two intervals is itself an interval, representing the range
of the result.
Alyssa postulates the existence of an abstract object called an ‘‘interval’’ that has two endpoints: a
lower bound and an upper bound. She also presumes that, given the endpoints of an interval, she can
construct the interval using the data constructor make-interval. Alyssa first writes a procedure for
adding two intervals. She reasons that the minimum value the sum could be is the sum of the two
lower bounds and the maximum value it could be is the sum of the two upper bounds:
(define (add-interval x y)
(make-interval (+ (lower-bound x) (lower-bound y))
(+ (upper-bound x) (upper-bound y))))
Alyssa also works out the product of two intervals by finding the minimum and the maximum of the
products of the bounds and using them as the bounds of the resulting interval. (Min and max are
primitives that find the minimum or maximum of any number of arguments.)
(define (mul-interval x y)
(let ((p1 (* (lower-bound x) (lower-bound y)))
(p2 (* (lower-bound x) (upper-bound y)))
(p3 (* (upper-bound x) (lower-bound y)))
(p4 (* (upper-bound x) (upper-bound y))))
(make-interval (min p1 p2 p3 p4)
(max p1 p2 p3 p4))))
To divide two intervals, Alyssa multiplies the first by the reciprocal of the second. Note that the
bounds of the reciprocal interval are the reciprocal of the upper bound and the reciprocal of the lower
bound, in that order.
(define (div-interval x y)
(mul-interval x
(make-interval (/ 1.0 (upper-bound y))
(/ 1.0 (lower-bound y)))))
Exercise 2.7. Alyssa’s program is incomplete because she has not specified the implementation of the
interval abstraction. Here is a definition of the interval constructor:
(define (make-interval a b) (cons a b))
Define selectors upper-bound and lower-bound to complete the implementation.
Exercise 2.8. Using reasoning analogous to Alyssa’s, describe how the difference of two intervals
may be computed. Define a corresponding subtraction procedure, called sub-interval.
Exercise 2.9. The width of an interval is half of the difference between its upper and lower bounds.
The width is a measure of the uncertainty of the number specified by the interval. For some arithmetic
operations the width of the result of combining two intervals is a function only of the widths of the
argument intervals, whereas for others the width of the combination is not a function of the widths of
the argument intervals. Show that the width of the sum (or difference) of two intervals is a function
only of the widths of the intervals being added (or subtracted). Give examples to show that this is not
true for multiplication or division.
Exercise 2.10. Ben Bitdiddle, an expert systems programmer, looks over Alyssa’s shoulder and
comments that it is not clear what it means to divide by an interval that spans zero. Modify Alyssa’s
code to check for this condition and to signal an error if it occurs.
Exercise 2.11. In passing, Ben also cryptically comments: ‘‘By testing the signs of the endpoints of
the intervals, it is possible to break mul-interval into nine cases, only one of which requires more
than two multiplications.’’ Rewrite this procedure using Ben’s suggestion.
After debugging her program, Alyssa shows it to a potential user, who complains that her program
solves the wrong problem. He wants a program that can deal with numbers represented as a center
value and an additive tolerance; for example, he wants to work with intervals such as 3.5 ± 0.15 rather
than [3.35, 3.65]. Alyssa returns to her desk and fixes this problem by supplying an alternate
constructor and alternate selectors:
(define (make-center-width c w)
(make-interval (- c w) (+ c w)))
(define (center i)
(/ (+ (lower-bound i) (upper-bound i)) 2))
(define (width i)
(/ (- (upper-bound i) (lower-bound i)) 2))
Unfortunately, most of Alyssa’s users are engineers. Real engineering situations usually involve
measurements with only a small uncertainty, measured as the ratio of the width of the interval to the
midpoint of the interval. Engineers usually specify percentage tolerances on the parameters of devices,
as in the resistor specifications given earlier.
Exercise 2.12. Define a constructor make-center-percent that takes a center and a percentage
tolerance and produces the desired interval. You must also define a selector percent that produces
the percentage tolerance for a given interval. The center selector is the same as the one shown
above.
Exercise 2.13. Show that under the assumption of small percentage tolerances there is a simple
formula for the approximate percentage tolerance of the product of two intervals in terms of the
tolerances of the factors. You may simplify the problem by assuming that all numbers are positive.
After considerable work, Alyssa P. Hacker delivers her finished system. Several years later, after she
has forgotten all about it, she gets a frenzied call from an irate user, Lem E. Tweakit. It seems that
Lem has noticed that the formula for parallel resistors can be written in two algebraically equivalent
ways:

R1 R2 / (R1 + R2)

and

1/(1/R1 + 1/R2)
He has written the following two programs, each of which computes the parallel-resistors formula
differently:
(define (par1 r1 r2)
(div-interval (mul-interval r1 r2)
(add-interval r1 r2)))
(define (par2 r1 r2)
(let ((one (make-interval 1 1)))
(div-interval one
(add-interval (div-interval one r1)
(div-interval one r2)))))
Lem complains that Alyssa’s program gives different answers for the two ways of computing. This is a
serious complaint.
Exercise 2.14. Demonstrate that Lem is right. Investigate the behavior of the system on a variety of
arithmetic expressions. Make some intervals A and B, and use them in computing the expressions A/A
and A/B. You will get the most insight by using intervals whose width is a small percentage of the
center value. Examine the results of the computation in center-percent form (see exercise 2.12).
Exercise 2.15. Eva Lu Ator, another user, has also noticed the different intervals computed by
different but algebraically equivalent expressions. She says that a formula to compute with intervals
using Alyssa’s system will produce tighter error bounds if it can be written in such a form that no
variable that represents an uncertain number is repeated. Thus, she says, par2 is a ‘‘better’’ program
for parallel resistances than par1. Is she right? Why?
Exercise 2.16. Explain, in general, why equivalent algebraic expressions may lead to different
answers. Can you devise an interval-arithmetic package that does not have this shortcoming, or is this
task impossible? (Warning: This problem is very difficult.)
2 The name cons stands for ‘‘construct.’’ The names car and cdr derive from the original
implementation of Lisp on the IBM 704. That machine had an addressing scheme that allowed one to
reference the ‘‘address’’ and ‘‘decrement’’ parts of a memory location. Car stands for ‘‘Contents of
Address part of Register’’ and cdr (pronounced ‘‘could-er’’) stands for ‘‘Contents of Decrement part
of Register.’’
3 Another way to define the selectors and constructor is
(define make-rat cons)
(define numer car)
(define denom cdr)
The first definition associates the name make-rat with the value of the expression cons, which is
the primitive procedure that constructs pairs. Thus make-rat and cons are names for the same
primitive constructor.
Defining selectors and constructors in this way is efficient: Instead of make-rat calling cons,
make-rat is cons, so there is only one procedure called, not two, when make-rat is called. On
the other hand, doing this defeats debugging aids that trace procedure calls or put breakpoints on
procedure calls: You may want to watch make-rat being called, but you certainly don’t want to
watch every call to cons.
We have chosen not to use this style of definition in this book.
4 Display is the Scheme primitive for printing data. The Scheme primitive newline starts a new
line for printing. Neither of these procedures returns a useful value, so in the uses of print-rat
below, we show only what print-rat prints, not what the interpreter prints as the value returned by
print-rat.
5 Surprisingly, this idea is very difficult to formulate rigorously. There are two approaches to giving
such a formulation. One, pioneered by C. A. R. Hoare (1972), is known as the method of abstract
models. It formalizes the ‘‘procedures plus conditions’’ specification as outlined in the
rational-number example above. Note that the condition on the rational-number representation was
stated in terms of facts about integers (equality and division). In general, abstract models define new
kinds of data objects in terms of previously defined types of data objects. Assertions about data objects
can therefore be checked by reducing them to assertions about previously defined data objects.
Another approach, introduced by Zilles at MIT, by Goguen, Thatcher, Wagner, and Wright at IBM
(see Thatcher, Wagner, and Wright 1978), and by Guttag at Toronto (see Guttag 1977), is called
algebraic specification. It regards the ‘‘procedures’’ as elements of an abstract algebraic system whose
behavior is specified by axioms that correspond to our ‘‘conditions,’’ and uses the techniques of
abstract algebra to check assertions about data objects. Both methods are surveyed in the paper by
Liskov and Zilles (1975).
2.2 Hierarchical Data and the Closure Property
As we have seen, pairs provide a primitive ‘‘glue’’ that we can use to construct compound data
objects. Figure 2.2 shows a standard way to visualize a pair — in this case, the pair formed by (cons
1 2). In this representation, which is called box-and-pointer notation, each object is shown as a
pointer to a box. The box for a primitive object contains a representation of the object. For example,
the box for a number contains a numeral. The box for a pair is actually a double box, the left part
containing (a pointer to) the car of the pair and the right part containing the cdr.
We have already seen that cons can be used to combine not only numbers but pairs as well. (You
made use of this fact, or should have, in doing exercises 2.2 and 2.3.) As a consequence, pairs provide
a universal building block from which we can construct all sorts of data structures. Figure 2.3 shows
two ways to use pairs to combine the numbers 1, 2, 3, and 4.
Figure 2.2: Box-and-pointer representation of (cons 1 2).
Figure 2.3: Two ways to combine 1, 2, 3, and 4 using pairs.
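The diagram itself is not reproduced in this text; the two structures shown in figure 2.3 correspond to the following expressions (our reconstruction of the figure):

(cons (cons 1 2)
      (cons 3 4))

(cons (cons 1
            (cons 2 3))
      4)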
The ability to create pairs whose elements are pairs is the essence of list structure’s importance as a
representational tool. We refer to this ability as the closure property of cons. In general, an operation
for combining data objects satisfies the closure property if the results of combining things with that
operation can themselves be combined using the same operation.6 Closure is the key to power in any
means of combination because it permits us to create hierarchical structures — structures made up of
parts, which themselves are made up of parts, and so on.
From the outset of chapter 1, we’ve made essential use of closure in dealing with procedures, because
all but the very simplest programs rely on the fact that the elements of a combination can themselves
be combinations. In this section, we take up the consequences of closure for compound data. We
describe some conventional techniques for using pairs to represent sequences and trees, and we exhibit
a graphics language that illustrates closure in a vivid way.7
2.2.1 Representing Sequences
Figure 2.4: The sequence 1, 2, 3, 4 represented as a chain of pairs.
One of the useful structures we can build with pairs is a sequence — an ordered collection of data
objects. There are, of course, many ways to represent sequences in terms of pairs. One particularly
straightforward representation is illustrated in figure 2.4, where the sequence 1, 2, 3, 4 is represented
as a chain of pairs. The car of each pair is the corresponding item in the chain, and the cdr of the
pair is the next pair in the chain. The cdr of the final pair signals the end of the sequence by pointing
to a distinguished value that is not a pair, represented in box-and-pointer diagrams as a diagonal line
and in programs as the value of the variable nil. The entire sequence is constructed by nested cons
operations:
(cons 1
(cons 2
(cons 3
(cons 4 nil))))
Such a sequence of pairs, formed by nested conses, is called a list, and Scheme provides a primitive
called list to help in constructing lists.8 The above sequence could be produced by (list 1 2 3
4). In general,

(list <a1> <a2> ... <an>)

is equivalent to

(cons <a1> (cons <a2> (cons ... (cons <an> nil) ...)))
Lisp systems conventionally print lists by printing the sequence of elements, enclosed in parentheses.
Thus, the data object in figure 2.4 is printed as (1 2 3 4):
(define one-through-four (list 1 2 3 4))
one-through-four
(1 2 3 4)
Be careful not to confuse the expression (list 1 2 3 4) with the list (1 2 3 4), which is the
result obtained when the expression is evaluated. Attempting to evaluate the expression (1 2 3 4)
will signal an error when the interpreter tries to apply the procedure 1 to arguments 2, 3, and 4.
We can think of car as selecting the first item in the list, and of cdr as selecting the sublist
consisting of all but the first item. Nested applications of car and cdr can be used to extract the
second, third, and subsequent items in the list.9 The constructor cons makes a list like the original
one, but with an additional item at the beginning.
(car one-through-four)
1
(cdr one-through-four)
(2 3 4)
(car (cdr one-through-four))
2
(cons 10 one-through-four)
(10 1 2 3 4)
(cons 5 one-through-four)
(5 1 2 3 4)
The value of nil, used to terminate the chain of pairs, can be thought of as a sequence of no elements,
the empty list. The word nil is a contraction of the Latin word nihil, which means ‘‘nothing.’’10
List operations
The use of pairs to represent sequences of elements as lists is accompanied by conventional
programming techniques for manipulating lists by successively ‘‘cdring down’’ the lists. For
example, the procedure list-ref takes as arguments a list and a number n and returns the nth item
of the list. It is customary to number the elements of the list beginning with 0. The method for
computing list-ref is the following:
For n = 0, list-ref should return the car of the list.
Otherwise, list-ref should return the (n - 1)st item of the cdr of the list.
(define (list-ref items n)
(if (= n 0)
(car items)
(list-ref (cdr items) (- n 1))))
(define squares (list 1 4 9 16 25))
(list-ref squares 3)
16
Often we cdr down the whole list. To aid in this, Scheme includes a primitive predicate null?,
which tests whether its argument is the empty list. The procedure length, which returns the number
of items in a list, illustrates this typical pattern of use:
(define (length items)
(if (null? items)
0
(+ 1 (length (cdr items)))))
(define odds (list 1 3 5 7))
(length odds)
4
The length procedure implements a simple recursive plan. The reduction step is:
The length of any list is 1 plus the length of the cdr of the list.
This is applied successively until we reach the base case:
The length of the empty list is 0.
We could also compute length in an iterative style:
(define (length items)
(define (length-iter a count)
(if (null? a)
count
(length-iter (cdr a) (+ 1 count))))
(length-iter items 0))
Another conventional programming technique is to ‘‘cons up’’ an answer list while cdring down a
list, as in the procedure append, which takes two lists as arguments and combines their elements to
make a new list:
(append squares odds)
(1 4 9 16 25 1 3 5 7)
(append odds squares)
(1 3 5 7 1 4 9 16 25)
Append is also implemented using a recursive plan. To append lists list1 and list2, do the
following:
If list1 is the empty list, then the result is just list2.
Otherwise, append the cdr of list1 and list2, and cons the car of list1 onto the
result:
(define (append list1 list2)
(if (null? list1)
list2
(cons (car list1) (append (cdr list1) list2))))
Exercise 2.17. Define a procedure last-pair that returns the list that contains only the last
element of a given (nonempty) list:
(last-pair (list 23 72 149 34))
(34)
Exercise 2.18. Define a procedure reverse that takes a list as argument and returns a list of the
same elements in reverse order:
(reverse (list 1 4 9 16 25))
(25 16 9 4 1)
Exercise 2.19. Consider the change-counting program of section 1.2.2. It would be nice to be able to
easily change the currency used by the program, so that we could compute the number of ways to
change a British pound, for example. As the program is written, the knowledge of the currency is
distributed partly into the procedure first-denomination and partly into the procedure
count-change (which knows that there are five kinds of U.S. coins). It would be nicer to be able to
supply a list of coins to be used for making change.
We want to rewrite the procedure cc so that its second argument is a list of the values of the coins to
use rather than an integer specifying which coins to use. We could then have lists that defined each
kind of currency:
(define us-coins (list 50 25 10 5 1))
(define uk-coins (list 100 50 20 10 5 2 1 0.5))
We could then call cc as follows:
(cc 100 us-coins)
292
To do this will require changing the program cc somewhat. It will still have the same form, but it will
access its second argument differently, as follows:
(define (cc amount coin-values)
(cond ((= amount 0) 1)
((or (< amount 0) (no-more? coin-values)) 0)
(else
(+ (cc amount
(except-first-denomination coin-values))
(cc (- amount
(first-denomination coin-values))
coin-values)))))
Define the procedures first-denomination, except-first-denomination, and
no-more? in terms of primitive operations on list structures. Does the order of the list
coin-values affect the answer produced by cc? Why or why not?
Exercise 2.20. The procedures +, *, and list take arbitrary numbers of arguments. One way to
define such procedures is to use define with dotted-tail notation. In a procedure definition, a
parameter list that has a dot before the last parameter name indicates that, when the procedure is
called, the initial parameters (if any) will have as values the initial arguments, as usual, but the final
parameter’s value will be a list of any remaining arguments. For instance, given the definition
(define (f x y . z) <body>)
the procedure f can be called with two or more arguments. If we evaluate
(f 1 2 3 4 5 6)
then in the body of f, x will be 1, y will be 2, and z will be the list (3 4 5 6). Given the definition
(define (g . w) <body>)
the procedure g can be called with zero or more arguments. If we evaluate
(g 1 2 3 4 5 6)
then in the body of g, w will be the list (1 2 3 4 5 6).11
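As a minimal concrete case of the notation (our example):

(define (g . w) w)

(g 1 2 3)
(1 2 3)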
Use this notation to write a procedure same-parity that takes one or more integers and returns a
list of all the arguments that have the same even-odd parity as the first argument. For example,
(same-parity 1 2 3 4 5 6 7)
(1 3 5 7)
(same-parity 2 3 4 5 6 7)
(2 4 6)
Mapping over lists
One extremely useful operation is to apply some transformation to each element in a list and generate
the list of results. For instance, the following procedure scales each number in a list by a given factor:
(define (scale-list items factor)
(if (null? items)
nil
(cons (* (car items) factor)
(scale-list (cdr items) factor))))
(scale-list (list 1 2 3 4 5) 10)
(10 20 30 40 50)
We can abstract this general idea and capture it as a common pattern expressed as a higher-order
procedure, just as in section 1.3. The higher-order procedure here is called map. Map takes as
arguments a procedure of one argument and a list, and returns a list of the results produced by
applying the procedure to each element in the list:12
(define (map proc items)
(if (null? items)
nil
(cons (proc (car items))
(map proc (cdr items)))))
(map abs (list -10 2.5 -11.6 17))
(10 2.5 11.6 17)
(map (lambda (x) (* x x))
(list 1 2 3 4))
(1 4 9 16)
Now we can give a new definition of scale-list in terms of map:
(define (scale-list items factor)
(map (lambda (x) (* x factor))
items))
Map is an important construct, not only because it captures a common pattern, but because it
establishes a higher level of abstraction in dealing with lists. In the original definition of
scale-list, the recursive structure of the program draws attention to the element-by-element
processing of the list. Defining scale-list in terms of map suppresses that level of detail and
emphasizes that scaling transforms a list of elements to a list of results. The difference between the
two definitions is not that the computer is performing a different process (it isn’t) but that we think
about the process differently. In effect, map helps establish an abstraction barrier that isolates the
implementation of procedures that transform lists from the details of how the elements of the list are
extracted and combined. Like the barriers shown in figure 2.1, this abstraction gives us the flexibility
to change the low-level details of how sequences are implemented, while preserving the conceptual
framework of operations that transform sequences to sequences. Section 2.2.3 expands on this use of
sequences as a framework for organizing programs.
Exercise 2.21. The procedure square-list takes a list of numbers as argument and returns a list
of the squares of those numbers.
(square-list (list 1 2 3 4))
(1 4 9 16)
Here are two different definitions of square-list. Complete both of them by filling in the missing
expressions:
(define (square-list items)
(if (null? items)
nil
(cons <??> <??>)))
(define (square-list items)
(map <??> <??>))
Exercise 2.22. Louis Reasoner tries to rewrite the first square-list procedure of exercise 2.21 so
that it evolves an iterative process:
(define (square-list items)
(define (iter things answer)
(if (null? things)
answer
(iter (cdr things)
(cons (square (car things))
answer))))
(iter items nil))
Unfortunately, defining square-list this way produces the answer list in the reverse order of the
one desired. Why?
Louis then tries to fix his bug by interchanging the arguments to cons:
(define (square-list items)
(define (iter things answer)
(if (null? things)
answer
(iter (cdr things)
(cons answer
(square (car things))))))
(iter items nil))
This doesn’t work either. Explain.
Exercise 2.23. The procedure for-each is similar to map. It takes as arguments a procedure and a
list of elements. However, rather than forming a list of the results, for-each just applies the
procedure to each of the elements in turn, from left to right. The values returned by applying the
procedure to the elements are not used at all — for-each is used with procedures that perform an
action, such as printing. For example,
(for-each (lambda (x) (newline) (display x))
(list 57 321 88))
57
321
88
The value returned by the call to for-each (not illustrated above) can be something arbitrary, such
as true. Give an implementation of for-each.
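A minimal sketch of one such implementation; we arbitrarily return true once the list is exhausted:
(define (for-each proc items)
  (cond ((null? items) true)
        (else (proc (car items))
              (for-each proc (cdr items)))))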
2.2.2 Hierarchical Structures
The representation of sequences in terms of lists generalizes naturally to represent sequences whose
elements may themselves be sequences. For example, we can regard the object ((1 2) 3 4)
constructed by
(cons (list 1 2) (list 3 4))
as a list of three items, the first of which is itself a list, (1 2). Indeed, this is suggested by the form in
which the result is printed by the interpreter. Figure 2.5 shows the representation of this structure in
terms of pairs.
Figure 2.5: Structure formed by (cons (list 1 2) (list 3 4)).
Another way to think of sequences whose elements are sequences is as trees. The elements of the
sequence are the branches of the tree, and elements that are themselves sequences are subtrees.
Figure 2.6 shows the structure in figure 2.5 viewed as a tree.
Figure 2.6: The list structure in figure 2.5 viewed as a tree.
Recursion is a natural tool for dealing with tree structures, since we can often reduce operations on
trees to operations on their branches, which reduce in turn to operations on the branches of the
branches, and so on, until we reach the leaves of the tree. As an example, compare the length
procedure of section 2.2.1 with the count-leaves procedure, which returns the total number of
leaves of a tree:
(define x (cons (list 1 2) (list 3 4)))
(length x)
3
(count-leaves x)
4
(list x x)
(((1 2) 3 4) ((1 2) 3 4))
(length (list x x))
2
(count-leaves (list x x))
8
To implement count-leaves, recall the recursive plan for computing length:
Length of a list x is 1 plus length of the cdr of x.
Length of the empty list is 0.
Count-leaves is similar. The value for the empty list is the same:
Count-leaves of the empty list is 0.
But in the reduction step, where we strip off the car of the list, we must take into account that the
car may itself be a tree whose leaves we need to count. Thus, the appropriate reduction step is
Count-leaves of a tree x is count-leaves of the car of x plus count-leaves of the
cdr of x.
Finally, by taking cars we reach actual leaves, so we need another base case:
Count-leaves of a leaf is 1.
To aid in writing recursive procedures on trees, Scheme provides the primitive predicate pair?,
which tests whether its argument is a pair. Here is the complete procedure:13
(define (count-leaves x)
(cond ((null? x) 0)
((not (pair? x)) 1)
(else (+ (count-leaves (car x))
(count-leaves (cdr x))))))
Exercise 2.24. Suppose we evaluate the expression (list 1 (list 2 (list 3 4))). Give
the result printed by the interpreter, the corresponding box-and-pointer structure, and the interpretation
of this as a tree (as in figure 2.6).
Exercise 2.25. Give combinations of cars and cdrs that will pick 7 from each of the following lists:
(1 3 (5 7) 9)
((7))
(1 (2 (3 (4 (5 (6 7))))))
Exercise 2.26. Suppose we define x and y to be two lists:
(define x (list 1 2 3))
(define y (list 4 5 6))
What result is printed by the interpreter in response to evaluating each of the following expressions:
(append x y)
(cons x y)
(list x y)
Exercise 2.27. Modify your reverse procedure of exercise 2.18 to produce a deep-reverse
procedure that takes a list as argument and returns as its value the list with its elements reversed and
with all sublists deep-reversed as well. For example,
(define x (list (list 1 2) (list 3 4)))
x
((1 2) (3 4))
(reverse x)
((3 4) (1 2))
(deep-reverse x)
((4 3) (2 1))
Exercise 2.28. Write a procedure fringe that takes as argument a tree (represented as a list) and
returns a list whose elements are all the leaves of the tree arranged in left-to-right order. For example,
(define x (list (list 1 2) (list 3 4)))
(fringe x)
(1 2 3 4)
(fringe (list x x))
(1 2 3 4 1 2 3 4)
Exercise 2.29. A binary mobile consists of two branches, a left branch and a right branch. Each
branch is a rod of a certain length, from which hangs either a weight or another binary mobile. We can
represent a binary mobile using compound data by constructing it from two branches (for example,
using list):
(define (make-mobile left right)
(list left right))
A branch is constructed from a length (which must be a number) together with a structure,
which may be either a number (representing a simple weight) or another mobile:
(define (make-branch length structure)
(list length structure))
a. Write the corresponding selectors left-branch and right-branch, which return the
branches of a mobile, and branch-length and branch-structure, which return the
components of a branch.
b. Using your selectors, define a procedure total-weight that returns the total weight of a mobile.
c. A mobile is said to be balanced if the torque applied by its top-left branch is equal to that applied
by its top-right branch (that is, if the length of the left rod multiplied by the weight hanging from that
rod is equal to the corresponding product for the right side) and if each of the submobiles hanging off
its branches is balanced. Design a predicate that tests whether a binary mobile is balanced.
d. Suppose we change the representation of mobiles so that the constructors are
(define (make-mobile left right)
(cons left right))
(define (make-branch length structure)
(cons length structure))
How much do you need to change your programs to convert to the new representation?
Mapping over trees
Just as map is a powerful abstraction for dealing with sequences, map together with recursion is a
powerful abstraction for dealing with trees. For instance, the scale-tree procedure, analogous to
scale-list of section 2.2.1, takes as arguments a numeric factor and a tree whose leaves are
numbers. It returns a tree of the same shape, where each number is multiplied by the factor. The
recursive plan for scale-tree is similar to the one for count-leaves:
(define (scale-tree tree factor)
(cond ((null? tree) nil)
((not (pair? tree)) (* tree factor))
(else (cons (scale-tree (car tree) factor)
(scale-tree (cdr tree) factor)))))
(scale-tree (list 1 (list 2 (list 3 4) 5) (list 6 7))
10)
(10 (20 (30 40) 50) (60 70))
Another way to implement scale-tree is to regard the tree as a sequence of sub-trees and use map.
We map over the sequence, scaling each sub-tree in turn, and return the list of results. In the base case,
where the tree is a leaf, we simply multiply by the factor:
(define (scale-tree tree factor)
(map (lambda (sub-tree)
(if (pair? sub-tree)
(scale-tree sub-tree factor)
(* sub-tree factor)))
tree))
Many tree operations can be implemented by similar combinations of sequence operations and
recursion.
Exercise 2.30. Define a procedure square-tree analogous to the square-list procedure of
exercise 2.21. That is, square-tree should behave as follows:
(square-tree
(list 1
(list 2 (list 3 4) 5)
(list 6 7)))
(1 (4 (9 16) 25) (36 49))
Define square-tree both directly (i.e., without using any higher-order procedures) and also by
using map and recursion.
Exercise 2.31. Abstract your answer to exercise 2.30 to produce a procedure tree-map with the
property that square-tree could be defined as
(define (square-tree tree) (tree-map square tree))
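One way to write such a tree-map, following the map-and-recur pattern of scale-tree above:
(define (tree-map proc tree)
  (map (lambda (sub-tree)
         (if (pair? sub-tree)
             (tree-map proc sub-tree)
             (proc sub-tree)))
       tree))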
Exercise 2.32. We can represent a set as a list of distinct elements, and we can represent the set of all
subsets of the set as a list of lists. For example, if the set is (1 2 3), then the set of all subsets is
(() (3) (2) (2 3) (1) (1 3) (1 2) (1 2 3)). Complete the following definition of a
procedure that generates the set of subsets of a set and give a clear explanation of why it works:
(define (subsets s)
(if (null? s)
(list nil)
(let ((rest (subsets (cdr s))))
(append rest (map <??> rest)))))
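For reference, one expression that completes the definition adjoins (car s) to each subset of the rest: the subsets of s are exactly the subsets of (cdr s), together with those same subsets each extended with (car s):
(define (subsets s)
  (if (null? s)
      (list nil)
      (let ((rest (subsets (cdr s))))
        (append rest
                (map (lambda (x) (cons (car s) x))
                     rest)))))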
2.2.3 Sequences as Conventional Interfaces
In working with compound data, we’ve stressed how data abstraction permits us to design programs
without becoming enmeshed in the details of data representations, and how abstraction preserves for
us the flexibility to experiment with alternative representations. In this section, we introduce another
powerful design principle for working with data structures — the use of conventional interfaces.
In section 1.3 we saw how program abstractions, implemented as higher-order procedures, can capture
common patterns in programs that deal with numerical data. Our ability to formulate analogous
operations for working with compound data depends crucially on the style in which we manipulate our
data structures. Consider, for example, the following procedure, analogous to the count-leaves
procedure of section 2.2.2, which takes a tree as argument and computes the sum of the squares of the
leaves that are odd:
(define (sum-odd-squares tree)
(cond ((null? tree) 0)
((not (pair? tree))
(if (odd? tree) (square tree) 0))
(else (+ (sum-odd-squares (car tree))
(sum-odd-squares (cdr tree))))))
On the surface, this procedure is very different from the following one, which constructs a list of all
the even Fibonacci numbers Fib(k), where k is less than or equal to a given integer n:
(define (even-fibs n)
(define (next k)
(if (> k n)
nil
(let ((f (fib k)))
(if (even? f)
(cons f (next (+ k 1)))
(next (+ k 1))))))
(next 0))
Despite the fact that these two procedures are structurally very different, a more abstract description of
the two computations reveals a great deal of similarity. The first program
enumerates the leaves of a tree;
filters them, selecting the odd ones;
squares each of the selected ones; and
accumulates the results using +, starting with 0.
The second program
enumerates the integers from 0 to n;
computes the Fibonacci number for each integer;
filters them, selecting the even ones; and
accumulates the results using cons, starting with the empty list.
A signal-processing engineer would find it natural to conceptualize these processes in terms of signals
flowing through a cascade of stages, each of which implements part of the program plan, as shown in
figure 2.7. In sum-odd-squares, we begin with an enumerator, which generates a ‘‘signal’’
consisting of the leaves of a given tree. This signal is passed through a filter, which eliminates all but
the odd elements. The resulting signal is in turn passed through a map, which is a ‘‘transducer’’ that
applies the square procedure to each element. The output of the map is then fed to an accumulator,
which combines the elements using +, starting from an initial 0. The plan for even-fibs is
analogous.
Figure 2.7: The signal-flow plans for the procedures sum-odd-squares (top) and even-fibs
(bottom) reveal the commonality between the two programs.
Unfortunately, the two procedure definitions above fail to exhibit this signal-flow structure. For
instance, if we examine the sum-odd-squares procedure, we find that the enumeration is
implemented partly by the null? and pair? tests and partly by the tree-recursive structure of the
procedure. Similarly, the accumulation is found partly in the tests and partly in the addition used in the
recursion. In general, there are no distinct parts of either procedure that correspond to the elements in
the signal-flow description. Our two procedures decompose the computations in a different way,
spreading the enumeration over the program and mingling it with the map, the filter, and the
accumulation. If we could organize our programs to make the signal-flow structure manifest in the
procedures we write, this would increase the conceptual clarity of the resulting code.
Sequence Operations
The key to organizing programs so as to more clearly reflect the signal-flow structure is to concentrate
on the ‘‘signals’’ that flow from one stage in the process to the next. If we represent these signals as
lists, then we can use list operations to implement the processing at each of the stages. For instance,
we can implement the mapping stages of the signal-flow diagrams using the map procedure from
section 2.2.1:
(map square (list 1 2 3 4 5))
(1 4 9 16 25)
Filtering a sequence to select only those elements that satisfy a given predicate is accomplished by
(define (filter predicate sequence)
(cond ((null? sequence) nil)
((predicate (car sequence))
(cons (car sequence)
(filter predicate (cdr sequence))))
(else (filter predicate (cdr sequence)))))
For example,
(filter odd? (list 1 2 3 4 5))
(1 3 5)
Accumulations can be implemented by
(define (accumulate op initial sequence)
(if (null? sequence)
initial
(op (car sequence)
(accumulate op initial (cdr sequence)))))
(accumulate + 0 (list 1 2 3 4 5))
15
(accumulate * 1 (list 1 2 3 4 5))
120
(accumulate cons nil (list 1 2 3 4 5))
(1 2 3 4 5)
All that remains to implement signal-flow diagrams is to enumerate the sequence of elements to be
processed. For even-fibs, we need to generate the sequence of integers in a given range, which we
can do as follows:
(define (enumerate-interval low high)
(if (> low high)
nil
(cons low (enumerate-interval (+ low 1) high))))
(enumerate-interval 2 7)
(2 3 4 5 6 7)
To enumerate the leaves of a tree, we can use14
(define (enumerate-tree tree)
(cond ((null? tree) nil)
((not (pair? tree)) (list tree))
(else (append (enumerate-tree (car tree))
(enumerate-tree (cdr tree))))))
(enumerate-tree (list 1 (list 2 (list 3 4)) 5))
(1 2 3 4 5)
Now we can reformulate sum-odd-squares and even-fibs as in the signal-flow diagrams. For
sum-odd-squares, we enumerate the sequence of leaves of the tree, filter this to keep only the odd
numbers in the sequence, square each element, and sum the results:
(define (sum-odd-squares tree)
(accumulate +
0
(map square
(filter odd?
(enumerate-tree tree)))))
For even-fibs, we enumerate the integers from 0 to n, generate the Fibonacci number for each of
these integers, filter the resulting sequence to keep only the even elements, and accumulate the results
into a list:
(define (even-fibs n)
(accumulate cons
nil
(filter even?
(map fib
(enumerate-interval 0 n)))))
The value of expressing programs as sequence operations is that this helps us make program designs
that are modular, that is, designs that are constructed by combining relatively independent pieces. We
can encourage modular design by providing a library of standard components together with a
conventional interface for connecting the components in flexible ways.
Modular construction is a powerful strategy for controlling complexity in engineering design. In real
signal-processing applications, for example, designers regularly build systems by cascading elements
selected from standardized families of filters and transducers. Similarly, sequence operations provide a
library of standard program elements that we can mix and match. For instance, we can reuse pieces
from the sum-odd-squares and even-fibs procedures in a program that constructs a list of the
squares of the first n + 1 Fibonacci numbers:
(define (list-fib-squares n)
(accumulate cons
nil
(map square
(map fib
(enumerate-interval 0 n)))))
(list-fib-squares 10)
(0 1 1 4 9 25 64 169 441 1156 3025)
We can rearrange the pieces and use them in computing the product of the odd integers in a sequence:
(define (product-of-squares-of-odd-elements sequence)
(accumulate *
1
(map square
(filter odd? sequence))))
(product-of-squares-of-odd-elements (list 1 2 3 4 5))
225
We can also formulate conventional data-processing applications in terms of sequence operations.
Suppose we have a sequence of personnel records and we want to find the salary of the highest-paid
programmer. Assume that we have a selector salary that returns the salary of a record, and a
predicate programmer? that tests if a record is for a programmer. Then we can write
(define (salary-of-highest-paid-programmer records)
(accumulate max
0
(map salary
(filter programmer? records))))
These examples give just a hint of the vast range of operations that can be expressed as sequence
operations.15
Sequences, implemented here as lists, serve as a conventional interface that permits us to combine
processing modules. Additionally, when we uniformly represent structures as sequences, we have
localized the data-structure dependencies in our programs to a small number of sequence operations.
By changing these, we can experiment with alternative representations of sequences, while leaving the
overall design of our programs intact. We will exploit this capability in section 3.5, when we
generalize the sequence-processing paradigm to admit infinite sequences.
Exercise 2.33. Fill in the missing expressions to complete the following definitions of some basic
list-manipulation operations as accumulations:
(define (map p sequence)
(accumulate (lambda (x y) <??>) nil sequence))
(define (append seq1 seq2)
(accumulate cons <??> <??>))
(define (length sequence)
(accumulate <??> 0 sequence))
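One set of completions (a sketch, not the only possibility):
(define (map p sequence)
  (accumulate (lambda (x y) (cons (p x) y)) nil sequence))

(define (append seq1 seq2)
  (accumulate cons seq2 seq1))

(define (length sequence)
  (accumulate (lambda (x y) (+ 1 y)) 0 sequence))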
Exercise 2.34. Evaluating a polynomial in x at a given value of x can be formulated as an
accumulation. We evaluate the polynomial
a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0
using a well-known algorithm called Horner’s rule, which structures the computation as
(... (a_n x + a_{n-1}) x + ... + a_1) x + a_0
In other words, we start with a_n, multiply by x, add a_{n-1}, multiply by x, and so on, until we
reach a_0.16 Fill in the following template to produce a procedure that evaluates a polynomial using
Horner’s rule. Assume that the coefficients of the polynomial are arranged in a sequence, from a_0
through a_n.
(define (horner-eval x coefficient-sequence)
(accumulate (lambda (this-coeff higher-terms) <??>)
0
coefficient-sequence))
For example, to compute 1 + 3x + 5x^3 + x^5 at x = 2 you would evaluate
(horner-eval 2 (list 1 3 0 5 0 1))
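One way to fill in the template: each step adds this-coeff to x times the already-accumulated higher-order terms.
(define (horner-eval x coefficient-sequence)
  (accumulate (lambda (this-coeff higher-terms)
                (+ this-coeff (* x higher-terms)))
              0
              coefficient-sequence))
On the example above, this yields 1 + 3·2 + 5·2^3 + 2^5 = 79.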
Exercise 2.35. Redefine count-leaves from section 2.2.2 as an accumulation:
(define (count-leaves t)
(accumulate <??> <??> (map <??> <??>)))
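One possible completion maps each subtree to its leaf count and sums the results:
(define (count-leaves t)
  (accumulate +
              0
              (map (lambda (sub-tree)
                     (if (pair? sub-tree)
                         (count-leaves sub-tree)
                         1))
                   t)))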
Exercise 2.36. The procedure accumulate-n is similar to accumulate except that it takes as its
third argument a sequence of sequences, which are all assumed to have the same number of elements.
It applies the designated accumulation procedure to combine all the first elements of the sequences, all
the second elements of the sequences, and so on, and returns a sequence of the results. For instance, if
s is a sequence containing four sequences, ((1 2 3) (4 5 6) (7 8 9) (10 11 12)),
then the value of (accumulate-n + 0 s) should be the sequence (22 26 30). Fill in the
missing expressions in the following definition of accumulate-n:
(define (accumulate-n op init seqs)
(if (null? (car seqs))
nil
(cons (accumulate op init <??>)
(accumulate-n op init <??>))))
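One way to fill in the blanks, taking the cars of all the sequences and recurring on the cdrs:
(define (accumulate-n op init seqs)
  (if (null? (car seqs))
      nil
      (cons (accumulate op init (map car seqs))
            (accumulate-n op init (map cdr seqs)))))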
Exercise 2.37. Suppose we represent vectors v = (v_i) as sequences of numbers, and matrices m =
(m_ij) as sequences of vectors (the rows of the matrix). For example, the matrix
1 2 3 4
4 5 6 6
6 7 8 9
is represented as the sequence ((1 2 3 4) (4 5 6 6) (6 7 8 9)). With this representation,
we can use sequence operations to concisely express the basic matrix and vector operations. These
operations (which are described in any book on matrix algebra) are the following:
(dot-product v w) returns the sum Σ_i v_i w_i;
(matrix-*-vector m v) returns the vector t, where t_i = Σ_j m_ij v_j;
(matrix-*-matrix m n) returns the matrix p, where p_ij = Σ_k m_ik n_kj;
(transpose mat) returns the matrix n, where n_ij = m_ji.
We can define the dot product as17
(define (dot-product v w)
(accumulate + 0 (map * v w)))
Fill in the missing expressions in the following procedures for computing the other matrix operations.
(The procedure accumulate-n is defined in exercise 2.36.)
(define (matrix-*-vector m v)
(map <??> m))
(define (transpose mat)
(accumulate-n <??> <??> mat))
(define (matrix-*-matrix m n)
(let ((cols (transpose n)))
(map <??> m)))
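One set of completions (a sketch; dot-product and accumulate-n are as defined above):
(define (matrix-*-vector m v)
  (map (lambda (row) (dot-product row v)) m))

(define (transpose mat)
  (accumulate-n cons nil mat))

(define (matrix-*-matrix m n)
  (let ((cols (transpose n)))
    (map (lambda (row) (matrix-*-vector cols row)) m)))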
Exercise 2.38. The accumulate procedure is also known as fold-right, because it combines
the first element of the sequence with the result of combining all the elements to the right. There is
also a fold-left, which is similar to fold-right, except that it combines elements working in
the opposite direction:
(define (fold-left op initial sequence)
(define (iter result rest)
(if (null? rest)
result
(iter (op result (car rest))
(cdr rest))))
(iter initial sequence))
What are the values of
(fold-right / 1 (list 1 2 3))
(fold-left / 1 (list 1 2 3))
(fold-right list nil (list 1 2 3))
(fold-left list nil (list 1 2 3))
Give a property that op should satisfy to guarantee that fold-right and fold-left will produce
the same values for any sequence.
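For reference, tracing the definitions by hand gives
(fold-right / 1 (list 1 2 3))       ; (/ 1 (/ 2 (/ 3 1))) = 3/2
(fold-left / 1 (list 1 2 3))        ; (/ (/ (/ 1 1) 2) 3) = 1/6
(fold-right list nil (list 1 2 3))  ; (1 (2 (3 ())))
(fold-left list nil (list 1 2 3))   ; (((() 1) 2) 3)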
Exercise 2.39. Complete the following definitions of reverse (exercise 2.18) in terms of
fold-right and fold-left from exercise 2.38:
(define (reverse sequence)
(fold-right (lambda (x y) <??>) nil sequence))
(define (reverse sequence)
(fold-left (lambda (x y) <??>) nil sequence))
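One pair of completions (note that fold-left passes the accumulated result as the first argument to op):
(define (reverse sequence)
  (fold-right (lambda (x y) (append y (list x))) nil sequence))

(define (reverse sequence)
  (fold-left (lambda (x y) (cons y x)) nil sequence))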
Nested Mappings
We can extend the sequence paradigm to include many computations that are commonly expressed
using nested loops.18 Consider this problem: Given a positive integer n, find all ordered pairs of
distinct positive integers i and j, where 1 ≤ j < i ≤ n, such that i + j is prime. For example, if n is 6,
then the pairs are the following:
i     2 3 4 4 5 6 6
j     1 2 1 3 2 1 5
i + j 3 5 5 7 7 7 11
A natural way to organize this computation is to generate the sequence of all ordered pairs of positive
integers less than or equal to n, filter to select those pairs whose sum is prime, and then, for each pair
(i, j) that passes through the filter, produce the triple (i,j,i + j).
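One way to express this plan in terms of the sequence operations defined above is sketched here; it assumes the prime? predicate of section 1.2.6, and a helper flatmap that maps over a sequence and appends the resulting sequences together:
(define (flatmap proc seq)
  (accumulate append nil (map proc seq)))

(define (prime-sum? pair)
  (prime? (+ (car pair) (cadr pair))))

(define (make-pair-sum pair)
  (list (car pair) (cadr pair) (+ (car pair) (cadr pair))))

(define (prime-sum-pairs n)
  (map make-pair-sum
       (filter prime-sum?
               (flatmap
                (lambda (i)
                  (map (lambda (j) (list i j))
                       (enumerate-interval 1 (- i 1))))
                (enumerate-interval 1 n)))))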
Here is a way to generate the sequence of pairs: For each integer i ≤ n, enumerate the integers
j < i, and, for each such i and j, generate the pair (i,j).

(define (segments->painter segment-list)
(lambda (frame)
(for-each
(lambda (segment)
(draw-line
((frame-coord-map frame) (start-segment segment))
((frame-coord-map frame) (end-segment segment))))
segment-list)))
The segments are given using coordinates with respect to the unit square. For each segment in the list,
the painter transforms the segment endpoints with the frame coordinate map and draws a line between
the transformed points.
Representing painters as procedures erects a powerful abstraction barrier in the picture language. We
can create and intermix all sorts of primitive painters, based on a variety of graphics capabilities. The
details of their implementation do not matter. Any procedure can serve as a painter, provided that it
takes a frame as argument and draws something scaled to fit the frame.28
Exercise 2.48. A directed line segment in the plane can be represented as a pair of vectors — the
vector running from the origin to the start-point of the segment, and the vector running from the origin
to the end-point of the segment. Use your vector representation from exercise 2.46 to define a
representation for segments with a constructor make-segment and selectors start-segment and
end-segment.
Exercise 2.49. Use segments->painter to define the following primitive painters:
a. The painter that draws the outline of the designated frame.
b. The painter that draws an ‘‘X’’ by connecting opposite corners of the frame.
c. The painter that draws a diamond shape by connecting the midpoints of the sides of the frame.
d. The wave painter.
Transforming and combining painters
An operation on painters (such as flip-vert or beside) works by creating a painter that invokes
the original painters with respect to frames derived from the argument frame. Thus, for example,
flip-vert doesn’t have to know how a painter works in order to flip it — it just has to know how to
turn a frame upside down: The flipped painter just uses the original painter, but in the inverted frame.
Painter operations are based on the procedure transform-painter, which takes as arguments a
painter and information on how to transform a frame and produces a new painter. The transformed
painter, when called on a frame, transforms the frame and calls the original painter on the transformed
frame. The arguments to transform-painter are points (represented as vectors) that specify the
corners of the new frame: When mapped into the frame, the first point specifies the new frame’s origin
and the other two specify the ends of its edge vectors. Thus, arguments within the unit square specify a
frame contained within the original frame.
(define (transform-painter painter origin corner1 corner2)
(lambda (frame)
(let ((m (frame-coord-map frame)))
(let ((new-origin (m origin)))
(painter
(make-frame new-origin
(sub-vect (m corner1) new-origin)
(sub-vect (m corner2) new-origin)))))))
Here’s how to flip painter images vertically:
(define (flip-vert painter)
(transform-painter painter
(make-vect 0.0 1.0) ; new origin
(make-vect 1.0 1.0) ; new end of edge1
(make-vect 0.0 0.0))) ; new end of edge2
Using transform-painter, we can easily define new transformations. For example, we can
define a painter that shrinks its image to the upper-right quarter of the frame it is given:
(define (shrink-to-upper-right painter)
(transform-painter painter
(make-vect 0.5 0.5)
(make-vect 1.0 0.5)
(make-vect 0.5 1.0)))
Other transformations rotate images counterclockwise by 90 degrees29
(define (rotate90 painter)
(transform-painter painter
(make-vect 1.0 0.0)
(make-vect 1.0 1.0)
(make-vect 0.0 0.0)))
or squash images towards the center of the frame:30
(define (squash-inwards painter)
(transform-painter painter
(make-vect 0.0 0.0)
(make-vect 0.65 0.35)
(make-vect 0.35 0.65)))
Frame transformation is also the key to defining means of combining two or more painters. The
beside procedure, for example, takes two painters, transforms them to paint in the left and right
halves of an argument frame respectively, and produces a new, compound painter. When the
compound painter is given a frame, it calls the first transformed painter to paint in the left half of the
frame and calls the second transformed painter to paint in the right half of the frame:
(define (beside painter1 painter2)
(let ((split-point (make-vect 0.5 0.0)))
(let ((paint-left
(transform-painter painter1
(make-vect 0.0 0.0)
split-point
(make-vect 0.0 1.0)))
(paint-right
(transform-painter painter2
split-point
(make-vect 1.0 0.0)
(make-vect 0.5 1.0))))
(lambda (frame)
(paint-left frame)
(paint-right frame)))))
Observe how the painter data abstraction, and in particular the representation of painters as procedures,
makes beside easy to implement. The beside procedure need not know anything about the details
of the component painters other than that each painter will draw something in its designated frame.
Exercise 2.50. Define the transformation flip-horiz, which flips painters horizontally, and
transformations that rotate painters counterclockwise by 180 degrees and 270 degrees.
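A sketch of flip-horiz in the style of flip-vert above; the rotations can be obtained similarly by permuting the corner vectors:
(define (flip-horiz painter)
  (transform-painter painter
                     (make-vect 1.0 0.0)   ; new origin
                     (make-vect 0.0 0.0)   ; new end of edge1
                     (make-vect 1.0 1.0))) ; new end of edge2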
Exercise 2.51. Define the below operation for painters. Below takes two painters as arguments. The
resulting painter, given a frame, draws with the first painter in the bottom of the frame and with the
second painter in the top. Define below in two different ways — first by writing a procedure that is
analogous to the beside procedure given above, and again in terms of beside and suitable rotation
operations (from exercise 2.50).
Levels of language for robust design
The picture language exercises some of the critical ideas we’ve introduced about abstraction with
procedures and data. The fundamental data abstractions, painters, are implemented using procedural
representations, which enables the language to handle different basic drawing capabilities in a uniform
way. The means of combination satisfy the closure property, which permits us to easily build up
complex designs. Finally, all the tools for abstracting procedures are available to us for abstracting
means of combination for painters.
We have also obtained a glimpse of another crucial idea about languages and program design. This is
the approach of stratified design, the notion that a complex system should be structured as a sequence
of levels that are described using a sequence of languages. Each level is constructed by combining
parts that are regarded as primitive at that level, and the parts constructed at each level are used as
primitives at the next level. The language used at each level of a stratified design has primitives,
means of combination, and means of abstraction appropriate to that level of detail.
Stratified design pervades the engineering of complex systems. For example, in computer engineering,
resistors and transistors are combined (and described using a language of analog circuits) to produce
parts such as and-gates and or-gates, which form the primitives of a language for digital-circuit
design.31 These parts are combined to build processors, bus structures, and memory systems, which
are in turn combined to form computers, using languages appropriate to computer architecture.
Computers are combined to form distributed systems, using languages appropriate for describing
network interconnections, and so on.
As a tiny example of stratification, our picture language uses primitive elements (primitive painters)
that are created using a language that specifies points and lines to provide the lists of line segments for
segments->painter, or the shading details for a painter like rogers. The bulk of our
description of the picture language focused on combining these primitives, using geometric combiners
such as beside and below. We also worked at a higher level, regarding beside and below as
primitives to be manipulated in a language whose operations, such as square-of-four, capture
common patterns of combining geometric combiners.
Stratified design helps make programs robust, that is, it makes it likely that small changes in a
specification will require correspondingly small changes in the program. For instance, suppose we
wanted to change the image based on wave shown in figure 2.9. We could work at the lowest level to
change the detailed appearance of the wave element; we could work at the middle level to change the
way corner-split replicates the wave; we could work at the highest level to change how
square-limit arranges the four copies of the corner. In general, each level of a stratified design
provides a different vocabulary for expressing the characteristics of the system, and a different kind of
ability to change it.
Exercise 2.52. Make changes to the square limit of wave shown in figure 2.9 by working at each of
the levels described above. In particular:
a. Add some segments to the primitive wave painter of exercise 2.49 (to add a smile, for example).
b. Change the pattern constructed by corner-split (for example, by using only one copy of the
up-split and right-split images instead of two).
c. Modify the version of square-limit that uses square-of-four so as to assemble the
corners in a different pattern. (For example, you might make the big Mr. Rogers look outward from
each corner of the square.)
6 The use of the word ‘‘closure’’ here comes from abstract algebra, where a set of elements is said to
be closed under an operation if applying the operation to elements in the set produces an element that
is again an element of the set. The Lisp community also (unfortunately) uses the word ‘‘closure’’ to
describe a totally unrelated concept: A closure is an implementation technique for representing
procedures with free variables. We do not use the word ‘‘closure’’ in this second sense in this book.
7 The notion that a means of combination should satisfy closure is a straightforward idea.
Unfortunately, the data combiners provided in many popular programming languages do not satisfy
closure, or make closure cumbersome to exploit. In Fortran or Basic, one typically combines data
elements by assembling them into arrays — but one cannot form arrays whose elements are themselves
arrays. Pascal and C admit structures whose elements are structures. However, this requires that the
programmer manipulate pointers explicitly, and adhere to the restriction that each field of a structure
can contain only elements of a prespecified form. Unlike Lisp with its pairs, these languages have no
built-in general-purpose glue that makes it easy to manipulate compound data in a uniform way. This
limitation lies behind Alan Perlis’s comment in his foreword to this book: ‘‘In Pascal the plethora of
declarable data structures induces a specialization within functions that inhibits and penalizes casual
cooperation. It is better to have 100 functions operate on one data structure than to have 10 functions
operate on 10 data structures.’’
8 In this book, we use list to mean a chain of pairs terminated by the end-of-list marker. In contrast,
the term list structure refers to any data structure made out of pairs, not just to lists.
9 Since nested applications of car and cdr are cumbersome to write, Lisp dialects provide
abbreviations for them — for instance,
(cadr <arg>) = (car (cdr <arg>))
The names of all such procedures start with c and end with r. Each a between them stands for a car
operation and each d for a cdr operation, to be applied in the same order in which they appear in the
name. The names car and cdr persist because simple combinations like cadr are pronounceable.
10 It’s remarkable how much energy in the standardization of Lisp dialects has been dissipated in
arguments that are literally over nothing: Should nil be an ordinary name? Should the value of nil
be a symbol? Should it be a list? Should it be a pair? In Scheme, nil is an ordinary name, which we
use in this section as a variable whose value is the end-of-list marker (just as true is an ordinary
variable that has a true value). Other dialects of Lisp, including Common Lisp, treat nil as a special
symbol. The authors of this book, who have endured too many language standardization brawls, would
like to avoid the entire issue. Once we have introduced quotation in section 2.3, we will denote the
empty list as ’() and dispense with the variable nil entirely.
11 To define f and g using lambda we would write
(define f (lambda (x y . z) <body>))
(define g (lambda w <body>))
12 Scheme standardly provides a map procedure that is more general than the one described here. This
more general map takes a procedure of n arguments, together with n lists, and applies the procedure to
all the first elements of the lists, all the second elements of the lists, and so on, returning a list of the
results. For example:
(map + (list 1 2 3) (list 40 50 60) (list 700 800 900))
(741 852 963)
(map (lambda (x y) (+ x (* 2 y)))
(list 1 2 3)
(list 4 5 6))
(9 12 15)
13 The order of the first two clauses in the cond matters, since the empty list satisfies null? and
also is not a pair.
14 This is, in fact, precisely the fringe procedure from exercise 2.28. Here we’ve renamed it to
emphasize that it is part of a family of general sequence-manipulation procedures.
15 Richard Waters (1979) developed a program that automatically analyzes traditional Fortran
programs, viewing them in terms of maps, filters, and accumulations. He found that fully 90 percent of
the code in the Fortran Scientific Subroutine Package fits neatly into this paradigm. One of the reasons
for the success of Lisp as a programming language is that lists provide a standard medium for
expressing ordered collections so that they can be manipulated using higher-order operations. The
programming language APL owes much of its power and appeal to a similar choice. In APL all data
are represented as arrays, and there is a universal and convenient set of generic operators for all sorts
of array operations.
16 According to Knuth (1981), this rule was formulated by W. G. Horner early in the nineteenth
century, but the method was actually used by Newton over a hundred years earlier. Horner’s rule
evaluates the polynomial using fewer additions and multiplications than does the straightforward
method of first computing a_n x^n, then adding a_{n-1} x^{n-1}, and so on. In fact, it is possible to prove that
any algorithm for evaluating arbitrary polynomials must use at least as many additions and
multiplications as does Horner’s rule, and thus Horner’s rule is an optimal algorithm for polynomial
evaluation. This was proved (for the number of additions) by A. M. Ostrowski in a 1954 paper that
essentially founded the modern study of optimal algorithms. The analogous statement for
multiplications was proved by V. Y. Pan in 1966. The book by Borodin and Munro (1975) provides an
overview of these and other results about optimal algorithms.
17 This definition uses the extended version of map described in footnote 12.
18 This approach to nested mappings was shown to us by David Turner, whose languages KRC and
Miranda provide elegant formalisms for dealing with these constructs. The examples in this section
(see also exercise 2.42) are adapted from Turner 1981. In section 3.5.3, we’ll see how this approach
generalizes to infinite sequences.
19 We’re representing a pair here as a list of two elements rather than as a Lisp pair. Thus, the ‘‘pair’’
(i,j) is represented as (list i j), not (cons i j).
20 The set S – x is the set of all elements of S, excluding x.
21 Semicolons in Scheme code are used to introduce comments. Everything from the semicolon to the
end of the line is ignored by the interpreter. In this book we don’t use many comments; we try to make
our programs self-documenting by using descriptive names.
22 The picture language is based on the language Peter Henderson created to construct images like
M.C. Escher’s ‘‘Square Limit’’ woodcut (see Henderson 1982). The woodcut incorporates a repeated
scaled pattern, similar to the arrangements drawn using the square-limit procedure in this
section.
23 William Barton Rogers (1804-1882) was the founder and first president of MIT. A geologist and
talented teacher, he taught at William and Mary College and at the University of Virginia. In 1859 he
moved to Boston, where he had more time for research, worked on a plan for establishing a
‘‘polytechnic institute,’’ and served as Massachusetts’s first State Inspector of Gas Meters.
When MIT was established in 1861, Rogers was elected its first president. Rogers espoused an ideal of
‘‘useful learning’’ that was different from the university education of the time, with its overemphasis
on the classics, which, as he wrote, ‘‘stand in the way of the broader, higher and more practical
instruction and discipline of the natural and social sciences.’’ This education was likewise to be
different from narrow trade-school education. In Rogers’s words:
The world-enforced distinction between the practical and the scientific worker is utterly futile,
and the whole experience of modern times has demonstrated its utter worthlessness.
Rogers served as president of MIT until 1870, when he resigned due to ill health. In 1878 the second
president of MIT, John Runkle, resigned under the pressure of a financial crisis brought on by the
Panic of 1873 and strain of fighting off attempts by Harvard to take over MIT. Rogers returned to hold
the office of president until 1881.
Rogers collapsed and died while addressing MIT’s graduating class at the commencement exercises of
1882. Runkle quoted Rogers’s last words in a memorial address delivered that same year:
‘‘As I stand here today and see what the Institute is, … I call to mind the beginnings of science.
I remember one hundred and fifty years ago Stephen Hales published a pamphlet on the subject of
illuminating gas, in which he stated that his researches had demonstrated that 128 grains of
bituminous coal — ’’
‘‘Bituminous coal,’’ these were his last words on earth. Here he bent forward, as if consulting
some notes on the table before him, then slowly regaining an erect position, threw up his hands,
and was translated from the scene of his earthly labors and triumphs to ‘‘the tomorrow of death,’’
where the mysteries of life are solved, and the disembodied spirit finds unending satisfaction in
contemplating the new and still unfathomable mysteries of the infinite future.
In the words of Francis A. Walker (MIT’s third president):
All his life he had borne himself most faithfully and heroically, and he died as so good a knight
would surely have wished, in harness, at his post, and in the very part and act of public duty.
24 Equivalently, we could write
(define flipped-pairs
(square-of-four identity flip-vert identity flip-vert))
25 Rotate180 rotates a painter by 180 degrees (see exercise 2.50). Instead of rotate180 we
could say (compose flip-vert flip-horiz), using the compose procedure from
exercise 1.42.
26 Frame-coord-map uses the vector operations described in exercise 2.46 below, which we
assume have been implemented using some representation for vectors. Because of data abstraction, it
doesn’t matter what this vector representation is, so long as the vector operations behave correctly.
27 Segments->painter uses the representation for line segments described in exercise 2.48
below. It also uses the for-each procedure described in exercise 2.23.
28 For example, the rogers painter of figure 2.11 was constructed from a gray-level image. For each
point in a given frame, the rogers painter determines the point in the image that is mapped to it
under the frame coordinate map, and shades it accordingly. By allowing different types of painters, we
are capitalizing on the abstract data idea discussed in section 2.1.3, where we argued that a
rational-number representation could be anything at all that satisfies an appropriate condition. Here
we’re using the fact that a painter can be implemented in any way at all, so long as it draws something
in the designated frame. Section 2.1.3 also showed how pairs could be implemented as procedures.
Painters are our second example of a procedural representation for data.
29 Rotate90 is a pure rotation only for square frames, because it also stretches and shrinks the
image to fit into the rotated frame.
30 The diamond-shaped images in figures 2.10 and 2.11 were created with squash-inwards
applied to wave and rogers.
31 Section 3.3.4 describes one such language.
2.3 Symbolic Data
All the compound data objects we have used so far were constructed ultimately from numbers. In this
section we extend the representational capability of our language by introducing the ability to work
with arbitrary symbols as data.
2.3.1 Quotation
If we can form compound data using symbols, we can have lists such as
(a b c d)
(23 45 17)
((Norah 12) (Molly 9) (Anna 7) (Lauren 6) (Charlotte 4))
Lists containing symbols can look just like the expressions of our language:
(* (+ 23 45) (+ x 9))
(define (fact n) (if (= n 1) 1 (* n (fact (- n 1)))))
In order to manipulate symbols we need a new element in our language: the ability to quote a data
object. Suppose we want to construct the list (a b). We can’t accomplish this with (list a b),
because this expression constructs a list of the values of a and b rather than the symbols themselves.
This issue is well known in the context of natural languages, where words and sentences may be
regarded either as semantic entities or as character strings (syntactic entities). The common practice in
natural languages is to use quotation marks to indicate that a word or a sentence is to be treated
literally as a string of characters. For instance, the first letter of ‘‘John’’ is clearly ‘‘J.’’ If we tell
somebody ‘‘say your name aloud,’’ we expect to hear that person’s name. However, if we tell
somebody ‘‘say ‘your name’ aloud,’’ we expect to hear the words ‘‘your name.’’ Note that we are
forced to nest quotation marks to describe what somebody else might say.32
We can follow this same practice to identify lists and symbols that are to be treated as data objects
rather than as expressions to be evaluated. However, our format for quoting differs from that of natural
languages in that we place a quotation mark (traditionally, the single quote symbol ’) only at the
beginning of the object to be quoted. We can get away with this in Scheme syntax because we rely on
blanks and parentheses to delimit objects. Thus, the meaning of the single quote character is to quote
the next object.33
Now we can distinguish between symbols and their values:
(define a 1)
(define b 2)
(list a b)
(1 2)
(list ’a ’b)
(a b)
(list ’a b)
(a 2)
Quotation also allows us to type in compound objects, using the conventional printed representation
for lists:34
(car ’(a b c))
a
(cdr ’(a b c))
(b c)
In keeping with this, we can obtain the empty list by evaluating ’(), and thus dispense with the
variable nil.
One additional primitive used in manipulating symbols is eq?, which takes two symbols as arguments
and tests whether they are the same.35 Using eq?, we can implement a useful procedure called memq.
This takes two arguments, a symbol and a list. If the symbol is not contained in the list (i.e., is not eq?
to any item in the list), then memq returns false. Otherwise, it returns the sublist of the list beginning
with the first occurrence of the symbol:
(define (memq item x)
(cond ((null? x) false)
((eq? item (car x)) x)
(else (memq item (cdr x)))))
For example, the value of
(memq ’apple ’(pear banana prune))
is false, whereas the value of
(memq ’apple ’(x (apple sauce) y apple pear))
is (apple pear).
Exercise 2.53. What would the interpreter print in response to evaluating each of the following
expressions?
(list ’a ’b ’c)
(list (list ’george))
(cdr ’((x1 x2) (y1 y2)))
(cadr ’((x1 x2) (y1 y2)))
(pair? (car ’(a short list)))
(memq ’red ’((red shoes) (blue socks)))
(memq ’red ’(red shoes blue socks))
Exercise 2.54. Two lists are said to be equal? if they contain equal elements arranged in the same
order. For example,
(equal? ’(this is a list) ’(this is a list))
is true, but
(equal? ’(this is a list) ’(this (is a) list))
is false. To be more precise, we can define equal? recursively in terms of the basic eq? equality of
symbols by saying that a and b are equal? if they are both symbols and the symbols are eq?, or if
they are both lists such that (car a) is equal? to (car b) and (cdr a) is equal? to (cdr
b). Using this idea, implement equal? as a procedure.36
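A sketch along these lines; using eq? in the base case also makes empty lists compare as equal:
(define (equal? a b)
  (cond ((and (pair? a) (pair? b))
         (and (equal? (car a) (car b))
              (equal? (cdr a) (cdr b))))
        ((or (pair? a) (pair? b)) false)
        (else (eq? a b))))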
Exercise 2.55. Eva Lu Ator types to the interpreter the expression
(car ’’abracadabra)
To her surprise, the interpreter prints back quote. Explain.
2.3.2 Example: Symbolic Differentiation
As an illustration of symbol manipulation and a further illustration of data abstraction, consider the
design of a procedure that performs symbolic differentiation of algebraic expressions. We would like
the procedure to take as arguments an algebraic expression and a variable and to return the derivative
of the expression with respect to the variable. For example, if the arguments to the procedure are ax^2
+ bx + c and x, the procedure should return 2ax + b. Symbolic differentiation is of special historical
significance in Lisp. It was one of the motivating examples behind the development of a computer
language for symbol manipulation. Furthermore, it marked the beginning of the line of research that
led to the development of powerful systems for symbolic mathematical work, which are currently
being used by a growing number of applied mathematicians and physicists.
In developing the symbolic-differentiation program, we will follow the same strategy of data
abstraction that we followed in developing the rational-number system of section 2.1.1. That is, we
will first define a differentiation algorithm that operates on abstract objects such as ‘‘sums,’’
‘‘products,’’ and ‘‘variables’’ without worrying about how these are to be represented. Only afterward
will we address the representation problem.
The differentiation program with abstract data
In order to keep things simple, we will consider a very simple symbolic-differentiation program that
handles expressions that are built up using only the operations of addition and multiplication with two
arguments. Differentiation of any such expression can be carried out by applying the following
reduction rules:
dc/dx = 0, for c a constant or a variable different from x
dx/dx = 1
d(u + v)/dx = du/dx + dv/dx
d(uv)/dx = u(dv/dx) + v(du/dx)
Observe that the latter two rules are recursive in nature. That is, to obtain the derivative of a sum we
first find the derivatives of the terms and add them. Each of the terms may in turn be an expression
that needs to be decomposed. Decomposing into smaller and smaller pieces will eventually produce
pieces that are either constants or variables, whose derivatives will be either 0 or 1.
To embody these rules in a procedure we indulge in a little wishful thinking, as we did in designing
the rational-number implementation. If we had a means for representing algebraic expressions, we
should be able to tell whether an expression is a sum, a product, a constant, or a variable. We should
be able to extract the parts of an expression. For a sum, for example, we want to be able to extract the
addend (first term) and the augend (second term). We should also be able to construct expressions
from parts. Let us assume that we already have procedures to implement the following selectors,
constructors, and predicates:
(variable? e) Is e a variable?
(same-variable? v1 v2) Are v1 and v2 the same variable?
(sum? e) Is e a sum?
(addend e) Addend of the sum e.
(augend e) Augend of the sum e.
(make-sum a1 a2) Construct the sum of a1 and a2.
(product? e) Is e a product?
(multiplier e) Multiplier of the product e.
(multiplicand e) Multiplicand of the product e.
(make-product m1 m2) Construct the product of m1 and m2.
Using these, and the primitive predicate number?, which identifies numbers, we can express the
differentiation rules as the following procedure:
(define (deriv exp var)
(cond ((number? exp) 0)
((variable? exp)
(if (same-variable? exp var) 1 0))
((sum? exp)
(make-sum (deriv (addend exp) var)
(deriv (augend exp) var)))
((product? exp)
(make-sum
(make-product (multiplier exp)
(deriv (multiplicand exp) var))
(make-product (deriv (multiplier exp) var)
(multiplicand exp))))
(else
(error "unknown expression type -- DERIV" exp))))
This deriv procedure incorporates the complete differentiation algorithm. Since it is expressed in
terms of abstract data, it will work no matter how we choose to represent algebraic expressions, as
long as we design a proper set of selectors and constructors. This is the issue we must address next.
Representing algebraic expressions
We can imagine many ways to use list structure to represent algebraic expressions. For example, we
could use lists of symbols that mirror the usual algebraic notation, representing ax + b as the list (a *
x + b). However, one especially straightforward choice is to use the same parenthesized prefix
notation that Lisp uses for combinations; that is, to represent ax + b as (+ (* a x) b). Then our
data representation for the differentiation problem is as follows:
The variables are symbols. They are identified by the primitive predicate symbol?:
(define (variable? x) (symbol? x))
Two variables are the same if the symbols representing them are eq?:
(define (same-variable? v1 v2)
(and (variable? v1) (variable? v2) (eq? v1 v2)))
Sums and products are constructed as lists:
(define (make-sum a1 a2) (list ’+ a1 a2))
(define (make-product m1 m2) (list ’* m1 m2))
A sum is a list whose first element is the symbol +:
(define (sum? x)
(and (pair? x) (eq? (car x) ’+)))
The addend is the second item of the sum list:
(define (addend s) (cadr s))
The augend is the third item of the sum list:
(define (augend s) (caddr s))
A product is a list whose first element is the symbol *:
(define (product? x)
(and (pair? x) (eq? (car x) ’*)))
The multiplier is the second item of the product list:
(define (multiplier p) (cadr p))
The multiplicand is the third item of the product list:
(define (multiplicand p) (caddr p))
Thus, we need only combine these with the algorithm as embodied by deriv in order to have a
working symbolic-differentiation program. Let us look at some examples of its behavior:
(deriv ’(+ x 3) ’x)
(+ 1 0)
(deriv ’(* x y) ’x)
(+ (* x 0) (* 1 y))
(deriv ’(* (* x y) (+ x 3)) ’x)
(+ (* (* x y) (+ 1 0))
(* (+ (* x 0) (* 1 y))
(+ x 3)))
The program produces answers that are correct; however, they are unsimplified. It is true that
d(xy)/dx = x · 0 + 1 · y
but we would like the program to know that x · 0 = 0, 1 · y = y, and 0 + y = y. The answer for the
second example should have been simply y. As the third example shows, this becomes a serious issue
when the expressions are complex.
Our difficulty is much like the one we encountered with the rational-number implementation: we
haven’t reduced answers to simplest form. To accomplish the rational-number reduction, we needed to
change only the constructors and the selectors of the implementation. We can adopt a similar strategy
here. We won’t change deriv at all. Instead, we will change make-sum so that if both summands
are numbers, make-sum will add them and return their sum. Also, if one of the summands is 0, then
make-sum will return the other summand.
(define (make-sum a1 a2)
(cond ((=number? a1 0) a2)
((=number? a2 0) a1)
((and (number? a1) (number? a2)) (+ a1 a2))
(else (list ’+ a1 a2))))
This uses the procedure =number?, which checks whether an expression is equal to a given number:
(define (=number? exp num)
(and (number? exp) (= exp num)))
Similarly, we will change make-product to build in the rules that 0 times anything is 0 and 1 times
anything is the thing itself:
(define (make-product m1 m2)
(cond ((or (=number? m1 0) (=number? m2 0)) 0)
((=number? m1 1) m2)
((=number? m2 1) m1)
((and (number? m1) (number? m2)) (* m1 m2))
(else (list ’* m1 m2))))
Here is how this version works on our three examples:
(deriv ’(+ x 3) ’x)
1
(deriv ’(* x y) ’x)
y
(deriv ’(* (* x y) (+ x 3)) ’x)
(+ (* x y) (* y (+ x 3)))
Although this is quite an improvement, the third example shows that there is still a long way to go
before we get a program that puts expressions into a form that we might agree is ‘‘simplest.’’ The
problem of algebraic simplification is complex because, among other reasons, a form that may be
simplest for one purpose may not be for another.
Exercise 2.56. Show how to extend the basic differentiator to handle more kinds of expressions. For
instance, implement the differentiation rule
d(u^n)/dx = n u^(n-1) (du/dx)
by adding a new clause to the deriv program and defining appropriate procedures
exponentiation?, base, exponent, and make-exponentiation. (You may use the
symbol ** to denote exponentiation.) Build in the rules that anything raised to the power 0 is 1 and
anything raised to the power 1 is the thing itself.
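One possible extension is sketched below; the predicate and selectors follow the pattern used for sums and products:
(define (exponentiation? x)
  (and (pair? x) (eq? (car x) '**)))

(define (base e) (cadr e))
(define (exponent e) (caddr e))

(define (make-exponentiation b e)
  (cond ((=number? e 0) 1)
        ((=number? e 1) b)
        (else (list '** b e))))
together with a clause such as the following added to the cond in deriv:
((exponentiation? exp)
 (make-product
  (make-product (exponent exp)
                (make-exponentiation (base exp)
                                     (make-sum (exponent exp) -1)))
  (deriv (base exp) var)))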
Exercise 2.57. Extend the differentiation program to handle sums and products of arbitrary numbers
of (two or more) terms. Then the last example above could be expressed as
(deriv ’(* x y (+ x 3)) ’x)
Try to do this by changing only the representation for sums and products, without changing the
deriv procedure at all. For example, the addend of a sum would be the first term, and the augend
would be the sum of the rest of the terms.
Exercise 2.58. Suppose we want to modify the differentiation program so that it works with ordinary
mathematical notation, in which + and * are infix rather than prefix operators. Since the differentiation
program is defined in terms of abstract data, we can modify it to work with different representations of
expressions solely by changing the predicates, selectors, and constructors that define the representation
of the algebraic expressions on which the differentiator is to operate.
a. Show how to do this in order to differentiate algebraic expressions presented in infix form, such as
(x + (3 * (x + (y + 2)))). To simplify the task, assume that + and * always take two
arguments and that expressions are fully parenthesized.
b. The problem becomes substantially harder if we allow standard algebraic notation, such as (x + 3
* (x + y + 2)), which drops unnecessary parentheses and assumes that multiplication is done
before addition. Can you design appropriate predicates, selectors, and constructors for this notation
such that our derivative program still works?
2.3.3 Example: Representing Sets
In the previous examples we built representations for two kinds of compound data objects: rational
numbers and algebraic expressions. In one of these examples we had the choice of simplifying
(reducing) the expressions at either construction time or selection time, but other than that the choice
of a representation for these structures in terms of lists was straightforward. When we turn to the
representation of sets, the choice of a representation is not so obvious. Indeed, there are a number of
possible representations, and they differ significantly from one another in several ways.
Informally, a set is simply a collection of distinct objects. To give a more precise definition we can
employ the method of data abstraction. That is, we define ‘‘set’’ by specifying the operations that are
to be used on sets. These are union-set, intersection-set, element-of-set?, and
adjoin-set. Element-of-set? is a predicate that determines whether a given element is a
member of a set. Adjoin-set takes an object and a set as arguments and returns a set that contains
the elements of the original set and also the adjoined element. Union-set computes the union of two
sets, which is the set containing each element that appears in either argument. Intersection-set
computes the intersection of two sets, which is the set containing only elements that appear in both
arguments. From the viewpoint of data abstraction, we are free to design any representation that
implements these operations in a way consistent with the interpretations given above.37
Sets as unordered lists
One way to represent a set is as a list of its elements in which no element appears more than once. The
empty set is represented by the empty list. In this representation, element-of-set? is similar to
the procedure memq of section 2.3.1. It uses equal? instead of eq? so that the set elements need not
be symbols:
(define (element-of-set? x set)
(cond ((null? set) false)
((equal? x (car set)) true)
(else (element-of-set? x (cdr set)))))
Using this, we can write adjoin-set. If the object to be adjoined is already in the set, we just return
the set. Otherwise, we use cons to add the object to the list that represents the set:
(define (adjoin-set x set)
(if (element-of-set? x set)
set
(cons x set)))
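With these definitions in hand, we would expect interactions such as the following (each result is shown below the expression that produces it):

(element-of-set? 3 '(2 3 1))
true
(adjoin-set 4 '(2 3 1))
(4 2 3 1)
(adjoin-set 3 '(2 3 1))
(2 3 1)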
For intersection-set we can use a recursive strategy. If we know how to form the intersection
of set2 and the cdr of set1, we only need to decide whether to include the car of set1 in this.
But this depends on whether (car set1) is also in set2. Here is the resulting procedure:
(define (intersection-set set1 set2)
  (cond ((or (null? set1) (null? set2)) '())
        ((element-of-set? (car set1) set2)
         (cons (car set1)
               (intersection-set (cdr set1) set2)))
        (else (intersection-set (cdr set1) set2))))
In designing a representation, one of the issues we should be concerned with is efficiency. Consider
the number of steps required by our set operations. Since they all use element-of-set?, the speed
of this operation has a major impact on the efficiency of the set implementation as a whole. Now, in
order to check whether an object is a member of a set, element-of-set? may have to scan the
entire set. (In the worst case, the object turns out not to be in the set.) Hence, if the set has n elements,
element-of-set? might take up to n steps. Thus, the number of steps required grows as Θ(n).
The number of steps required by adjoin-set, which uses this operation, also grows as Θ(n). For
intersection-set, which does an element-of-set? check for each element of set1, the
number of steps required grows as the product of the sizes of the sets involved, or Θ(n²) for two sets
of size n. The same will be true of union-set.
Exercise 2.59. Implement the union-set operation for the unordered-list representation of sets.
Exercise 2.60. We specified that a set would be represented as a list with no duplicates. Now suppose
we allow duplicates. For instance, the set {1,2,3} could be represented as the list (2 3 2 1 3 2
2). Design procedures element-of-set?, adjoin-set, union-set, and
intersection-set that operate on this representation. How does the efficiency of each compare
with the corresponding procedure for the non-duplicate representation? Are there applications for
which you would use this representation in preference to the non-duplicate one?
Sets as ordered lists
One way to speed up our set operations is to change the representation so that the set elements are
listed in increasing order. To do this, we need some way to compare two objects so that we can say
which is bigger. For example, we could compare symbols lexicographically, or we could agree on
some method for assigning a unique number to an object and then compare the elements by comparing
the corresponding numbers. To keep our discussion simple, we will consider only the case where the
set elements are numbers, so that we can compare elements using > and <. We will represent a set of
numbers by listing its elements in increasing order. Whereas our first representation above allowed us
to represent the set {1,3,6,10} by listing the elements in any order, our new representation allows only
the list (1 3 6 10).
One advantage of ordering shows up in element-of-set?: In checking for the presence of an
item, we no longer have to scan the entire set. If we reach a set element that is larger than the item we
are looking for, then we know that the item is not in the set:
(define (element-of-set? x set)
(cond ((null? set) false)
((= x (car set)) true)
((< x (car set)) false)
(else (element-of-set? x (cdr set)))))
How many steps does this save? In the worst case, the item we are looking for may be the largest one
in the set, so the number of steps is the same as for the unordered representation. On the other hand, if
we search for items of many different sizes we can expect that sometimes we will be able to stop
searching at a point near the beginning of the list and that other times we will still need to examine
most of the list. On the average we should expect to have to examine about half of the items in the set.
Thus, the average number of steps required will be about n/2. This is still Θ(n) growth, but it does
save us, on the average, a factor of 2 in number of steps over the previous implementation.
We obtain a more impressive speedup with intersection-set. In the unordered representation
this operation required Θ(n²) steps, because we performed a complete scan of set2 for each element
of set1. But with the ordered representation, we can use a more clever method. Begin by comparing
the initial elements, x1 and x2, of the two sets. If x1 equals x2, then that gives an element of the
intersection, and the rest of the intersection is the intersection of the cdrs of the two sets. Suppose,
however, that x1 is less than x2. Since x2 is the smallest element in set2, we can immediately
conclude that x1 cannot appear anywhere in set2 and hence is not in the intersection. Hence, the
intersection is equal to the intersection of set2 with the cdr of set1. Similarly, if x2 is less than
x1, then the intersection is given by the intersection of set1 with the cdr of set2. Here is the
procedure:
(define (intersection-set set1 set2)
  (if (or (null? set1) (null? set2))
      '()
      (let ((x1 (car set1)) (x2 (car set2)))
        (cond ((= x1 x2)
               (cons x1
                     (intersection-set (cdr set1)
                                       (cdr set2))))
              ((< x1 x2)
               (intersection-set (cdr set1) set2))
              ((< x2 x1)
               (intersection-set set1 (cdr set2)))))))
To estimate the number of steps required by this process, observe that at each step we reduce the
intersection problem to computing intersections of smaller sets -- removing the first element from
set1 or set2 or both. Thus, the number of steps required is at most the sum of the sizes of set1
and set2, rather than the product of the sizes as with the unordered representation. This is Θ(n)
growth rather than Θ(n²) -- a considerable speedup, even for sets of moderate size.
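For example, with the ordered representation we would expect

(intersection-set '(1 3 6 10) '(3 6 9 10))
(3 6 10)

and each element of the two input lists is examined at most once along the way.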
Exercise 2.61. Give an implementation of adjoin-set using the ordered representation. By
analogy with element-of-set? show how to take advantage of the ordering to produce a
procedure that requires on the average about half as many steps as with the unordered representation.
Exercise 2.62. Give a Θ(n) implementation of union-set for sets represented as ordered lists.
Sets as binary trees
We can do better than the ordered-list representation by arranging the set elements in the form of a
tree. Each node of the tree holds one element of the set, called the ‘‘entry’’ at that node, and a link to
each of two other (possibly empty) nodes. The ‘‘left’’ link points to elements smaller than the one at
the node, and the ‘‘right’’ link to elements greater than the one at the node. Figure 2.16 shows some
trees that represent the set {1,3,5,7,9,11}. The same set may be represented by a tree in a number of
different ways. The only thing we require for a valid representation is that all elements in the left
subtree be smaller than the node entry and that all elements in the right subtree be larger.
Figure 2.16: Various binary trees that represent the set { 1,3,5,7,9,11 }.
The advantage of the tree representation is this: Suppose we want to check whether a number x is
contained in a set. We begin by comparing x with the entry in the top node. If x is less than this, we
know that we need only search the left subtree; if x is greater, we need only search the right subtree.
Now, if the tree is ‘‘balanced,’’ each of these subtrees will be about half the size of the original. Thus,
in one step we have reduced the problem of searching a tree of size n to searching a tree of size n/2.
Since the size of the tree is halved at each step, we should expect that the number of steps needed to
search a tree of size n grows as Θ(log n).38 For large sets, this will be a significant speedup over the
previous representations.
We can represent trees by using lists. Each node will be a list of three items: the entry at the node, the
left subtree, and the right subtree. A left or a right subtree of the empty list will indicate that there is no
subtree connected there. We can describe this representation by the following procedures:39
(define (entry tree) (car tree))
(define (left-branch tree) (cadr tree))
(define (right-branch tree) (caddr tree))
(define (make-tree entry left right)
(list entry left right))
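For instance, one balanced tree for the set {1,3,5,7,9,11}, along the lines of those in figure 2.16, could be constructed as follows (the name set-tree-example is chosen here just for illustration):

(define set-tree-example
  (make-tree 7
             (make-tree 3
                        (make-tree 1 '() '())
                        (make-tree 5 '() '()))
             (make-tree 9
                        '()
                        (make-tree 11 '() '()))))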
Now we can write the element-of-set? procedure using the strategy described above:
(define (element-of-set? x set)
(cond ((null? set) false)
((= x (entry set)) true)
((< x (entry set))
(element-of-set? x (left-branch set)))
((> x (entry set))
(element-of-set? x (right-branch set)))))
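Applied to the sample tree constructed above, we would expect

(element-of-set? 5 set-tree-example)
true
(element-of-set? 6 set-tree-example)
false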
Adjoining an item to a set is implemented similarly and also requires Θ(log n) steps. To adjoin an
item x, we compare x with the node entry to determine whether x should be added to the right or to
the left branch, and having adjoined x to the appropriate branch we piece this newly constructed
branch together with the original entry and the other branch. If x is equal to the entry, we just return
the node. If we are asked to adjoin x to an empty tree, we generate a tree that has x as the entry and
empty right and left branches. Here is the procedure:
(define (adjoin-set x set)
  (cond ((null? set) (make-tree x '() '()))
        ((= x (entry set)) set)
        ((< x (entry set))
         (make-tree (entry set)
                    (adjoin-set x (left-branch set))
                    (right-branch set)))
        ((> x (entry set))
         (make-tree (entry set)
                    (left-branch set)
                    (adjoin-set x (right-branch set))))))
The above claim that searching the tree can be performed in a logarithmic number of steps rests on the
assumption that the tree is ‘‘balanced,’’ i.e., that the left and the right subtree of every tree have
approximately the same number of elements, so that each subtree contains about half the elements of
its parent. But how can we be certain that the trees we construct will be balanced? Even if we start
with a balanced tree, adding elements with adjoin-set may produce an unbalanced result. Since
the position of a newly adjoined element depends on how the element compares with the items already
in the set, we can expect that if we add elements ‘‘randomly’’ the tree will tend to be balanced on the
average. But this is not a guarantee. For example, if we start with an empty set and adjoin the numbers
1 through 7 in sequence we end up with the highly unbalanced tree shown in figure 2.17. In this tree
all the left subtrees are empty, so it has no advantage over a simple ordered list. One way to solve this
problem is to define an operation that transforms an arbitrary tree into a balanced tree with the same
elements. Then we can perform this transformation after every few adjoin-set operations to keep
our set in balance. There are also other ways to solve this problem, most of which involve designing
new data structures for which searching and insertion both can be done in Θ(log n) steps.40
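For instance, once the conversion procedures of exercises 2.63 and 2.64 below are in place, such a balancing operation could be written as the following sketch, which flattens the tree to an ordered list and rebuilds it in balanced form:

(define (balance tree)
  (list->tree (tree->list-2 tree)))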
Figure 2.17: Unbalanced tree produced by adjoining 1 through 7 in sequence.
Exercise 2.63. Each of the following two procedures converts a binary tree to a list.
(define (tree->list-1 tree)
  (if (null? tree)
      '()
      (append (tree->list-1 (left-branch tree))
              (cons (entry tree)
                    (tree->list-1 (right-branch tree))))))
(define (tree->list-2 tree)
  (define (copy-to-list tree result-list)
    (if (null? tree)
        result-list
        (copy-to-list (left-branch tree)
                      (cons (entry tree)
                            (copy-to-list (right-branch tree)
                                          result-list)))))
  (copy-to-list tree '()))
a. Do the two procedures produce the same result for every tree? If not, how do the results differ?
What lists do the two procedures produce for the trees in figure 2.16?
b. Do the two procedures have the same order of growth in the number of steps required to convert a
balanced tree with n elements to a list? If not, which one grows more slowly?
Exercise 2.64. The following procedure list->tree converts an ordered list to a balanced binary
tree. The helper procedure partial-tree takes as arguments an integer n and a list of at least n
elements and constructs a balanced tree containing the first n elements of the list. The result returned
by partial-tree is a pair (formed with cons) whose car is the constructed tree and whose cdr
is the list of elements not included in the tree.
(define (list->tree elements)
  (car (partial-tree elements (length elements))))
(define (partial-tree elts n)
  (if (= n 0)
      (cons '() elts)
      (let ((left-size (quotient (- n 1) 2)))
        (let ((left-result (partial-tree elts left-size)))
          (let ((left-tree (car left-result))
                (non-left-elts (cdr left-result))
                (right-size (- n (+ left-size 1))))
            (let ((this-entry (car non-left-elts))
                  (right-result (partial-tree (cdr non-left-elts)
                                              right-size)))
              (let ((right-tree (car right-result))
                    (remaining-elts (cdr right-result)))
                (cons (make-tree this-entry left-tree right-tree)
                      remaining-elts))))))))
a. Write a short paragraph explaining as clearly as you can how partial-tree works. Draw the
tree produced by list->tree for the list (1 3 5 7 9 11).
b. What is the order of growth in the number of steps required by list->tree to convert a list of n
elements?
Exercise 2.65. Use the results of exercises 2.63 and 2.64 to give Θ(n) implementations of
union-set and intersection-set for sets implemented as (balanced) binary trees.41
Sets and information retrieval
We have examined options for using lists to represent sets and have seen how the choice of
representation for a data object can have a large impact on the performance of the programs that use
the data. Another reason for concentrating on sets is that the techniques discussed here appear again
and again in applications involving information retrieval.
Consider a data base containing a large number of individual records, such as the personnel files for a
company or the transactions in an accounting system. A typical data-management system spends a
large amount of time accessing or modifying the data in the records and therefore requires an efficient
method for accessing records. This is done by identifying a part of each record to serve as an
identifying key. A key can be anything that uniquely identifies the record. For a personnel file, it might
be an employee’s ID number. For an accounting system, it might be a transaction number. Whatever
the key is, when we define the record as a data structure we should include a key selector procedure
that retrieves the key associated with a given record.
Now we represent the data base as a set of records. To locate the record with a given key we use a
procedure lookup, which takes as arguments a key and a data base and which returns the record that
has that key, or false if there is no such record. Lookup is implemented in almost the same way as
element-of-set?. For example, if the set of records is implemented as an unordered list, we
could use
(define (lookup given-key set-of-records)
(cond ((null? set-of-records) false)
((equal? given-key (key (car set-of-records)))
(car set-of-records))
(else (lookup given-key (cdr set-of-records)))))
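For example, if each record were represented as a list whose first item is the key (a representation chosen here purely for illustration, along with the hypothetical key selector below), we would expect:

(define (key record) (car record))   ; hypothetical key selector
(lookup 2 '((1 smith) (2 jones) (3 doe)))
(2 jones)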
Of course, there are better ways to represent large sets than as unordered lists. Information-retrieval
systems in which records have to be ‘‘randomly accessed’’ are typically implemented by a tree-based
method, such as the binary-tree representation discussed previously. In designing such a system the
methodology of data abstraction can be a great help. The designer can create an initial implementation
using a simple, straightforward representation such as unordered lists. This will be unsuitable for the
eventual system, but it can be useful in providing a ‘‘quick and dirty’’ data base with which to test the
rest of the system. Later on, the data representation can be modified to be more sophisticated. If the
data base is accessed in terms of abstract selectors and constructors, this change in representation will
not require any changes to the rest of the system.
Exercise 2.66. Implement the lookup procedure for the case where the set of records is structured as
a binary tree, ordered by the numerical values of the keys.
2.3.4 Example: Huffman Encoding Trees
This section provides practice in the use of list structure and data abstraction to manipulate sets and
trees. The application is to methods for representing data as sequences of ones and zeros (bits). For
example, the ASCII standard code used to represent text in computers encodes each character as a
sequence of seven bits. Using seven bits allows us to distinguish 2⁷, or 128, possible different
characters. In general, if we want to distinguish n different symbols, we will need to use log₂ n bits
per symbol. If all our messages are made up of the eight symbols A, B, C, D, E, F, G, and H, we can
choose a code with three bits per character, for example
A 000 C 010 E 100 G 110
B 001 D 011 F 101 H 111
With this code, the message
BACADAEAFABBAAAGAH
is encoded as the string of 54 bits
001000010000011000100000101000001001000000000110000111
Codes such as ASCII and the A-through-H code above are known as fixed-length codes, because they
represent each symbol in the message with the same number of bits. It is sometimes advantageous to
use variable-length codes, in which different symbols may be represented by different numbers of bits.
For example, Morse code does not use the same number of dots and dashes for each letter of the
alphabet. In particular, E, the most frequent letter, is represented by a single dot. In general, if our
messages are such that some symbols appear very frequently and some very rarely, we can encode
data more efficiently (i.e., using fewer bits per message) if we assign shorter codes to the frequent
symbols. Consider the following alternative code for the letters A through H:
A 0 C 1010 E 1100 G 1110
B 100 D 1011 F 1101 H 1111
With this code, the same message as above is encoded as the string
100010100101101100011010100100000111001111
This string contains 42 bits, so it saves more than 20% in space in comparison with the fixed-length
code shown above.
One of the difficulties of using a variable-length code is knowing when you have reached the end of a
symbol in reading a sequence of zeros and ones. Morse code solves this problem by using a special
separator code (in this case, a pause) after the sequence of dots and dashes for each letter. Another
solution is to design the code in such a way that no complete code for any symbol is the beginning (or
prefix) of the code for another symbol. Such a code is called a prefix code. In the example above, A is
encoded by 0 and B is encoded by 100, so no other symbol can have a code that begins with 0 or with
100.
In general, we can attain significant savings if we use variable-length prefix codes that take advantage
of the relative frequencies of the symbols in the messages to be encoded. One particular scheme for
doing this is called the Huffman encoding method, after its discoverer, David Huffman. A Huffman
code can be represented as a binary tree whose leaves are the symbols that are encoded. At each
non-leaf node of the tree there is a set containing all the symbols in the leaves that lie below the node.
In addition, each symbol at a leaf is assigned a weight (which is its relative frequency), and each
non-leaf node contains a weight that is the sum of all the weights of the leaves lying below it. The
weights are not used in the encoding or the decoding process. We will see below how they are used to
help construct the tree.
Figure 2.18: A Huffman encoding tree.
Figure 2.18 shows the Huffman tree for the A-through-H code given above. The weights at the leaves
indicate that the tree was designed for messages in which A appears with relative frequency 8, B with
relative frequency 3, and the other letters each with relative frequency 1.
Given a Huffman tree, we can find the encoding of any symbol by starting at the root and moving
down until we reach the leaf that holds the symbol. Each time we move down a left branch we add a 0
to the code, and each time we move down a right branch we add a 1. (We decide which branch to
follow by testing to see which branch either is the leaf node for the symbol or contains the symbol in
its set.) For example, starting from the root of the tree in figure 2.18, we arrive at the leaf for D by
following a right branch, then a left branch, then a right branch, then a right branch; hence, the code
for D is 1011.
To decode a bit sequence using a Huffman tree, we begin at the root and use the successive zeros and
ones of the bit sequence to determine whether to move down the left or the right branch. Each time we
come to a leaf, we have generated a new symbol in the message, at which point we start over from the
root of the tree to find the next symbol. For example, suppose we are given the tree above and the
sequence 10001010. Starting at the root, we move down the right branch, (since the first bit of the
string is 1), then down the left branch (since the second bit is 0), then down the left branch (since the
third bit is also 0). This brings us to the leaf for B, so the first symbol of the decoded message is B.
Now we start again at the root, and we make a left move because the next bit in the string is 0. This
brings us to the leaf for A. Then we start again at the root with the rest of the string 1010, so we move
right, left, right, left and reach C. Thus, the entire message is BAC.
Generating Huffman trees
Given an ‘‘alphabet’’ of symbols and their relative frequencies, how do we construct the ‘‘best’’ code?
(In other words, which tree will encode messages with the fewest bits?) Huffman gave an algorithm
for doing this and showed that the resulting code is indeed the best variable-length code for messages
where the relative frequency of the symbols matches the frequencies with which the code was
constructed. We will not prove this optimality of Huffman codes here, but we will show how Huffman
trees are constructed.42
The algorithm for generating a Huffman tree is very simple. The idea is to arrange the tree so that the
symbols with the lowest frequency appear farthest away from the root. Begin with the set of leaf
nodes, containing symbols and their frequencies, as determined by the initial data from which the code
is to be constructed. Now find two leaves with the lowest weights and merge them to produce a node
that has these two nodes as its left and right branches. The weight of the new node is the sum of the
two weights. Remove the two leaves from the original set and replace them by this new node. Now
continue this process. At each step, merge two nodes with the smallest weights, removing them from
the set and replacing them with a node that has these two as its left and right branches. The process
stops when there is only one node left, which is the root of the entire tree. Here is how the Huffman
tree of figure 2.18 was generated:
Initial leaves {(A 8) (B 3) (C 1) (D 1) (E 1) (F 1) (G 1) (H 1)}
Merge {(A 8) (B 3) ({C D} 2) (E 1) (F 1) (G 1) (H 1)}
Merge {(A 8) (B 3) ({C D} 2) ({E F} 2) (G 1) (H 1)}
Merge {(A 8) (B 3) ({C D} 2) ({E F} 2) ({G H} 2)}
Merge {(A 8) (B 3) ({C D} 2) ({E F G H} 4)}
Merge {(A 8) ({B C D} 5) ({E F G H} 4)}
Merge {(A 8) ({B C D E F G H} 9)}
Final merge {({A B C D E F G H} 17)}
The algorithm does not always specify a unique tree, because there may not be unique smallest-weight
nodes at each step. Also, the choice of the order in which the two nodes are merged (i.e., which will be
the right branch and which will be the left branch) is arbitrary.
Representing Huffman trees
In the exercises below we will work with a system that uses Huffman trees to encode and decode
messages and generates Huffman trees according to the algorithm outlined above. We will begin by
discussing how trees are represented.
Leaves of the tree are represented by a list consisting of the symbol leaf, the symbol at the leaf, and
the weight:
(define (make-leaf symbol weight)
  (list 'leaf symbol weight))
(define (leaf? object)
  (eq? (car object) 'leaf))
(define (symbol-leaf x) (cadr x))
(define (weight-leaf x) (caddr x))
A general tree will be a list of a left branch, a right branch, a set of symbols, and a weight. The set of
symbols will be simply a list of the symbols, rather than some more sophisticated set representation.
When we make a tree by merging two nodes, we obtain the weight of the tree as the sum of the
weights of the nodes, and the set of symbols as the union of the sets of symbols for the nodes. Since
our symbol sets are represented as lists, we can form the union by using the append procedure we
defined in section 2.2.1:
(define (make-code-tree left right)
(list left
right
(append (symbols left) (symbols right))
(+ (weight left) (weight right))))
If we make a tree in this way, we have the following selectors:
(define (left-branch tree) (car tree))
(define (right-branch tree) (cadr tree))
(define (symbols tree)
(if (leaf? tree)
(list (symbol-leaf tree))
(caddr tree)))
(define (weight tree)
(if (leaf? tree)
(weight-leaf tree)
(cadddr tree)))
The procedures symbols and weight must do something slightly different depending on whether
they are called with a leaf or a general tree. These are simple examples of generic procedures
(procedures that can handle more than one kind of data), which we will have much more to say about
in sections 2.4 and 2.5.
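For instance, merging two leaves with make-code-tree would produce the following list structure, and the generic selectors would then operate on it as expected:

(make-code-tree (make-leaf 'A 4) (make-leaf 'B 2))
((leaf a 4) (leaf b 2) (a b) 6)
(weight (make-code-tree (make-leaf 'A 4) (make-leaf 'B 2)))
6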
The decoding procedure
The following procedure implements the decoding algorithm. It takes as arguments a list of zeros and
ones, together with a Huffman tree.
(define (decode bits tree)
  (define (decode-1 bits current-branch)
    (if (null? bits)
        '()
        (let ((next-branch
               (choose-branch (car bits) current-branch)))
          (if (leaf? next-branch)
              (cons (symbol-leaf next-branch)
                    (decode-1 (cdr bits) tree))
              (decode-1 (cdr bits) next-branch)))))
  (decode-1 bits tree))
(define (choose-branch bit branch)
  (cond ((= bit 0) (left-branch branch))
        ((= bit 1) (right-branch branch))
        (else (error "bad bit -- CHOOSE-BRANCH" bit))))
The procedure decode-1 takes two arguments: the list of remaining bits and the current position in
the tree. It keeps moving ‘‘down’’ the tree, choosing a left or a right branch according to whether the
next bit in the list is a zero or a one. (This is done with the procedure choose-branch.) When it
reaches a leaf, it returns the symbol at that leaf as the next symbol in the message by consing it onto
the result of decoding the rest of the message, starting at the root of the tree. Note the error check in
the final clause of choose-branch, which complains if the procedure finds something other than a
zero or a one in the input data.
Sets of weighted elements
In our representation of trees, each non-leaf node contains a set of symbols, which we have
represented as a simple list. However, the tree-generating algorithm discussed above requires that we
also work with sets of leaves and trees, successively merging the two smallest items. Since we will be
required to repeatedly find the smallest item in a set, it is convenient to use an ordered representation
for this kind of set.
We will represent a set of leaves and trees as a list of elements, arranged in increasing order of weight.
The following adjoin-set procedure for constructing sets is similar to the one described in
exercise 2.61; however, items are compared by their weights, and the element being added to the set is
never already in it.
(define (adjoin-set x set)
(cond ((null? set) (list x))
((< (weight x) (weight (car set))) (cons x set))
(else (cons (car set)
(adjoin-set x (cdr set))))))
The following procedure takes a list of symbol-frequency pairs such as ((A 4) (B 2) (C 1) (D
1)) and constructs an initial ordered set of leaves, ready to be merged according to the Huffman
algorithm:
(define (make-leaf-set pairs)
  (if (null? pairs)
      '()
      (let ((pair (car pairs)))
        (adjoin-set (make-leaf (car pair)    ; symbol
                               (cadr pair))  ; frequency
                    (make-leaf-set (cdr pairs))))))
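For example, applied to the list of pairs shown above, make-leaf-set should produce the leaves in increasing order of weight:

(make-leaf-set '((A 4) (B 2) (C 1) (D 1)))
((leaf d 1) (leaf c 1) (leaf b 2) (leaf a 4))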
Exercise 2.67. Define an encoding tree and a sample message:
(define sample-tree
  (make-code-tree (make-leaf 'A 4)
                  (make-code-tree
                   (make-leaf 'B 2)
                   (make-code-tree (make-leaf 'D 1)
                                   (make-leaf 'C 1)))))
(define sample-message '(0 1 1 0 0 1 0 1 0 1 1 1 0))
Use the decode procedure to decode the message, and give the result.
Exercise 2.68. The encode procedure takes as arguments a message and a tree and produces the list
of bits that gives the encoded message.
(define (encode message tree)
  (if (null? message)
      '()
      (append (encode-symbol (car message) tree)
              (encode (cdr message) tree))))
Encode-symbol is a procedure, which you must write, that returns the list of bits that encodes a
given symbol according to a given tree. You should design encode-symbol so that it signals an
error if the symbol is not in the tree at all. Test your procedure by encoding the result you obtained in
exercise 2.67 with the sample tree and seeing whether it is the same as the original sample message.
Exercise 2.69. The following procedure takes as its argument a list of symbol-frequency pairs (where
no symbol appears in more than one pair) and generates a Huffman encoding tree according to the
Huffman algorithm.
(define (generate-huffman-tree pairs)
(successive-merge (make-leaf-set pairs)))
Make-leaf-set is the procedure given above that transforms the list of pairs into an ordered set of
leaves. Successive-merge is the procedure you must write, using make-code-tree to
successively merge the smallest-weight elements of the set until there is only one element left, which
is the desired Huffman tree. (This procedure is slightly tricky, but not really complicated. If you find
yourself designing a complex procedure, then you are almost certainly doing something wrong. You
can take significant advantage of the fact that we are using an ordered set representation.)
Exercise 2.70. The following eight-symbol alphabet with associated relative frequencies was
designed to efficiently encode the lyrics of 1950s rock songs. (Note that the ‘‘symbols’’ of an
‘‘alphabet’’ need not be individual letters.)
A 2 NA 16
BOOM 1 SHA 3
GET 2 YIP 9
JOB 2 WAH 1
Use generate-huffman-tree (exercise 2.69) to generate a corresponding Huffman tree, and use
encode (exercise 2.68) to encode the following message:
Get a job
Sha na na na na na na na na
Get a job
Sha na na na na na na na na
Wah yip yip yip yip yip yip yip yip yip
Sha boom
How many bits are required for the encoding? What is the smallest number of bits that would be
needed to encode this song if we used a fixed-length code for the eight-symbol alphabet?
Exercise 2.71. Suppose we have a Huffman tree for an alphabet of n symbols, and that the relative
frequencies of the symbols are 1, 2, 4, ..., 2ⁿ⁻¹. Sketch the tree for n=5; for n=10. In such a tree (for
general n) how many bits are required to encode the most frequent symbol? the least frequent symbol?
Exercise 2.72. Consider the encoding procedure that you designed in exercise 2.68. What is the order
of growth in the number of steps needed to encode a symbol? Be sure to include the number of steps
needed to search the symbol list at each node encountered. To answer this question in general is
difficult. Consider the special case where the relative frequencies of the n symbols are as described in
exercise 2.71, and give the order of growth (as a function of n) of the number of steps needed to
encode the most frequent and least frequent symbols in the alphabet.
32 Allowing quotation in a language wreaks havoc with the ability to reason about the language in
simple terms, because it destroys the notion that equals can be substituted for equals. For example,
three is one plus two, but the word ‘‘three’’ is not the phrase ‘‘one plus two.’’ Quotation is powerful
because it gives us a way to build expressions that manipulate other expressions (as we will see when
we write an interpreter in chapter 4). But allowing statements in a language that talk about other
statements in that language makes it very difficult to maintain any coherent principle of what ‘‘equals
can be substituted for equals’’ should mean. For example, if we know that the evening star is the
morning star, then from the statement ‘‘the evening star is Venus’’ we can deduce ‘‘the morning star is
Venus.’’ However, given that ‘‘John knows that the evening star is Venus’’ we cannot infer that ‘‘John
knows that the morning star is Venus.’’
33 The single quote is different from the double quote we have been using to enclose character strings
to be printed. Whereas the single quote can be used to denote lists or symbols, the double quote is used
only with character strings. In this book, the only use for character strings is as items to be printed.
34 Strictly, our use of the quotation mark violates the general rule that all compound expressions in
our language should be delimited by parentheses and look like lists. We can recover this consistency
by introducing a special form quote, which serves the same purpose as the quotation mark. Thus, we
would type (quote a) instead of 'a, and we would type (quote (a b c)) instead of '(a b
c). This is precisely how the interpreter works. The quotation mark is just a single-character
abbreviation for wrapping the next complete expression with quote to form (quote <expression>).
This is important because it maintains the principle that any expression seen by
the interpreter can be manipulated as a data object. For instance, we could construct the expression
(car '(a b c)), which is the same as (car (quote (a b c))), by evaluating
(list 'car (list 'quote '(a b c))).
35 We can consider two symbols to be ‘‘the same’’ if they consist of the same characters in the same
order. Such a definition skirts a deep issue that we are not yet ready to address: the meaning of
‘‘sameness’’ in a programming language. We will return to this in chapter 3 (section 3.1.3).
36 In practice, programmers use equal? to compare lists that contain numbers as well as symbols.
Numbers are not considered to be symbols. The question of whether two numerically equal numbers
(as tested by =) are also eq? is highly implementation-dependent. A better definition of equal?
(such as the one that comes as a primitive in Scheme) would also stipulate that if a and b are both
numbers, then a and b are equal? if they are numerically equal.
37 If we want to be more formal, we can specify ‘‘consistent with the interpretations given above’’ to
mean that the operations satisfy a collection of rules such as these:
For any set S and any object x, (element-of-set? x (adjoin-set x S)) is true
(informally: ‘‘Adjoining an object to a set produces a set that contains the object’’).
For any sets S and T and any object x, (element-of-set? x (union-set S T)) is
equal to (or (element-of-set? x S) (element-of-set? x T)) (informally:
‘‘The elements of (union-set S T) are the elements that are in S or in T’’).
For any object x, (element-of-set? x '()) is false (informally: ‘‘No object is an
element of the empty set’’).
38 Halving the size of the problem at each step is the distinguishing characteristic of logarithmic
growth, as we saw with the fast-exponentiation algorithm of section 1.2.4 and the half-interval search
method of section 1.3.3.
39 We are representing sets in terms of trees, and trees in terms of lists — in effect, a data abstraction
built upon a data abstraction. We can regard the procedures entry, left-branch,
right-branch, and make-tree as a way of isolating the abstraction of a ‘‘binary tree’’ from the
particular way we might wish to represent such a tree in terms of list structure.
40 Examples of such structures include B-trees and red-black trees. There is a large literature on data
structures devoted to this problem. See Cormen, Leiserson, and Rivest 1990.
41 Exercises 2.63-2.65 are due to Paul Hilfinger.
42 See Hamming 1980 for a discussion of the mathematical properties of Huffman codes.
2.4 Multiple Representations for Abstract Data
We have introduced data abstraction, a methodology for structuring systems in such a way that much
of a program can be specified independent of the choices involved in implementing the data objects
that the program manipulates. For example, we saw in section 2.1.1 how to separate the task of
designing a program that uses rational numbers from the task of implementing rational numbers in
terms of the computer language’s primitive mechanisms for constructing compound data. The key idea
was to erect an abstraction barrier — in this case, the selectors and constructors for rational numbers
(make-rat, numer, denom) — that isolates the way rational numbers are used from their underlying
representation in terms of list structure. A similar abstraction barrier isolates the details of the
procedures that perform rational arithmetic (add-rat, sub-rat, mul-rat, and div-rat) from
the ‘‘higher-level’’ procedures that use rational numbers. The resulting program has the structure
shown in figure 2.1.
These data-abstraction barriers are powerful tools for controlling complexity. By isolating the
underlying representations of data objects, we can divide the task of designing a large program into
smaller tasks that can be performed separately. But this kind of data abstraction is not yet powerful
enough, because it may not always make sense to speak of ‘‘the underlying representation’’ for a data
object.
For one thing, there might be more than one useful representation for a data object, and we might like
to design systems that can deal with multiple representations. To take a simple example, complex
numbers may be represented in two almost equivalent ways: in rectangular form (real and imaginary
parts) and in polar form (magnitude and angle). Sometimes rectangular form is more appropriate and
sometimes polar form is more appropriate. Indeed, it is perfectly plausible to imagine a system in
which complex numbers are represented in both ways, and in which the procedures for manipulating
complex numbers work with either representation.
More importantly, programming systems are often designed by many people working over extended
periods of time, subject to requirements that change over time. In such an environment, it is simply not
possible for everyone to agree in advance on choices of data representation. So in addition to the
data-abstraction barriers that isolate representation from use, we need abstraction barriers that isolate
different design choices from each other and permit different choices to coexist in a single program.
Furthermore, since large programs are often created by combining pre-existing modules that were
designed in isolation, we need conventions that permit programmers to incorporate modules into larger
systems additively, that is, without having to redesign or reimplement these modules.
In this section, we will learn how to cope with data that may be represented in different ways by
different parts of a program. This requires constructing generic procedures — procedures that can
operate on data that may be represented in more than one way. Our main technique for building
generic procedures will be to work in terms of data objects that have type tags, that is, data objects that
include explicit information about how they are to be processed. We will also discuss data-directed
programming, a powerful and convenient implementation strategy for additively assembling systems
with generic operations.
We begin with the simple complex-number example. We will see how type tags and data-directed
style enable us to design separate rectangular and polar representations for complex numbers while
maintaining the notion of an abstract ‘‘complex-number’’ data object. We will accomplish this by
defining arithmetic procedures for complex numbers (add-complex, sub-complex,
mul-complex, and div-complex) in terms of generic selectors that access parts of a complex
number independent of how the number is represented. The resulting complex-number system, as
shown in figure 2.19, contains two different kinds of abstraction barriers. The ‘‘horizontal’’
abstraction barriers play the same role as the ones in figure 2.1. They isolate ‘‘higher-level’’
operations from ‘‘lower-level’’ representations. In addition, there is a ‘‘vertical’’ barrier that gives us
the ability to separately design and install alternative representations.
Figure 2.19: Data-abstraction barriers in the complex-number system.
In section 2.5 we will show how to use type tags and data-directed style to develop a generic
arithmetic package. This provides procedures (add, mul, and so on) that can be used to manipulate all
sorts of ‘‘numbers’’ and can be easily extended when a new kind of number is needed. In
section 2.5.3, we’ll show how to use generic arithmetic in a system that performs symbolic algebra.
2.4.1 Representations for Complex Numbers
We will develop a system that performs arithmetic operations on complex numbers as a simple but
unrealistic example of a program that uses generic operations. We begin by discussing two plausible
representations for complex numbers as ordered pairs: rectangular form (real part and imaginary part)
and polar form (magnitude and angle).43 Section 2.4.2 will show how both representations can be
made to coexist in a single system through the use of type tags and generic operations.
Like rational numbers, complex numbers are naturally represented as ordered pairs. The set of
complex numbers can be thought of as a two-dimensional space with two orthogonal axes, the ‘‘real’’
axis and the ‘‘imaginary’’ axis. (See figure 2.20.) From this point of view, the complex number z = x +
iy (where i² = -1) can be thought of as the point in the plane whose real coordinate is x and whose
imaginary coordinate is y. Addition of complex numbers reduces in this representation to addition of
coordinates:

Real-part(z₁ + z₂) = Real-part(z₁) + Real-part(z₂),
Imaginary-part(z₁ + z₂) = Imaginary-part(z₁) + Imaginary-part(z₂).

When multiplying complex numbers, it is more natural to think in terms of representing a complex
number in polar form, as a magnitude and an angle (r and A in figure 2.20). The product of two
complex numbers is the vector obtained by stretching one complex number by the length of the other
and then rotating it through the angle of the other:

Magnitude(z₁ · z₂) = Magnitude(z₁) · Magnitude(z₂),
Angle(z₁ · z₂) = Angle(z₁) + Angle(z₂).

Thus, there are two different representations for complex numbers, which are appropriate for different
operations. Yet, from the viewpoint of someone writing a program that uses complex numbers, the
principle of data abstraction suggests that all the operations for manipulating complex numbers should
be available regardless of which representation is used by the computer. For example, it is often useful
to be able to find the magnitude of a complex number that is specified by rectangular coordinates.
Similarly, it is often useful to be able to determine the real part of a complex number that is specified
by polar coordinates.
Figure 2.20: Complex numbers as points in the plane.
To design such a system, we can follow the same data-abstraction strategy we followed in designing
the rational-number package in section 2.1.1. Assume that the operations on complex numbers are
implemented in terms of four selectors: real-part, imag-part, magnitude, and angle. Also
assume that we have two procedures for constructing complex numbers: make-from-real-imag
returns a complex number with specified real and imaginary parts, and make-from-mag-ang
returns a complex number with specified magnitude and angle. These procedures have the property
that, for any complex number z, both
(make-from-real-imag (real-part z) (imag-part z))
and
(make-from-mag-ang (magnitude z) (angle z))
produce complex numbers that are equal to z.
Using these constructors and selectors, we can implement arithmetic on complex numbers using the
‘‘abstract data’’ specified by the constructors and selectors, just as we did for rational numbers in
section 2.1.1. As shown in the formulas above, we can add and subtract complex numbers in terms of
real and imaginary parts while multiplying and dividing complex numbers in terms of magnitudes and
angles:
(define (add-complex z1 z2)
(make-from-real-imag (+ (real-part z1) (real-part z2))
(+ (imag-part z1) (imag-part z2))))
(define (sub-complex z1 z2)
(make-from-real-imag (- (real-part z1) (real-part z2))
(- (imag-part z1) (imag-part z2))))
(define (mul-complex z1 z2)
(make-from-mag-ang (* (magnitude z1) (magnitude z2))
(+ (angle z1) (angle z2))))
(define (div-complex z1 z2)
(make-from-mag-ang (/ (magnitude z1) (magnitude z2))
(- (angle z1) (angle z2))))
To complete the complex-number package, we must choose a representation and we must implement
the constructors and selectors in terms of primitive numbers and primitive list structure. There are two
obvious ways to do this: We can represent a complex number in ‘‘rectangular form’’ as a pair (real
part, imaginary part) or in ‘‘polar form’’ as a pair (magnitude, angle). Which shall we choose?
In order to make the different choices concrete, imagine that there are two programmers, Ben
Bitdiddle and Alyssa P. Hacker, who are independently designing representations for the
complex-number system. Ben chooses to represent complex numbers in rectangular form. With this
choice, selecting the real and imaginary parts of a complex number is straightforward, as is
constructing a complex number with given real and imaginary parts. To find the magnitude and the
angle, or to construct a complex number with a given magnitude and angle, he uses the trigonometric
relations

x = r cos A,    y = r sin A,
r = √(x² + y²),    A = arctan(y, x),

which relate the real and imaginary parts (x, y) to the magnitude and the angle (r, A).44 Ben’s
representation is therefore given by the following selectors and constructors:
(define (real-part z) (car z))
(define (imag-part z) (cdr z))
(define (magnitude z)
(sqrt (+ (square (real-part z)) (square (imag-part z)))))
(define (angle z)
(atan (imag-part z) (real-part z)))
(define (make-from-real-imag x y) (cons x y))
(define (make-from-mag-ang r a)
(cons (* r (cos a)) (* r (sin a))))
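For example, under Ben's representation we would expect

(magnitude (make-from-real-imag 3 4))
5

since the magnitude of 3 + 4i is the square root of 3² + 4², which is 5.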
Alyssa, in contrast, chooses to represent complex numbers in polar form. For her, selecting the
magnitude and angle is straightforward, but she has to use the trigonometric relations to obtain the real
and imaginary parts. Alyssa’s representation is:
(define (real-part z)
(* (magnitude z) (cos (angle z))))
(define (imag-part z)
(* (magnitude z) (sin (angle z))))
(define (magnitude z) (car z))
(define (angle z) (cdr z))
(define (make-from-real-imag x y)
(cons (sqrt (+ (square x) (square y)))
(atan y x)))
(define (make-from-mag-ang r a) (cons r a))
The discipline of data abstraction ensures that the same implementation of add-complex,
sub-complex, mul-complex, and div-complex will work with either Ben’s representation or
Alyssa’s representation.
2.4.2 Tagged data
One way to view data abstraction is as an application of the ‘‘principle of least commitment.’’ In
implementing the complex-number system in section 2.4.1, we can use either Ben’s rectangular
representation or Alyssa’s polar representation. The abstraction barrier formed by the selectors and
constructors permits us to defer to the last possible moment the choice of a concrete representation for
our data objects and thus retain maximum flexibility in our system design.
The principle of least commitment can be carried to even further extremes. If we desire, we can
maintain the ambiguity of representation even after we have designed the selectors and constructors,
and elect to use both Ben’s representation and Alyssa’s representation. If both representations are
included in a single system, however, we will need some way to distinguish data in polar form from
data in rectangular form. Otherwise, if we were asked, for instance, to find the magnitude of the
pair (3,4), we wouldn’t know whether to answer 5 (interpreting the number in rectangular form) or 3
(interpreting the number in polar form). A straightforward way to accomplish this distinction is to
include a type tag — the symbol rectangular or polar — as part of each complex number. Then
when we need to manipulate a complex number we can use the tag to decide which selector to apply.
In order to manipulate tagged data, we will assume that we have procedures type-tag and
contents that extract from a data object the tag and the actual contents (the polar or rectangular
coordinates, in the case of a complex number). We will also postulate a procedure attach-tag that
takes a tag and contents and produces a tagged data object. A straightforward way to implement this is
to use ordinary list structure:
(define (attach-tag type-tag contents)
  (cons type-tag contents))
(define (type-tag datum)
  (if (pair? datum)
      (car datum)
      (error "Bad tagged datum -- TYPE-TAG" datum)))
(define (contents datum)
  (if (pair? datum)
      (cdr datum)
      (error "Bad tagged datum -- CONTENTS" datum)))
Using these procedures, we can define predicates rectangular? and polar?, which recognize
polar and rectangular numbers, respectively:
(define (rectangular? z)
  (eq? (type-tag z) 'rectangular))
(define (polar? z)
  (eq? (type-tag z) 'polar))
With type tags, Ben and Alyssa can now modify their code so that their two different representations
can coexist in the same system. Whenever Ben constructs a complex number, he tags it as rectangular.
Whenever Alyssa constructs a complex number, she tags it as polar. In addition, Ben and Alyssa must
make sure that the names of their procedures do not conflict. One way to do this is for Ben to append
the suffix rectangular to the name of each of his representation procedures and for Alyssa to
append polar to the names of hers. Here is Ben’s revised rectangular representation from
section 2.4.1:
(define (real-part-rectangular z) (car z))
(define (imag-part-rectangular z) (cdr z))
(define (magnitude-rectangular z)
  (sqrt (+ (square (real-part-rectangular z))
           (square (imag-part-rectangular z)))))
(define (angle-rectangular z)
  (atan (imag-part-rectangular z)
        (real-part-rectangular z)))
(define (make-from-real-imag-rectangular x y)
  (attach-tag 'rectangular (cons x y)))
(define (make-from-mag-ang-rectangular r a)
  (attach-tag 'rectangular
              (cons (* r (cos a)) (* r (sin a)))))
and here is Alyssa’s revised polar representation:
(define (real-part-polar z)
  (* (magnitude-polar z) (cos (angle-polar z))))
(define (imag-part-polar z)
  (* (magnitude-polar z) (sin (angle-polar z))))
(define (magnitude-polar z) (car z))
(define (angle-polar z) (cdr z))
(define (make-from-real-imag-polar x y)
  (attach-tag 'polar
              (cons (sqrt (+ (square x) (square y)))
                    (atan y x))))
(define (make-from-mag-ang-polar r a)
  (attach-tag 'polar (cons r a)))
Each generic selector is implemented as a procedure that checks the tag of its argument and calls the
appropriate procedure for handling data of that type. For example, to obtain the real part of a complex
number, real-part examines the tag to determine whether to use Ben’s
real-part-rectangular or Alyssa’s real-part-polar. In either case, we use contents
to extract the bare, untagged datum and send this to the rectangular or polar procedure as required:
(define (real-part z)
  (cond ((rectangular? z)
         (real-part-rectangular (contents z)))
        ((polar? z)
         (real-part-polar (contents z)))
        (else (error "Unknown type -- REAL-PART" z))))
(define (imag-part z)
  (cond ((rectangular? z)
         (imag-part-rectangular (contents z)))
        ((polar? z)
         (imag-part-polar (contents z)))
        (else (error "Unknown type -- IMAG-PART" z))))
(define (magnitude z)
  (cond ((rectangular? z)
         (magnitude-rectangular (contents z)))
        ((polar? z)
         (magnitude-polar (contents z)))
        (else (error "Unknown type -- MAGNITUDE" z))))
(define (angle z)
  (cond ((rectangular? z)
         (angle-rectangular (contents z)))
        ((polar? z)
         (angle-polar (contents z)))
        (else (error "Unknown type -- ANGLE" z))))
To implement the complex-number arithmetic operations, we can use the same procedures
add-complex, sub-complex, mul-complex, and div-complex from section 2.4.1, because
the selectors they call are generic, and so will work with either representation. For example, the
procedure add-complex is still
(define (add-complex z1 z2)
(make-from-real-imag (+ (real-part z1) (real-part z2))
(+ (imag-part z1) (imag-part z2))))
Finally, we must choose whether to construct complex numbers using Ben’s representation or Alyssa’s
representation. One reasonable choice is to construct rectangular numbers whenever we have real and
imaginary parts and to construct polar numbers whenever we have magnitudes and angles:
(define (make-from-real-imag x y)
(make-from-real-imag-rectangular x y))
(define (make-from-mag-ang r a)
(make-from-mag-ang-polar r a))
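For example, once everything is installed, numbers built in either representation answer the same generic selectors:

(magnitude (make-from-real-imag 3 4))   ; dispatches to Ben's magnitude-rectangular
5
(magnitude (make-from-mag-ang 5 2))     ; dispatches to Alyssa's magnitude-polar
5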
Figure 2.21: Structure of the generic complex-arithmetic system.
The resulting complex-number system has the structure shown in figure 2.21. The system has been
decomposed into three relatively independent parts: the complex-number-arithmetic operations,
Alyssa’s polar implementation, and Ben’s rectangular implementation. The polar and rectangular
implementations could have been written by Ben and Alyssa working separately, and both of these can
be used as underlying representations by a third programmer implementing the complex-arithmetic
procedures in terms of the abstract constructor/selector interface.
Since each data object is tagged with its type, the selectors operate on the data in a generic manner.
That is, each selector is defined to have a behavior that depends upon the particular type of data it is
applied to. Notice the general mechanism for interfacing the separate representations: Within a given
representation implementation (say, Alyssa’s polar package) a complex number is an untyped pair
(magnitude, angle). When a generic selector operates on a number of polar type, it strips off the tag
and passes the contents on to Alyssa’s code. Conversely, when Alyssa constructs a number for general
use, she tags it with a type so that it can be appropriately recognized by the higher-level procedures.
This discipline of stripping off and attaching tags as data objects are passed from level to level can be
an important organizational strategy, as we shall see in section 2.5.
2.4.3 Data-Directed Programming and Additivity
The general strategy of checking the type of a datum and calling an appropriate procedure is called
dispatching on type. This is a powerful strategy for obtaining modularity in system design. On the
other hand, implementing the dispatch as in section 2.4.2 has two significant weaknesses. One
weakness is that the generic interface procedures (real-part, imag-part, magnitude, and
angle) must know about all the different representations. For instance, suppose we wanted to
incorporate a new representation for complex numbers into our complex-number system. We would
need to identify this new representation with a type, and then add a clause to each of the generic
interface procedures to check for the new type and apply the appropriate selector for that
representation.
Another weakness of the technique is that even though the individual representations can be designed
separately, we must guarantee that no two procedures in the entire system have the same name. This is
why Ben and Alyssa had to change the names of their original procedures from section 2.4.1.
The issue underlying both of these weaknesses is that the technique for implementing generic
interfaces is not additive. The person implementing the generic selector procedures must modify those
procedures each time a new representation is installed, and the people interfacing the individual
representations must modify their code to avoid name conflicts. In each of these cases, the changes
that must be made to the code are straightforward, but they must be made nonetheless, and this is a
source of inconvenience and error. This is not much of a problem for the complex-number system as it
stands, but suppose there were not two but hundreds of different representations for complex numbers.
And suppose that there were many generic selectors to be maintained in the abstract-data interface.
Suppose, in fact, that no one programmer knew all the interface procedures or all the representations.
The problem is real and must be addressed in such programs as large-scale data-base-management
systems.
What we need is a means for modularizing the system design even further. This is provided by the
programming technique known as data-directed programming. To understand how data-directed
programming works, begin with the observation that whenever we deal with a set of generic operations
that are common to a set of different types we are, in effect, dealing with a two-dimensional table that
contains the possible operations on one axis and the possible types on the other axis. The entries in the
table are the procedures that implement each operation for each type of argument presented. In the
complex-number system developed in the previous section, the correspondence between operation
name, data type, and actual procedure was spread out among the various conditional clauses in the
generic interface procedures. But the same information could have been organized in a table, as shown
in figure 2.22.
Data-directed programming is the technique of designing programs to work with such a table directly.
Previously, we implemented the mechanism that interfaces the complex-arithmetic code with the two
representation packages as a set of procedures that each perform an explicit dispatch on type. Here we
will implement the interface as a single procedure that looks up the combination of the operation name
and argument type in the table to find the correct procedure to apply, and then applies it to the contents
of the argument. If we do this, then to add a new representation package to the system we need not
change any existing procedures; we need only add new entries to the table.
Figure 2.22: Table of operations for the complex-number system.
To implement this plan, assume that we have two procedures, put and get, for manipulating the
operation-and-type table:
(put <op> <type> <item>)
installs the <item> in the table, indexed by the <op> and the <type>.
(get <op> <type>)
looks up the <op>, <type> entry in the table and returns the item found there. If no item is
found, get returns false.
For now, we can assume that put and get are included in our language. In chapter 3 (section 3.3.3,
exercise 3.24) we will see how to implement these and other operations for manipulating tables.
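For readers who would like to experiment before then, here is a minimal sketch of put and get built on a global association list. The name *op-table* and its format are our own scaffolding, chosen for brevity rather than efficiency, and are not the table implementation developed in section 3.3.3:
(define *op-table* '())   ; a list of ((op type) item) entries

(define (put op type item)
  (set! *op-table*
        (cons (list (list op type) item) *op-table*)))

(define (get op type)
  (let ((entry (assoc (list op type) *op-table*)))
    (if entry
        (cadr entry)
        #f)))   ; false when no entry is found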
Here is how data-directed programming can be used in the complex-number system. Ben, who
developed the rectangular representation, implements his code just as he did originally. He defines a
collection of procedures, or a package, and interfaces these to the rest of the system by adding entries
to the table that tell the system how to operate on rectangular numbers. This is accomplished by calling
the following procedure:
(define (install-rectangular-package)
;; internal procedures
(define (real-part z) (car z))
(define (imag-part z) (cdr z))
(define (make-from-real-imag x y) (cons x y))
(define (magnitude z)
(sqrt (+ (square (real-part z))
(square (imag-part z)))))
(define (angle z)
(atan (imag-part z) (real-part z)))
(define (make-from-mag-ang r a)
(cons (* r (cos a)) (* r (sin a))))
;; interface to the rest of the system
(define (tag x) (attach-tag ’rectangular x))
(put ’real-part ’(rectangular) real-part)
(put ’imag-part ’(rectangular) imag-part)
(put ’magnitude ’(rectangular) magnitude)
(put ’angle ’(rectangular) angle)
(put ’make-from-real-imag ’rectangular
(lambda (x y) (tag (make-from-real-imag x y))))
(put ’make-from-mag-ang ’rectangular
(lambda (r a) (tag (make-from-mag-ang r a))))
’done)
Notice that the internal procedures here are the same procedures from section 2.4.1 that Ben wrote
when he was working in isolation. No changes are necessary in order to interface them to the rest of
the system. Moreover, since these procedure definitions are internal to the installation procedure, Ben
needn’t worry about name conflicts with other procedures outside the rectangular package. To
interface these to the rest of the system, Ben installs his real-part procedure under the operation
name real-part and the type (rectangular), and similarly for the other selectors.45 The
interface also defines the constructors to be used by the external system.46 These are identical to
Ben’s internally defined constructors, except that they attach the tag.
Alyssa’s polar package is analogous:
(define (install-polar-package)
;; internal procedures
(define (magnitude z) (car z))
(define (angle z) (cdr z))
(define (make-from-mag-ang r a) (cons r a))
(define (real-part z)
(* (magnitude z) (cos (angle z))))
(define (imag-part z)
(* (magnitude z) (sin (angle z))))
(define (make-from-real-imag x y)
(cons (sqrt (+ (square x) (square y)))
(atan y x)))
;; interface to the rest of the system
(define (tag x) (attach-tag ’polar x))
(put ’real-part ’(polar) real-part)
(put ’imag-part ’(polar) imag-part)
(put ’magnitude ’(polar) magnitude)
(put ’angle ’(polar) angle)
(put ’make-from-real-imag ’polar
(lambda (x y) (tag (make-from-real-imag x y))))
(put ’make-from-mag-ang ’polar
(lambda (r a) (tag (make-from-mag-ang r a))))
’done)
Even though Ben and Alyssa both still use their original procedures defined with the same names as
each other’s (e.g., real-part), these definitions are now internal to different procedures (see
section 1.1.8), so there is no name conflict.
The complex-arithmetic selectors access the table by means of a general ‘‘operation’’ procedure called
apply-generic, which applies a generic operation to some arguments. Apply-generic looks
in the table under the name of the operation and the types of the arguments and applies the resulting
procedure if one is present:47
(define (apply-generic op . args)
(let ((type-tags (map type-tag args)))
(let ((proc (get op type-tags)))
(if proc
(apply proc (map contents args))
(error
"No method for these types -- APPLY-GENERIC"
(list op type-tags))))))
Using apply-generic, we can define our generic selectors as follows:
(define (real-part z) (apply-generic ’real-part z))
(define (imag-part z) (apply-generic ’imag-part z))
(define (magnitude z) (apply-generic ’magnitude z))
(define (angle z) (apply-generic ’angle z))
Observe that these do not change at all if a new representation is added to the system.
We can also extract from the table the constructors to be used by the programs external to the packages
in making complex numbers from real and imaginary parts and from magnitudes and angles. As in
section 2.4.2, we construct rectangular numbers whenever we have real and imaginary parts, and polar
numbers whenever we have magnitudes and angles:
(define (make-from-real-imag x y)
((get ’make-from-real-imag ’rectangular) x y))
(define (make-from-mag-ang r a)
((get ’make-from-mag-ang ’polar) r a))
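Assuming put and get behave as described, a hypothetical interaction with the completed system looks like this:
(install-rectangular-package)
(install-polar-package)

(define z (make-from-real-imag 3 4))
(real-part z)                      ; 3
(magnitude z)                      ; 5
(angle (make-from-mag-ang 5 2))    ; 2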
Exercise 2.73. Section 2.3.2 described a program that performs symbolic differentiation:
(define (deriv exp var)
(cond ((number? exp) 0)
((variable? exp) (if (same-variable? exp var) 1 0))
((sum? exp)
(make-sum (deriv (addend exp) var)
(deriv (augend exp) var)))
((product? exp)
(make-sum
(make-product (multiplier exp)
(deriv (multiplicand exp) var))
(make-product (deriv (multiplier exp) var)
(multiplicand exp))))
(else (error "unknown expression type -- DERIV" exp))))
We can regard this program as performing a dispatch on the type of the expression to be differentiated.
In this situation the ‘‘type tag’’ of the datum is the algebraic operator symbol (such as +) and the
operation being performed is deriv. We can transform this program into data-directed style by
rewriting the basic derivative procedure as
(define (deriv exp var)
(cond ((number? exp) 0)
((variable? exp) (if (same-variable? exp var) 1 0))
(else ((get ’deriv (operator exp)) (operands exp)
var))))
(define (operator exp) (car exp))
(define (operands exp) (cdr exp))
a. Explain what was done above. Why can’t we assimilate the predicates number? and
same-variable? into the data-directed dispatch?
b. Write the procedures for derivatives of sums and products, and the auxiliary code required to install
them in the table used by the program above.
c. Choose any additional differentiation rule that you like, such as the one for exponents
(exercise 2.56), and install it in this data-directed system.
d. In this simple algebraic manipulator the type of an expression is the algebraic operator that binds it
together. Suppose, however, we indexed the procedures in the opposite way, so that the dispatch line
in deriv looked like
((get (operator exp) ’deriv) (operands exp) var)
What corresponding changes to the derivative system are required?
Exercise 2.74. Insatiable Enterprises, Inc., is a highly decentralized conglomerate company consisting
of a large number of independent divisions located all over the world. The company’s computer
facilities have just been interconnected by means of a clever network-interfacing scheme that makes
the entire network appear to any user to be a single computer. Insatiable’s president, in her first
attempt to exploit the ability of the network to extract administrative information from division files, is
dismayed to discover that, although all the division files have been implemented as data structures in
Scheme, the particular data structure used varies from division to division. A meeting of division
managers is hastily called to search for a strategy to integrate the files that will satisfy headquarters’
needs while preserving the existing autonomy of the divisions.
Show how such a strategy can be implemented with data-directed programming. As an example,
suppose that each division’s personnel records consist of a single file, which contains a set of records
keyed on employees’ names. The structure of the set varies from division to division. Furthermore,
each employee’s record is itself a set (structured differently from division to division) that contains
information keyed under identifiers such as address and salary. In particular:
a. Implement for headquarters a get-record procedure that retrieves a specified employee’s record
from a specified personnel file. The procedure should be applicable to any division’s file. Explain how
the individual divisions’ files should be structured. In particular, what type information must be
supplied?
b. Implement for headquarters a get-salary procedure that returns the salary information from a
given employee’s record from any division’s personnel file. How should the record be structured in
order to make this operation work?
c. Implement for headquarters a find-employee-record procedure. This should search all the
divisions’ files for the record of a given employee and return the record. Assume that this procedure
takes as arguments an employee’s name and a list of all the divisions’ files.
d. When Insatiable takes over a new company, what changes must be made in order to incorporate the
new personnel information into the central system?
Message passing
The key idea of data-directed programming is to handle generic operations in programs by dealing
explicitly with operation-and-type tables, such as the table in figure 2.22. The style of programming
we used in section 2.4.2 organized the required dispatching on type by having each operation take care
of its own dispatching. In effect, this decomposes the operation-and-type table into rows, with each
generic operation procedure representing a row of the table.
An alternative implementation strategy is to decompose the table into columns and, instead of using
‘‘intelligent operations’’ that dispatch on data types, to work with ‘‘intelligent data objects’’ that
dispatch on operation names. We can do this by arranging things so that a data object, such as a
rectangular number, is represented as a procedure that takes as input the required operation name and
performs the operation indicated. In such a discipline, make-from-real-imag could be written as
(define (make-from-real-imag x y)
(define (dispatch op)
(cond ((eq? op ’real-part) x)
((eq? op ’imag-part) y)
((eq? op ’magnitude)
(sqrt (+ (square x) (square y))))
((eq? op ’angle) (atan y x))
(else
(error "Unknown op -- MAKE-FROM-REAL-IMAG" op))))
dispatch)
The corresponding apply-generic procedure, which applies a generic operation to an argument,
now simply feeds the operation’s name to the data object and lets the object do the work:48
(define (apply-generic op arg) (arg op))
Note that the value returned by make-from-real-imag is a procedure — the internal dispatch
procedure. This is the procedure that is invoked when apply-generic requests an operation to be
performed.
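Here is a hypothetical interaction in this style. Notice that no table is consulted; the object itself decides how to answer each message:
(define z (make-from-real-imag 3 4))

(apply-generic 'real-part z)   ; 3
(apply-generic 'magnitude z)   ; 5
(z 'angle)                     ; (atan 4 3), approximately .9273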
This style of programming is called message passing. The name comes from the image that a data
object is an entity that receives the requested operation name as a ‘‘message.’’ We have already seen
an example of message passing in section 2.1.3, where we saw how cons, car, and cdr could be
defined with no data objects but only procedures. Here we see that message passing is not a
mathematical trick but a useful technique for organizing systems with generic operations. In the
remainder of this chapter we will continue to use data-directed programming, rather than message
passing, to discuss generic arithmetic operations. In chapter 3 we will return to message passing, and
we will see that it can be a powerful tool for structuring simulation programs.
Exercise 2.75. Implement the constructor make-from-mag-ang in message-passing style. This
procedure should be analogous to the make-from-real-imag procedure given above.
Exercise 2.76. As a large system with generic operations evolves, new types of data objects or new
operations may be needed. For each of the three strategies — generic operations with explicit dispatch,
data-directed style, and message-passing style — describe the changes that must be made to a system in
order to add new types or new operations. Which organization would be most appropriate for a system
in which new types must often be added? Which would be most appropriate for a system in which new
operations must often be added?
43 In actual computational systems, rectangular form is preferable to polar form most of the time
because of roundoff errors in conversion between rectangular and polar form. This is why the
complex-number example is unrealistic. Nevertheless, it provides a clear illustration of the design of a
system using generic operations and a good introduction to the more substantial systems to be
developed later in this chapter.
44 The arctangent function referred to here, computed by Scheme’s atan procedure, is defined so as
to take two arguments y and x and to return the angle whose tangent is y/x. The signs of the arguments
determine the quadrant of the angle.
45 We use the list (rectangular) rather than the symbol rectangular to allow for the
possibility of operations with multiple arguments, not all of the same type.
46 The type the constructors are installed under needn’t be a list because a constructor is always used
to make an object of one particular type.
47 Apply-generic uses the dotted-tail notation described in exercise 2.20, because different
generic operations may take different numbers of arguments. In apply-generic, op has as its
value the first argument to apply-generic and args has as its value a list of the remaining
arguments.
Apply-generic also uses the primitive procedure apply, which takes two arguments, a procedure
and a list. Apply applies the procedure, using the elements in the list as arguments. For example,
(apply + (list 1 2 3 4))
returns 10.
48 One limitation of this organization is that it permits only generic procedures of one argument.
2.5 Systems with Generic Operations
In the previous section, we saw how to design systems in which data objects can be represented in
more than one way. The key idea is to link the code that specifies the data operations to the several
representations by means of generic interface procedures. Now we will see how to use this same idea
not only to define operations that are generic over different representations but also to define
operations that are generic over different kinds of arguments. We have already seen several different
packages of arithmetic operations: the primitive arithmetic (+, -, *, /) built into our language, the
rational-number arithmetic (add-rat, sub-rat, mul-rat, div-rat) of section 2.1.1, and the
complex-number arithmetic that we implemented in section 2.4.3. We will now use data-directed
techniques to construct a package of arithmetic operations that incorporates all the arithmetic packages
we have already constructed.
Figure 2.23 shows the structure of the system we shall build. Notice the abstraction barriers. From the
perspective of someone using ‘‘numbers,’’ there is a single procedure add that operates on whatever
numbers are supplied. Add is part of a generic interface that allows the separate ordinary-arithmetic,
rational-arithmetic, and complex-arithmetic packages to be accessed uniformly by programs that use
numbers. Any individual arithmetic package (such as the complex package) may itself be accessed
through generic procedures (such as add-complex) that combine packages designed for different
representations (such as rectangular and polar). Moreover, the structure of the system is additive, so
that one can design the individual arithmetic packages separately and combine them to produce a
generic arithmetic system.
Figure 2.23: Generic arithmetic system.
2.5.1 Generic Arithmetic Operations
The task of designing generic arithmetic operations is analogous to that of designing the generic
complex-number operations. We would like, for instance, to have a generic addition procedure add
that acts like ordinary primitive addition + on ordinary numbers, like add-rat on rational numbers,
and like add-complex on complex numbers. We can implement add, and the other generic
arithmetic operations, by following the same strategy we used in section 2.4.3 to implement the
generic selectors for complex numbers. We will attach a type tag to each kind of number and cause the
generic procedure to dispatch to an appropriate package according to the data type of its arguments.
The generic arithmetic procedures are defined as follows:
(define (add x y) (apply-generic ’add x y))
(define (sub x y) (apply-generic ’sub x y))
(define (mul x y) (apply-generic ’mul x y))
(define (div x y) (apply-generic ’div x y))
We begin by installing a package for handling ordinary numbers, that is, the primitive numbers of our
language. We will tag these with the symbol scheme-number. The arithmetic operations in this
package are the primitive arithmetic procedures (so there is no need to define extra procedures to
handle the untagged numbers). Since these operations each take two arguments, they are installed in
the table keyed by the list (scheme-number scheme-number):
(define (install-scheme-number-package)
(define (tag x)
(attach-tag ’scheme-number x))
(put ’add ’(scheme-number scheme-number)
(lambda (x y) (tag (+ x y))))
(put ’sub ’(scheme-number scheme-number)
(lambda (x y) (tag (- x y))))
(put ’mul ’(scheme-number scheme-number)
(lambda (x y) (tag (* x y))))
(put ’div ’(scheme-number scheme-number)
(lambda (x y) (tag (/ x y))))
(put ’make ’scheme-number
(lambda (x) (tag x)))
’done)
Users of the Scheme-number package will create (tagged) ordinary numbers by means of the
procedure:
(define (make-scheme-number n)
((get ’make ’scheme-number) n))
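Assuming the pair-based attach-tag of section 2.4.2, a hypothetical interaction with this package:
(install-scheme-number-package)

(define n (make-scheme-number 7))
n                                ; (scheme-number . 7)
(add n (make-scheme-number 5))   ; (scheme-number . 12)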
Now that the framework of the generic arithmetic system is in place, we can readily include new kinds
of numbers. Here is a package that performs rational arithmetic. Notice that, as a benefit of additivity,
we can use without modification the rational-number code from section 2.1.1 as the internal
procedures in the package:
(define (install-rational-package)
;; internal procedures
(define (numer x) (car x))
(define (denom x) (cdr x))
(define (make-rat n d)
(let ((g (gcd n d)))
(cons (/ n g) (/ d g))))
(define (add-rat x y)
(make-rat (+ (* (numer x) (denom y))
(* (numer y) (denom x)))
(* (denom x) (denom y))))
(define (sub-rat x y)
(make-rat (- (* (numer x) (denom y))
(* (numer y) (denom x)))
(* (denom x) (denom y))))
(define (mul-rat x y)
(make-rat (* (numer x) (numer y))
(* (denom x) (denom y))))
(define (div-rat x y)
(make-rat (* (numer x) (denom y))
(* (denom x) (numer y))))
;; interface to rest of the system
(define (tag x) (attach-tag ’rational x))
(put ’add ’(rational rational)
(lambda (x y) (tag (add-rat x y))))
(put ’sub ’(rational rational)
(lambda (x y) (tag (sub-rat x y))))
(put ’mul ’(rational rational)
(lambda (x y) (tag (mul-rat x y))))
(put ’div ’(rational rational)
(lambda (x y) (tag (div-rat x y))))
(put ’make ’rational
(lambda (n d) (tag (make-rat n d))))
’done)
(define (make-rational n d)
((get ’make ’rational) n d))
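Again assuming pair-based tags, a hypothetical interaction:
(install-rational-package)

(add (make-rational 1 2) (make-rational 1 3))   ; (rational 5 . 6)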
We can install a similar package to handle complex numbers, using the tag complex. In creating the
package, we extract from the table the operations make-from-real-imag and
make-from-mag-ang that were defined by the rectangular and polar packages. Additivity permits
us to use, as the internal operations, the same add-complex, sub-complex, mul-complex, and
div-complex procedures from section 2.4.1.
(define (install-complex-package)
;; imported procedures from rectangular and polar packages
(define (make-from-real-imag x y)
((get ’make-from-real-imag ’rectangular) x y))
(define (make-from-mag-ang r a)
((get ’make-from-mag-ang ’polar) r a))
;; internal procedures
(define (add-complex z1 z2)
(make-from-real-imag (+ (real-part z1) (real-part z2))
(+ (imag-part z1) (imag-part z2))))
(define (sub-complex z1 z2)
(make-from-real-imag (- (real-part z1) (real-part z2))
(- (imag-part z1) (imag-part z2))))
(define (mul-complex z1 z2)
(make-from-mag-ang (* (magnitude z1) (magnitude z2))
(+ (angle z1) (angle z2))))
(define (div-complex z1 z2)
(make-from-mag-ang (/ (magnitude z1) (magnitude z2))
(- (angle z1) (angle z2))))
;; interface to rest of the system
(define (tag z) (attach-tag ’complex z))
(put ’add ’(complex complex)
(lambda (z1 z2) (tag (add-complex z1 z2))))
(put ’sub ’(complex complex)
(lambda (z1 z2) (tag (sub-complex z1 z2))))
(put ’mul ’(complex complex)
(lambda (z1 z2) (tag (mul-complex z1 z2))))
(put ’div ’(complex complex)
(lambda (z1 z2) (tag (div-complex z1 z2))))
(put ’make-from-real-imag ’complex
(lambda (x y) (tag (make-from-real-imag x y))))
(put ’make-from-mag-ang ’complex
(lambda (r a) (tag (make-from-mag-ang r a))))
’done)
Programs outside the complex-number package can construct complex numbers either from real and
imaginary parts or from magnitudes and angles. Notice how the underlying procedures, originally
defined in the rectangular and polar packages, are exported to the complex package, and exported from
there to the outside world.
(define (make-complex-from-real-imag x y)
((get ’make-from-real-imag ’complex) x y))
(define (make-complex-from-mag-ang r a)
((get ’make-from-mag-ang ’complex) r a))
What we have here is a two-level tag system. A typical complex number, such as 3 + 4i in rectangular
form, would be represented as shown in figure 2.24. The outer tag (complex) is used to direct the
number to the complex package. Once within the complex package, the next tag (rectangular) is
used to direct the number to the rectangular package. In a large and complicated system there might be
many levels, each interfaced with the next by means of generic operations. As a data object is passed
‘‘downward,’’ the outer tag that is used to direct it to the appropriate package is stripped off (by
applying contents) and the next level of tag (if any) becomes visible to be used for further
dispatching.
Figure 2.24: Representation of 3 + 4i in rectangular form.
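With pair-based tags, both levels are visible in the printed representation. The following hypothetical interaction also previews the difficulty taken up in exercise 2.77 below:
(define z (make-complex-from-real-imag 3 4))

z              ; (complex rectangular 3 . 4)
(type-tag z)   ; complex
(contents z)   ; (rectangular 3 . 4)
(magnitude z)  ; an error, until exercise 2.77 installs entries for (complex)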
In the above packages, we used add-rat, add-complex, and the other arithmetic procedures
exactly as originally written. Once these definitions are internal to different installation procedures,
however, they no longer need names that are distinct from each other: we could simply name them
add, sub, mul, and div in both packages.
Exercise 2.77. Louis Reasoner tries to evaluate the expression (magnitude z) where z is the
object shown in figure 2.24. To his surprise, instead of the answer 5 he gets an error message from
apply-generic, saying there is no method for the operation magnitude on the types
(complex). He shows this interaction to Alyssa P. Hacker, who says ‘‘The problem is that the
complex-number selectors were never defined for complex numbers, just for polar and
rectangular numbers. All you have to do to make this work is add the following to the complex
package:’’
(put ’real-part ’(complex) real-part)
(put ’imag-part ’(complex) imag-part)
(put ’magnitude ’(complex) magnitude)
(put ’angle ’(complex) angle)
Describe in detail why this works. As an example, trace through all the procedures called in evaluating
the expression (magnitude z) where z is the object shown in figure 2.24. In particular, how many
times is apply-generic invoked? What procedure is dispatched to in each case?
Exercise 2.78. The internal procedures in the scheme-number package are essentially nothing
more than calls to the primitive procedures +, -, etc. It was not possible to use the primitives of the
language directly because our type-tag system requires that each data object have a type attached to it.
In fact, however, all Lisp implementations do have a type system, which they use internally. Primitive
predicates such as symbol? and number? determine whether data objects have particular types.
Modify the definitions of type-tag, contents, and attach-tag from section 2.4.2 so that our
generic system takes advantage of Scheme’s internal type system. That is to say, the system should
work as before except that ordinary numbers should be represented simply as Scheme numbers rather
than as pairs whose car is the symbol scheme-number.
Exercise 2.79. Define a generic equality predicate equ? that tests the equality of two numbers, and
install it in the generic arithmetic package. This operation should work for ordinary numbers, rational
numbers, and complex numbers.
Exercise 2.80. Define a generic predicate =zero? that tests if its argument is zero, and install it in
the generic arithmetic package. This operation should work for ordinary numbers, rational numbers,
and complex numbers.
2.5.2 Combining Data of Different Types
We have seen how to define a unified arithmetic system that encompasses ordinary numbers, complex
numbers, rational numbers, and any other type of number we might decide to invent, but we have
ignored an important issue. The operations we have defined so far treat the different data types as
being completely independent. Thus, there are separate packages for adding, say, two ordinary
numbers, or two complex numbers. What we have not yet considered is the fact that it is meaningful to
define operations that cross the type boundaries, such as the addition of a complex number to an
ordinary number. We have gone to great pains to introduce barriers between parts of our programs so
that they can be developed and understood separately. We would like to introduce the cross-type
operations in some carefully controlled way, so that we can support them without seriously violating
our module boundaries.
One way to handle cross-type operations is to design a different procedure for each possible
combination of types for which the operation is valid. For example, we could extend the
complex-number package so that it provides a procedure for adding complex numbers to ordinary
numbers and installs this in the table using the tag (complex scheme-number):49
;; to be included in the complex package
(define (add-complex-to-schemenum z x)
(make-from-real-imag (+ (real-part z) x)
(imag-part z)))
(put ’add ’(complex scheme-number)
(lambda (z x) (tag (add-complex-to-schemenum z x))))
This technique works, but it is cumbersome. With such a system, the cost of introducing a new type is
not just the construction of the package of procedures for that type but also the construction and
installation of the procedures that implement the cross-type operations. This can easily be much more
code than is needed to define the operations on the type itself. The method also undermines our ability
to combine separate packages additively, or at least to limit the extent to which the implementors of the
individual packages need to take account of other packages. For instance, in the example above, it
seems reasonable that handling mixed operations on complex numbers and ordinary numbers should
be the responsibility of the complex-number package. Combining rational numbers and complex
numbers, however, might be done by the complex package, by the rational package, or by some third
package that uses operations extracted from these two packages. Formulating coherent policies on the
division of responsibility among packages can be an overwhelming task in designing systems with
many packages and many cross-type operations.
Coercion
In the general situation of completely unrelated operations acting on completely unrelated types,
implementing explicit cross-type operations, cumbersome though it may be, is the best that one can
hope for. Fortunately, we can usually do better by taking advantage of additional structure that may be
latent in our type system. Often the different data types are not completely independent, and there may
be ways by which objects of one type may be viewed as being of another type. This process is called
coercion. For example, if we are asked to arithmetically combine an ordinary number with a complex
number, we can view the ordinary number as a complex number whose imaginary part is zero. This
transforms the problem to that of combining two complex numbers, which can be handled in the
ordinary way by the complex-arithmetic package.
In general, we can implement this idea by designing coercion procedures that transform an object of
one type into an equivalent object of another type. Here is a typical coercion procedure, which
transforms a given ordinary number to a complex number with that real part and zero imaginary part:
(define (scheme-number->complex n)
(make-complex-from-real-imag (contents n) 0))
We install these coercion procedures in a special coercion table, indexed under the names of the two
types:
(put-coercion ’scheme-number ’complex scheme-number->complex)
(We assume that there are put-coercion and get-coercion procedures available for
manipulating this table.) Generally some of the slots in the table will be empty, because it is not
generally possible to coerce an arbitrary data object of each type into all other types. For example,
there is no way to coerce an arbitrary complex number to an ordinary number, so there will be no
general complex->scheme-number procedure included in the table.
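In the spirit of the put and get sketch given earlier, these might be provisionally implemented as follows; *coercion-table* is again our own scaffolding, and get-coercion returns false when no coercion is known:
(define *coercion-table* '())

(define (put-coercion type1 type2 proc)
  (set! *coercion-table*
        (cons (list (list type1 type2) proc) *coercion-table*)))

(define (get-coercion type1 type2)
  (let ((entry (assoc (list type1 type2) *coercion-table*)))
    (if entry
        (cadr entry)
        #f)))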
Once the coercion table has been set up, we can handle coercion in a uniform manner by modifying
the apply-generic procedure of section 2.4.3. When asked to apply an operation, we first check
whether the operation is defined for the arguments’ types, just as before. If so, we dispatch to the
procedure found in the operation-and-type table. Otherwise, we try coercion. For simplicity, we
consider only the case where there are two arguments.50 We check the coercion table to see if objects
of the first type can be coerced to the second type. If so, we coerce the first argument and try the
operation again. If objects of the first type cannot in general be coerced to the second type, we try the
coercion the other way around to see if there is a way to coerce the second argument to the type of the
first argument. Finally, if there is no known way to coerce either type to the other type, we give up.
Here is the procedure:
(define (apply-generic op . args)
(let ((type-tags (map type-tag args)))
(let ((proc (get op type-tags)))
(if proc
(apply proc (map contents args))
(if (= (length args) 2)
(let ((type1 (car type-tags))
(type2 (cadr type-tags))
(a1 (car args))
(a2 (cadr args)))
(let ((t1->t2 (get-coercion type1 type2))
(t2->t1 (get-coercion type2 type1)))
(cond (t1->t2
(apply-generic op (t1->t2 a1) a2))
(t2->t1
(apply-generic op a1 (t2->t1 a2)))
(else
(error "No method for these types"
(list op type-tags))))))
(error "No method for these types"
(list op type-tags)))))))
This coercion scheme has many advantages over the method of defining explicit cross-type operations,
as outlined above. Although we still need to write coercion procedures to relate the types (possibly n²
procedures for a system with n types), we need to write only one procedure for each pair of types
rather than a different procedure for each collection of types and each generic operation.51 What we
are counting on here is the fact that the appropriate transformation between types depends only on the
types themselves, not on the operation to be applied.
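For instance, assuming the scheme-number and complex packages of section 2.5.1 are installed along with the coercion above, a mixed-type addition now succeeds by coercing the first argument and retrying:
(add (make-scheme-number 2)
     (make-complex-from-real-imag 3 4))
; => (complex rectangular 5 . 4), that is, 5 + 4i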
On the other hand, there may be applications for which our coercion scheme is not general enough.
Even when neither of the objects to be combined can be converted to the type of the other it may still
be possible to perform the operation by converting both objects to a third type. In order to deal with
such complexity and still preserve modularity in our programs, it is usually necessary to build systems
that take advantage of still further structure in the relations among types, as we discuss next.
Hierarchies of types
The coercion scheme presented above relied on the existence of natural relations between pairs of
types. Often there is more ‘‘global’’ structure in how the different types relate to each other. For
instance, suppose we are building a generic arithmetic system to handle integers, rational numbers,
real numbers, and complex numbers. In such a system, it is quite natural to regard an integer as a
special kind of rational number, which is in turn a special kind of real number, which is in turn a
special kind of complex number. What we actually have is a so-called hierarchy of types, in which, for
example, integers are a subtype of rational numbers (i.e., any operation that can be applied to a rational
number can automatically be applied to an integer). Conversely, we say that rational numbers form a
supertype of integers. The particular hierarchy we have here is of a very simple kind, in which each
type has at most one supertype and at most one subtype. Such a structure, called a tower, is illustrated
in figure 2.25.
Figure 2.25: A tower of types.
If we have a tower structure, then we can greatly simplify the problem of adding a new type to the
hierarchy, for we need only specify how the new type is embedded in the next supertype above it and
how it is the supertype of the type below it. For example, if we want to add an integer to a complex
number, we need not explicitly define a special coercion procedure integer->complex. Instead,
we define how an integer can be transformed into a rational number, how a rational number is
transformed into a real number, and how a real number is transformed into a complex number. We
then allow the system to transform the integer into a complex number through these steps and then add
the two complex numbers.
We can redesign our apply-generic procedure in the following way: For each type, we need to
supply a raise procedure, which ‘‘raises’’ objects of that type one level in the tower. Then when the
system is required to operate on objects of different types it can successively raise the lower types until
all the objects are at the same level in the tower. (Exercises 2.83 and 2.84 concern the details of
implementing such a strategy.)
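Exercises 2.83 and 2.84 ask for the full design; the fragment below merely illustrates the shape of one rung, treating ordinary Scheme numbers as the integer level (a simplification of our own) and assuming the packages of section 2.5.1:
(define (raise x) (apply-generic 'raise x))

;; added to the scheme-number package:
(put 'raise '(scheme-number)
     (lambda (n) (make-rational n 1)))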
Another advantage of a tower is that we can easily implement the notion that every type ‘‘inherits’’ all
operations defined on a supertype. For instance, if we do not supply a special procedure for finding the
real part of an integer, we should nevertheless expect that real-part will be defined for integers by
virtue of the fact that integers are a subtype of complex numbers. In a tower, we can arrange for this to
happen in a uniform way by modifying apply-generic. If the required operation is not directly
defined for the type of the object given, we raise the object to its supertype and try again. We thus
crawl up the tower, transforming our argument as we go, until we either find a level at which the
desired operation can be performed or hit the top (in which case we give up).
Yet another advantage of a tower over a more general hierarchy is that it gives us a simple way to
‘‘lower’’ a data object to the simplest representation. For example, if we add 2 + 3i to 4 - 3i, it would
be nice to obtain the answer as the integer 6 rather than as the complex number 6 + 0i. Exercise 2.85
discusses a way to implement such a lowering operation. (The trick is that we need a general way to
distinguish those objects that can be lowered, such as 6 + 0i, from those that cannot, such as 6 + 2i.)
Figure 2.26: Relations among types of geometric figures.
Inadequacies of hierarchies
If the data types in our system can be naturally arranged in a tower, this greatly simplifies the
problems of dealing with generic operations on different types, as we have seen. Unfortunately, this is
usually not the case. Figure 2.26 illustrates a more complex arrangement of mixed types, this one
showing relations among different types of geometric figures. We see that, in general, a type may have
more than one subtype. Triangles and quadrilaterals, for instance, are both subtypes of polygons. In
addition, a type may have more than one supertype. For example, an isosceles right triangle may be
regarded either as an isosceles triangle or as a right triangle. This multiple-supertypes issue is
particularly thorny, since it means that there is no unique way to ‘‘raise’’ a type in the hierarchy.
Finding the ‘‘correct’’ supertype in which to apply an operation to an object may involve considerable
searching through the entire type network on the part of a procedure such as apply-generic. Since
there generally are multiple subtypes for a type, there is a similar problem in coercing a value ‘‘down’’
the type hierarchy. Dealing with large numbers of interrelated types while still preserving modularity
in the design of large systems is very difficult, and is an area of much current research.52
Exercise 2.81. Louis Reasoner has noticed that apply-generic may try to coerce the arguments
to each other’s type even if they already have the same type. Therefore, he reasons, we need to put
procedures in the coercion table to “coerce” arguments of each type to their own type. For example, in
addition to the scheme-number->complex coercion shown above, he would do:
(define (scheme-number->scheme-number n) n)
(define (complex->complex z) z)
(put-coercion ’scheme-number ’scheme-number
scheme-number->scheme-number)
(put-coercion ’complex ’complex complex->complex)
a. With Louis’s coercion procedures installed, what happens if apply-generic is called with two
arguments of type scheme-number or two arguments of type complex for an operation that is not
found in the table for those types? For example, assume that we’ve defined a generic exponentiation
operation:
(define (exp x y) (apply-generic ’exp x y))
and have put a procedure for exponentiation in the Scheme-number package but not in any other
package:
;; following added to Scheme-number package
(put ’exp ’(scheme-number scheme-number)
(lambda (x y) (tag (expt x y)))) ; using primitive expt
What happens if we call exp with two complex numbers as arguments?
b. Is Louis correct that something had to be done about coercion with arguments of the same type, or
does apply-generic work correctly as is?
c. Modify apply-generic so that it doesn’t try coercion if the two arguments have the same type.
Exercise 2.82. Show how to generalize apply-generic to handle coercion in the general case of
multiple arguments. One strategy is to attempt to coerce all the arguments to the type of the first
argument, then to the type of the second argument, and so on. Give an example of a situation where
this strategy (and likewise the two-argument version given above) is not sufficiently general. (Hint:
Consider the case where there are some suitable mixed-type operations present in the table that will
not be tried.)
Exercise 2.83. Suppose you are designing a generic arithmetic system for dealing with the tower of
types shown in figure 2.25: integer, rational, real, complex. For each type (except complex), design a
procedure that raises objects of that type one level in the tower. Show how to install a generic raise
operation that will work for each type (except complex).
Exercise 2.84. Using the raise operation of exercise 2.83, modify the apply-generic procedure
so that it coerces its arguments to have the same type by the method of successive raising, as discussed
in this section. You will need to devise a way to test which of two types is higher in the tower. Do this
in a manner that is ‘‘compatible’’ with the rest of the system and will not lead to problems in adding
new levels to the tower.
Exercise 2.85. This section mentioned a method for ‘‘simplifying’’ a data object by lowering it in the
tower of types as far as possible. Design a procedure drop that accomplishes this for the tower
described in exercise 2.83. The key is to decide, in some general way, whether an object can be
lowered. For example, the complex number 1.5 + 0i can be lowered as far as real, the complex
number 1 + 0i can be lowered as far as integer, and the complex number 2 + 3i cannot be lowered
at all. Here is a plan for determining whether an object can be lowered: Begin by defining a generic
operation project that ‘‘pushes’’ an object down in the tower. For example, projecting a complex
number would involve throwing away the imaginary part. Then a number can be dropped if, when we
project it and raise the result back to the type we started with, we end up with something equal
to what we started with. Show how to implement this idea in detail, by writing a drop procedure that
drops an object as far as possible. You will need to design the various projection operations53 and
install project as a generic operation in the system. You will also need to make use of a generic
equality predicate, such as described in exercise 2.79. Finally, use drop to rewrite apply-generic
from exercise 2.84 so that it ‘‘simplifies’’ its answers.
Exercise 2.86. Suppose we want to handle complex numbers whose real parts, imaginary parts,
magnitudes, and angles can be either ordinary numbers, rational numbers, or other numbers we might
wish to add to the system. Describe and implement the changes to the system needed to accommodate
this. You will have to define operations such as sine and cosine that are generic over ordinary
numbers and rational numbers.
2.5.3 Example: Symbolic Algebra
The manipulation of symbolic algebraic expressions is a complex process that illustrates many of the
hardest problems that occur in the design of large-scale systems. An algebraic expression, in general,
can be viewed as a hierarchical structure, a tree of operators applied to operands. We can construct
algebraic expressions by starting with a set of primitive objects, such as constants and variables, and
combining these by means of algebraic operators, such as addition and multiplication. As in other
languages, we form abstractions that enable us to refer to compound objects in simple terms. Typical
abstractions in symbolic algebra are ideas such as linear combination, polynomial, rational function, or
trigonometric function. We can regard these as compound ‘‘types,’’ which are often useful for
directing the processing of expressions. For example, we could describe the expression
x² sin(y² + 1) + x cos 2y + cos(y³ - 2y²)
as a polynomial in x with coefficients that are trigonometric functions of polynomials in y whose
coefficients are integers.
We will not attempt to develop a complete algebraic-manipulation system here. Such systems are
exceedingly complex programs, embodying deep algebraic knowledge and elegant algorithms. What
we will do is look at a simple but important part of algebraic manipulation: the arithmetic of
polynomials. We will illustrate the kinds of decisions the designer of such a system faces, and how to
apply the ideas of abstract data and generic operations to help organize this effort.
Arithmetic on polynomials
Our first task in designing a system for performing arithmetic on polynomials is to decide just what a
polynomial is. Polynomials are normally defined relative to certain variables (the indeterminates of the
polynomial). For simplicity, we will restrict ourselves to polynomials having just one indeterminate
(univariate polynomials).54 We will define a polynomial to be a sum of terms, each of which is either
a coefficient, a power of the indeterminate, or a product of a coefficient and a power of the
indeterminate. A coefficient is defined as an algebraic expression that is not dependent upon the
indeterminate of the polynomial. For example,
5x² + 3x + 7
is a simple polynomial in x, and
(y² + 1)x³ + (2y)x + 1
is a polynomial in x whose coefficients are polynomials in y.
Already we are skirting some thorny issues. Is the first of these polynomials the same as the
polynomial 5y² + 3y + 7, or not? A reasonable answer might be ‘‘yes, if we are considering a
polynomial purely as a mathematical function, but no, if we are considering a polynomial to be a
syntactic form.’’ The second polynomial is algebraically equivalent to a polynomial in y whose
coefficients are polynomials in x. Should our system recognize this, or not? Furthermore, there are
other ways to represent a polynomial — for example, as a product of factors, or (for a univariate
polynomial) as the set of roots, or as a listing of the values of the polynomial at a specified set of
points.55 We can finesse these questions by deciding that in our algebraic-manipulation system a
‘‘polynomial’’ will be a particular syntactic form, not its underlying mathematical meaning.
Now we must consider how to go about doing arithmetic on polynomials. In this simple system, we
will consider only addition and multiplication. Moreover, we will insist that two polynomials to be
combined must have the same indeterminate.
We will approach the design of our system by following the familiar discipline of data abstraction. We
will represent polynomials using a data structure called a poly, which consists of a variable and a
collection of terms. We assume that we have selectors variable and term-list that extract those
parts from a poly and a constructor make-poly that assembles a poly from a given variable and a
term list. A variable will be just a symbol, so we can use the same-variable? procedure of
section 2.3.2 to compare variables. The following procedures define addition and multiplication of
polys:
(define (add-poly p1 p2)
(if (same-variable? (variable p1) (variable p2))
(make-poly (variable p1)
(add-terms (term-list p1)
(term-list p2)))
(error "Polys not in same var -- ADD-POLY"
(list p1 p2))))
(define (mul-poly p1 p2)
(if (same-variable? (variable p1) (variable p2))
(make-poly (variable p1)
(mul-terms (term-list p1)
(term-list p2)))
(error "Polys not in same var -- MUL-POLY"
(list p1 p2))))
To incorporate polynomials into our generic arithmetic system, we need to supply them with type tags.
We’ll use the tag polynomial, and install appropriate operations on tagged polynomials in the
operation table. We’ll embed all our code in an installation procedure for the polynomial package,
similar to the ones in section 2.5.1:
(define (install-polynomial-package)
;; internal procedures
;; representation of poly
(define (make-poly variable term-list)
(cons variable term-list))
(define (variable p) (car p))
(define (term-list p) (cdr p))
(define (add-poly p1 p2) …)
(define (mul-poly p1 p2) …)
;; interface to rest of the system
(define (tag p) (attach-tag ’polynomial p))
(put ’add ’(polynomial polynomial)
(lambda (p1 p2) (tag (add-poly p1 p2))))
(put ’mul ’(polynomial polynomial)
(lambda (p1 p2) (tag (mul-poly p1 p2))))
(put ’make ’polynomial
(lambda (var terms) (tag (make-poly var terms))))
’done)
Polynomial addition is performed termwise. Terms of the same order (i.e., with the same power of the
indeterminate) must be combined. This is done by forming a new term of the same order whose
coefficient is the sum of the coefficients of the addends. Terms in one addend for which there are no
terms of the same order in the other addend are simply accumulated into the sum polynomial being
constructed.
In order to manipulate term lists, we will assume that we have a constructor
the-empty-termlist that returns an empty term list and a constructor adjoin-term that
adjoins a new term to a term list. We will also assume that we have a predicate empty-termlist?
that tells if a given term list is empty, a selector first-term that extracts the highest-order term
from a term list, and a selector rest-terms that returns all but the highest-order term. To
manipulate terms, we will suppose that we have a constructor make-term that constructs a term with
given order and coefficient, and selectors order and coeff that return, respectively, the order and
the coefficient of the term. These operations allow us to consider both terms and term lists as data
abstractions, whose concrete representations we can worry about separately.
Here is the procedure that constructs the term list for the sum of two polynomials:56
(define (add-terms L1 L2)
(cond ((empty-termlist? L1) L2)
((empty-termlist? L2) L1)
(else
(let ((t1 (first-term L1)) (t2 (first-term L2)))
(cond ((> (order t1) (order t2))
(adjoin-term
t1 (add-terms (rest-terms L1) L2)))
((< (order t1) (order t2))
(adjoin-term
t2 (add-terms L1 (rest-terms L2))))
(else
(adjoin-term
(make-term (order t1)
(add (coeff t1) (coeff t2)))
(add-terms (rest-terms L1)
(rest-terms L2)))))))))
The most important point to note here is that we used the generic addition procedure add to add
together the coefficients of the terms being combined. This has powerful consequences, as we will see
below.
In order to multiply two term lists, we multiply each term of the first list by all the terms of the other
list, repeatedly using mul-term-by-all-terms, which multiplies a given term by all terms in a
given term list. The resulting term lists (one for each term of the first list) are accumulated into a sum.
Multiplying two terms forms a term whose order is the sum of the orders of the factors and whose
coefficient is the product of the coefficients of the factors:
(define (mul-terms L1 L2)
(if (empty-termlist? L1)
(the-empty-termlist)
(add-terms (mul-term-by-all-terms (first-term L1) L2)
(mul-terms (rest-terms L1) L2))))
(define (mul-term-by-all-terms t1 L)
(if (empty-termlist? L)
(the-empty-termlist)
(let ((t2 (first-term L)))
(adjoin-term
(make-term (+ (order t1) (order t2))
(mul (coeff t1) (coeff t2)))
(mul-term-by-all-terms t1 (rest-terms L))))))
This is really all there is to polynomial addition and multiplication. Notice that, since we operate on
terms using the generic procedures add and mul, our polynomial package is automatically able to
handle any type of coefficient that is known about by the generic arithmetic package. If we include a
coercion mechanism such as one of those discussed in section 2.5.2, then we also are automatically
able to handle operations on polynomials of different coefficient types, such as
Because we installed the polynomial addition and multiplication procedures add-poly and
mul-poly in the generic arithmetic system as the add and mul operations for type polynomial,
our system is also automatically able to handle polynomial operations such as
[(y + 1)x² + (y² + 1)x + (y - 1)] · [(y - 2)x + (y³ + 7)]
The reason is that when the system tries to combine coefficients, it will dispatch through add and
mul. Since the coefficients are themselves polynomials (in y), these will be combined using
add-poly and mul-poly. The result is a kind of ‘‘data-directed recursion’’ in which, for example,
a call to mul-poly will result in recursive calls to mul-poly in order to multiply the coefficients. If
the coefficients of the coefficients were themselves polynomials (as might be used to represent
polynomials in three variables), the data direction would ensure that the system would follow through
another level of recursive calls, and so on through as many levels as the structure of the data
dictates.57
Representing term lists
Finally, we must confront the job of implementing a good representation for term lists. A term list is,
in effect, a set of coefficients keyed by the order of the term. Hence, any of the methods for
representing sets, as discussed in section 2.3.3, can be applied to this task. On the other hand, our
procedures add-terms and mul-terms always access term lists sequentially from highest to
lowest order. Thus, we will use some kind of ordered list representation.
How should we structure the list that represents a term list? One consideration is the ‘‘density’’ of the
polynomials we intend to manipulate. A polynomial is said to be dense if it has nonzero coefficients in
terms of most orders. If it has many zero terms it is said to be sparse. For example,
A: x⁵ + 2x⁴ + 3x² - 2x - 5
is a dense polynomial, whereas
B: x¹⁰⁰ + 2x² + 1
is sparse.
The term lists of dense polynomials are most efficiently represented as lists of the coefficients. For
example, A above would be nicely represented as (1 2 0 3 -2 -5). The order of a term in this
representation is the length of the sublist beginning with that term’s coefficient, decremented by 1.58
This would be a terrible representation for a sparse polynomial such as B: There would be a giant list
of zeros punctuated by a few lonely nonzero terms. A more reasonable representation of the term list
of a sparse polynomial is as a list of the nonzero terms, where each term is a list containing the order
of the term and the coefficient for that order. In such a scheme, polynomial B is efficiently represented
as ((100 1) (2 2) (0 1)). As most polynomial manipulations are performed on sparse
polynomials, we will use this method. We will assume that term lists are represented as lists of terms,
arranged from highest-order to lowest-order term. Once we have made this decision, implementing the
selectors and constructors for terms and term lists is straightforward:59
(define (adjoin-term term term-list)
(if (=zero? (coeff term))
term-list
(cons term term-list)))
(define (the-empty-termlist) ’())
(define (first-term term-list) (car term-list))
(define (rest-terms term-list) (cdr term-list))
(define (empty-termlist? term-list) (null? term-list))
(define (make-term order coeff) (list order coeff))
(define (order term) (car term))
(define (coeff term) (cadr term))
where =zero? is as defined in exercise 2.80. (See also exercise 2.87 below.)
Users of the polynomial package will create (tagged) polynomials by means of the procedure:
(define (make-polynomial var terms)
((get ’make ’polynomial) var terms))
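As a hypothetical check, assuming exercises 2.78 and 2.80 have been completed (so that raw Scheme numbers serve directly as coefficients and =zero? works on them), multiplying x² + 1 by x - 1 looks like this:
(install-polynomial-package)

(define p1 (make-polynomial 'x '((2 1) (0 1))))    ; x² + 1
(define p2 (make-polynomial 'x '((1 1) (0 -1))))   ; x - 1

(mul p1 p2)
; => (polynomial x (3 1) (2 -1) (1 1) (0 -1)), that is, x³ - x² + x - 1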
Exercise 2.87. Install =zero? for polynomials in the generic arithmetic package. This will allow
adjoin-term to work for polynomials with coefficients that are themselves polynomials.
Exercise 2.88. Extend the polynomial system to include subtraction of polynomials. (Hint: You may
find it helpful to define a generic negation operation.)
Exercise 2.89. Define procedures that implement the term-list representation described above as
appropriate for dense polynomials.
Exercise 2.90. Suppose we want to have a polynomial system that is efficient for both sparse and
dense polynomials. One way to do this is to allow both kinds of term-list representations in our
system. The situation is analogous to the complex-number example of section 2.4, where we allowed
both rectangular and polar representations. To do this we must distinguish different types of term lists
and make the operations on term lists generic. Redesign the polynomial system to implement this
generalization. This is a major effort, not a local change.
Exercise 2.91. A univariate polynomial can be divided by another one to produce a polynomial
quotient and a polynomial remainder. For example,
(x⁵ - 1)/(x² - 1) = x³ + x, remainder x - 1
Division can be performed via long division. That is, divide the highest-order term of the dividend by
the highest-order term of the divisor. The result is the first term of the quotient. Next, multiply the
result by the divisor, subtract that from the dividend, and produce the rest of the answer by recursively
dividing the difference by the divisor. Stop when the order of the divisor exceeds the order of the
dividend and declare the dividend to be the remainder. Also, if the dividend ever becomes zero, return
zero as both quotient and remainder.
We can design a div-poly procedure on the model of add-poly and mul-poly. The procedure
checks to see if the two polys have the same variable. If so, div-poly strips off the variable and
passes the problem to div-terms, which performs the division operation on term lists. Div-poly
finally reattaches the variable to the result supplied by div-terms. It is convenient to design
div-terms to compute both the quotient and the remainder of a division. Div-terms can take two
term lists as arguments and return a list of the quotient term list and the remainder term list.
Complete the following definition of div-terms by filling in the missing expressions. Use this to
implement div-poly, which takes two polys as arguments and returns a list of the quotient and
remainder polys.
(define (div-terms L1 L2)
(if (empty-termlist? L1)
(list (the-empty-termlist) (the-empty-termlist))
(let ((t1 (first-term L1))
(t2 (first-term L2)))
(if (> (order t2) (order t1))
(list (the-empty-termlist) L1)
(let ((new-c (div (coeff t1) (coeff t2)))
(new-o (- (order t1) (order t2))))
(let ((rest-of-result
<compute rest of result recursively>
))
<form complete result>
))))))