Numbers written in exponential notation, such as 1e2, are by default inexact, just like those with decimal points.
If you are keen on introducing your students early on to the variadic
(variable arity) nature of the pre-defined arithmetic procedures, a
good place would be the definition that computes (* (* 300 50) 30).
Instead of defining it as
(* (* 300 50) 30), you can
define it as
(* 300 50 30). However, you should
understand why we chose not to do so. We don't want pre-defined
procedures to be magic, and until we introduce lists in chapter 7, we
don't have the necessary machinery to show students how they could
write variadic procedures themselves.
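If you do eventually want to show (once chapter 7's lists are available) how a variadic procedure could be written, a sketch along the following lines may help. The name product and the use of a rest parameter are our own illustration, not the book's:

```scheme
;; A variadic product, written with list machinery.  The rest
;; parameter collects all of the arguments into a list named args.
(define product
  (lambda args                        ; args is the list of all arguments
    (if (null? args)
        1                             ; (product) is the multiplicative identity
        (* (car args)
           (apply product (cdr args))))))
```

For example, (product 300 50 30) computes the same value as (* (* 300 50) 30), demystifying what the pre-defined * is doing.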
When students first start working with nested expressions, it is
important to emphasize the inside-to-outside order of the processing,
because this is counterintuitive to the beginners, to a degree that
surprises many of us who are more experienced. In an expression like
(f (g (h x))), the first thing done is
h, then g, and then last of all
f. This looks
to the beginner like right-to-left order, though it is really
inside-to-outside. The problem is that, except for those students
whose natural language is Semitic (Hebrew or Arabic), this apparent
right-to-left ordering runs counter to very ingrained habits of
left-to-right. One additional point of leverage on this ordering
issue is to demonstrate on the board that one needn't write
expressions from left to right either, but can rather construct them
by starting with the innermost kernel and then successively wrapping
layers around it. In fact, all through the course it is worthwhile to
design and write code in the most natural order, rather than trying to
start with the first character and work sequentially through to the
last. Point out to students that they are using modern text editors,
not teletypes, and shouldn't feel compelled to work sequentially.
Our process diagrams show the successive steps in the reduction of problems going down the page, and the levels of subproblems as columns across the page. This orientation is based on the form factor of the printed page. Classroom whiteboards (or blackboards) generally have a much more horizontal form factor. Therefore, you might want to transpose the two dimensions of these diagrams when you do them on the board. That is, you would do the main problem in the topmost horizontal row, with successive steps from left to right. Subproblems would get pulled down into a second row, sub-subproblems even further down, etc. The advantage is that it uses the board better, but the disadvantage is that it causes a change of gears going from the book to the board and back. You should experiment some with both orientations and decide what works best for you.
Some instructors may regret our decision to use
(define f (lambda (x y) ...)) rather than the shorthand notation
(define (f x y) ...). Although you can of course teach
this shorthand, we would like to explain why we avoided it. We used
to teach it, but found that it caused students difficulty later, when
they needed to use
lambda in more flexible ways. In
particular, we later write code like
(define f
  (let ((private-state (make-some-stateful-object)))
    (lambda (x y) ...)))
which contrasts with
(define f
  (lambda (x y)
    (let ((something-for-just-this-application (make-it)))
      ...)))
Students seem to have much less difficulty seeing the difference
between these two placements of lambda than they used to when we used
the shorthand notation, since there the outer lambda was implicit.
We use remainder, but Scheme also has another
similar pre-defined procedure,
modulo. These two give
different results only when applied to two numbers of opposite signs.
As this doesn't arise anywhere in the book, we haven't mentioned
modulo or explained the distinction.
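If you do decide to mention the distinction, a quick interaction makes it concrete: remainder takes its sign from the dividend, while modulo takes its sign from the divisor.

```scheme
(remainder 7 2)    ; => 1
(modulo 7 2)       ; => 1    same answer when the signs agree
(remainder -7 2)   ; => -1   remainder follows the dividend's sign
(modulo -7 2)      ; => 1    modulo follows the divisor's sign
```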
When looking at the process diagram for iterative
factorial, it may pay to refer back to the one for
tax in chapter 1, since that is a simpler example of
reducing a problem to a simpler problem with the same answer.
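The key invariant can be sketched on the board roughly as follows; the helper name factorial-product is ours, not necessarily the book's:

```scheme
;; Iterative factorial: each step reduces the problem to a simpler one
;; with the same answer, since (factorial-product acc n) = acc * n!.
(define factorial-product
  (lambda (acc n)
    (if (= n 0)
        acc
        (factorial-product (* acc n) (- n 1)))))

(define factorial
  (lambda (n)
    (factorial-product 1 n)))
```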
Some instructors may regret our decision to use internal
definitions rather than
letrec. The two are
completely equivalent, but individual instructors tend to have strong
preferences for one or the other, depending on whether they themselves
learned in the MIT or Indiana tradition. (MIT teaches using internal
define while Indiana teaches using letrec.)
Each notation has its advantages, but in our opinion the arguments in
each direction are approximately equally strong. This is different
from the case with
(define f (lambda (x y) ...)) vs. the
(define (f x y) ...) shorthand, another Indiana vs. MIT
distinction. There we see a strong reason to favor the Indiana
tradition of not using the shorthand. Here, lacking any strong reason
one way or the other, we have gone with the MIT tradition we
personally grew up with.
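For reference, here is the same local helper written both ways; the two definitions behave identically (the example itself is ours):

```scheme
;; MIT tradition: an internal define
(define count-down
  (lambda (n)
    (define loop
      (lambda (i acc)
        (if (= i 0)
            acc
            (loop (- i 1) (cons i acc)))))
    (loop n '())))

;; Indiana tradition: letrec, completely equivalent
(define count-down-2
  (lambda (n)
    (letrec ((loop
              (lambda (i acc)
                (if (= i 0)
                    acc
                    (loop (- i 1) (cons i acc))))))
      (loop n '()))))
```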
In the exercise of more efficiently calculating the
sum-of-divisors, the reason to stop when
low^2 is greater than or equal to n
rather than when low is greater than or equal to the square
root of n is to avoid floating-point round-off error.
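The exercise's intent can be sketched like this; the details are our own, and we assume here the convention that all divisors, including 1 and n itself, are summed:

```scheme
;; Sum of all divisors of n, pairing each divisor low with its cofactor
;; n/low.  Stopping when (* low low) >= n keeps everything in exact
;; integer arithmetic, so no floating-point round-off can creep in.
(define sum-of-divisors
  (lambda (n)
    (define sum-from
      (lambda (low)
        (cond ((> (* low low) n) 0)
              ((= (* low low) n) low)   ; an exact square root counts once
              ((= (remainder n low) 0)
               (+ low (quotient n low) (sum-from (+ low 1))))
              (else (sum-from (+ low 1))))))
    (sum-from 1)))
```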
You might care to point out that the numerators and denominators of section 3.4's rational approximations to the golden ratio are Fibonacci numbers. This would be most appropriate if your students are already familiar with Fibonacci numbers from another course -- we don't introduce them until chapter 12.
The notes section at the end of the chapter provides some explanation for our decision to focus on invariant quantities, rather than invariant assertions. We can shed a bit more light on that by pointing out that in chapter 13 we introduce representation invariants that are propositional in form, like loop invariant assertions. (We continue to use such invariants in chapters 14 and 15.) This difference reflects the differing circumstances. Here, in a purely functional context, we can focus on the value computed, while there, in a state-transformation context, we need to focus on the properties of the successive states.
We strongly encourage you to give each student the card sorting experience. It may seem to be not college/university level, but there is really no substitute for personally feeling the selection sort becoming progressively more stupid as the deck size goes up. If you do closed labs, you might want to do the sorting in lab. If not, the logistics of getting the students to do it with physical cards may be insurmountable, so consider our simulation applet as an alternative.
Our "asymptotic outlook" takes students some experience to get used to (hence the card sorting experience), but it isn't likely to be half as puzzling to the students as its name is. Their prior experience with what an asymptote is doesn't correspond at all well with the broader sense of asymptotics. You can help by explaining.
Section 3.3 (searching for perfect numbers) can be reprised in chapter
4 as an empirical lab on orders of growth. In addition to comparing
the orders of growth of the two versions of
sum-of-divisors when they are used directly, you can also
compare the outcome when each is used by a parent procedure that scans
the range of integers from 1 to n, calculating the
sum-of-divisors of each number in that range. (One
reason to do this scan is to compare the relative frequency of
abundant and deficient numbers, that is, those with too high vs. too
low a sum of divisors to be perfect.) The order of growth of this
scanning procedure can be analyzed using the same sort of reasoning as
we illustrate for selection sort.
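Such a scanning parent procedure might look like the following sketch. The names, and the convention that sum-of-divisors includes n itself, are our assumptions:

```scheme
;; Straightforward sum of all divisors of n, including 1 and n.
(define sum-of-divisors
  (lambda (n)
    (define sum-from
      (lambda (i)
        (cond ((> i n) 0)
              ((= (remainder n i) 0) (+ i (sum-from (+ i 1))))
              (else (sum-from (+ i 1))))))
    (sum-from 1)))

;; Count how many of 1, 2, ..., n are abundant, that is, have a sum of
;; divisors exceeding twice the number itself.  The scan calls
;; sum-of-divisors n times, so its order of growth is n times that of
;; sum-of-divisors, by the same reasoning as for selection sort.
(define count-abundant
  (lambda (n)
    (define count-from
      (lambda (i)
        (cond ((> i n) 0)
              ((> (sum-of-divisors i) (* 2 i))
               (+ 1 (count-from (+ i 1))))
              (else (count-from (+ i 1))))))
    (count-from 1)))
```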
Students are likely to happily assume that d-curves have that name because d comes after c in the alphabet. If any are curious, however, you can tell them that d actually stands for dragon; these fractals are traditionally called dragon curves. C-curves, of course, are named after their C-like shape.
Exercise 4.7 can be improved by suggesting that students do some
length-of-c-curve computations that have the
same levels as those suggested in the exercise, but with endpoints
that are at a separation other than 1. That way they will go into the
mathematical analysis aware that the length needs to be expressed as a
function of the endpoint separation as well as the level.
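The intended conclusion can be checked numerically with a sketch like this; the procedure's name matches the exercise, but this signature (separation plus level, rather than endpoint coordinates) is our simplification:

```scheme
;; Each level replaces every segment by two segments, each shorter by a
;; factor of (sqrt 2), so the total length is separation * (sqrt 2)^level.
(define length-of-c-curve
  (lambda (separation level)
    (if (= level 0)
        separation
        (* 2 (length-of-c-curve (/ separation (sqrt 2))
                                (- level 1))))))
```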
The debunk-halts? procedure can be dramatized in a fun way that we tentatively attribute to Gerald Sussman of MIT. Tell the students that you are going to prove that none of them can correctly predict whether you are going to dismiss class or babble on forever. You are going to do this by following the strategy of
debunk-halts?. Now carefully choose one of the students who has been paying attention (otherwise you are in trouble) and ask him or her whether you are going to dismiss class. (If in doubt about the student, be sure to emphasize that you are going to prove him or her wrong.)
This is the chapter where we introduce programs that do their own
I/O, rather than having all the interaction go through the Scheme system's
read-eval-print loop (REPL). This is a big transition for the
students, and needs to be handled much more carefully than we did in
the text. In particular, students have difficulty getting the
distinction between an expression's value, displayed by the REPL, and
any other output that the expression may display as a side effect. If
you use a Scheme system (such as DrScheme) that makes a visual
distinction between the two kinds of output, the conceptual
distinction may be easier to explain. But it still needs explaining.
Section 7.4 introduces without any fanfare the use of
equal? to test for list "equality". Before this point, we had been using
equal? to test for equality of symbols. (The more
usual practice of using
eq? for symbols is only
justifiable on penny-pinching efficiency grounds.) Because of this
prior use of
equal?, students are not generally surprised
to see it used for lists as well, and its sense of equality
(structural equality, also known as isomorphism) is also generally not
surprising. Only once we introduce mutable objects does it make much
sense to introduce object identity as an alternative sense of
equality, and to introduce the procedure for testing it,
eq?. Therefore, we leave this until chapter 13.
However, depending on your predilections, you might want to make more
of a point here about what exactly
equal? is doing.
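One way to make that point is to show that equal? on lists behaves as if it were defined by structural recursion. This sketch is ours, not the book's, and it ignores strings and vectors for simplicity:

```scheme
(define my-equal?
  (lambda (a b)
    (cond ((and (pair? a) (pair? b))        ; both are pairs:
           (and (my-equal? (car a) (car b)) ;   compare element-wise
                (my-equal? (cdr a) (cdr b))))
          ((or (pair? a) (pair? b)) #f)     ; one is a pair, one isn't
          (else (eqv? a b)))))              ; atoms: numbers, symbols, '()
```

For example, (my-equal? '(a (b c)) '(a (b c))) is true even though the two lists are built from distinct pairs; it is this structural sense of equality that equal? captures.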
Consider assigning the following additional exercise:
(a) Suppose the reverse-onto procedure, defined internally to
reverse in section 7.4, were instead defined separately. Now consider the following procedure definition:
(define mystery
  (lambda (list1 list2)
    (reverse-onto (reverse list1) list2)))
Describe clearly what the
mystery procedure does, and how you know that it does that.
(b) Write a procedure that does the same thing as the mystery procedure in part (a), but which generates a linearly recursive process and calls only on itself and the built-in procedures.
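For your own reference, the point of part (a) is that mystery appends list1 onto the front of list2. A sketch, with reverse-onto written out as a separate top-level procedure per the exercise's premise:

```scheme
;; Prepend the elements of list1, in reverse order, onto list2.
(define reverse-onto
  (lambda (list1 list2)
    (if (null? list1)
        list2
        (reverse-onto (cdr list1)
                      (cons (car list1) list2)))))

(define mystery
  (lambda (list1 list2)
    (reverse-onto (reverse list1) list2)))
```

Reversing list1 first means its elements come off in back-to-front order, so reverse-onto restores them to front-to-back order ahead of list2; in other words, mystery behaves like append.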
The movie query system can serve as a springboard for discussing general styles of user interfaces and their history. At one point, it seemed that fancier versions of "natural language query interfaces" would be the wave of the future, as a replacement for cryptic formal query languages. Then it turned out that people prefer filling in a form to composing a question. Now some say that speech input will revive this query style. In what contexts (if any) does this seem likely? Why?
If students are annoyed by our ducking of the problem of how to do alphabetical comparisons, you can reassure them that we do eventually address it, in the application section of chapter 13, where we reprise the problem of storing movie records in a tree.
When covering the privacy issues sidebar, a good discussion topic is to consider what the countervailing factors are that tempt computing professionals (even well-meaning, moral, conscientious ones) to not follow the ACM's stated moral imperative. It helps if you focus on the "this imperative implies ..." part, since it is quite concrete.
David Wolfe pointed out that we goofed in the expression tree ADT on page 227: we should have included a selector
(define constant-value
  (lambda (x) x))
and then used it on page 228 by changing the third line to
(cond ((constant? expr) (constant-value expr))
and changing the sixth line to
(cons (constant-value tree) list)
This is here rather than in the errata list because the correction is physically large (more suited to a new edition than a new printing), and because it makes a good discussion topic.
We state that
letter->number can be most easily defined using
member. What we
really mean is that this is the easiest approach, given the subset
of Scheme we had covered. The truly easiest approach is using
case. Therefore, if you are inclined to be less
selective in your coverage of Scheme than we are, this would be a
great time to introduce case.
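For illustration, assuming a simple a=1, b=2, c=3 mapping (the book's actual mapping for letter->number may differ), the case version looks like this:

```scheme
(define letter->number
  (lambda (letter)
    (case letter                 ; dispatch on the symbol itself
      ((a) 1)
      ((b) 2)
      ((c) 3)
      ;; ... one clause per remaining letter ...
      (else (error "unknown letter:" letter)))))
```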
The length procedure can equally well find the length of a list of integers or a list of strings, since it pays no attention to the specific elements of the list. (The other meaning is where multiple procedure implementations exist, but one is selected based on static typing rather than actual run-time type. This is also called overloading.) Finally, some authors speak of "dynamic dispatch" or "late binding."
In the application section, drawing with transformed media can be acted out in class; an example scenario follows. Start with one student, say Alice, who is a direct drawing medium. Alice stands at the board with marker or chalk and when given a command (like "draw a line from (0,0) to (1,1)"), does the specified drawing action on a square region of the board. Now have a second student, say Bob, be a line image, who can be given a command like "draw yourself using Alice." When given this command, Bob gives the above "draw a line" command to Alice, and Alice draws his line, as expected. Now tell a third student, say Christine, that she is to be a turned image, created as a turned version of Bob. Christine isn't to pay any attention to what kind of image Bob is, she is just supposed to know that she is a turned version of Bob. Now we ask Christine to draw herself using Alice. The first thing Christine should do is to recruit a fourth student, say David, to be a transformed medium layered on top of Alice. Next, Christine tells Bob to "draw yourself using David." Bob tells David to "draw a line from (0,0) to (1,1)," just like he had earlier told Alice. But now he is telling it to David, who responds differently than Alice did. Rather than actually doing any drawing, David turns around to Alice and says "draw a line from (0,0) to (1,-1)." Alice now does this, and the turned line shows up on the board, just as Christine was supposed to arrange for.
Another possibility for making the application section more clear (possibly in conjunction with acting it out) is to use UML collaboration diagrams.
A fun possibility for another kind of transformed image is one where a little noise is added to the points' coordinates. This can be used to produce a c-curve that looks hand-sketched, for example.
By the end of chapter 10, students will have seen several different styles of tracing (or explanatory output). In addition to those shown in this chapter, there are the process diagrams in chapters 1-3 and (probably) the tracing output of whatever Scheme implementation they are using. This leads to good discussion material. What are the strengths and weaknesses of each? Did the authors make the right choice here? How about back in chapters 1-3: should they have used the same style back there? Why or why not?
We use a new Scheme syntax,
begin, for the first time
on page 313 without ever explaining it or remarking that it is new.
This was an oversight we'll fix in the next edition. For now, it
would be worth explaining to your students.
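If you do explain it, the key points are that begin evaluates its subexpressions in order, for their effects, and that the value of the whole begin expression is the value of the last subexpression. A small example of our own:

```scheme
(define announce-and-square
  (lambda (x)
    (begin
      (display "squaring ")   ; evaluated first, for its output effect
      (display x)
      (newline)
      (* x x))))              ; the begin's (and so the procedure's) value
```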
To allay any confusion that may result, you should be aware that our discussion of floating point numerals (in the sidebar about what can be stored in a location) mixes the real with the hypothetical. We say that it is typical in a 64-bit word to devote 53 bits to the mantissa and 11 bits to the exponent. This reflects the IEEE standard, which is used in essentially all contemporary computers. We then go on to say that if the two sub-words were used to represent integers in the usual way (meaning two's complement) the mantissa would be able to range from -2^52 to 2^52-1 and the exponent from -2^10 to 2^10-1. While this is literally true (if the fields were so used, the values would so range), we are here straying from the IEEE standard; the IEEE standard represents the mantissa and exponent differently.
You may want to show how to incorporate vectors into the substitution
model. When you create a vector (by applying
make-vector), you draw the vector as a collection of
numbered boxes, somewhere separate from the evaluation trace, and give
it a label like "V1:". Then in the main evaluation, you show the
tag #<V1>, or if you are drawing by hand, you can use a circled
V1. Everywhere the vector gets substituted for a parameter, you just
substitute in this
#<V1> tag. When
vector-set! is applied, you go off and modify the contents of one of
the boxes, and applications of vector-ref
cause you to go look at the row of boxes. Basically you have a "heap"
of mutable objects separate from, but referred to by, the evaluation
process. If you've chosen to introduce the environment model, you can
use the same approach, but with a variable binding like "where v = #<V1>".
The procedure named
vector first shows up, without any
explanation, in a review problem on page 376. Further uses are in a
chapter 12 review problem on page 415, the chapter 13
application code on page 477, and the bootstrapping of
chapter 14's object-oriented programming system on pages
537-538. Nowhere do we ever explain it, nowhere do we include it in
the chapter inventory, and nowhere is it in the index. This was an
oversight, which you may want to correct in class.
We are intentionally a bit sloppy in several of this chapter's algorithms, for example changing exponential time algorithms only to quadratic time when linear time would be possible with some careful attention to details. Elsewhere in the book we explain how to be careful about the details that make the difference between quadratic and linear time (or cubic and linear, or cubic and quadratic, etc). Here we are trying to keep matters simple while frying bigger fish.
Mutable abstract data types (viewed as such, not in terms of their
representation) can be introduced into the substitution or environment
model the same way as vectors are. That is, you can draw a conceptual
picture of a stack and label it "S1:" and then use
#<S1> as the value in the substitution or
environment model of the process, doing the appropriate operations on
the stack whenever called for.
With the second representation for RA-stacks, we introduce the notion of "layering" or "inheritance" of representation invariants. Any node list obeys the node list representation invariant. An RA-stack (in representation 2) is represented as a node list, therefore it obeys the node list representation invariant. However, it also needs to obey an additional representation invariant specific to RA-stacks, leading (when the two are combined) to a narrower restriction on what is a legal RA-stack representation. You should ensure that the students have this concept crystal clear in this simple context of node lists and RA-stacks, because it will be essential in the section on red-black trees. In that latter section, there is a deeper inheritance layering (a red-black tree is a binary search tree is a (ranked) binary tree, with additional constraint at each layer) and there is also additional cognitive burden from other, unrelated, matters. So, the basic notion of layered representation invariants needs to have been already mastered.
Don't get so caught up in the details of specific alternative representations of RA-stacks and queues that your students miss the big picture. Alternative designs are being considered by considering alternative representation invariants, and part of the design process is to find a representation invariant that is strong enough to support efficient operations, yet weak enough to be efficiently maintainable. This is the key lesson of the queue examples. (Red-black trees really derive from the same idea, but it is less accessible to students at this level. You might still want to remark on it, once you've set the stage with queues.)
Everywhere prior to p. 477 that names such as
red-black-tree are used,
they refer to a key comparator, a key extractor, and a red-black tree.
However, on p. 477, they refer to procedures that when given a
dictionary, return that dictionary's key comparator, key extractor,
and red-black tree. Less confusing names for the procedures on p. 477
would be the-key-comparator-of, the-key-extractor-of, and
the-red-black-tree-of. (In a new edition, the
"the-" prefix could probably be left off for brevity.
When explaining the gotcha in the current edition, including it
helps make the distinction clear.)
The dictionaries application section can also form the basis for a good lab on the empirical comparison of data structures' performance. It is easy to construct an alternative implementation of the dictionary ADT that uses linear search in a list. You'll need to have a source for large amounts of data to allow the asymptotic differences to be readily apparent; the movie database is unlikely to suffice. When we've done this lab, we've used the college-wide database linking students' names to their email addresses. Any readily available source of thousands of keyed records will do.
Section 13.5 warns students not to let Scheme's read-eval-print loop display a cyclic structure, lest the system go into an infinite loop. There are some Scheme systems which actually handle this gracefully, either by simply truncating the infinite output at some point, or by using a pointer-like representation that shows the structural sharing explicitly. Therefore, if you are teaching using one of these systems, you may want to modulate our warning a bit, by saying that the students can get away with displaying cyclic structures on the local Scheme implementation, but that it is a bad habit to get into, because on other systems it won't work.
Section 13.5 also introduces
eq?, briefly mentioning that
it tests whether its two arguments are the same object. Depending on
your attitudes and goals, you may want to make a bigger deal than we
do out of the distinctions between the various equality predicates.
The review problem of using a mutable data type to help turn micro-Scheme into mini-Scheme is a very important one. This exercise very forcefully makes the point that one reason for using mutable state is to enhance modularity, by allowing communication to be directly between the involved modules, without having to burden all sorts of unrelated modules. (Of course, undisciplined use of state can hurt modularity, by obscuring the interfaces and interactions of modules.) We don't really emphasize this modularity motivation for state elsewhere in the book, instead focusing more on the other two motivations: efficiency (as in dynamic programming) and modeling (as in the object-oriented adventure game). All three motivations are important in practice.
You may wonder why we put the
class name into each method name (before
^). This is essentially
a replacement for having typed variables. When one says
(foo^bar baz), one is not only invoking the
bar method of the object baz, but
also declaring that object to be of type
foo (though of
course it could be an instance of some descendent class of
foo). Notice that this type declaration is
quite literally error-checked. For example, if one does
(witch/describe (make-object)), one will get an error message, even
though the describe method used for
witches is in fact inherited from the
object class, so it could without any real adverse effects be applied
to a vanilla
object -- but the programmer said it was a
witch, so it
had better be one. This leads to a much more disciplined form of
polymorphism, where interfaces have to be explicitly defined in a
common ancestor of all classes supporting that interface. Other
classes outside the descendents of that ancestor can also have a
method that coincidentally has the same name, but it is treated as
independent, and there is no way to polymorphically select one or the other.
We use the vocabulary of "method name" versus "method implementation." If you have students with a Smalltalk background, they may be more familiar with the alternative vocabulary of "message" versus "method."
Our implementation of the object-oriented programming system contains
a useful procedure, not mentioned in the text, called
oops-display. This is the procedure that the
object/describe method uses for displaying each of the
instance variables' values in a summary form -- for example [an
object of class witch] or
[a 3421 element
vector]. It is useful when debugging, when you don't know for
sure in advance what type of value you are going to encounter (because
the bug may be precisely that you have the wrong type, for example the
person object rather than the symbolic name of the person).
The compu-duds system, like chapter 7's query system, provides a good setting for discussing human-computer interface styles. When were interfaces like this common? Why have they largely disappeared? Where do they still occur? Why?
We can't emphasize too much that the fun part of the adventure game is the creative construction of the game, not the playing of it, and that one should really feel free to totally scrap our setting, characters, etc. We have been using this game (with variations) for years now, and even though from a player's perspective it is a pretty dumb game, every year our students have a blast, remembering it fondly for years afterwards. This is because we encourage them to be creative. Some students exercise their creativity within fairly narrow bounds, but others go wild and leave no recognizable shred of the "Land of Gack." (This flexibility to choose one's own scope of work is a real plus.) Some students have drawn on familiar literary works. This year (1998) our campus (and town) suffered severe damage from a major tornado, and sure enough, we wound up with a bunch of tornado-themed adventure games. Behind all this student involvement lies an important pedagogic goal: providing a memorable metaphor for object-oriented design more broadly. At an alumni gathering, several of our alumni who are involved in commercial object-oriented development and/or consulting revealed that they are still thinking of inheritance (and explaining it to clients) in terms of a witch being a person who acts in a special way.
Looking at the Land of Gack class diagram that shows associations, students may notice that a place's contents can include other places, not just persons and things. This leads to some good exercises which can be done purely at the UML diagramming level, without getting into the code. First, debate whether this is a bug or a feature; it certainly is somewhat non-intuitive, but there may be uses for it. Second, how would you alter the class diagram to rule out this possibility?
Reloading everything from scratch after any change is particularly sanity-preserving when working on the adventure game (though it really is advantageous in many contexts from chapter 5 onward). To make this easy, we suggest having a master file that loads in all the other files. That way you can still keep things modularized (one file per class, plus one for the user interface and one for producing the specific objects that constitute the world) and yet still easily reload it all by loading in a single file.
The stop method in the
Thread class (introduced at the very end of Section 15.4) is now "deprecated," so the compiler warns whenever it is used. Sun is now discouraging the use of this method, and instead recommending the approach we describe in Exercise 15.13. The deprecated
stopmethod is also used in the code for Section 15.5. Of course, we don't make any use of the new features introduced in Java 2. Going backwards to JDK 1.0 rather than forwards would be difficult: we make substantial use of the new API features introduced in 1.1, in particular, the new style of handling GUI events.
We put the
inputItem class method in the
CompuDuds class, but a case could be made for putting it in the
Item class instead -- that way the
CompuDuds class has a dependency only on the
Item class, and remains 100% ignorant of what subclasses Item has.
The BlueJ interactive Java environment works very nicely for the CompuDuds example of section 15.2. (It also works for the applets in sections 15.3-15.5, but provides much less added value there.) The big advantage of using BlueJ for CompuDuds is that you can interact directly with the objects, rather than only through the program's user interface. This parallels how the Scheme version of compu-duds is developed and explored in section 14.2.
One instructor reports that he found it helpful to teach
the material in section 15.3 (on event-driven GUIs) in a different
order than given in the text. If you are comfortable deviating from
the text's order, you might want to consider this
alternative approach and decide which you like better. He started
with the version of the UML class diagram that shows the associations,
and discussed the loop of associations involving the
TileActionListener. From there,
he went to the code for the
TileActionListener. Finally, he
turned to the Puzzle class's code.
The "inner classes" introduced into the Java language in the 1.1 revision could be used to good effect in some of our code, particularly for the listener classes. Our decision to avoid inner classes was motivated by the KISS (keep it simple, stupid) principle.
We give a sample
.html file for the
puzzle applet, complete with the
width and height specifications in the
applet tag. Unfortunately, although those values are reasonable
for the initial version of the puzzle applet, they should be increased
when the controls are added -- the initialize/randomize/poltergeist
buttons. The failure mode if you don't increase these sizes can be
pretty awful under some browsers --
the buttons that there isn't room for are simply omitted!
Our separating out of the
initializeTiles method from
init in section 15.3 is an example of a
refactoring. You may want to discuss this, as refactoring in
general is a hot topic in object-oriented software development. See
Martin Fowler's book, Refactoring: Improving the Design of
Existing Code, Addison Wesley Longman, 1999.
Early versions of the poltergeist-infested puzzle
applet just leave their poltergeist threads running. If your students
are going to browse such a version in a normal browser (as
appletviewer), you should warn them that the
browser may need to be forcibly killed.
The
FormattedField class embodies some
questionable design decisions. In particular, we avoided using the
event mechanism to deliver newly entered values to the client; instead
we invoke a
valueEntered method that must be overridden in
a subclass. We did this to keep matters simple for
beginners. This also explains why we have
FormattedField inherit from
TextField rather than being a new kind of component that internally
contains and makes use of a
TextField. Certainly the "is
a" relationship here is somewhat tenuous. If a
FormattedField were truly a
TextField, we should be able to arbitrarily set its text.
You may wonder why we don't use set! at all. We've got a nice simple model of what a name is (a placeholder for substitution) and we've got a nice simple model of what state is (the numbered memory locations of the underlying computer, chunks of which show through as Scheme's vectors), and there is no point in mucking both of these up just to gain a little bit of notational convenience occasionally. The essence of the preceding argument is that we can tie the functional and imperative models more closely together by using mutable data structures rather than mutable variable bindings. However, we have more: we also get a closer tie to the object-oriented model. Although many object-oriented programming languages make instance variables look syntactically like local variables, they are semantically the slots of a mutable data structure. Therefore, we are closer to object-oriented programming with data structure mutation than we would be with variable mutation.
You may wonder why we don't show the environment model of evaluation
as well as the substitution model. The basic answer is that the book is
already on the large side, and we decided this was one expendable topic.
Naturally, you may disagree with this judgment; a good case can be
made for the environment model. Namely, the environment model
corresponds fairly naturally to how people think. Generally, when
you apply (lambda (x) (* x x)) to 3, you think about evaluating
(* x x)
where x=3, not about evaluating
(* 3 3). The major down-side
to the environment model is that it requires introducing
the notion of closures -- the result of evaluating
(lambda (x) (+ x y))
where y=2 (and +=addition) is the procedure object with parameter list
(x), body
(+ x y), and environment (where y=2,
+=addition). If you do choose to
teach the environment model, we
advocate doing so in terms of these ", where var1=val1,
var2=val2, ..." clauses, rather than in terms of a tree of environment frames
as used by Abelson and Sussman's SICP textbook. The tree of
environment frames is an implementation detail, faithful to some
implementations and not others, and is a big source of additional
cognitive complexity. The only reason why SICP needs the tree of
frames is so mutation (resulting from
set!) can be done
in the (shared) environment frames rather than in separate stateful
objects.
Some students (and others) are likely to ask you why we don't start directly in Java, rather than working our way there via Scheme. We have two answers to this question:
Depending on your background, you may regret that we wait so long to introduce lists. We'd like to put a more positive light on this by pointing out that we introduce not one but two surrogates for list processing in the early chapters: stackable images and numbers viewed as digit strings. The reason why we hold off on lists per se is because we want to introduce them in the context of data abstraction, and want to deal with procedural abstraction before data abstraction.