Tips for Concrete Abstractions Instructors

This web page provides remarks and suggestions that may be of use to instructors who have adopted Concrete Abstractions: An Introduction to Computer Science Using Scheme, by Max Hailperin, Barbara Kaiser, and Karl Knight. We welcome additional suggestions and pointers to resources that you would like to share with other instructors; please drop us an email. We will assume you are giving us permission to redistribute any suggestions you send us; we will acknowledge you as the source, of course.

Chapter 1

Although we introduce Scheme's distinction between exact and inexact numbers, we intentionally keep our discussion brief and gloss over the details. Depending on your interests and those of your students, you may wish to provide some supplementation.
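
For instance, a brief demonstration at the REPL can make the distinction concrete (the exact output will vary a bit across implementations):

(/ 1 3)               ; => 1/3, an exact rational
(/ 1.0 3.0)           ; => .3333333333333333, an inexact approximation
(exact->inexact 1/3)  ; => .3333333333333333
(+ 1/3 2/3)           ; => 1, exactly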

If you are keen on introducing your students early on to the variadic (variable arity) nature of the pre-defined arithmetic procedures, a good place would be the definition of ark-volume. Instead of defining it as (* (* 300 50) 30), you can define it as (* 300 50 30). However, you should understand why we chose not to do so. We don't want pre-defined procedures to be magic, and until we introduce lists in chapter 7, we don't have the necessary machinery to show students how they could write variadic procedures themselves.
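
For what it's worth, once lists and apply are in hand, students can write a variadic procedure themselves; a minimal sketch (ours, not code from the book):

(define product
  (lambda args                   ; args is the list of all the arguments
    (if (null? args)
        1
        (* (car args)
           (apply product (cdr args))))))

(product 300 50 30)              ; => 450000, like (* 300 50 30)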

When students first start working with nested expressions, it is important to emphasize the inside-to-outside order of the processing, because this is counterintuitive to beginners, to a degree that surprises many of us who are more experienced. In an expression like (f (g (h x))), the first thing done is h, then g, and then last of all f. This looks to the beginner like right-to-left order, though it is really inside-to-outside. The problem is that, except for those students whose natural language is Semitic (Hebrew or Arabic), this apparent right-to-left ordering runs counter to very ingrained left-to-right habits. One additional point of leverage on this ordering issue is to demonstrate on the board that one needn't write expressions from left to right either, but can rather construct them by starting with the innermost kernel and then successively wrapping layers around it. In fact, all through the course it is worthwhile to design and write code in the most natural order, rather than trying to start with the first character and work sequentially through to the last. Point out to students that they are using modern text editors, not teletypes, and shouldn't feel compelled to work sequentially.

Our process diagrams show the successive steps in the reduction of problems going down the page, and the levels of subproblems as columns across the page. This orientation is based on the form factor of the printed page. Classroom whiteboards (or blackboards) generally have a much more horizontal form factor. Therefore, you might want to transpose the two dimensions of these diagrams when you do them on the board. That is, you would do the main problem in the topmost horizontal row, with successive steps from left to right. Subproblems would get pulled down into a second row, sub-subproblems even further down, etc. The advantage is that it uses the board better, but the disadvantage is that it causes a change of gears going from the book to the board and back. You should experiment some with both orientations and decide what works best for you.

Some instructors may regret our decision to use (define f (lambda (x y) ...)) rather than the shorthand notation (define (f x y) ...). Although you can of course teach this shorthand, we would like to explain why we avoided it. We used to teach it, but found that it caused students difficulty later, when they needed to use lambda in more flexible ways. In particular, we later write code like

(define f
  (let ((private-state (make-some-stateful-object)))
    (lambda (x y)
       ...)))
which contrasts with
(define f
  (lambda (x y)
    (let ((something-for-just-this-application (make-it)))
      ...)))
Students seem to have much less difficulty seeing the difference between lambda-inside-let and let-inside-lambda than they did when we used the shorthand notation, under which the lambda in the let-inside-lambda case was implicit.

Chapter 2

You should use induction as a debugging tool more than just for the example in the book. The ideal is to use it when a buggy procedure naturally arises, for example, when code is being actively written in class, or better yet, in a closed lab setting. (Using mental debugging tools rather than "the debugger" even when one is seated in front of the computer is a great lesson.) We didn't want to include more examples specifically of debugging, since that would be artificial, but you should keep an eye out for real-life examples that include the other two general categories of bugs that our example doesn't cover: base case bugs, and bugs where the inductive case doesn't algebraically work out to what it should. Regarding base case bugs, the most fertile source of these is iterative algorithms, which are introduced in the next chapter.
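
For instance, here is a hypothetical base-case bug of the kind students produce on their own (our example, not one from the book); induction-style reasoning quickly reveals that the base case fails to cover the smallest legal input:

(define sum-integers          ; intended: the sum of 1 through n, with
  (lambda (n)                 ; (sum-integers 0) => 0
    (if (= n 1)               ; bug: the base case misses n = 0, so
        1                     ; (sum-integers 0) recurses forever
        (+ n (sum-integers (- n 1))))))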

We introduce remainder, but Scheme also has another similar pre-defined procedure, modulo. These two give different results only when applied to two numbers of opposite signs. As this doesn't arise anywhere in the book, we haven't mentioned modulo or explained the distinction.
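
A quick illustration of the distinction, should you want one (standard R5RS behavior):

(remainder 7 2)    ; => 1
(modulo 7 2)       ; => 1
(remainder -7 2)   ; => -1  (the sign follows the dividend)
(modulo -7 2)      ; => 1   (the sign follows the divisor)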

Chapter 3

It really is worth taking the class time to compare iterative and recursive paper chain assembly side by side. When the difference is well understood, you can also then compare two different iterative versions. In the one we give in the text, the request is to "link k more links onto this chain." In the other version, the request is of the form "here is a chain of length k, build it up to length n." In the first version, the counting is downward, while in the second it is upward. Students at this level often do not immediately see that the distinction between iteration and recursion is more fundamental than the distinction between counting up and counting down.

When looking at the process diagram for iterative factorial, it may pay to refer back to the one for tax in chapter 1, since that is a simpler example of reducing a problem to a simpler problem with the same answer.

Some instructors may regret our decision to use internal definitions rather than letrec. The two are completely equivalent, but individual instructors tend to have strong preferences for one or the other, depending on whether they themselves learned in the MIT or Indiana tradition. (MIT teaches using internal define while Indiana teaches using letrec.) Each notation has its advantages, but in our opinion the arguments in each direction are approximately equally strong. This is different from the case with (define f (lambda (x y) ...)) vs. the (define (f x y) ...) shorthand, another Indiana vs. MIT distinction. There we see a strong reason to favor the Indiana tradition of not using the shorthand. Here, lacking any strong reason one way or the other, we have gone with the MIT tradition we personally grew up with.
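
For instructors who want to show the correspondence between the two traditions, here is one and the same procedure written both ways (a sketch of our own, not code from the book):

(define factorial             ; MIT tradition: internal define
  (lambda (n)
    (define iterate
      (lambda (n acc)
        (if (= n 0)
            acc
            (iterate (- n 1) (* acc n)))))
    (iterate n 1)))

(define factorial             ; Indiana tradition: letrec
  (lambda (n)
    (letrec ((iterate (lambda (n acc)
                        (if (= n 0)
                            acc
                            (iterate (- n 1) (* acc n))))))
      (iterate n 1))))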

In the exercise of more efficiently calculating the sum-of-divisors, the reason to stop when low^2 is greater than or equal to n, rather than when low is greater than or equal to the square root of n, is to avoid floating-point round-off error.
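
One way to realize this (a sketch with our own choice of names, and with n counted among its own divisors; the book's exercise may differ in details) keeps every comparison in exact integer arithmetic:

(define sum-of-divisors       ; sums all divisors of n, including n itself
  (lambda (n)
    (define iterate
      (lambda (low acc)
        (cond ((> (* low low) n) acc)           ; exact test, no sqrt
              ((= (* low low) n) (+ acc low))   ; count the square root once
              ((= (remainder n low) 0)          ; low and n/low pair up
               (iterate (+ low 1) (+ acc low (quotient n low))))
              (else (iterate (+ low 1) acc)))))
    (iterate 1 0)))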

You might care to point out that the numerators and denominators of section 3.4's rational approximations to the golden ratio are Fibonacci numbers. This would be most appropriate if your students are already familiar with Fibonacci numbers from another course -- we don't introduce them until chapter 12.

The notes section at the end of the chapter provides some explanation for our decision to focus on invariant quantities, rather than invariant assertions. We can shed a bit more light on that by pointing out that in chapter 13 we introduce representation invariants that are propositional in form, like loop invariant assertions. (We continue to use such invariants in chapters 14 and 15.) This difference reflects the differing circumstances. Here, in a purely functional context, we can focus on the value computed, while there, in a state-transformation context, we need to focus on the properties of the successive states.

Chapter 4

We strongly encourage you to give each student the card sorting experience. It may not seem to be college/university level, but there is really no substitute for personally feeling the selection sort becoming progressively more stupid as the deck size goes up. If you do closed labs, you might want to do the sorting in lab. If not, the logistics of getting the students to do it with physical cards may be insurmountable, so consider our simulation applet as an alternative.

Our "asymptotic outlook" takes students some experience to get used to (hence the card sorting experience), but it isn't likely to be half as puzzling to the students as its name is. Their prior experience with what an asymptote is doesn't correspond at all well with the broader sense of asymptotics. You can help by explaining.

Section 3.3 (searching for perfect numbers) can be reprised in chapter 4 as an empirical lab on orders of growth. In addition to comparing the orders of growth of the two versions of sum-of-divisors when they are used directly, you can also compare the outcome when each is used by a parent procedure that scans the range of integers from 1 to n, calculating the sum-of-divisors of each number in that range. (One reason to do this scan is to compare the relative frequency of abundant and deficient numbers, that is, those with too high vs. too low a sum of divisors to be perfect.) The order of growth of this scanning procedure can be analyzed using the same sort of reasoning as we illustrate for selection sort.
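
A sketch of such a scanning procedure (names and details are our own; it assumes a sum-of-divisors that counts n among its own divisors, as in the chapter 3 sketch above):

(define classify-range    ; => (abundant-count deficient-count perfect-count)
  (lambda (n)
    (define scan
      (lambda (i abundant deficient perfect)
        (if (> i n)
            (list abundant deficient perfect)
            (let ((s (- (sum-of-divisors i) i)))   ; proper divisors only
              (cond ((> s i) (scan (+ i 1) (+ abundant 1) deficient perfect))
                    ((< s i) (scan (+ i 1) abundant (+ deficient 1) perfect))
                    (else (scan (+ i 1) abundant deficient (+ perfect 1))))))))
    (scan 1 0 0 0)))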

Students are likely to happily assume that d-curves have that name because d comes after c in the alphabet. If any are curious, however, you can tell them that d actually stands for dragon; these fractals are traditionally called dragon curves. C-curves, of course, are named after their C-like shape.

Exercise 4.7 can be improved by suggesting that students do some additional length-of-c-curve computations that have the same levels as those suggested in the exercise, but with endpoints that are at a separation other than 1. That way they will go into the mathematical analysis aware that the length needs to be expressed as a function of the endpoint separation as well as the level.

Chapter 5

The debunk-halts? procedure can be dramatized in a fun way, tentatively attributed to Gerald Sussman of MIT. Tell the students that you are going to prove that none of them can correctly predict whether you are going to dismiss class or babble on forever. You are going to do this by following the strategy of debunk-halts?. Now carefully choose one of the students who has been paying attention (otherwise you are in trouble) and ask him or her whether you are going to dismiss class. (If in doubt about the student, be sure to emphasize that you are going to prove him or her wrong.)

Chapter 6

The data abstraction idea, introduced with the cat-tail socks story, can be acted out in class. You get three people to be the constructor and the two selectors, and they huddle and agree on an encoding. Then a customer can quietly ask the constructor to construct a representation of the kind of sock they want, and the constructor writes down the appropriate magic number on an index card. The customer hands that card to the student portraying the mail-order company, who can see what is on the card but has no idea what it represents, since the encoding is a secret. However, that secret is known to the two selector students. So, by passing the card to each of them and asking the question that selector knows how to answer, the two pieces of information can be decoded back out -- and announced out loud, to be confirmed (hopefully!) by the original customer. This can be iterated with new constructor and selector students who come up with a new secret representation -- creativity can really be exercised in coming up with one! For classroom visibility, you may want to use large cards and a fat marker, so that the card can be held up and seen by all. (Seeing the encoded value also gives the non-involved students something to do while awaiting completion of the transaction; they can try to guess how to interpret it.)
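
If you want to follow up the dramatization with code, here is a minimal sketch of one possible secret encoding (entirely hypothetical; the whole point is that any encoding the constructor and selectors agree on will do):

(define make-sock             ; encode two small codes in one number
  (lambda (color style)       ; color and style each in the range 0-9
    (+ (* 10 color) style)))  ; the "magic number" written on the card

(define sock-color (lambda (sock) (quotient sock 10)))
(define sock-style (lambda (sock) (remainder sock 10)))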

This is the chapter where we introduce programs that do their own I/O (for example using display and read), rather than all the interaction being by the Scheme system's read-eval-print loop (REPL). This is a big transition for the students, and needs to be handled much more carefully than we did in the text. In particular, students have difficulty getting the distinction between an expression's value, displayed by the REPL, and any other output that the expression may display as a side effect. If you use a Scheme system (such as DrScheme) that makes a visual distinction between the two kinds of output, the conceptual distinction may be easier to explain. But it still needs explaining.
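
A small example of our own (not from the text) that makes the distinction discussable:

(define announce-square
  (lambda (n)
    (display "the square is ")
    (display (* n n))
    (newline)
    (* n n)))     ; the value; the displays above are side effects

Evaluating (announce-square 3) at the REPL first shows the displayed output "the square is 9" and then the REPL itself prints the expression's value, 9. In DrScheme the two appear visually distinct; in other systems they run together.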

Chapter 7

If you are familiar with other Scheme or Lisp textbooks, you may be surprised that we don't do any "deep" list processing. For example, we don't write a procedure to count the atoms in an arbitrarily deeply nested list structure, or to reverse such a structure at all levels. Such examples are generally quite artificial. In more realistic contexts, it is generally preferable not to think about nested lists, but rather to think about some more abstract data type that may happen to be represented as nested lists. This can come in two forms. One is where different abstract structures are being nested, but all happen to be represented concretely as lists. For example, one might have a list of personnel records, where each personnel record is itself represented as an n-element list (for some particular n), one of the elements of which is a list of prior employers, etc. In a case like this, it makes no sense to operate uniformly on all the levels, since they are semantically quite distinct, even though each is represented as a list. The other case is where there genuinely is a uniform hierarchical composition of the same abstract data type at each level. This is the topic of the next chapter. However, even there (where we do wind up writing "deep" recursions that operate at each level of the hierarchy), we aren't writing in terms of nested lists, but rather in terms of some more abstract type, such as binary search trees. This is important, because it lets us do things like having three components (root, left subtree, and right subtree) only two of which (the subtrees) should be treated recursively. The root value may happen to also be represented as a list, but there is no reason to recursively descend into it.

Section 7.4 introduces without any fanfare the use of equal? to test for list "equality". Before this point, we use equal? to test for equality of symbols. (The more usual practice of using eq? for symbols is only justifiable on penny-pinching efficiency grounds.) Because of this prior use of equal?, students are not generally surprised to see it used for lists as well, and its sense of equality (structural equality, also known as isomorphism) is also generally not surprising. Only once we introduce mutable objects does it make much sense to introduce object identity as an alternative sense of equality, and to introduce the procedure for testing it, eq?. Therefore, we leave this until Chapter 13. However, depending on your predilections, you might want to make more of a point here about what exactly equal? is doing.
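
If you do, a few REPL interactions suffice (standard Scheme behavior):

(equal? (list 1 2 3) (list 1 2 3))   ; => #t, same structure
(eq? (list 1 2 3) (list 1 2 3))      ; => #f, two distinct objects
(equal? 'foo 'foo)                   ; => #t for symbols, as used earlier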

Consider assigning the following additional exercise:

  1. Suppose that the reverse-onto procedure, defined internally to reverse in section 7.4, were instead defined separately. Now consider the following procedure definition:
    (define mystery
      (lambda (list1 list2)
        (reverse-onto (reverse list1)
                      list2)))
    
    Describe clearly what the mystery procedure does, and how you know that it does that.
  2. Write a procedure that has the same effect as the mystery procedure in part 1, but that generates a linearly recursive process and calls only itself and the built-in procedures cons, car, cdr, and null?.

The movie query system can serve as a springboard for discussing general styles of user interfaces and their history. At one point, it seemed that fancier versions of "natural language query interfaces" would be the wave of the future, as a replacement for cryptic formal query languages. Then it turned out that people prefer filling in a form to composing a question. Now some say that speech input will revive this query style. In what contexts (if any) does this seem likely? Why?

Chapter 8

If you have a projection computer in your classroom, our BST applet makes a good demo, with the students telling you what to do next, and why.

If students are annoyed by our ducking of the problem of how to do alphabetical comparisons, you can reassure them that we do eventually address it, in the application section of chapter 13, where we reprise the problem of storing movie records in a tree.

When covering the privacy issues sidebar, a good discussion topic is to consider what the countervailing factors are that tempt computing professionals (even well-meaning, moral, conscientious ones) to not follow the ACM's stated moral imperative. It helps if you focus on the "this imperative implies ..." part, since it is quite concrete.

David Wolfe pointed out that we goofed in the expression tree ADT on page 227: we should have included a selector

(define constant-value
  (lambda (x) x))
and then used it on page 228 by changing the third line of evaluate to
    (cond ((constant? expr) (constant-value expr))
and changing the sixth line of post-order to
            (cons (constant-value tree) list)
This is here rather than in the errata list because the correction is physically large (more suited to a new edition than a new printing), and because it makes a good discussion topic.

We state that letter->number can be most easily written using cond and member. What we really mean is that this is the easiest approach, given the subset of Scheme we had covered. The truly easiest approach is using case. Therefore, if you are inclined to be less selective in your coverage of Scheme than we are, this would be a great time to introduce case.
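
For instance, assuming a telephone-keypad-style mapping (purely illustrative; adapt it to the book's actual table), case lets several letters share a single clause:

(define letter->number
  (lambda (letter)
    (case letter
      ((a b c) 2)
      ((d e f) 3)
      ((g h i) 4)
      ((j k l) 5)
      ((m n o) 6)
      ((p q r s) 7)
      ((t u v) 8)
      ((w x y z) 9)
      (else 0))))   ; or signal an error, per your own conventions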

Chapter 9

In this chapter (and the remainder of the book), we use the word "generic" to refer to procedures where the specific implementation is dynamically selected, based on the actual type of value the procedure is applied to. You should be aware that the nomenclature in this area is decidedly not standardized, neither with regard to what "generic" means nor with regard to what one should call the dynamic selection of a procedure implementation. Some authors use "generic" to refer to type parameterization, in which a general template is provided that needs to have a particular type filled in; C++'s templates are an example of this. What we refer to as genericness, some other authors call polymorphism. However, polymorphism is also used by other authors to mean at least two other things. (One is the use of a single procedure implementation that works transparently across all types. For example, Scheme's length procedure can equally well find the length of a list of integers or a list of strings, since it pays no attention to the specific elements of the list. The other meaning is where multiple procedure implementations exist, but one is selected based on static typing rather than actual run-time type. This is also called overloading.) Finally, some authors speak of "dynamic dispatch" or "late binding."

In the application section, drawing with transformed media can be acted out in class; an example scenario follows. Start with one student, say Alice, who is a direct drawing medium. Alice stands at the board with marker or chalk and when given a command (like "draw a line from (0,0) to (1,1)"), does the specified drawing action on a square region of the board. Now have a second student, say Bob, be a line image, who can be given a command like "draw yourself using Alice." When given this command, Bob gives the above "draw a line" command to Alice, and Alice draws his line, as expected. Now tell a third student, say Christine, that she is to be a turned image, created as a turned version of Bob. Christine isn't to pay any attention to what kind of image Bob is, she is just supposed to know that she is a turned version of Bob. Now we ask Christine to draw herself using Alice. The first thing Christine should do is to recruit a fourth student, say David, to be a transformed medium layered on top of Alice. Next, Christine tells Bob to "draw yourself using David." Bob tells David to "draw a line from (0,0) to (1,1)," just like he had earlier told Alice. But now he is telling it to David, who responds differently than Alice did. Rather than actually doing any drawing, David turns around to Alice and says "draw a line from (0,0) to (1,-1)." Alice now does this, and the turned line shows up on the board, just as Christine was supposed to arrange for.

Another possibility for making the application section more clear (possibly in conjunction with acting it out) is to use UML collaboration diagrams.

A fun possibility for another kind of transformed image is one where a little noise is added to the points' coordinates. This can be used to produce a c-curve that looks hand-sketched, for example.

Chapter 10

The order of evaluation of subexpressions of applications in micro- and mini-Scheme is undefined, because our implementation winds up depending on the order of the underlying Scheme, which is itself undefined. Thus, in the application section, the tracing output may look different than shown (be in a different order). In particular, under MIT-Scheme, the order will be reversed -- first the operands, right to left, then the operator.

By the end of chapter 10, students will have seen several different styles of tracing (or explanatory output). In addition to those shown in this chapter, there are the process diagrams in chapters 1-3 and (probably) the tracing output of whatever Scheme implementation they are using. This leads to good discussion material. What are the strengths and weaknesses of each? Did the authors make the right choice here? How about back in chapters 1-3: should they have used the same style back there? Why or why not?

We use a new Scheme syntax, begin, for the first time on page 313 without ever explaining it or remarking that it is new. This was an oversight we'll fix in the next edition. For now, it would be worth explaining to your students.
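
In brief, begin evaluates its subexpressions in order, discarding every value but the last:

(begin
  (display "working...")
  (newline)
  (* 6 7))    ; the begin expression's value is 42; the display is a side effect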

Chapter 11

It is possible to dramatize the inner workings of SLIM in class, having one student play the role of each functional unit. Those who are playing the role of state elements (registers, memories, and PC) should stand in front of the board, which they use to hold their units' state. The instructor will need to initially direct the action, though the students generally catch on. In addition to this full dramatization, it also is worthwhile to step through a program's execution at a higher level. This can be done either manually or by using SLIME with a projection system.

To allay any confusion that may result, you should be aware that our discussion of floating point numerals (in the sidebar about what can be stored in a location) mixes the real with the hypothetical. We say that it is typical in a 64-bit word to devote 53 bits to the mantissa and 11 bits to the exponent. This reflects the IEEE standard, which is used in essentially all contemporary computers. We then go on to say that if the two sub-words were used to represent integers in the usual way (meaning two's complement) the mantissa would be able to range from -2^52 to 2^52-1 and the exponent from -2^10 to 2^10-1. While this is literally true (if the fields were so used, the values would so range), we are here straying from the IEEE standard; the IEEE standard represents the mantissa and exponent differently.

You may want to show how to incorporate vectors into the substitution model. When you create a vector (by applying make-vector), you draw the vector as a collection of numbered boxes, somewhere separate from the evaluation trace, and give it a label like "V1:". Then in the main evaluation, you show the value as #<V1>, or if you are drawing by hand, you can use a circled V1. Everywhere the vector gets substituted for a parameter, you just substitute in this #<V1> tag. When vector-set! is applied, you go off and modify the contents of one of the boxes. Similarly, vector-ref and vector-length cause you to go look at the row of boxes. Basically you have a "heap" of mutable objects separate from, but referred to by, the evaluation process. If you've chosen to introduce the environment model, you can use the same approach, but with a variable binding like "where x=#<V1>".
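
For example (the notation here is our own), after

(define v (make-vector 3 0))

you would draw V1: as three boxes, each containing 0, off to the side, and use #<V1> as v's value in the trace. Then:

(vector-set! v 1 7)   ; go modify box 1 of V1, which becomes [0][7][0]
(vector-ref v 1)      ; => 7, read off from V1's boxes
(vector-length v)     ; => 3, from the shape of V1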

The procedure named vector first shows up, without any explanation, in a review problem on page 376. Further uses are in a chapter 12 review problem on page 415, the chapter 13 application code on page 477, and the bootstrapping of chapter 14's object-oriented programming system on pages 537-538. Nowhere do we ever explain it, nowhere do we include it in the chapter inventory, and nowhere is it in the index. This was an oversight, which you may want to correct in class.
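
If you do explain it, the essentials fit on the board in a few lines: vector builds a vector directly from its arguments, so that

(vector 'a 'b 'c)     ; => #(a b c)

is roughly equivalent to

(let ((v (make-vector 3)))
  (vector-set! v 0 'a)
  (vector-set! v 1 'b)
  (vector-set! v 2 'c)
  v)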

Chapter 12

In section 12.2's analysis of the order of growth of Fibonacci numbers, we don't make explicit the difference between, on the one hand, being Theta(b^n) for a b that is between sqrt(2) and 2, and on the other hand being both Omega(sqrt(2)^n) and O(2^n). We prove the latter (looser) condition, while saying the empirical evidence seems to suggest the former (tighter) condition. Nothing we say is outright wrong, but we do perhaps leave some students with the impression that the two are equivalent. It happens that the Fibonacci order of growth is Theta(b^n), where b happens to be the golden ratio, but there are other functions that lurch around enough between their lower and upper bounds to grow faster than sqrt(2)^n and slower than 2^n without being Theta(b^n) for any b. (For example, a function equal to 2^n for even n and sqrt(2)^n for odd n is Omega(sqrt(2)^n) and O(2^n), yet is not Theta(b^n) for any single b.)

We are intentionally a bit sloppy in several of this chapter's algorithms, for example changing exponential time algorithms only to quadratic time when linear time would be possible with some careful attention to details. Elsewhere in the book we explain how to be careful about the details that make the difference between quadratic and linear time (or cubic and linear, or cubic and quadratic, etc). Here we are trying to keep matters simple while frying bigger fish.

Chapter 13

Two exercises call for working through shift/reduce parsing with index cards. This also makes a good in-class activity, using large cards perched on the chalk tray or easels.

Mutable abstract data types (viewed as such, not in terms of their representation) can be introduced into the substitution or environment model the same way as vectors are. That is, you can draw a conceptual picture of a stack and label it "S1:" and then use #<S1> as the value in the substitution or environment model of the process, doing the appropriate operations on the stack whenever called for.

With the second representation for RA-stacks, we introduce the notion of "layering" or "inheritance" of representation invariants. Any node list obeys the node list representation invariant. An RA-stack (in representation 2) is represented as a node list, therefore it obeys the node list representation invariant. However, it also needs to obey an additional representation invariant specific to RA-stacks, leading (when the two are combined) to a narrower restriction on what is a legal RA-stack representation. You should ensure that the students have this concept crystal clear in this simple context of node lists and RA-stacks, because it will be essential in the section on red-black trees. In that latter section, there is a deeper inheritance layering (a red-black tree is a binary search tree is a (ranked) binary tree, with additional constraint at each layer) and there is also additional cognitive burden from other, unrelated, matters. So, the basic notion of layered representation invariants needs to have been already mastered.

Don't get so caught up in the details of specific alternative representations of RA-stacks and queues that your students miss the big picture. Alternative designs are being considered by considering alternative representation invariants, and part of the design process is to find a representation invariant that is strong enough to support efficient operations, yet weak enough to be efficiently maintainable. This is the key lesson of the queue examples. (Red-black trees really derive from the same idea, but it is less accessible to students at this level. You might still want to remark on it, once you've set the stage with queues.)

Everywhere prior to p. 477 that the names key-comparator, key-extractor, and red-black-tree are used, they refer to a key comparator, a key extractor, and a red-black tree. However, on p. 477, they refer to procedures that when given a dictionary, return that dictionary's key comparator, key extractor, and red-black tree. Less confusing names for the procedures on p. 477 might be the-key-comparator-of, the-key-extractor-of, and the-red-black-tree-of. (In a new edition, the "the-" prefix could probably be left off for brevity. When explaining the gotcha in the current edition, including it provides emphasis.)

The dictionaries application section can also form the basis for a good lab on the empirical comparison of data structures' performance. It is easy to construct an alternative implementation of the dictionary ADT that uses linear search in a list. You'll need to have a source for large amounts of data to allow the asymptotic differences to be readily apparent; the movie database is unlikely to suffice. When we've done this lab, we've used the college-wide database linking students' names to their email addresses. Any readily available source of thousands of keyed records will do.

Section 13.5 warns students not to let Scheme's read-eval-print loop display a cyclic structure, lest the system go into an infinite loop. There are some Scheme systems which actually handle this gracefully, either by simply truncating the infinite output at some point, or by using a pointer-like representation that shows the structural sharing explicitly. Therefore, if you are teaching using one of these systems, you may want to modulate our warning a bit, by saying that the students can get away with displaying cyclic structures on the local Scheme implementation, but that it is a bad habit to get into, because on other systems it won't work.

Section 13.5 also introduces eq?, briefly mentioning that it tests whether its two arguments are the same object. Depending on your attitudes and goals, you may want to make a bigger deal than we do out of the distinctions between the various equality predicates.

The review problem of using a mutable data type to help turn micro-Scheme into mini-Scheme is a very important one. This exercise very forcefully makes the point that one reason for using mutable state is to enhance modularity, by allowing communication to be directly between the involved modules, without having to burden all sorts of unrelated modules. (Of course, undisciplined use of state can hurt modularity, by obscuring the interfaces and interactions of modules.) We don't really emphasize this modularity motivation for state elsewhere in the book, instead focusing more on the other two motivations: efficiency (as in dynamic programming) and modeling (as in the object-oriented adventure game). All three motivations are important in practice.

Chapter 14

Many students share the misconception that object-oriented programming is new. It may be worth pointing out that it actually dates back to the 1960s.

You may wonder why we put the class name into each method name (before / or ^). This is essentially a replacement for having typed variables. When one says (foo/bar baz), one is not only invoking the bar method of the object baz, but also declaring that object to be of type foo (though of course it could be an instance of some descendant class of foo). Notice that this type declaration is quite literally error-checked. For example, if one does (witch/describe (make-object)), one will get an error message, even though the describe method used for witches is in fact inherited from the object class, so could without any real adverse effects be applied to a vanilla object -- but the programmer said it was a witch, so it had better be one. This leads to a much more disciplined form of polymorphism, where interfaces have to be explicitly defined in a common ancestor of all classes supporting that interface. Other classes outside the descendants of that ancestor can also have a method that coincidentally has the same name, but it is treated as independent, and there is no way to polymorphically select one or the other.

We use the vocabulary of "method name" versus "method implementation." If you have students with a Smalltalk background, they may be more familiar with the alternative vocabulary of "message" versus "method."

Our implementation of the object-oriented programming system contains a useful procedure, not mentioned in the text, called oops-display. This is the procedure that the object/describe method uses for displaying each of the instance variables' values in a summary form -- for example [an object of class witch] or [a 3421 element vector]. It is useful when debugging, when you don't know for sure in advance what type of value you are going to encounter (because the bug may be precisely that you have the wrong type, for example the person object rather than the symbolic name of the person).

The compu-duds system, like chapter 7's query system, provides a good setting for discussing human-computer interface styles. When were interfaces like this common? Why have they largely disappeared? Where do they still occur? Why?

We can't emphasize too much that the fun part of the adventure game is the creative construction of the game, not the playing of it, and that one should really feel free to totally scrap our setting, characters, etc. We have been using this game (with variations) for years now, and even though from a player's perspective it is a pretty dumb game, every year our students have a blast, remembering it fondly for years afterwards. This is because we encourage them to be creative. Some students exercise their creativity within fairly narrow bounds, but others go wild and leave no recognizable shred of the "Land of Gack." (This flexibility to choose one's own scope of work is a real plus.) Some students have drawn on familiar literary works. This year (1998) our campus (and town) suffered severe damage from a major tornado, and sure enough, we wound up with a bunch of tornado-themed adventure games. Behind all this student involvement lies an important pedagogic goal: providing a memorable metaphor for object-oriented design more broadly. At an alumni gathering, several of our alumni who are involved in commercial object-oriented development and/or consulting revealed that they are still thinking of inheritance (and explaining it to clients) in terms of a witch being a person who acts in a special way.

Looking at the Land of Gack class diagram that shows associations, students may notice that a place's contents can include other places, not just persons and things. This leads to some good exercises which can be done purely at the UML diagramming level, without getting into the code. First, debate whether this is a bug or a feature; it certainly is somewhat non-intuitive, but there may be uses for it. Second, how would you alter the class diagram to rule out this possibility?

Reloading everything from scratch after any change is particularly sanity-preserving when working on the adventure game (though it really is advantageous in many contexts from chapter 5 onward). To make this easy, we suggest having a master file that loads in all the other files. That way you can still keep things modularized (one file per class, plus one for the user interface and one for producing the specific objects that constitute the world) and yet still easily reload it all by loading in a single file.

Chapter 15

We have specified version 1.1 of the Java language and API. However, we have now also tested our code under the JDK 1.2.1 implementation of Java 2, and it seems to all still compile and run fine. The only minor hitch is that the stop method in the Thread class (introduced at the very end of Section 15.4) is now "deprecated," so the compiler mentions this. Sun is now discouraging the use of this method, and instead recommending the approach we describe in Exercise 15.13. The deprecated stop method is also used in the code for Section 15.5. Of course, we don't make any use of the new features introduced in Java 2. Going backwards to JDK 1.0 rather than forwards would be difficult: we make substantial use of the new API features introduced in 1.1, in particular, the new style of handling GUI events.

We put the inputItem class method in the CompuDuds class, but a case could be made for putting it in the Item class instead -- that way the CompuDuds class has a dependency only on the Item class, and remains 100% ignorant of what subclasses Item might have.

The BlueJ interactive Java environment works very nicely for the CompuDuds example of section 15.2. (It also works for the applets in sections 15.3-15.5, but provides much less added value there.) The big advantage of using BlueJ for CompuDuds is that you can interact directly with the objects, rather than only through the program's user interface. This parallels how the Scheme version of compu-duds is developed and explored in section 14.2.

One instructor reports that he found it helpful to teach the material in section 15.3 (on event-driven GUIs) in a different order than given in the text. If you are comfortable deviating from the text's order, you might want to consider this alternative approach and decide which you like better. He started with the version of the UML class diagram that shows the associations, and discussed the loop of Puzzle, Button, and TileActionListener. From there, he went to the code for TileActionListener. Finally, he presented the Puzzle class's code.

The "inner classes" introduced into the Java language in the 1.1 revision could be used to good effect in some of our code, particularly for the listener classes. Our decision to avoid inner classes was motivated by the KISS (keep it simple, stupid) principle.

We give a sample .html file for the puzzle applet, complete with the height and width specifications in the applet tag. Unfortunately, although those values are reasonable for the initial version of the puzzle applet, they should be increased when the controls are added -- the initialize/randomize/poltergeist buttons. The failure mode if you don't increase these sizes can be pretty awful under some browsers -- the buttons that there isn't room for are simply omitted!

Our separating out of the initializeTiles method from init in section 15.3 is an example of a refactoring. You may want to discuss this, as refactoring in general is a hot topic in object-oriented software development. See Martin Fowler's book, Refactoring: Improving the Design of Existing Code, Addison Wesley Longman, 1999.

Early versions of the poltergeist-infested puzzle applet just leave their poltergeist threads running. If your students are going to browse such a version in a normal browser (as opposed to appletviewer), you should warn them that the browser may need to be forcibly killed.

The FormattedField class embodies some questionable design decisions. In particular, we avoided using the event mechanism to deliver newly entered values to the client; instead we invoke a valueEntered method that must be overridden in a subclass. We did this simply to keep matters simple for beginners. This also explains why we have FormattedField extend TextField rather than being a new kind of component that internally contains and makes use of a TextField. Certainly the "is a" relationship here is somewhat tenuous. If a FormattedField were truly a TextField, then like any TextField we should be able to arbitrarily set its text.

General

You may wonder why we have chosen not to use set! at all. We've got a nice simple model of what a name is (a placeholder for substitution) and we've got a nice simple model of what state is (the numbered memory locations of the underlying computer, chunks of which show through as Scheme's vectors), and there is no point in mucking both of these up just to gain a little bit of notational convenience occasionally. The essence of the preceding argument is that we can tie the functional and imperative models more closely together by using mutable data structures rather than mutable variable bindings. However, we have more: we also get a closer tie to the object-oriented model. Although many object-oriented programming languages make instance variables look syntactically like local variables, they are semantically the slots of a mutable data structure. Therefore, we are closer to object-oriented programming with data structure mutation than we would be with variable mutation.

You may wonder why we don't show the environment model of evaluation as well as the substitution model. The basic answer is that the book is already on the large side, and we decided this was one expendable topic. Naturally, you may disagree with this judgment; a good case can be made for the environment model. Namely, the environment model corresponds fairly naturally to how people think. Generally when applying (lambda (x) (* x x)) to 3, you think about evaluating (* x x) where x=3, not about evaluating (* 3 3). The major downside to the environment model is that it requires introducing the notion of closures: the result of evaluating (lambda (x) (+ x y)) where y=3 (and +=addition) is the procedure object with parameter list x, body (+ x y), and environment y=3 (and +=addition). If you do choose to teach the environment model, we advocate doing so in terms of these ", where var1=val1, var2=val2, ..." clauses, rather than in terms of a tree of environment frames as used by Abelson and Sussman's SICP textbook. The tree of environment frames is an implementation detail, faithful to some implementations and not others, and is a big source of additional cognitive complexity. The only reason why SICP needs the tree of frames is so that mutation (resulting from set!) can be done in the (shared) environment frames rather than in separate stateful objects.

Some students (and others) are likely to ask you why we don't start directly in Java, rather than working our way there via Scheme. We have two answers to this question:

  1. By going to Java via Scheme, we have the opportunity to construct object-oriented programming, rather than simply accepting it as a given. We mean this in two senses:
    1. We can build up the constituent ideas of object-oriented programming one by one: data-abstraction, generic/polymorphic operations, state, and inheritance. We construct the programming style so that we see how the ideas fit together.
    2. We can build an implementation of the object-oriented programming system in terms of more primitive elements, such as vectors. We construct the programming infrastructure so that we see how the ideas are supported behind the scenes.
  2. Scheme is simpler than Java, so in the early going when we are still struggling with the very idea of programming, we don't have to additionally juggle a lot of individual linguistic constructs.

Depending on your background, you may regret that we wait so long to introduce lists. We'd like to put a more positive light on this by pointing out that we introduce not one but two surrogates for list processing in the early chapters: stackable images and numbers viewed as digit strings. The reason we hold off on lists per se is that we want to introduce them in the context of data abstraction, and we want to deal with procedural abstraction before data abstraction.


For more information, see the parent web page, or contact Max Hailperin:
Email: max@gustavus.edu
Mathematics and Computer Science Department
Gustavus Adolphus College
800 W. College Avenue
St. Peter, MN 56082
USA
Revision 1.12 as of 2002/06/07 16:30:25