Comments on the Tucker and Noonan book:
Page 50: The characterizations of the three kinds of semantics are imprecise. In particular, denotational semantics identifies some mathematical object as the meaning of each piece of abstract syntax, but those objects need not be, and in fact generally are not, state-transforming functions. We will consider these matters more carefully later in the chapter as we work our way through the three kinds of semantics.
Also, contrary to what the last sentence before the footnotes suggests, denotational semantics is not more suited than operational semantics for laboratory exploration. (After all, operational semantics is based on actual execution.) Tucker and Noonan can suggest that they will use denotational semantics to support laboratory exploration only because their "denotational" semantics essentially amounts to an operational semantics.
Page 51: Regarding the statement that "If x is a double, the expression x+u.p causes an error...," please note that this expression would cause the same error no matter what type x has; the error stems from the access to u.p, not from x.
Also, a dynamically typed language like Lisp need not be type
safe. The language specifications typically allow implementations to
omit checking for some type errors and instead produce arbitrary,
unspecified results. For example, in Scheme, it is an error to apply
(lambda (x) (+ x 1)) to an argument that
is not a number. The language specification does not require this
error to be checked for and signaled. Instead, a Scheme
implementation is free to behave any way whatsoever if presented with
an erroneous program. Similar remarks apply to Common Lisp.
Page 52: The top of this page suggests that a type system can be defined as a set of mathematical functions that define what it means for a program to be type safe. To capture the standard notion of a static type system, it is important that these functions be computable, that is, that they define a decidable, conservative notion of type safety. We can look in class at examples of programs where it is undecidable whether they will encounter type errors when run. The type system must declare such programs to be unsafe, even if in fact the errors would not occur.
Also, typedefs in C do not introduce new types; they merely create shorthand names for existing types. (C does, however, have mechanisms for constructing new types, analogous to classes in Java: arrays, pointers, structures, and unions.)
Pages 52-55: These pages sketch rules for determining properties of ASTs: whether the ASTs are valid, what type map the declarations specify, and what type of value the expressions compute. It is worth noting that, other than the communication of the type map from the declarations to the body, these rules can all be applied bottom-up in the abstract syntax tree. The validity of the whole is determined from the validity of the parts. The type computed by an expression is determined from the types computed by its subexpressions. We will discuss this phenomenon, which occurs in many real languages as well, even ones with more interesting type systems. However, it is also worth considering the alternative: what if the type of a subexpression cannot be determined without considering the context in which it appears?