`sum-of-divisors` procedure introduced in section 3.3. If you increase the argument *n* by a factor of ten, how much longer would you expect the original version of `sum-of-divisors` to take? (How many more numbers does it test to see if they are divisors?) How about your new version?
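For concreteness, here is one plausible sketch of definitions along the lines of the text's section 3.3; the actual textbook code may differ in detail. Note that `sum-of-divisors` in this lab counts *n* itself as a divisor, consistent with the sample value of 217 for 100 shown later.

```scheme
;; Sketch in the spirit of the text's definitions (section 3.3);
;; details may differ from the actual textbook code.
(define (divides? a b)
  (= (remainder b a) 0))

;; Sums every divisor of n, testing each candidate from 1 up to n
;; itself -- n divides? tests, hence Theta(n) work per call.
(define (sum-of-divisors n)
  (define (sum-from k acc)
    (cond ((> k n) acc)
          ((divides? k n) (sum-from (+ k 1) (+ acc k)))
          (else (sum-from (+ k 1) acc))))
  (sum-from 1 0))
```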

- To load into Scheme a procedure for timing how long computations take, evaluate the following expression:

  ```scheme
  (load "/Net/solen/u8/Faculty/ek/karl/Public/MC27/time")
  ```

  Evaluate the text's definitions of `sum-of-divisors` and `divides?`, and time how long it takes to find the sum of the divisors of 100, 1000, 10000, and 100000. Repeat each timing, at least for the smaller numbers, to get some idea of how precisely repeatable the timings are. To illustrate how you would do a timing, here is a sample timing of the `sum-of-divisors` of 100:

  ```scheme
  (time sum-of-divisors 100)
  ;Time: .04695454545454616
  ;Value: 217
  ```

  The time is given in seconds; e.g., in the above example, the evaluation took a little less than five hundredths of a second. The reported time will not agree well with your own impression of the time taken, both because it doesn't include time the computer spends doing other things and because fast evaluations (such as the one above) are repeated until at least a second has elapsed, with the average time per repetition reported.

  What is the ratio of times from each measurement to the next? (That is: what is the ratio of times going from 100 to 1000? From 1000 to 10000? And so forth.) How does this compare with your expectation? (**Note**: When answering questions like these in your lab writeup, be sure to back up your answers with adequate arguments. In other words, give reasons, probably in the form of *big Theta* arguments, which explain your expectations.)

- Now evaluate the improved definition from exercise 3.6 and repeat the measurements. Again, what are the time ratios, and how do they compare with your prediction?
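If you don't have your exercise 3.6 solution handy, one common form of the improvement looks roughly like the sketch below (not necessarily what exercise 3.6 asks for word-for-word): test candidate divisors only up to the square root of *n*, adding each divisor *d* together with its cofactor *n/d*.

```scheme
(define (divides? a b)
  (= (remainder b a) 0))

;; Improved version in the spirit of exercise 3.6: tests candidates
;; only up to sqrt(n), so Theta(sqrt n) divides? tests per call.
;; Each divisor d with d*d < n is paired with its cofactor n/d;
;; a d with d*d = n is counted only once.
(define (sum-of-divisors n)
  (define (sum-from d acc)
    (cond ((> (* d d) n) acc)
          ((not (divides? d n)) (sum-from (+ d 1) acc))
          ((= (* d d) n) (sum-from (+ d 1) (+ acc d)))
          (else (sum-from (+ d 1) (+ acc d (/ n d))))))
  (sum-from 1 0))
```

Both versions should agree; for example, either one gives 217 for 100.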
- One use for the `sum-of-divisors` procedure other than finding perfect numbers is to measure the relative frequency of *abundant* and *deficient* numbers. As mentioned in section 3.3, very few numbers are perfect. The remaining numbers either have a sum of divisors that is more than twice the number, in which case they are called abundant, or less than twice the number, in which case they are called deficient. The empirical question we'd like to ask is: are these two kinds of numbers equally common, or does one predominate? If so, by how much? The usual mathematical way to formulate this is to count how many numbers less than or equal to *n* are abundant and how many are deficient, and take the ratio; then repeat this for larger and larger values of *n*. If the ratio approaches closer and closer to some limiting value as *n* increases, then this limiting ratio tells us the relative frequency of abundant and deficient numbers. For example, if the ratio approaches a limit of 1, then abundant and deficient numbers are equally frequent. On the other hand, a limiting value of 1/2 would mean that there are twice as many deficient numbers as abundant numbers.

  Write a procedure that generates an iterative process for computing this ratio, given *n*. Most of the work can be done by a subsidiary procedure that keeps count of how many abundant numbers it has found and how many deficient numbers it has found as it scans the range from 1 to *n*. Once the entire range has been scanned, one count can be divided by the other. Test your procedure to be sure that it works.

  Suppose that you are using the text's definition of `sum-of-divisors` with your procedure for computing the ratio, and double the size of the range being tested. By roughly what factor do you expect the number of `divides?` tests (and hence the time) to increase? (See page 96 of the text for help with this.) How about if you use the `sum-of-divisors` procedure from exercise 3.6? (You can use the same general style of reasoning as on page 96 to produce a *big Theta* asymptotic order of growth.)

  Now measure how long your procedure takes to find the ratio for *n* equal to 100, 200, 400, and 800, and calculate the ratio of each consecutive pair of times. Do this with each of the two definitions of `sum-of-divisors`. How well do your measurements agree with your predictions? Do the abundant/deficient ratios seem to approach some simple limiting value? If so, what?
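One way such a ratio procedure might be organized is sketched below; the name `abundant-deficient-ratio` and the exact structure are my choices, and your solution may differ. It includes the straightforward `sum-of-divisors` so the sketch is self-contained.

```scheme
(define (divides? a b)
  (= (remainder b a) 0))

(define (sum-of-divisors n)   ; the straightforward Theta(n) version
  (define (sum-from k acc)
    (cond ((> k n) acc)
          ((divides? k n) (sum-from (+ k 1) (+ acc k)))
          (else (sum-from (+ k 1) acc))))
  (sum-from 1 0))

;; Iterative scan of 1..n carrying two running counts; perfect
;; numbers (sum exactly twice the number) count as neither.
(define (abundant-deficient-ratio n)
  (define (scan k abundant deficient)
    (cond ((> k n) (/ abundant deficient))
          ((> (sum-of-divisors k) (* 2 k))
           (scan (+ k 1) (+ abundant 1) deficient))
          ((< (sum-of-divisors k) (* 2 k))
           (scan (+ k 1) abundant (+ deficient 1)))
          (else (scan (+ k 1) abundant deficient))))
  (scan 1 0 0))
```

For example, among 1 through 12 only 12 is abundant and only 6 is perfect, so `(abundant-deficient-ratio 12)` yields 1/10.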

You might also want to make some additional predictions that go
beyond your measurements. Suppose you were interested in pursuing the
abundant/deficient ratio study well beyond *n=800*, perhaps to
*n=3200*. How long would you expect each method to take? Would you
want to experimentally verify this?
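To illustrate the kind of extrapolation involved: suppose (hypothetically) your *big Theta* arguments come out to total work of Theta(n^2) for the ratio computation with the original `sum-of-divisors` and Theta(n^(3/2)) with the improved one. Going from *n=800* to *n=3200* multiplies *n* by 4, so you would predict roughly a factor of 4^2 = 16 in the first case and 4^(3/2) = 8 in the second. A throwaway helper (my name, not part of the lab's code) makes the arithmetic concrete:

```scheme
;; Hypothetical extrapolation helper (not part of the lab's code):
;; given a measured time and the exponent e from a Theta(n^e) growth
;; argument, predict the time after multiplying n by `factor`.
(define (predicted-time measured factor exponent)
  (* measured (expt factor exponent)))

;; e.g., quadrupling n under Theta(n^2) growth:
;; (predicted-time 2.5 4 2) returns 40.0 -- sixteen times as long.
```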

When you write the lab report, remember that your audience is generally knowledgeable about computer science and Scheme, but not about what you did.

As with lab 1, you should also consult the document entitled "Suggestions for clear lab reports in computer science courses", which pretty much says it all.

Course labs web site: http://www.gac.edu/~max/courses/S97/MC27L/

One lab instructor: Karl Knight <karl@gac.edu>

The other lab instructor: Max Hailperin <max@gac.edu>