MCS-378 Lab 1: Scheduling

Due: October 12, 2005


In this lab, you will work in an assigned group of two or three students to explore the role that dynamic priority adjustment plays in the Linux scheduler. You will do this by experimenting with the normal scheduler and a variant that you create, in which dynamic priority adjustment is eliminated. You will determine what impact (if any) this has on the way processor-intensive threads of different niceness levels share the processor time, and what impact (if any) dynamic priority adjustment has on the responsiveness of threads receiving input while computationally intensive threads are running. Your group will jointly write a report that carefully explains the experiment you carried out and the results you obtained.

Lab facilities

Each group will be assigned an older PC for its exclusive use. These machines are in a back corner of OHS 329. I will put a note on each computer listing the names of the group members. (I will also send out the groups by email.) The machines are configured identically. Each has a single user, root, with a password that I will disclose in class. In case you badly mess up your machine, I will provide a way to get it back to the initial state, by booting from a floppy and restoring the contents of the hard disk from a copy on a network server, using the ghost program. I will provide each group with the floppy and instructions for this. (Feel free to also ask for help if you get into a bind.)

Each machine has three different Linux kernels; when you boot the machine, there is a message indicating that you must press a key within a few seconds to select one other than the default. If you press a key, you get a menu from which you can choose any of linux-2.6.11-1.1369_FC4, linux-2.6.11-prep, and linux-2.6.11-experimental. The first of these is the default and is the standard kernel that came as part of the Fedora Core 4 distribution. It plays no role in the experimentation. The other two are initially identical to each other except for name. You will change the experimental one (by eliminating dynamic priority adjustment) and then compare the two of them. These two are quite similar to the default kernel; they incorporate a slightly different set of patches from the default simply because I downloaded the source for them more recently. You can always find out which of the kernels you are running by giving the command

uname -r

The lab machines are configured not to start up networking automatically. This allows experimentation under the most controlled circumstances possible. However, in one part of this lab, I would like you to see the impact of scheduling on the responsiveness of a program that uses the network. (You may also want to use the network to upload your working files for safekeeping and eventual incorporation into your lab report.) To turn networking on, give the command

ifup eth0

Making the modified kernel

Using emacs (or another editor of your choice), edit the file /usr/src/linux-2.6.11-experimental/kernel/sched.c so that no dynamic adjustments in priority are made. With your current working directory being /usr/src/linux-2.6.11-experimental, give the command

make install

to rebuild the kernel and install it.
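The handout deliberately leaves the details of the edit to you, but to indicate its general shape: assuming your sched.c matches the stock 2.6.11 source, the dynamic adjustment enters through the interactivity bonus that effective_prio() subtracts from a task's static (niceness-derived) priority. One possible form of the change is then to make that function ignore the bonus for non-real-time tasks, along these lines:

```c
/* Sketch only -- the names and details assume the stock 2.6.11 sched.c;
   check your own source before editing. */
static int effective_prio(task_t *p)
{
	if (rt_task(p))
		return p->prio;         /* real-time tasks are unaffected */
	return p->static_prio;          /* dynamic bonus eliminated */
}
```

Before settling on an edit, read through sched.c yourself; dynamic adjustments may be computed in more than one place.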

Testing processor sharing

Create a C program, runner.c, that runs forever. It can be as simple as

int main(void)
{
    while (1)
        ;
}

Compile this program using the command

cc -o runner runner.c

You can now set a runner going by giving the command

./runner &

You can have more than one running in this way. If you want some of them to be at an elevated niceness, you can use the nice command. For example, the following command would set a runner going with its niceness elevated by 5 points:

nice --adjustment=5 ./runner &

To see what fraction of the processor time each runner is getting, you can use the top command. To kill off all the runners, use the command

killall runner

Compare the prep and experimental kernels, using various mixtures of runners of differing nicenesses. Is there any appreciable difference in how the two kernels apportion the processor time? Does that correspond with your expectation? If Linux used decay-usage scheduling, would you expect different results? Your report should explain these matters.

Testing responsiveness

I would suggest testing responsiveness under conditions where only a single niceness of runner is running. You still have three variables to control: (1) which kernel is in use, (2) what the niceness of the runners is, and (3) how many of the runners are running.

You should test cases where the runners are of higher niceness than the program that is waiting for input, cases where they are of equal niceness, and cases where the runners are of lower niceness. You could do the latter cases by using the nice command shown earlier with a negative adjustment. However, I would recommend that instead you run the runners with normal (or perhaps slightly elevated) niceness and then run a shell with a higher niceness, using a command such as

nice --adjustment=5 bash

The advantage of this approach is that if the responsiveness is so poor that you can't type in a killall command to kill the runners, you can always do the killing from another shell in a second virtual terminal.

One simple test of responsiveness is to wait a few seconds and then press a key. How long does the shell take to echo what you have typed? You may not be able to give a quantitative answer, but the different kernels and configurations of runners may still be distinguishable by using qualitative descriptions such as "no perceptible delay," "a very brief but perceptible delay," or "several seconds of delay."

Another test, which shows the impact of responsiveness on the performance of a real program, would be to download a web page, such as my personal home page, using the following command:

time lftp -c get

This should create a local file index.html that is a copy of my web page. More interestingly, because of the use of time, it will report the time taken. Specifically, it will report the elapsed time and also the amount of time the processor actually spent running the program ("user" time) and running the operating system kernel on the program's behalf ("system" time). If the program is slow in responding to incoming network data, which of these times goes up? How is this influenced by the presence or absence of dynamic priority adjustment, and by the number and niceness of runners?

You should try to explain quite specifically the responsiveness results you have observed. For example, if you find responsiveness good so long as the runners' niceness is no more than N points below that of the shell, but bad if the difference is greater than that, then you should try to come up with an explanation that accounts for the specific value of N.

Lab report

Your report should explain your experiment carefully enough that someone else could at least approximately replicate it. This implies that you should describe the hardware and software context in which you performed your experiment, as well as the experiment itself.

Your report should give specific results from your experiments, summarizing the quantitative data clearly by using such features as tables and graphs.

Your report should explain the results in the context of how the scheduler works, comparing what you observed with what you would predict theoretically. Be sure to address all specific issues mentioned in this lab handout.