CS 1550 Practice Mid-Term Exam Questions
These are practice questions similar in form to those that may appear on the mid-term. They are intended to help you prepare to answer questions of varying formats, not as a comprehensive review of the material. Keeping up with the readings has been strongly emphasized in class, and you should have read through the discussion of working sets.
If you are unsure of the meaning of a question, feel free to ask the instructor for clarification (this is acceptable during the exam too). On a similar note, it is often useful to explicitly state any assumptions you make.
The most common OS architecture (e.g., Linux and Windows).
Answer: Monolithic
This architecture is what you get if you focus only on resource
management, leaving abstraction as a later step.
Answer: VM
QNX, VxWorks, and Mach are
all examples of the ______ architecture.
Answer: Microkernel
Answer: Average response time.
Answer: The trick here is to consider "optimal at doing what?" It is optimal at maximizing completion rate, but what about average turnaround time if starvation occurs? Fairness? Response time?
/** Producer Consumer
    Using Semaphores **/
semaphore mutex = 1;
semaphore empty = N;
semaphore full = 0;
int buffer[N];

void producer() {
    int item;
    while(TRUE) {
        item = produce_item();
        down(mutex);
        down(empty);
        insert_item(item, buffer);
        up(full);
        up(mutex);
    }
}

void consumer() {
    int item;
    while(TRUE) {
        down(mutex);
        down(full);
        item = remove_item(buffer);
        up(empty);
        up(mutex);
    }
}
Answer: Hint – consider the order in which the mutex and the counting semaphores are acquired. A deadlock can easily happen here: the producer takes the mutex and then blocks on down(empty) while still holding it, so with a full buffer the consumer can never acquire the mutex and neither thread makes progress. Acquire the counting semaphore first, then the mutex, and limit the mutex-protected region to the bare minimum that needs to be protected (around insert_item() and remove_item() to be precise).
Answer: Disabling interrupts.
Answer: It’s simply the total access
set size, seven pages in this case.
Q: Define and give an example of Belady's anomaly.
A: For some page replacement algorithms (FIFO in particular), increasing the number of frames can increase the number of page faults. Example: under FIFO, the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 causes 9 faults with 3 frames but 10 faults with 4 frames. (See the book's discussion for details.)
A: Direct Memory Access: in a system with DMA, the CPU can run applications
without being involved in the transfer of data from a device to main memory.
Q: If all subcomponents of the system use the bus to get data from memory, and
the CPU needs data from memory to run applications, why does using DMA
allow for higher performance?
A: After a chunk of memory (say 4KB) has been read into the cache, the CPU will
not read anything from memory until a cache miss occurs, so the bus is largely
free for the DMA transfer and the two can proceed in parallel.
Q: Describe the organ-pipe distribution and explain why it is a good
tool to increase the performance of disks.
A: Placing the most frequently used blocks of data close together reduces
seek time, which is the dominant delay in disk performance. The
organ-pipe distribution places data this way: using a histogram of access
frequency, the most used blocks are put on the same (middle) track, the next
most used blocks on the adjacent tracks, and so on.
Q: What is the sequence of software layers that are traversed from the
time a user requests a disk block until the time the data is available
to the user (from the library call to the return from the library call)? Among
the layers are:
a) libraries, b) page replacement algorithms, c) ISRs, d) device-independent
OS software, e) data placement algorithms, f) de-fragmentation software, g)
device drivers, h) controllers, i) device itself
A: libraries --> Device-independent OS software
--> device drivers --> controller --> device --> ISR -->
device drivers --> Device-independent OS software --> libraries.
Q: Let there be the following requests for data blocks
in tracks number 100, 175, 51, 133, 8, 140, 73, and 77 and let the head position
be in track number 63. What is the number of tracks that will be traversed
with the following disk scheduling algorithms (answer only those that make
sense):
a) FIFO
A: 646 tracks
b) LIFO
A: 623 tracks
c) LRU
d) second chance
A: c) and d) are page replacement algorithms
e) SCAN
A: 238 tracks
f) C-Look
A: 322 tracks
Q: What is the SSTF disk scheduling algorithm, and what is its main
shortcoming?
A: Shortest Seek Time First looks at the queue of pending requests and
gives highest priority to the request with the shortest seek time (that is,
the closest track to the current position of the read-write head).
Its main shortcoming is that it may starve some requests when requests
arrive dynamically: requests near the head keep getting serviced while
distant requests can wait indefinitely.