# 1 Computer System Overview
* What is an Operating System?
A program that controls the execution of application programs and acts as a
standardized interface between applications and hardware.
* What does an Operating System do?
Manages resources, provides a set of services, consumes resources.
* What are the two types of user-visible registers?
Data registers: Stores results from calculations or data.
Address register: Points to memory locations.
* What are control and status registers?
Invisible to the user, they control the operation of the computer.
Used by privileged OS routines.
* What are some control and status registers?
Program counter, Instruction register, Program status word
* Why are interrupts useful when communicating with I/O devices?
Because I/O devices are usually slower than the processor, and interrupts allow
the processor to avoid polling the device.
Improves processor utilization.
* When an interrupt occurs, what steps does the hardware handle?
- Processor finishes execution of current instruction
- Processor signals acknowledgement of interrupt
- Processor pushes PSW and PC onto the control stack
- Processor loads new PC value based on interrupt
* When an interrupt occurs, what steps does the software handle?
- The remainder of the process state information is saved.
- The interrupt is processed.
- The process state information is restored.
- The old PSW and PC are restored.
* What are the two approaches to handling multiple interrupts?
1. Disable interrupts in ISR
2. Nested interrupts.
* What is the principle of locality w.r.t memory?
Memory references by programs tend to cluster.
# 2 Operating System Overview
* What are the 3 objectives of operating systems?
Convenience, Efficiency, and the Ability to Evolve.
* What is the kernel?
The portion of the OS that is in main memory.
* What are the advantages and disadvantages of Batch Multiprogramming vs Time Sharing?
+Batch: Processor use is maximized.
+Time: Response time is minimized.
+Time: Could dynamically enter commands into a terminal.
* Why are most elements of modern OSes split into different services in user-space?
- Security: Prevents unneeded access to privileged instructions
- Availability: If a service goes down, the whole OS doesn't crash.
# 3 Process Description and Control
* What are the requirements of the OS w.r.t processes?
- Maximize processor utilization while providing reasonable response time.
- Allocate resources to processes.
- Support interprocess communication.
- Support process creation.
* What are the 3 components of processes?
1. An executable program
2. Associated data
3. Execution context
* What is the process control block?
A data structure that holds metadata about a process.
* What does the process control block contain?
- Identifiers about the current process
- User-visible registers
- Control and status registers
- Processor state information
- Stack pointers
- Process control information
* What are the two modes of execution a modern OS usually has?
User mode and system mode.
* How do you switch from user mode to system mode?
Trigger a specific interrupt.
* When would you want to suspend a process?
Swapping, by user request, time sharing, blocked on I/O or memory.
# 4 Threading
* What is multi-threading?
The operating system supports multiple threads of execution within a single process.
* What is the difference between a thread and a process?
Threads share memory and don't have their own PCB.
* What are the advantages of threads over processes?
- Takes less time to create threads
- Takes less time to terminate threads
- Takes less time to switch between threads
* What happens to threads when you terminate or suspend their process?
All threads in the process get terminated or suspended.
* What is a user-level thread?
A thread managed by the application. The kernel is not aware.
* What is a kernel-level thread?
The kernel is aware about the thread, and scheduling is done on a thread basis.
* What are the advantages and disadvantages of user vs kernel threads?
User:
Less switching overhead
Can run on any OS
Custom scheduling
Kernel:
Only the thread is blocked when you call I/O
Can schedule multiple threads on multiple processors simultaneously
No separate threads library needs to be imported by the application.
* Why do user-level threads have less switching overhead than kernel-level threads?
They do not require system calls to switch, which means that they do not
incur the cost of having to fire an interrupt to switch to kernel mode first.
* What is the microkernel architecture?
Small OS core with many external subsystems.
* What are some advantages of having a microkernel architecture?
Uniform interfaces: all requests go through message passing, so there is no
distinction between kernel-level and user-level services.
More extensible: Can add a new service easily.
More flexible: Can modify services easily.
More portable: Only need to change the microkernel between platforms.
More reliable: If a service goes down, OS is still fine.
# 5 Concurrency
* What is an atomic operation?
A function or action implemented in such a way that it appears to be a "single"
action.
* What is a critical section?
A section of code within a process that requires access to shared resources and
that must not be executed while another process is in a corresponding section
of code.
* What is a race condition?
A situation in which multiple threads or processes read and write a shared data
item and the final result depends on the relative timing of their execution.
* What are 3 possible levels of interaction among processes?
1. Processes unaware of each other.
2. Processes indirectly aware of each other.
3. Processes directly aware of each other.
* What are the 6 requirements for a facility to provide mutual exclusion?
1. Enforcement: Only one process is allowed inside its critical section
2. A process that halts in its noncritical section must not interfere with other
processes
3. No deadlock or starvation
4. A process must not be delayed access to a critical section when there are no
other processes using it
5. No assumptions are made about relative process speeds or number of processes
6. A process remains inside of its critical section for a finite time only.
* What are two ways for the hardware to provide mutual exclusion?
1. Disable interrupts in a uniprocessor system.
2. Special machine instructions, e.g. Test and Set, that execute atomically in a
single instruction cycle.
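A minimal sketch of a spin lock built on a test-and-set style instruction
(illustrative C; the GCC __sync_lock_test_and_set/__sync_lock_release builtins
stand in for the atomic hardware instruction and are an assumption, not course code):

  volatile int lock = 0;                 /* 0 = free, 1 = held */

  void acquire(void) {
      /* atomically set lock to 1 and get the old value;
         keep busy-waiting while another process holds it */
      while (__sync_lock_test_and_set(&lock, 1) == 1)
          ;  /* spin */
  }

  void release(void) {
      __sync_lock_release(&lock);        /* atomically clear the lock */
  }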
* What are the advantages of using special machine instructions for mutual exclusion?
- Applicable to any number of processes and any number of processors as long as
main memory is shared.
- Simple and easy to verify
- Can support multiple critical sections.
* What are the disadvantages of using special machine instructions for mutual exclusion?
- Busy-waiting is required, which consumes processor time.
- Possible to starve a process.
- Possible to cause a deadlock. (e.g. a low-priority process holds the critical
section while a high-priority process busy-waits forever.)
- Pipeline stalls.
* What is a semaphore?
A special variable (that has an integer value) for concurrency control.
* What are the operations for a semaphore?
semWait(): decrements the semaphore and may block the calling process.
semSignal(): increments the semaphore and may unblock a waiting process.
* What do we need to implement semaphores?
Special machine instructions like Test and Set.
* When do semaphores block?
When their value is negative.
* What does the initial value of a semaphore control?
The number of processes that can be admitted simultaneously.
* What does the current value in a semaphore indicate?
If positive: the number of processes that can still call semWait() without blocking.
If negative: its magnitude is the number of processes blocked waiting on the semaphore.
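A sketch of the classic counting-semaphore definition (the queue, enqueue/dequeue
and block/unblock helpers are placeholders, and each operation is assumed to run
atomically, e.g. built on Test and Set); it shows why a negative count equals the
number of blocked processes:

  struct semaphore {
      int count;
      struct queue blocked;              /* processes waiting on this semaphore */
  };

  void semWait(struct semaphore *s) {
      s->count--;
      if (s->count < 0) {                /* no permits left */
          enqueue(&s->blocked, current_process);
          block(current_process);        /* now |count| processes are waiting */
      }
  }

  void semSignal(struct semaphore *s) {
      s->count++;
      if (s->count <= 0)                 /* at least one process was waiting */
          unblock(dequeue(&s->blocked));
  }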
* What is the producer-consumer problem?
Multiple producers are generating data and filling up a buffer.
A single consumer is taking data out of the buffer.
Only one entity may access the buffer at once.
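A bounded-buffer sketch using the semWait/semSignal operations above (the buffer
size N and the produce/append/take/consume helpers are illustrative assumptions):

  #define N 16
  struct semaphore mutex = { .count = 1 };   /* mutual exclusion on the buffer */
  struct semaphore empty = { .count = N };   /* number of free slots */
  struct semaphore full  = { .count = 0 };   /* number of filled slots */

  void producer(void) {
      for (;;) {
          item_t item = produce();
          semWait(&empty);                   /* wait for a free slot */
          semWait(&mutex);
          append(item);                      /* critical section */
          semSignal(&mutex);
          semSignal(&full);                  /* announce a filled slot */
      }
  }

  void consumer(void) {
      for (;;) {
          semWait(&full);                    /* wait for a filled slot */
          semWait(&mutex);
          item_t item = take();              /* critical section */
          semSignal(&mutex);
          semSignal(&empty);                 /* announce a free slot */
          consume(item);
      }
  }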
* What is a monitor?
Similar to the semaphore, except that it uses condition variables for signaling,
and unused signals are lost.
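For comparison, a pthreads sketch of monitor-style signaling (mutex plus condition
variable; the data_available flag is an illustrative assumption). Note that a
pthread_cond_signal with no waiter is simply lost, matching the "unused signals
are lost" behaviour:

  #include <pthread.h>

  pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
  pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
  int data_available = 0;

  void waiter(void) {
      pthread_mutex_lock(&m);
      while (!data_available)              /* re-check the condition, not the signal */
          pthread_cond_wait(&ready, &m);   /* releases m while waiting, reacquires after */
      data_available = 0;
      pthread_mutex_unlock(&m);
  }

  void signaler(void) {
      pthread_mutex_lock(&m);
      data_available = 1;
      pthread_cond_signal(&ready);         /* lost if no thread is currently waiting */
      pthread_mutex_unlock(&m);
  }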
* What is message passing useful for?
- Mutual exclusion
- Exchanging information between processes.
* What is direct addressing w.r.t message passing?
send() primitive includes a specific id referring to the destination process.
* What is indirect addressing w.r.t message passing?
A mailbox is created and shared among processes.
Processes send and receive messages to and from the mailbox.
# 6 Deadlock and Starvation
* What is a deadlock?
A situation in which two or more processes are unable to proceed because each is
waiting for one of the others to do something.
* What is a livelock?
A situation in which two or more processes continuously change their states in
response to changes in the other process(es) without doing any useful work.
* What is mutual exclusion?
The requirement that when one process is in a critical section that accesses
shared resources, no other process may be in a critical section that accesses
any of those shared resources.
* What is starvation?
A situation in which a runnable process is overlooked indefinitely by the
scheduler; although it is able to proceed, it is never chosen.
* What are the two types of resources?
Consumable and reusable.
* How can reusable resources cause a deadlock?
Each process holds one resource and requests the other.
* How can consumable resources cause a deadlock?
All processes are blocked on a receive() call, each waiting for a message that
only another blocked process could send.
* What are the conditions for a deadlock?
- Mutual exclusion: Only one process may use a resource at a time
- Hold-and-wait: A process may hold onto allocated resources while waiting for
more resources.
- No preemption (w.r.t resources): No resource can be forcibly removed from a
process holding it.
- Circular wait: A chain of processes exists, such that each process holds a
resource needed by the next process in the chain.
* What are two approaches to deadlock avoidance at execution time?
1. Do not start a process if its demands might lead to deadlock.
2. Do not grant an incremental resource request if this allocation might lead
to a deadlock.
* What is the Banker's Algorithm?
Only start a process (or grant a request) if at least one sequence exists in which
all processes can run to completion, i.e. the resulting state is safe.
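A sketch of the safety check at the heart of the Banker's Algorithm (the array
sizes and names are illustrative assumptions). A state is safe if some ordering
lets every process finish using the currently available resources plus whatever
finishing processes return:

  #define P 4   /* number of processes */
  #define R 3   /* number of resource types */

  /* claim = maximum need, alloc = currently allocated. Returns 1 if safe. */
  int is_safe(int available[R], int claim[P][R], int alloc[P][R]) {
      int work[R], done[P] = {0};
      for (int j = 0; j < R; j++) work[j] = available[j];

      for (int finished = 0; finished < P; ) {
          int progress = 0;
          for (int i = 0; i < P; i++) {
              if (done[i]) continue;
              int can_finish = 1;
              for (int j = 0; j < R; j++)
                  if (claim[i][j] - alloc[i][j] > work[j]) can_finish = 0;
              if (can_finish) {
                  for (int j = 0; j < R; j++)
                      work[j] += alloc[i][j];  /* process i finishes, returns resources */
                  done[i] = 1;
                  finished++;
                  progress = 1;
              }
          }
          if (!progress) return 0;             /* nobody can finish: unsafe */
      }
      return 1;                                /* a complete sequence exists: safe */
  }

A resource request is granted only if the state that would result is still safe.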
* What are the conditions for deadlock avoidance to be possible?
- Maximum resource requirements are stated in advance.
- Processes under consideration are independent.
- There must be a fixed number of resources to allocate.
- No process may exit while holding resources.
* What can you do when a deadlock is detected?
1. Abort all deadlocked processes; or
2. Back up each deadlocked process to some previously defined checkpoint and
restart them; or
3. Successively abort deadlocked processes until the deadlock no longer exists; or
4. Successively preempt resources until the deadlock no longer exists.
# 7 Memory Management
* What is the "relocation" requirement of memory management?
The programmer does not know where the program will be placed in memory until it
is executed.
* What is the "protection" requirement of memory management?
Processes should not be able to reference memory locations for another process.
* What is the "sharing" requirement of memory management?
Several processes should be allowed to reference the same portion of memory.
* What is the "logical organization" requirement of memory management?
You should be able to share "modules" of code among processes.
Each module can be written and compiled separately, and have different degrees
of protection.
* What is the "physical organization" requirement of memory management?
Programmer does not know how much space will be available.
* What is the fixed partitioning approach to memory management?
Divide memory into partitions. Each process is loaded into an available
partition.
* What are the problems with the fixed partitioning approach to memory management?
- A program may not always fit inside a partition.
- There is a lot of internal fragmentation.
* What is internal fragmentation?
Unused, but allocated memory.
* What is external fragmentation?
Unallocatable memory.
* What is the placement algorithm for unequal-size partitions in the fixed-size partitioning approach to memory management?
Assign each process to the smallest available partition that will fit it.
* What is the dynamic partitioning approach to memory management?
Partitions are of variable length and number. Map processes to partitions.
* What is a problem of the dynamic partitioning approach to memory management?
We may get "holes" in the memory, causing external fragmentation. Must use
compaction to remove the holes.
* What are some algorithms for placing partitions in the dynamic partitioning approach to memory management?
Best-fit, first-fit, next-fit.
* Why is the best-fit algorithm the worst performer for placing partitions into memory?
Since the smallest block that fits is chosen, the leftover fragment is as small
as possible and usually unusable. Therefore, memory compaction must be done more often.
* What is the buddy system approach to memory management?
Entire space available is treated as a single block.
When a request comes, the block is broken into halves until we have got the
smallest half that will satisfy the request.
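A tiny illustrative helper (an assumption, not from the notes): every buddy-system
allocation is satisfied by the smallest power-of-two block that fits, obtained by
repeatedly halving larger blocks.

  /* smallest power-of-two block size >= request (and >= min_block) */
  unsigned long buddy_block_size(unsigned long request, unsigned long min_block) {
      unsigned long size = min_block;
      while (size < request)
          size *= 2;   /* e.g. a 100 KB request is rounded up to a 128 KB block */
      return size;
  }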
* What is a logical address?
Reference to a memory location independent of the current assignment of data to
memory.
* What is a relative address?
Address expressed as a location relative to some known point.
* What is a physical address?
The actual location in main memory.
* What are the two registers used when translating addresses for relocation?
- Base register: Starting address for the process
- Bounds register: Ending address of the process
These values are set whenever the process is loaded in.
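A sketch of the translation the hardware performs on every reference using those
two registers (raise_protection_fault is a hypothetical trap into the OS):

  unsigned long base, bounds;       /* loaded from the PCB on each process switch */
  void raise_protection_fault(void);

  unsigned long translate(unsigned long relative_addr) {
      unsigned long physical = base + relative_addr;
      if (physical > bounds)
          raise_protection_fault();   /* reference outside the process's memory */
      return physical;
  }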
* What is paging?
Loading "pages" of memory from disk to main memory when needed.
Dividing memory into equal-sized chunks and dividing each process into the same
fixed-size chunks.
* What is a page vs what is a frame?
Page: Chunks of a process.
Frame: Chunks of memory.
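With power-of-two page sizes, translation is just a split of the logical address
into page number and offset plus a page-table lookup. A sketch assuming 4 KB pages
and a toy single-level page_table array:

  #define PAGE_SIZE   4096u          /* assumed page size */
  #define OFFSET_BITS 12

  unsigned long page_table[1024];    /* page number -> frame number */

  unsigned long translate_paged(unsigned long logical) {
      unsigned long page   = logical >> OFFSET_BITS;     /* which page of the process */
      unsigned long offset = logical & (PAGE_SIZE - 1);  /* position within the page */
      unsigned long frame  = page_table[page];
      return (frame << OFFSET_BITS) | offset;
  }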
* Does paging have internal or external fragmentation?
Only a small amount of internal fragmentation (in the final page of each process).
No external fragmentation.
* What is segmentation?
Program is divided into segments.
* How is segmentation different from paging?
Segments are of variable size, so main memory is not divided into fixed-size frames.
Programmers can be aware of, and have some control over, segmentation.
# 8 Virtual Memory
* What are the requirements for virtual memory?
- All memory references are logical (relative) addresses that are translated at run time.
- We are using paging or segmentation.
* What is the main breakthrough of virtual memory?
We do not need to keep all pages/segments of a program in main memory during
execution.
* What is the resident set of a process?
The portion of a process that is currently in main memory.
* What happens when the processor tries to reference a logical address not yet located in main memory?
A page fault (memory access fault) occurs.
* What are the two implications of virtual memory?
1. More processes may be maintained in main memory.
2. A process may be larger than all of main memory.
* What is thrashing?
Continuously getting and throwing away pages from main memory.
When the system spends most of its time swapping pieces rather than executing
instructions.
* What is the modify bit used for in virtual memory?
If the modify bit is not set, it means that the memory page has not been
modified since it was loaded. This means that it doesn't have to be written back
to disk when it is unloaded.
* What is the inverted page table?
- A hash map that maps the page number to a frame number.
- Only keeps track of the pages currently in main memory.
- Uses a chaining mechanism to deal with hash collisions.
* What is the translation lookaside buffer and why is it useful?
A special hardware cache that stores recently used page -> frame translations.
This means that you don't need to query the page table (which requires an
additional memory lookup).
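A rough effective-access-time calculation (numbers are illustrative assumptions):
with memory access time m, TLB lookup time t, and TLB hit ratio h,
  EAT = h * (t + m) + (1 - h) * (t + 2m)
since a miss costs an extra memory access for the page table. For example,
m = 100 ns, t = 10 ns, h = 0.95 gives EAT = 0.95 * 110 + 0.05 * 210 = 115 ns.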
* What is the correlation between page size and page fault rate?
With a fixed amount of main memory, a larger page size means fewer pages fit in the
resident set, so the page fault rate increases. Once the page size is large enough
that a single page can hold the whole process, the page fault rate drops to zero.
* What are the advantages of segmentation over not using it?
1. Simplifies handling of growing data structures. A growing data structure can
be assigned to its own segment!
2. Allows programs to be altered and recompiled independently, without requiring
the entire set of programs to be relinked and reloaded.
3. Lends itself to sharing among processes. A segment can be allocated to be
accessed by multiple programs.
4. Lends itself to protection, as the segments can be assigned specific
privileges.
* What is the metadata needed for a segment table entry + virtual memory?
- Whether or not the segment is in main memory.
- Whether or not the segment has been altered.
- Possible bits for protection and sharing.
* Can the segment table be stored in registers? (like you can with pages)
No, since you can have a variable number of segments.
* How does paging + segmentation work?
Memory is broken up into segments, as specified by the programmer.
Each segment is broken up into pages.
* What are the different types of access permissions for a segment across processes?
- No access allowed
- Branching not allowed
- Referencing data allowed
- Referencing data not allowed
* What are rings, w.r.t. protection systems?
Programs are assigned to a ring number.
1. A program may access only data that reside on the same ring or a less
privileged ring.
2. A program may call services residing on the same or more privileged ring.
# 8.2 Operating System Software
* What are the 3 decisions one must make when designing an OS w.r.t memory?
1. Whether or not to use virtual memory techniques
2. Whether or not to use paging or segmentation or both
3. Which algorithm to use for managing memory
* What is the fetch policy w.r.t. virtual memory?
The policy which determines when a page should be brought into main memory.
* What is demand paging w.r.t virtual memory?
A page is brought into main memory only when a reference is made to a location
on that page.
* What is prepaging w.r.t virtual memory?
Pages other than the one demanded by a page fault are also brought in.
Exploits secondary memory characteristics, like seeking.
* What is the placement policy w.r.t virtual memory?
The policy which determines where in real memory a process piece is to reside.
This is usually a non-issue since pages can be "freely" moved in and out of
main memory, and there is no external fragmentation.
* What is the replacement policy w.r.t virtual memory?
The policy which determines which page in main memory to swap out when needed.
* What is frame locking w.r.t virtual memory?
- Some frames in memory are locked, meaning that they should not be replaced.
- Achieved by using a lock bit.
* What is the optimal policy w.r.t replacement algorithms and virtual memory?
Replace the page whose time to next reference is the longest.
* What is the least recently used policy w.r.t replacement and virtual memory?
Replace the page that has not been referenced in the longest time.
* What is the First-In-First-Out policy w.r.t replacement and virtual memory?
Go in a circular order, and replace the page that has been in main memory the
longest.
* What is the clock policy w.r.t replacement and virtual memory?
Each frame has a bit, which is reset to 0 when the circular pointer passes over
it. It is set to 1 when the frame is accessed or loaded. The first page that
the circular pointer sees with a 0 bit is replaced.
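A sketch of the basic clock policy over an array of frames (the frame struct and
the hand variable are illustrative assumptions):

  #define NFRAMES 64

  struct frame { int use_bit; /* plus page identity, modify bit, ... */ };
  struct frame frames[NFRAMES];
  int hand = 0;                            /* the circular pointer */

  int clock_select_victim(void) {
      for (;;) {
          if (frames[hand].use_bit == 0) {
              int victim = hand;           /* first frame seen with use bit 0 */
              hand = (hand + 1) % NFRAMES;
              return victim;
          }
          frames[hand].use_bit = 0;        /* clear the bit as the pointer passes */
          hand = (hand + 1) % NFRAMES;
      }
  }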
* How can the clock policy be augmented to be more powerful?
Use more bits, like a not-modified bit.
* What is page buffering?
There are two lists, a free list and a modified list. The free list keeps track
of the frames that can be assigned a page. The modified list keeps track of pages
that have yet to be written back into secondary memory.
The OS tries to keep a certain amount of frames in the free list at all times.
* How does page buffering help efficiency?
The number of I/O operations needed is reduced, as the page writes can be
batched together.
* Why is resident set management important?
- The smaller the amount of memory allocated to a process, the more processes
can reside in main memory at one time.
- If a relatively small number of pages of a process are in main memory, page
faults are going to be high.
- Beyond a certain size, additional allocation of main memory to a process is
not going to have a noticeable effect on the page fault rate.
* What are the characteristics of a fixed-allocation policy w.r.t resident set management?
Each process gets a fixed number of frames in main memory within which to
execute. The number is decided at initial load time and may be determined based
on the type of process.
* What are the characteristics of a variable-allocation policy w.r.t resident set management?
Each process can keep a variable number of frames in main memory at a time.
A process that is consistently page faulting may be given more frames.
* What are the characteristics of a local replacement policy w.r.t resident set management?
Only resident pages of the process that generated the page fault are candidates
for replacement.
* What are the characteristics of a global replacement policy w.r.t resident set management?
All unlocked pages in main memory are candidates for replacement.
* What are some of the drawbacks with fixed allocation, local scope w.r.t resident set management?
1. If allocations are too small, then there will be a high page fault rate.
2. If allocations are too big, then there are too few programs in main memory.
* What is the drawback of variable allocation, global scope w.r.t resident set management?
The process that loses a frame (suffers the reduction in resident set size) may
not be the best one to shrink.
This is partially mitigated by page buffering, which allows the replaced page
to possibly be reclaimed.
* What is variable allocation, local scope w.r.t resident set management?
1. When a new process is loaded into main memory, determine the number of frames
in its resident set.
2. When a page fault occurs, select a page in its resident set to be replaced.
3. From time to time, reevaluate the allocation provided to the process.
Increase or decrease the resident set size to improve overall performance.
* What is the working set?
The set of pages of a process that have been referenced in the last x time units
at time t.
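A small worked example (assuming virtual time is measured in memory references and
x = 3): for the reference string 2, 3, 2, 1, 5, the working set just after the
reference to 1 is {3, 2, 1}, and just after the reference to 5 it is {2, 1, 5}.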
* What is the pattern of working set size over time?
Stable periods followed by peaks of transient periods.
Transient periods are caused when the process switches to a new locality.
* How can we use the working set to guide resident set size?
1. Monitor the working set of each process
2. Periodically remove from the resident set pages that are not in the working
set.
3. A process may execute only if its working set is in main memory.
* What are the problems with using the working set to guide resident set size?
1. The past does not always predict the future. (Size and membership of the
working set will change over time.)
2. A true measurement of the working set for each process is impractical.
3. The optimal value of x is unknown, and would vary.
* What is the page fault frequency algorithm w.r.t resident set management?
1. A threshold F is defined
2. If the amount of time since the last page fault is less than F, a page is
added to the resident set of the process.
3. Otherwise, discard all pages with a use bit of 0.
* What is the major flaw of the page fault frequency algorithm w.r.t resident set management?
It does not perform well when there is a shift to a new locality. No page
drops out of the resident set until F time units have elapsed since it was
referenced.
* What is the variable-interval sample working set policy w.r.t resident set management?
Define:
M: The minimum duration of the sampling interval
L: The maximum duration of the sampling interval
Q: The number of page faults that are allowed to occur between sampling instances.
1. If the virtual time since the last sampling instance reaches L, then suspend
the process and scan the use bits.
2. If Q page faults occur before the elapsed virtual time reaches L:
a. If the virtual time since the last sampling instance is less than M, then
wait until the elapsed virtual time reaches M to suspend the process and
scan use bits.
b. If the virtual time since the last sampling instance is greater than or
equal to M, suspend the process and scan the use bits.
* What is the cleaning policy w.r.t virtual memory?
The policy that determines when a modified page should be written out to
secondary memory.
* What is demand cleaning w.r.t cleaning policies?
A page is written out to secondary memory only when it is selected for
replacement.
* What is precleaning w.r.t cleaning policies?
Modified pages are written out before their page frames are needed. This means
that they can be written out in batches.
* How can you use page buffering with cleaning policies w.r.t virtual memory?
Clean pages periodically in the modified list, then move them to the unmodified
list.
* What is load control responsible for w.r.t virtual memory?
Load control is responsible for determining the number of processes that will be
resident in main memory.
* What is a main problem of multiprogramming w.r.t virtual memory?
If the number of processes is too high, resident set sizes shrink and the chance
of thrashing increases.
* What is the L = S criterion w.r.t virtual memory?
Adjust the multiprogramming level so that the mean time between faults equals
the mean time required to process a page fault.
* What is the 50% criterion w.r.t virtual memory?
Adjust the multiprogramming rate until the utilization of the paging device is
at approximately 50%.
* How can the clock algorithm be augmented to control the multiprogramming level?
You can safely increase the multiprogramming level if:
1. Few page faults are occurring, so there are few requests to advance the
pointer; or
2. For each request, the average number of frames scanned by the pointer is
small, which means that there are not many resident pages being referenced.
* What are some of the possibilities for choosing a process to suspend when decreasing the multiprogramming level?
- Lowest-priority process: This implements a scheduling policy decision and is
unrelated to performance issues.
- Faulting process: The reasoning is that there is a greater probability that the
faulting process does not have its working set resident, and suspending it blocks
a process that would have blocked anyway.
- Last process activated: This is the process least likely to have its working set
resident.
- Process with the smallest resident set: Least effort to reload in the future.
However, it penalizes processes with strong locality.
- Largest process: Obtains the most free frames in an overcommitted memory.
- Process with the largest remaining execution window: Approximates
shortest-processing-time-first scheduling discipline.
* What page replacement algorithm does Linux use?
Clock policy.
# 9 Uniprocessor Scheduling
* What is long-term scheduling?
The decision to add to the pool of processes to be executed.
* What is medium-term scheduling?
The decision to add to the number of processes that are partially or fully in
main memory.
* What is short-term scheduling?
The decision as to which available process will be executed by the processor.
* What is I/O scheduling?
The decision as to which process's pending I/O request shall be handled by an
available I/O device.
* Which process state transitions is long-term scheduling responsible for?
New -> Ready/Suspend and New -> Ready
* Which process state transitions is medium-term scheduling responsible for?
Ready/Suspend -> Ready and Blocked/Suspend -> Blocked
* Which process state transitions is short-term scheduling responsible for?
Ready -> Running
* What is the aim of processor scheduling?
To assign processes to be executed by the processor or processors over time, in
a way that meets system objectives.
* What are some of the system objectives for processor scheduling?
Response time, throughput, processor efficiency.
* What are some of the strategies that long-term scheduling will use to admit new processes?
First-come-first-served. Highest priority. Expected execution time. I/O requirements.
* What are some examples of when short-term scheduling is performed?
Clock interrupts, I/O interrupts, operating system calls, signals.
* What is turnaround time?
The interval of time between the submission of a process and its completion.
* What is response time?
The time from the submission of a request until the response begins to be received.
* What are deadlines w.r.t scheduling criteria?
When deadlines are set, the scheduler should try to hit as many as possible.
* What is predictability w.r.t scheduling criteria?
A given job should run at the same time and with the same cost regardless of
the load on the system.
* What is throughput w.r.t scheduling criteria?
The scheduling policy should try to maximize the number of processes completed
per unit of time.
* What is fairness w.r.t scheduling criteria?
All processes should be treated the same (priorities aside), and no process should
be starved.
* How are priorities implemented w.r.t uniprocessor scheduling?
Each priority has its own separate queue. The queues are queried in the order of
priority for a process to run.
* What is the first-come-first-served scheduling policy?
The first process to enter is fully executed, then the next process to enter
is fully executed, and so on.
* What are some of the problems with the first-come-first-served scheduling policy?
- Long processes are favoured over short processes.
- Processor bound processes are favoured over I/O bound processes.
* What is the round-robin scheduling policy?
The processor time is divided up into equal-sized time slices. Each process
gets one time slice at a time, and is then queued up again.
* What are some of the problems with round-robin scheduling policy?
- If the quantum (time slice) is too short, then there is a lot of overhead.
- Processor bound processes are favoured over I/O bound processes.
- This is because a process that blocks on I/O gives up the remainder of its time slice.
* What is virtual round robin w.r.t scheduling policy?
I/O queues are added, which have higher priority than the normal process queues.
This means that the system will no longer favour processor-bound processes more.
* What is the shortest process next scheduling policy?
The process with the shortest expected service time is run to completion.
* What is the shortest remaining time scheduling policy?
The process with the shortest remaining service time (expected service time minus
time already spent) is chosen to run. It may be preempted when a process with a
shorter remaining time arrives.
* What is the highest response ratio next scheduling policy?
The process which maximizes ((waiting time + total service time) / total service
time) is run to completion.
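A quick worked example (made-up numbers): a process that has waited 6 time units
with an expected service time of 3 has ratio (6 + 3) / 3 = 3.0, while one that has
waited 2 with service time 4 has ratio (2 + 4) / 4 = 1.5, so the first is selected.
Because the ratio grows with waiting time, long jobs eventually win too, which
prevents starvation.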
* What is turnaround time w.r.t process scheduling?
The total time that the process spends in the system (waiting + service time).
* When can policies SPN, SRT, and HRRN not be used w.r.t process scheduling?
When we have no indication of the relative length of various processes.
* What is the feedback scheduling policy?
Multiple queues are used. Every process starts at the highest priority queue.
When a process is preempted, it is pushed into the queue one priority lower.
And so forth, until you're at the very bottom, in which case you stay there.
* Why does the feedback scheduling policy favour shorter processes?
Shorter processes are less likely to be preempted, and therefore won't navigate
as far down the priority queues.
* What is a problem with the feedback scheduling policy, and how can we alleviate it?
Longer processes may have really bad turnaround time, due to their low priority.
We can alleviate this problem by having variable preemption times for each level
in the priority (e.g. p0 has 1 time unit, p1 has 2, p2 has 4, etc...).
A better remedy is to increase the priority if the process has been waiting for
a long time.
* What is fair-share scheduling?
In a multi-user system, you may want to group processes together and have them
share resources.
You can do this by having a "group" utilization counter in addition to a normal
per-process utilization counter.
# 10 Multiprocessor Scheduling
* What is independent parallelism?
Each process is a separate application. e.g. time sharing system.
This has properties similar to the uniprocessor system.
* What is coarse and very-coarse grained parallelism?
A level where there is some synchronization among processes.
Distributing computation among separate processes or network nodes.
* What is medium-grained parallelism?
Parallel processing within the application level.
e.g. Threads that interact a lot with each other.
* What is fine-grained parallelism?
Highly parallel applications, e.g. graphics rendering.
* What are the three scheduling issues on multiprocessor systems?
1. Assignment of processes to processors.
2. Use of multiprogramming on individual processors.
3. Actual dispatching of a process.
* What is static processor assignment w.r.t multiprocessor scheduling?
Each process is permanently assigned to one processor.
* What is a disadvantage of static processor assignment w.r.t multiprocessor scheduling?
One processor could be idle while another has a backlog.
* What is the master/slave architecture w.r.t multiprocessor assignment?