Tuesday, January 15, 2008
Assignment 4
The major difference between deadlock, starvation and race is this: in a deadlock, the problem occurs while the jobs are being processed, because each job holds a resource that another job needs and none of them can continue. Starvation is a resource-allocation problem in which one job is repeatedly passed over for a resource, so it is prevented from ever executing. A race occurs when two processes reach a shared resource at nearly the same time and the outcome depends on which one gets there first.
2. Example of Deadlock:
When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone.
Example of Starvation:
When you are waiting to borrow a book, but other borrowers keep getting it ahead of you, so your turn never comes.
Example of Race:
Two depositors withdrawing from the same bank account at the same instant: the final balance depends on whose transaction finishes first. (A code sketch follows below.)
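To make the race example concrete, here is a minimal sketch in C with pthreads (the names and loop count are invented for illustration): two threads increment a shared counter without synchronization, so updates are lost and the final value varies from run to run.

/* Two unsynchronized threads updating shared state: a race condition. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000

long counter = 0;                /* shared, unprotected */

void *worker(void *arg) {
    for (int i = 0; i < N; i++)
        counter++;               /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("expected %d, got %ld\n", 2 * N, counter);  /* usually less */
    return 0;
}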
3. Four necessary conditions for the deadlock in exercise #2 (all four must hold at once; the code sketch below shows them together):
Mutual exclusion: only one train can occupy the crossing at a time.
Resource holding (hold and wait): each train keeps its own track while waiting for the crossing.
No preemption: a stopped train cannot be forced to give up the track it holds.
Circular wait: each train is waiting for the other to move first.
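A minimal pthreads sketch of the two-trains deadlock (the names are invented, and the sleep() only widens the timing window): each thread holds one track mutex (mutual exclusion, resource holding), mutexes are never preempted, and each thread waits on the other's lock (circular wait).

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t track_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t track_b = PTHREAD_MUTEX_INITIALIZER;

void *train1(void *arg) {
    pthread_mutex_lock(&track_a);      /* holds A ... */
    sleep(1);
    pthread_mutex_lock(&track_b);      /* ... and waits for B */
    puts("train1 crossed");
    pthread_mutex_unlock(&track_b);
    pthread_mutex_unlock(&track_a);
    return NULL;
}

void *train2(void *arg) {
    pthread_mutex_lock(&track_b);      /* holds B ... */
    sleep(1);
    pthread_mutex_lock(&track_a);      /* ... and waits for A: deadlock */
    puts("train2 crossed");
    pthread_mutex_unlock(&track_a);
    pthread_mutex_unlock(&track_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, train1, NULL);
    pthread_create(&t2, NULL, train2, NULL);
    pthread_join(t1, NULL);            /* never returns once deadlocked */
    pthread_join(t2, NULL);
    return 0;
}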
4.
5.
a. Deadlock will not happen as long as the two traffic lights control access to the bridge, because only one direction of traffic is released at a time. But when some motorists do not follow the traffic lights, deadlock can occur, because there is only one lane to drive through and cars from both directions can meet head-on.
b. Deadlock can be detected when traffic backs up bumper to bumper on both approaches and no car on the bridge can move forward or back.
c. The solution to prevent deadlock is for the traffic lights to be properly timed and for motorists to obey them, so that the bridge is granted to one direction at a time, like the lock in the sketch below.
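A sketch of answer (c) in C, treating the one-lane bridge as a single lock that a car must hold while crossing (the names are invented): with only one resource there can be no circular wait, so deadlock is impossible, although cars may still queue.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t bridge = PTHREAD_MUTEX_INITIALIZER;

void *cross(void *dir) {
    pthread_mutex_lock(&bridge);        /* wait until the bridge is free */
    printf("crossing %s\n", (const char *)dir);
    pthread_mutex_unlock(&bridge);      /* leave the bridge */
    return NULL;
}

int main(void) {
    pthread_t north, south;
    pthread_create(&north, NULL, cross, "northbound");
    pthread_create(&south, NULL, cross, "southbound");
    pthread_join(north, NULL);
    pthread_join(south, NULL);
    return 0;
}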
Thursday, December 13, 2007
Assignment 3
For instance, if your computer has a slow disk drive and you are doing a lot of paging (using virtual memory) to switch from one program to another rapidly, then your disk drive will become a performance bottleneck and your computer will seem to have trouble keeping up with your commands. The computer, here, is "thrashing": spending all of its time trying to keep up. Imagine a person drowning: they are thrashing because they are spending all of their energy on the single task of staying alive.
Q: What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate the problem?
A: Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to page-fault continuously. The system can detect thrashing by comparing the level of CPU utilization with the level of multiprogramming: if utilization drops as more processes are admitted, the system is thrashing. It can be eliminated by reducing the level of multiprogramming.
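A schematic sketch of such a policy in C. The process table, thresholds and helper fields are all invented for illustration; a real kernel tracks per-process fault rates and swaps whole processes out.

#include <stdio.h>

#define NPROC    4
#define PFF_HIGH 50   /* faults per second: process needs more frames */
#define PFF_LOW   5   /* faults per second: process has frames to spare */

struct proc { int id; int fault_rate; int frames; int suspended; };

/* One pass of a page-fault-frequency policy over a (made-up) process table.
   free_frames models the pool of frames the kernel can hand out. */
void pff_pass(struct proc table[], int n, int *free_frames) {
    for (int i = 0; i < n; i++) {
        struct proc *p = &table[i];
        if (p->suspended) continue;
        if (p->fault_rate > PFF_HIGH) {
            if (*free_frames > 0) {            /* give it another frame */
                p->frames++; (*free_frames)--;
            } else {                           /* no memory left: reduce the
                                                  multiprogramming level */
                p->suspended = 1;
                *free_frames += p->frames;
                p->frames = 0;
                printf("suspending process %d to stop thrashing\n", p->id);
            }
        } else if (p->fault_rate < PFF_LOW && p->frames > 1) {
            p->frames--; (*free_frames)++;     /* reclaim a spare frame */
        }
    }
}

int main(void) {
    struct proc table[NPROC] = {
        {1, 80, 4, 0}, {2, 2, 6, 0}, {3, 60, 3, 0}, {4, 10, 5, 0}
    };
    int free_frames = 0;
    pff_pass(table, NPROC, &free_frames);
    return 0;
}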
Operating system designers attempt to keep high CPU utilization by maintaining an optimal multiprogramming level (MPL). Although running more processes makes it less likely to leave the CPU idle, too many processes incur serious memory competition, and even introduce thrashing, which eventually lowers CPU utilization. A common practice to address the problem is to lower the MPL with the aid of process swapping out/in operations. This approach is expensive and is only used when the system begins serious thrashing. The objective of our study is to provide highly responsive and cost-effective thrashing protection by adaptively conducting priority page replacement in a timely manner. We have designed a dynamic system Thrashing Protection Facility (TPF) in the system kernel. Once TPF detects system thrashing, one of the active processes will be identified for protection. The identified process will have a short period of privilege in which it does not contribute its least recently used (LRU) pages for removal so that the process can quickly establish its working set, improving the CPU utilization. With the support of TPF, thrashing can be eliminated in its early stage by adaptive page replacement, so that process swapping will be avoided or delayed until it is truly necessary.
We have implemented TPF in a current and representative Linux kernel running on an Intel Pentium machine. Compared with the original Linux page replacement, we show that TPF consistently and significantly reduces page faults and the execution time of each individual job in several groups of interacting SPEC CPU2000 programs. We also show that TPF introduces little additional overhead to program executions, and its implementation in Linux (or Unix) systems is straightforward.
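Since the abstract leans on LRU replacement, here is a minimal LRU page-replacement simulation in C (the reference string and frame count are invented): each frame remembers when its page was last touched, and on a fault the least recently used page is evicted.

#include <stdio.h>

#define NFRAMES 3

int main(void) {
    int refs[] = {1, 2, 3, 2, 4, 1, 5, 2};
    int nrefs = sizeof refs / sizeof refs[0];
    int page[NFRAMES], last_use[NFRAMES];
    int faults = 0;

    for (int f = 0; f < NFRAMES; f++) { page[f] = -1; last_use[f] = -1; }

    for (int t = 0; t < nrefs; t++) {
        int hit = -1, victim = 0;
        for (int f = 0; f < NFRAMES; f++) {
            if (page[f] == refs[t]) hit = f;
            if (last_use[f] < last_use[victim]) victim = f;  /* oldest */
        }
        if (hit >= 0) {
            last_use[hit] = t;          /* hit: just refresh recency */
        } else {
            faults++;
            page[victim] = refs[t];     /* fault: evict the LRU page */
            last_use[victim] = t;
        }
    }
    printf("%d page faults\n", faults);
    return 0;
}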
1. Explain the following:
A. Multiprogramming. Why is it used?
Multiprogramming is a technique used to maximize CPU time by keeping multiple programs in memory and switching among them. Execution begins with the first program and continues until an instruction that must wait for a peripheral is reached; the context of this program is stored, and a second program in memory is given the chance to run. The process continues until all programs finish running. Multiprogramming offers no guarantee that a program will run in a timely manner. A toy illustration follows.
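A toy C simulation of the idea (the job names, burst lengths and the single I/O wait are all invented): while job A waits on its peripheral, the CPU runs job B instead of sitting idle.

#include <stdio.h>

struct job { const char *name; int cpu_left; int io_left; };

int main(void) {
    struct job jobs[] = { {"A", 3, 0}, {"B", 3, 0} };
    int n = 2, remaining = 2;

    /* simple round-robin: skip any job that is waiting on I/O */
    while (remaining > 0) {
        remaining = 0;
        for (int i = 0; i < n; i++) {
            if (jobs[i].cpu_left == 0) continue;
            remaining++;
            if (jobs[i].io_left > 0) {       /* blocked on a peripheral */
                jobs[i].io_left--;
                continue;                    /* CPU goes to the next job */
            }
            printf("CPU runs job %s\n", jobs[i].name);
            jobs[i].cpu_left--;
            if (i == 0 && jobs[i].cpu_left == 2)
                jobs[i].io_left = 2;         /* job A started an I/O wait */
        }
    }
    return 0;
}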
B. Internal fragmentation. How does it occur?
Internal fragmentation occurs when a fixed partition is only partially used by a program: the remaining space within the partition is unavailable to any other job, so it sits wasted until the partition is released. A worked example follows.
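A small C example with invented partition and job sizes, showing how much of each fixed partition is locked up as internal fragmentation:

#include <stdio.h>

int main(void) {
    int partition_kb[] = {100, 100, 100};   /* fixed partition sizes */
    int job_kb[]       = { 78,  45,  99};   /* jobs loaded into them */

    for (int i = 0; i < 3; i++) {
        int wasted = partition_kb[i] - job_kb[i];
        printf("partition %d: job uses %d KB, %d KB lost to internal fragmentation\n",
               i, job_kb[i], wasted);
    }
    return 0;
}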
C.Compaction: Why is it need?
Compaction is very needed because it is the process of collecting fragments of available memory space into contiguous in block by moving programs and data in a
computer's memory disks, or known as garbage collection.
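A sketch of compaction over a toy memory map in C (the block sizes are invented; 0 marks a free slot): live blocks are slid toward address 0 so the free fragments coalesce into one region at the top.

#include <stdio.h>

#define NSLOTS 8

int main(void) {
    int mem[NSLOTS] = {5, 0, 3, 0, 0, 7, 0, 2};  /* nonzero = in use */
    int next = 0;                                /* next compacted slot */

    for (int i = 0; i < NSLOTS; i++)
        if (mem[i] != 0)
            mem[next++] = mem[i];   /* move each live block down */
    while (next < NSLOTS)
        mem[next++] = 0;            /* one contiguous free region remains */

    for (int i = 0; i < NSLOTS; i++) printf("%d ", mem[i]);
    printf("\n");
    return 0;
}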
E. Relocation: How often should it be performed?
It depends on how address references in the program are resolved; relocation has to be performed every time a program is moved in memory, for example after each compaction.
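The usual mechanism behind this answer, sketched in C with invented values: every logical address the program issues is adjusted by a relocation (base) register, so moving the program only means reloading that one register.

#include <stdio.h>

int main(void) {
    unsigned base = 0x4000;        /* where the program was loaded */
    unsigned logical = 0x012C;     /* an address reference in the program */
    printf("physical address = 0x%X\n", base + logical);

    base = 0x9000;                 /* program moved, e.g. by compaction */
    printf("after relocation = 0x%X\n", base + logical);
    return 0;
}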
2. Describe the major disadvantages for each of the four memory allocation schemes presented in the chapter.
Single-user contiguous allocation: only one job can run at a time, and any memory the job does not use is wasted. Fixed partitions: partition sizes are set in advance, so a job must fit an available partition and internal fragmentation results. Dynamic partitions: partitions are created to fit each job, but external fragmentation builds up between them. Relocatable dynamic partitions: compaction cures the fragmentation, but it is an overhead process, so while compaction is being done everything else must wait.
3. Describe the major advantages for each of the memory allocation schemes presented in the chapter.
Single-user contiguous allocation is simple to implement. Fixed partitions let several jobs reside in memory at once. Dynamic partitions fit each partition to its job, wasting less space. Relocatable dynamic partitions reclaim the scattered fragments, and jobs can further be divided into segments of variable sizes or pages of equal size, where each page or segment can be stored wherever there is an empty block big enough to hold it.
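A minimal C example of how a paged scheme locates those equal-size blocks (4 KB pages assumed for the example): the virtual address splits into a page number that indexes the page table and an offset that passes through translation unchanged.

#include <stdio.h>

#define PAGE_SHIFT 12              /* 4 KB pages: 2^12 bytes */
#define PAGE_MASK  0xFFFu

int main(void) {
    unsigned vaddr = 0x00403A7C;
    unsigned page   = vaddr >> PAGE_SHIFT;   /* index into the page table */
    unsigned offset = vaddr & PAGE_MASK;     /* position within the page */
    printf("page %u, offset 0x%X\n", page, offset);
    return 0;
}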
Assignment 2
How does each implement virtual memory?
Virtual memory is one of the most important subsystems of any modern operating system. Virtual memory is deeply intertwined with user processes, protection between processes and protection of the kernel from user processes, efficient shared memory, communication with I/O (DMA, etc.), paging, swapping, and countless other systems. Understanding the VM subsystem greatly helps in understanding how all the other parts of the kernel work and interact. Because of this, "Understanding the Linux Virtual Memory Manager" is a great guide to better understanding, and working with, the entire kernel.
How does each handle page sizes?
As computer system main memories get larger and processor cycles-per-instruction (CPIs) get smaller, the time spent in handling translation lookaside buffer (TLB) misses could become a performance bottleneck. We explore relieving this bottleneck by (a) increasing the page size and (b) supporting two page sizes. We discuss how to build a TLB to support two page sizes and examine both alternatives experimentally with a dozen uniprogrammed, user-mode traces for the SPARC architecture. Our results show that increasing the page size to 32KB causes both a significant increase in average working set size (e.g., 60%) and a significant reduction in the TLB's contribution to CPI, CPI_TLB (namely a factor of eight), compared to using 4KB pages. Results for using two page sizes, 4KB and 32KB pages, on the other hand, show a small increase in working set size (about 10%) and a variable decrease in CPI_TLB (from negligible to as good as found with the 32KB page size). CPI_TLB when using two page sizes is consistently better for fully associative TLBs than for set-associative ones. Our results are preliminary, however, since (a) our traces do not include multiprogramming or operating system behavior, and (b) our page-size assignment policy may not reflect a real operating system's policy.
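To make the paper's comparison concrete, here is the same (invented) virtual address split under the two page sizes it studies, in C: a bigger page means fewer offset-stripped page numbers to cache, so fewer TLB entries are needed, at the cost of coarser allocation.

#include <stdio.h>

int main(void) {
    unsigned vaddr = 0x00403A7C;

    /* 4 KB pages: 12 offset bits */
    printf("4KB : page %u, offset 0x%X\n", vaddr >> 12, vaddr & 0xFFFu);

    /* 32 KB pages: 15 offset bits */
    printf("32KB: page %u, offset 0x%X\n", vaddr >> 15, vaddr & 0x7FFFu);
    return 0;
}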
How does each handle page faults?
The chip uses this 32-bit number to look up values in a page table. The value in the page table is the page's physical address (or an indication that the page is not available) and the accessibility of the page (read/write, user/kernel). The physical address maps to the real memory in the computer that contains the data being accessed.
If the page is not available, a page fault occurs and the kernel either kills the process or loads the page from disk, depending on the value in the page table (which is up to the kernel to set).
If the page is read-only and a write is being attempted, a page fault occurs and the kernel either kills the process or does other clever things (also depending on data in the entry or elsewhere).
If the page belongs to the kernel and the processor is not in kernel mode, a fault occurs (a page fault or a GPF, depending on the processor) and the kernel again decides what to do with the process.
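A schematic C version of those checks. The page-table-entry bit layout here is invented for the sketch and does not match any real processor's.

#include <stdio.h>

#define PTE_PRESENT  0x1
#define PTE_WRITABLE 0x2
#define PTE_USER     0x4

/* Returns 0 if the access may proceed, or a fault code the "kernel" handles. */
int check_access(unsigned pte, int is_write, int user_mode) {
    if (!(pte & PTE_PRESENT))              return 1; /* not present: load from disk or kill */
    if (is_write && !(pte & PTE_WRITABLE)) return 2; /* write to a read-only page */
    if (user_mode && !(pte & PTE_USER))    return 3; /* user touched a kernel page */
    return 0;                                        /* translation proceeds */
}

int main(void) {
    unsigned pte = PTE_PRESENT | PTE_USER;           /* a read-only user page */
    printf("read : fault=%d\n", check_access(pte, 0, 1));
    printf("write: fault=%d\n", check_access(pte, 1, 1));
    return 0;
}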
How does each handle the working set?
No such concept. For all practical purposes, the app has virtually no control over its working set, unless the programmer has done something as fundamentally irresponsible as using VirtualLock, which is almost always a mistake, usually caused by a fundamental misunderstanding of the programming problem. It is an API obscure enough that it is hardly ever used anyway, and therefore it can usually be ignored as a possibility. If the app tops out at 32K files, it has hit some other limit, for example an internal table that some programmer defined as 32768 (or some multiple thereof), or it is running on an MS-DOS-based system, such as Win98, that has built-in limits on how many objects you can add to a control. It has absolutely nothing to do with the working set.
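For contrast, the working-set model the question asks about is easy to compute: W(t, delta) is the set of distinct pages referenced in the last delta references. A minimal C sketch with an invented reference string:

#include <stdio.h>

#define DELTA 4

int main(void) {
    int refs[] = {1, 2, 1, 3, 4, 4, 1, 5};
    int nrefs = sizeof refs / sizeof refs[0];

    for (int t = DELTA - 1; t < nrefs; t++) {
        int seen[16] = {0}, size = 0;
        for (int k = t - DELTA + 1; k <= t; k++)   /* window of DELTA refs */
            if (!seen[refs[k]]) { seen[refs[k]] = 1; size++; }
        printf("t=%d working set size = %d\n", t, size);
    }
    return 0;
}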
How does it reconcile thrashing issues?
Many interactive computing environments provide automatic storage reclamation and virtual memory to ease the burden of managing storage. Unfortunately, many storage reclamation algorithms impede interaction with distracting pauses. Generation Scavenging is a reclamation algorithm that has no noticeable pauses, eliminates page faults for transient objects, compacts objects without resorting to indirection, and reclaims circular structures, in one third the time of traditional approaches. We have incorporated Generation Scavenging in Berkeley Smalltalk (BS), our Smalltalk-80 implementation, and instrumented it to obtain performance data. We are also designing a microprocessor with hardware support for Generation Scavenging.
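A highly simplified C sketch of the idea in the abstract: most objects die young, so only the young generation is copied at each scavenge, and objects that survive enough scavenges are promoted to the rarely collected old generation. Reachability is faked here with a "live" flag; a real collector traces pointers from roots.

#include <stdio.h>

#define YOUNG_MAX   8
#define PROMOTE_AGE 2

struct obj { int id; int live; int age; };

int scavenge(struct obj young[], int n) {
    int kept = 0;
    for (int i = 0; i < n; i++) {
        if (!young[i].live) continue;   /* garbage: simply not copied */
        if (++young[i].age >= PROMOTE_AGE) {
            printf("promote obj %d to old generation\n", young[i].id);
        } else {
            /* compact survivor toward the front (stands in for copying
               it into the new young space) */
            young[kept++] = young[i];
        }
    }
    return kept;                        /* new young-generation size */
}

int main(void) {
    struct obj young[YOUNG_MAX] = {
        {1, 1, 0}, {2, 0, 0}, {3, 1, 1}, {4, 0, 0}
    };
    int n = scavenge(young, 4);
    printf("%d objects remain young\n", n);
    return 0;
}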
Assignment 1
1. Mac OS had 4.35 per cent of the world's operating system share last December. Now it only has 4.33 per cent. While this is not much of a dip, it reverses a trend that saw interest in Apple's operating system actually growing a few years back.
What is worse, from Apple's perspective, is that its operating system is losing ground in favour of Windows XP, which even Microsoft admits is a bit out of date. XP has 84.18 per cent of the operating systems used by machines accessing the Web sites measured by Net Applications during August. This is despite Microsoft being blasted for its security holes and having a product that is years out of date.
Apple executives might be wondering what will happen to its operating system if Vista takes off, or Linux ever turns itself into a proper desktop.
2. One of users' gripes with Vista is its significant memory needs - a minimum of 1GB for all versions except the bare-bones Vista Home Basic.
It's one thing to compare this with the memory requirements of, say, Windows XP, Linux or Mac OS X. But a more relevant contrast is at hand: Windows HPC Server 2008, also known as "Windows for Supercomputers", which can run on 512MB of memory.
The new server software is aimed at the growing high-performance computing (HPC) market, with its stringent performance needs. It is designed for efficient HPC clusters, such as the 2,048-core production test cluster Microsoft used to test-drive the software.
It is the successor to Windows Compute Cluster Server 2003, and is based on Windows Server 2008. Microsoft is recommending it for high-throughput applications such as service-oriented architecture (SOA) web services.
Vista, on the other hand, is intended for home and office desktops. On top of the 1GB minimum memory requirement, Microsoft recommends 2GB or 4GB to achieve the best experience.
Microsoft explained that Windows HPC Server 2008 also needs additional memory to perform at its best. "The minimum hardware requirements for Windows HPC Server 2008 are similar to the hardware requirements for the x64-based version of the Windows Server 2008 Standard operating system," the company said in a white paper on HPC Server 2008. "Windows HPC Server 2008 supports up to 64GB of RAM."