CSCI 340: Operating Systems Principles
Queens College, CUNY
1.1 What Operating Systems Do
Tuesday, September 11, 2018 1:42 PM

The central processing unit (CPU), the memory, and the input/output (I/O) devices provide the basic computing resources for the system. The application programs, such as word processors, spreadsheets, compilers, and Web browsers, define the ways in which these resources are used to solve computing problems. The operating system controls the hardware and coordinates its use among the various application programs for the various users.

1.1.1 User View
There are different types of operating systems, and each caters to specific computers and uses. A server OS is different from a mainframe OS, which is different from a mobile OS, which in turn is different from a PC OS. Each OS focuses on different tasks and distributes resources according to the purpose of the machine.

1.1.2 System View
Consider the OS as a resource allocator: it is responsible for managing the computer's hardware resources to solve problems.
Control Program: an OS is considered a control program because it manages the execution of user programs to prevent errors and improper use of the computer, especially (but not only) of I/O devices.

1.1.3 Defining Operating Systems
Kernel: the one program that is running at all times on the computer. There are usually other programs running alongside the kernel, called system programs.
System Programs: programs running alongside the kernel that are associated with the OS but are not necessarily part of the kernel.
Middleware: a set of software frameworks that provide additional services to application developers, as Android and iOS do.

1.2 Computer-System Organization
Tuesday, September 11, 2018 6:38 PM

In a general-purpose PC, the CPU and a number of device controllers are connected through a common bus that provides access to shared memory.
Device Controller: a component in charge of a specific type of device, such as disk drives, audio devices, or video displays.
The CPU and the device controllers can execute in parallel, competing for memory cycles.
Firmware: also known as the bootstrap program; an initial program, typically stored in read-only memory within the computer hardware, that initializes all aspects of the system, from CPU registers to device controllers to memory contents.
For the firmware to load the kernel at boot, it must locate the OS kernel and load it into memory. Once the kernel is loaded and executing, it can start providing services to the system and its users.
System Processes: also known as system daemons; processes that run the entire time the kernel is running. For example, on UNIX the first system process is init, which starts many other daemons; at that point the system is fully booted and waits for an event to occur.
Interrupt: the occurrence of an event, signaled by either the hardware or the software. Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the system bus.
System Call: also known as a monitor call; a special operation used by software to trigger an interrupt.
Interrupts are an important part of computer architecture. Each computer design has its own interrupt mechanism, but several functions are common. The interrupt must transfer control to the appropriate interrupt service routine. The straightforward method for handling this transfer would be to invoke a generic routine to examine the interrupt information; that routine, in turn, would call the interrupt-specific handler. However, interrupts must be handled quickly. Since only a predefined number of interrupts is possible, a table of pointers to interrupt routines can be used instead to provide the necessary speed. The interrupt routine is then called indirectly through the table, with no intermediate routine needed. Generally, the table of pointers is stored in low memory (the first hundred or so locations); these locations hold the addresses of the interrupt service routines for the various devices.
Interrupt Vector: the table of addresses used to transfer control to the service routine for a specific type of interrupt.
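To make the table-of-pointers idea concrete, here is a minimal C sketch (not from the text) that models an interrupt vector as an array of function pointers; the vector numbers and handler names are made up for illustration:

```c
#include <stddef.h>

/* Hypothetical sketch of an interrupt vector: a fixed table of
 * pointers to interrupt service routines (ISRs). The vector numbers
 * and handler names are illustrative, not from any real hardware. */

#define NUM_VECTORS 256

typedef void (*isr_t)(void);                  /* an ISR takes no arguments */

static isr_t interrupt_vector[NUM_VECTORS];   /* on real hardware this table sits in low memory */

static void timer_isr(void)    { /* acknowledge timer, update tick count */ }
static void keyboard_isr(void) { /* read the scan code from the controller */ }

/* Dispatch: the hardware supplies the interrupt number, and control is
 * transferred indirectly through the table, with no intermediate routine. */
void dispatch_interrupt(unsigned num)
{
    if (num < NUM_VECTORS && interrupt_vector[num] != NULL)
        interrupt_vector[num]();              /* indirect call through the table */
}

int main(void)
{
    interrupt_vector[32] = timer_isr;         /* hypothetical vector numbers */
    interrupt_vector[33] = keyboard_isr;
    dispatch_interrupt(32);                   /* simulate a timer interrupt */
    return 0;
}
```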
1.2.2 Storage Structure
Memory: also known as RAM; rewritable main memory, where most computers keep the programs they run. Main memory is the only large storage area that the CPU can access directly.

1.2.3 I/O Structure
For bulk data movement, direct memory access (DMA) is used. After setting up buffers, pointers, and counters for the I/O device, the device controller transfers an entire block of data directly to or from its own buffer storage to memory, with no intervention by the CPU. Only one interrupt is generated per block, to tell the device driver that the operation has completed, rather than the one interrupt per byte generated for low-speed devices. While the device controller is performing these operations, the CPU is available to accomplish other work.

1.3 Computer-System Architecture
Sunday, September 16, 2018 8:36 PM

1.3.1 Single-Processor Systems
On a single-processor system, there is one main CPU capable of executing a general-purpose instruction set, including instructions from user processes. Almost all systems have other processors as well: special-purpose processors built into the hardware. The operating system cannot communicate with these processors; they do their jobs autonomously. The use of special-purpose microprocessors is common and does not turn a system into a multiprocessor. If there is only one general-purpose CPU, then the system is a single-processor system.

1.3.2 Multiprocessor Systems
Multiprocessor Systems: also known as parallel systems or multicore systems; machines that have two or more processors sharing the clock, memory, and peripheral devices. They have three main advantages:
1. Increased throughput. By increasing the number of processors, we increase the throughput and get work done in less time. However, N processors do not complete work N times faster: keeping all the parts working properly adds overhead, much as N programmers working on a project do not finish it N times faster (see the sketch after this list).
2. Economy of scale. Multiprocessor systems can cost less than equivalent multiple single-processor systems, because they can share peripherals, mass storage, and power supplies. If several programs operate on the same set of data, it is cheaper to store those data on one disk and have all the processors share them than to have many computers with local disks and many copies of the data.
3. Increased reliability. If functions can be distributed properly among several processors, then the failure of one processor will not halt the system, only slow it down. If we have ten processors and one fails, then each of the remaining nine can pick up a share of the failed processor's work. The entire system runs only 10 percent slower, rather than failing altogether.
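The throughput point (advantage 1) can be illustrated with a toy model, referenced above. The formula and the 5 percent overhead figure are assumptions chosen purely for illustration, not anything from the text:

```c
#include <stdio.h>

/* Toy model (an assumption, not from the text): suppose each added
 * processor costs every processor a fixed fraction of its time in
 * coordination overhead, so n processors deliver roughly
 * n * (1 - overhead)^(n - 1) times the work of one processor. */
static double effective_speedup(int n, double overhead)
{
    double usable = 1.0;
    for (int i = 1; i < n; i++)
        usable *= (1.0 - overhead);    /* each extra processor shaves off a slice */
    return n * usable;
}

int main(void)
{
    for (int n = 1; n <= 10; n++)
        printf("%2d processors -> ~%.2fx speedup\n",
               n, effective_speedup(n, 0.05));   /* 5% overhead: illustrative only */
    return 0;
}
```

With these numbers, ten processors yield roughly a 6.3x speedup rather than 10x, which is the sense in which N processors do not mean N times the work.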
Graceful Degradation: the ability of some systems to continue providing service proportional to the level of surviving hardware.
Fault Tolerant: the next level beyond graceful degradation, in which a system can suffer a failure of any single component and still detect, diagnose, and, if possible, correct the failure. This, however, is expensive and requires special hardware and considerable duplication, as found in the HP NonStop systems.
Asymmetric Multiprocessing: each processor is assigned a specific task. A boss processor controls the system; the other processors either look to the boss for instruction or have predefined tasks. This scheme defines a boss-worker relationship: the boss processor schedules and allocates work to the worker processors.
Symmetric Multiprocessing (SMP): each processor performs all tasks within the operating system. SMP means that all processors are peers; no boss-worker relationship exists between processors. Each processor has its own set of registers and its own private (local) cache; however, all processors share physical memory. The benefit of this model is that many processes can run simultaneously (N processes can run if there are N CPUs) without performance deteriorating significantly. Still, since the CPUs are separate, one may sit idle while another is overloaded, resulting in inefficiencies. These inefficiencies can be avoided if the processors share certain data structures.
The difference between symmetric and asymmetric multiprocessing may result from either hardware or software: special hardware can differentiate the multiple processors, or the software can be written to allow only one boss and multiple workers.
Multicore: a CPU design that includes multiple computing cores on a single chip; such multiprocessor systems are termed multicore. It is important to note that while all multicore systems are multiprocessor systems, not all multiprocessor systems are multicore. Multicore chips can be more efficient than multiple chips with single cores, because on-chip communication is faster than between-chip communication; in addition, one chip with multiple cores uses significantly less power than multiple chips.

1.4 Operating-System Structure
Thursday, September 20, 2018 9:09 PM

Multiprogramming: increases CPU utilization by organizing jobs (code and data) so that the CPU always has a task to execute. The operating system keeps several jobs in memory simultaneously.
Job Pool: since main memory is in general too small to accommodate all jobs, the jobs are kept initially on disk in the job pool, which consists of all processes on disk awaiting allocation of main memory.
Multiprogrammed operating systems work by executing multiple tasks: if a task is waiting on an I/O operation, the CPU goes to work on something else rather than sitting idle. While multiprogramming provides an environment in which system resources are used effectively, it does not provide for user interaction.
Time Sharing: also known as multitasking; the logical extension of multiprogramming, in which the CPU switches among jobs so frequently that users can interact with each program while it runs. A time-sharing OS requires an interactive system.
Interactive Computer System: a system that provides direct communication between the user and the computer; the user gives instructions to the OS via an input device such as a keyboard, and the OS gives back immediate results with a short response time.
Process: a program that is loaded into memory and executing. When a process executes, it typically executes for only a short time before it either finishes or needs to perform I/O. Rather than let the CPU sit idle while interactive input takes place, the operating system rapidly switches the CPU to the program of some other user.
Job Scheduling: if several jobs are ready to be brought into memory for time sharing and there is not room for all of them, the system must decide which jobs to load; making that decision is job scheduling.
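Here is a minimal sketch (details assumed, not from the text) of the round-robin idea behind time sharing: jobs wait in a FIFO queue, the CPU runs the job at the head for one time slice, and an unfinished job goes to the back of the queue.

```c
#include <stdio.h>

/* Illustrative round-robin simulation: "remaining" counts how many
 * time slices of work each job still needs. Job names are made up. */
typedef struct { const char *name; int remaining; } job_t;

int main(void)
{
    job_t queue[8] = { {"editor", 3}, {"compiler", 5}, {"browser", 2} };
    int head = 0, tail = 3, count = 3;     /* circular FIFO queue */

    while (count > 0) {
        job_t j = queue[head];             /* dequeue the next job */
        head = (head + 1) % 8;
        count--;

        printf("running %s (%d slice(s) left)\n", j.name, j.remaining);
        j.remaining--;                     /* simulate one time slice of work */

        if (j.remaining > 0) {             /* not finished: back of the queue */
            queue[tail] = j;
            tail = (tail + 1) % 8;
            count++;
        } else {
            printf("%s finished\n", j.name);
        }
    }
    return 0;
}
```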
Running multiple jobs concurrently requires that their ability to affect one another be limited in all phases of the operating system, including process scheduling, disk storage, and memory management.
Swapping: to ensure reasonable response time, processes are swapped in and out of main memory to and from the disk.

1.5 Operating-System Operations
Saturday, September 22, 2018 5:22 PM

Trap: a software-generated interrupt caused either by an error (such as division by zero or an invalid memory access) or by a specific request from a user program that an operating-system service be performed.
For each type of interrupt there is a specific segment of code, called an interrupt service routine, dedicated to determining what kind of interrupt occurred and how to handle it.

1.5.1 Dual-Mode and Multimode Operation
In order to ensure the proper execution of the operating system, we must be able to distinguish between the execution of operating-system code and user-defined code. Two modes of operation make this distinction; when a user application sets off a trap, the mode is changed.
User Mode: when the computer is executing code on behalf of a user application.
Kernel Mode: when the computer is executing operating-system code.
Mode Bit: the hardware bit that indicates the current mode: kernel (0) or user (1). The system always switches to user mode (setting the mode bit to 1) before passing control to a user program.
Privileged Instructions: instructions the hardware allows to execute only in kernel mode, such as the instruction to switch to kernel mode. If an attempt is made to execute a privileged instruction in user mode, the hardware does not execute the instruction but instead traps to the OS. Eventually, control is switched back to the operating system via an interrupt, a trap, or a system call.
System Calls: the means by which a user program asks the operating system to perform tasks reserved for the operating system on the user's behalf. A system call usually takes the form of a trap to a specific location in the interrupt vector.
The lack of a dual mode can cause serious shortcomings in an operating system: a user program running awry can wipe out the operating system by writing over it with data.

1.6 Process Management
Sunday, September 23, 2018 8:41 PM

For now, you can consider a process to be a job or a program, but later you will learn that the concept is more general. A program being run by an individual user on a PC is a process. A program is a passive entity, like the contents of a file stored on disk, whereas a process is an active entity. A process is the unit of work in a system. A system consists of a collection of processes, some of which are operating-system processes (those that execute system code) and the rest of which are user processes (those that execute user code). All these processes can potentially execute concurrently, by multiplexing on a single CPU, for example. The operating system is responsible for the following activities in connection with process management:
- Scheduling processes and threads on the CPUs
- Creating and deleting both user and system processes
- Suspending and resuming processes
- Providing mechanisms for process synchronization
- Providing mechanisms for process communication
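Since process creation and deletion are themselves requested through system calls, one short example can illustrate both this list and the dual-mode discussion above. A minimal sketch for UNIX-like systems: fork() and wait() are system calls, so each call traps from user mode into kernel mode and back.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();              /* system call: create a new process */

    if (pid < 0) {                   /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* child process runs this branch */
        printf("child:  pid %d\n", (int)getpid());
        exit(0);                     /* system call: terminate the child */
    } else {                         /* parent process runs this branch */
        wait(NULL);                  /* system call: suspend until the child exits */
        printf("parent: child %d finished\n", (int)pid);
    }
    return 0;
}
```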
1.7 Memory Management
Sunday, September 23, 2018 8:49 PM

Main memory is a large array of bytes, ranging in size from hundreds of thousands to billions; each byte has its own address. Main memory is a repository of quickly accessible data shared by the CPU and I/O devices. The CPU can directly access data only from main memory, so for the CPU to process data from disk, those data must first be transferred to main memory by CPU-generated I/O calls. In selecting a memory-management scheme for a specific system, we must take into account many factors, especially the hardware design of the system; each algorithm requires its own hardware support.
The operating system is responsible for the following activities in connection with memory management:
- Keeping track of which parts of memory are currently being used and who is using them
- Deciding which processes (or parts of processes) and data to move into and out of memory
- Allocating and deallocating memory space as needed

In a hierarchical storage structure, the same data may appear in different levels of the storage system. For example, suppose that an integer A that is to be incremented by 1 is located in file B, and file B resides on magnetic disk. The increment operation proceeds by first issuing an I/O operation to copy the disk block on which A resides to main memory. This operation is followed by copying A to the cache and then to an internal register. Thus, the copy of A appears in several places: on the magnetic disk, in main memory, in the cache, and in an internal register. Once the increment takes place in the internal register, the value of A differs across the various storage systems; the value of A becomes the same everywhere only after the new value is written from the internal register back to the magnetic disk.
In a multitasking environment, where the CPU is switched back and forth among various processes, extreme care must be taken to ensure that, if several processes wish to access A, each of them obtains the most recently updated value. The situation becomes more complicated in a multiprocessor environment where, in addition to maintaining internal registers, each CPU also contains a local cache.
Cache Coherency: ensuring that an update to the value of A in one cache is immediately reflected in all other caches where A resides, in a system where each processor has its own local cache.

1.8 I/O Systems
One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user. The I/O subsystem consists of several components:
- A memory-management component that includes buffering, caching, and spooling
- A general device-driver interface
- Drivers for specific hardware devices

1.10 Kernel Data Structures
Sunday, September 23, 2018 11:13 PM

1.10.1 Lists, Stacks, and Queues
Main memory is constructed as an array. If the data item being stored is larger than one byte, then multiple bytes can be allocated to the item, and the item is addressed as item number x item size. In other situations, arrays give way to other data structures.
List: represents a collection of data values as a sequence.
Linked List: the most common way of implementing a list, in which items are linked to one another. Linked lists come in several types:
- In a singly linked list, each item points to its successor.
- In a doubly linked list, a given item can refer either to its predecessor or to its successor.
- In a circularly linked list, the last element refers to the first element rather than to null.
Linked lists accommodate items of varying sizes and allow easy insertion and deletion of items, but retrieving an item is O(n), since it may require traversing all n elements in the worst case. Linked lists are often used as the basis for other data structures.
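A minimal C sketch of a singly linked list (illustrative, not actual kernel code): insertion at the head is O(1), while lookup is O(n).

```c
#include <stdio.h>
#include <stdlib.h>

/* Singly linked list: each node points to its successor. */
typedef struct node {
    int value;
    struct node *next;
} node_t;

/* Insert at the head: O(1). */
static node_t *push(node_t *head, int value)
{
    node_t *n = malloc(sizeof *n);
    if (n == NULL)
        return head;                 /* out of memory: leave the list unchanged */
    n->value = value;
    n->next = head;
    return n;
}

/* Linear search: O(n) in the worst case. */
static node_t *find(node_t *head, int value)
{
    for (node_t *p = head; p != NULL; p = p->next)
        if (p->value == value)
            return p;
    return NULL;
}

int main(void)
{
    node_t *list = NULL;
    for (int i = 1; i <= 5; i++)
        list = push(list, i);

    printf("found 3? %s\n", find(list, 3) ? "yes" : "no");

    while (list) {                   /* free all nodes */
        node_t *next = list->next;
        free(list);
        list = next;
    }
    return 0;
}
```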
Stack: a LIFO (last in, first out) structure; items are added and removed with operations respectively called push and pop.
Queue: a contrasting structure that uses the FIFO (first in, first out) principle.
An OS uses a stack when invoking function calls: the parameters, local variables, and return address are pushed onto the stack when a function is called, and returning from the function pops those items off the stack. Queues are used, for example, to order pages waiting to be printed and to organize tasks waiting to run on a CPU.

1.10.2 Trees
Tree: a data structure that can be used to represent data hierarchically. Data values in a tree structure are linked through parent-child relationships. In a general tree, a parent may have an unlimited number of children.
Binary Search Tree: a tree in which a parent may have at most two children, termed the left child and the right child, with an ordering requirement between them: left child <= right child. When we search for an item in a binary search tree, the worst-case performance is O(n), since the tree can be unbalanced.
Balanced Binary Search Tree: a tree containing n items that has at most lg n levels, thus ensuring O(lg n) performance. This remedies the problem of unbalanced binary search trees.

1.10.3 Hash Functions and Maps

2.1 Operating-System Services
Monday, September 24, 2018 2:38 PM

An operating system provides an environment for the execution of programs. It provides certain services to programs and to the users of those programs.
User Interface: almost all operating systems have some form of user interface, which takes several forms:
- Command-Line Interface (CLI): uses text commands and a method for entering them.
- Batch Interface: commands, and directives to control them, are written into files, and those files are executed.
- Graphical User Interface (GUI): the most common form; a window system with menus operated through a pointing device.
Program Execution: the system must be able to load a program into memory, run it, and end its execution, either normally or abnormally.
I/O Operations: a running program may require I/O involving a file or a device. Because the program usually cannot control the device directly, the OS must provide a means to perform the I/O on its behalf, in kernel mode.
File-System Manipulation: programs need to read and write files and directories; they also need to create and delete them by name, search for a given file, and list file information. Some operating systems include permission management to allow or deny access to files or directories.
Communications: communication may occur between processes executing on the same computer or between processes executing on different computer systems tied together by a computer network. It can be implemented via shared memory, in which two or more processes read and write to a shared section of memory, or via message passing, in which packets of information in predefined formats are moved between processes by the operating system.
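To make the message-passing form of the Communications service concrete, here is a minimal UNIX sketch using a pipe: the parent writes a message, the child reads it, and the kernel moves the bytes between the two processes. The message text is, of course, arbitrary.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) {            /* fd[0] = read end, fd[1] = write end */
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {               /* child: receives the message */
        char buf[64];
        close(fd[1]);                /* child does not write */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
        return 0;
    }

    /* parent: sends the message */
    close(fd[0]);                    /* parent does not read */
    const char *msg = "hello from the parent";
    if (write(fd[1], msg, strlen(msg)) == -1)
        perror("write");
    close(fd[1]);
    wait(NULL);                      /* reap the child */
    return 0;
}
```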
Error Detection: the OS needs to be constantly aware of possible errors, which may occur in the CPU and memory hardware, in I/O devices, or in user programs, and must take the appropriate action to ensure correct and consistent computing.