The Basics Of Operating System Design
Each engineering team will have a different set of needs when it sets out to design an operating system, but the basic questions it has to answer are always the same. Large industry data centers often run operating systems tailored to the kind of information they process and the hardware they run on. Designing this type of system software is very different from working with the kind of OS that powers a desktop or laptop.
Having a Goal in Mind
Engineers need to have a clear idea of what they want whenever they set out to design a new operating system structure. Without a clear concept, every subsequent decision becomes very difficult.
Engineering crews at IBM once tried to design an architecture that would replace two existing structures with a single new one. IBM's scientific product line and its software were built around the FORTRAN programming language, and the company had developed an entire parallel commercial line based on COBOL. Academics were promoting Algol at the time, but a committee from IBM decided the company needed something that would do everything for every customer regardless of the platform.
They designed a programming language called PL/I and authored entire operating system structures around it. It lacked a single unifying vision and was ultimately a failure: it was too cumbersome, largely a collection of features that were at war with each other, and early compilers struggled to translate a great deal of PL/I code correctly.
This story illustrates just how important a clear vision is when designing a new operating system. Software engineers have to know what they want, and just as importantly, which features they can do without. Operating systems used by large industry data centers won't need many of the features required of operating systems designed for workstations or individual PCs.
Defining the Right Abstractions
Defining abstractions might be the most difficult task engineering crews face when authoring a new operating system. Some abstractions, like address spaces and files, have been around so long that they seem obvious. Even so, the intended workload will largely dictate how an operating system models files and processes.
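To make the idea of an abstraction concrete, here is a minimal sketch of a file abstraction: a single interface that every backing store must satisfy, with one in-memory implementation. The class and method names are illustrative inventions, not taken from any real kernel.

```python
from abc import ABC, abstractmethod

class File(ABC):
    """A minimal file abstraction: every backing store answers the
    same read/write questions, hiding device details from callers."""
    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...
    @abstractmethod
    def write(self, offset: int, data: bytes) -> None: ...

class MemoryFile(File):
    """One possible implementation: bytes held in RAM."""
    def __init__(self) -> None:
        self._buf = bytearray()

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self._buf[offset:offset + length])

    def write(self, offset: int, data: bytes) -> None:
        if offset > len(self._buf):
            # Zero-fill any gap, as a sparse write would.
            self._buf.extend(b"\x00" * (offset - len(self._buf)))
        self._buf[offset:offset + len(data)] = data

f = MemoryFile()
f.write(0, b"hello")
print(f.read(0, 5))  # b'hello'
```

A disk-backed or network-backed implementation could slot in behind the same interface, which is exactly what makes the abstraction valuable.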
Thread design is newer, and therefore less mature. Suppose a multithreaded process has one of its two threads blocked waiting for input, and the process then forks. Some operating system designers will give the new process a thread that is also waiting for input, while others will handle the situation differently. Synchronization and input-output modeling are also important considerations at this stage. General-purpose operating systems vary widely in how they decide these questions.
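The blocked-thread situation above can be sketched in a few lines. This is a user-level illustration only, assuming a hypothetical "input" delivered through a queue; it shows the property the designer must preserve: one thread of a process blocks on input while another thread keeps running.

```python
import queue
import threading

inbox = queue.Queue()   # stands in for a device the reader waits on
results = []

def reader() -> None:
    # Blocks here until input arrives -- the OS must decide what
    # the rest of the process may do in the meantime.
    results.append(inbox.get())

def worker() -> None:
    # The second thread keeps running while the reader is blocked.
    results.append("worked while reader was blocked")

t1 = threading.Thread(target=reader)
t2 = threading.Thread(target=worker)
t1.start()
t2.start()
t2.join()                    # worker finishes despite the blocked reader
inbox.put("input arrived")   # now unblock the reader
t1.join()
print(results)
```

Because the input is delivered only after the worker has finished, the worker's entry always lands in `results` first, which is the whole point: blocking one thread did not block the process.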
Ensure Isolation of Subsystems
Most system software is huge at this point. A majority of large data centers rely on some variant of UNIX, and major UNIX distributions run to millions of lines of code. Engineering specialists often struggle with the best way to isolate different subsystems from one another. When testing a new operating system, a team might find that the file system interacts with the memory system in some way they didn't foresee.
Operating system designers also have to consider concurrency. An operating system has to deal with multiple I/O devices at once, and managing all of these tasks simultaneously is very difficult. Solid engineering decisions have to be made to ensure that these subsystems don't interact with one another in ways that are beyond their design parameters.
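One standard way to keep subsystems from interacting unexpectedly is to put shared state behind a narrow, lock-protected interface. The sketch below is a toy example, not real kernel code: a hypothetical buffer cache touched by both "file system" and "memory system" code paths, where a single lock serializes every access.

```python
import threading

class BufferCache:
    """Toy shared structure used by more than one subsystem.
    All access goes through put/get, and a lock serializes them,
    so concurrent callers cannot corrupt the internal table."""
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._blocks = {}          # block number -> data

    def put(self, blockno, data) -> None:
        with self._lock:
            self._blocks[blockno] = data

    def get(self, blockno):
        with self._lock:
            return self._blocks.get(blockno)

cache = BufferCache()
# 100 concurrent writers, as if several subsystems hit the cache at once.
writers = [threading.Thread(target=cache.put, args=(i, b"x"))
           for i in range(100)]
for t in writers:
    t.start()
for t in writers:
    t.join()
print(len(cache._blocks))  # 100
```

The design choice being illustrated is the narrow interface: because no caller can reach the dictionary except through `put` and `get`, the unforeseen interactions the text warns about have far fewer places to hide.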
Job and Task Control
Most operating system kernels use some sort of data structure that contains the information needed to manage each process. These control blocks handle the jobs and tasks that the operating system juggles.
How to structure these process control blocks is one of the biggest engineering decisions developers make when authoring a kernel. These data structures are central to process management and are accessed and modified by almost every utility that ships with an OS. Scheduling software, memory utilities and I/O resource allocation routines all rely on them.
Monitoring system performance requires a strong process control block architecture as well. The set of PCBs defines the current state of the operating system, and process data structuring is almost always done in terms of PCBs of some kind. Pointers to PCBs allow the creation of process queues, which is of vital importance when developing operating systems for large installations, since these installations generally need a large amount of automation.
Each PCB contains critical information a process relies on, so it has to be kept in an area of memory that's protected from normal access by users and other software. Many operating systems solve this problem by placing the PCB at the base of the kernel stack for that process, an area that is already protected from regular access.
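The PCB-and-queue arrangement described above can be sketched as follows. The field names here are hypothetical, chosen for illustration rather than taken from any particular kernel; the point is that scheduler queues hold references to PCBs, so moving a process between queues never copies its state.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Hypothetical process control block; real kernels carry far
    more state (open files, credentials, accounting, and so on)."""
    pid: int
    state: str = "ready"            # ready / running / blocked
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

# Scheduler queues are just lists of pointers (references) to PCBs.
ready_queue = deque([PCB(pid=1), PCB(pid=2)])
blocked_queue = deque()

running = ready_queue.popleft()     # dispatch: pick the next process
running.state = "running"

running.state = "blocked"           # it issues I/O and must wait ...
blocked_queue.append(running)       # ... so it moves to the blocked queue

print([p.pid for p in ready_queue], [p.pid for p in blocked_queue])
```

Because the queues share the same PCB objects, a state change made by the scheduler is immediately visible to the memory and I/O subsystems that hold the same pointers, which is why the text calls PCBs central to process management.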
Storage Management Techniques
How an operating system deals with storage is another important engineering decision. Hierarchical storage management is a term that has gained a great deal of popularity in recent years. It refers to a scheme in which data is automatically migrated between fast, high-cost media and slower, low-cost media. Optical discs, tape drives and similar solutions are less expensive, but their access times are much longer. These techniques are generally implemented by standalone software, but there is no reason an operating system developer couldn't build them in.
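A minimal sketch of the migration policy behind hierarchical storage management might look like the following. The tier names and the idle-time rule are assumptions made for illustration; real HSM products use far richer policies (size, access frequency, quotas).

```python
import time

FAST_TIER, SLOW_TIER = "ssd", "tape"

class HSM:
    """Toy hierarchical storage manager: any file idle for longer
    than max_idle seconds is demoted from the fast tier to the slow one."""
    def __init__(self, max_idle: float) -> None:
        self.max_idle = max_idle
        self.files = {}     # name -> (tier, last access time)

    def touch(self, name: str) -> None:
        # An access promotes the file to the fast tier.
        self.files[name] = (FAST_TIER, time.monotonic())

    def migrate(self) -> None:
        # Periodic sweep: demote anything that has gone idle.
        now = time.monotonic()
        for name, (tier, last) in self.files.items():
            if tier == FAST_TIER and now - last > self.max_idle:
                self.files[name] = (SLOW_TIER, last)

hsm = HSM(max_idle=0.01)
hsm.touch("report.dat")
time.sleep(0.05)          # report.dat goes idle
hsm.touch("hot.dat")      # hot.dat was just accessed
hsm.migrate()
print(hsm.files["report.dat"][0], hsm.files["hot.dat"][0])
```

Building such a sweep into the OS itself, rather than into standalone software, would let the file system trigger promotion on every access instead of polling, which is the opportunity the paragraph above hints at.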