Elements of Computer System Organization

According to Moore’s Law, the number of transistors in an integrated circuit will double every two years. Intel co-founder Gordon E. Moore made this observation in 1975, refining a more optimistic formulation he had published in 1965. As a central figure at one of the world’s leading semiconductor companies, he had a vantage point few others could match.

When Moore was writing, computer hardware was constrained by cost and performance limitations that no longer exist. Still, today’s computer scientists have inherited something vital from their forebears: an understanding that, to achieve peak performance, systems must be properly organized.

Organizing hardware components efficiently allowed computing pioneers to:

  • Overcome power consumption and cooling problems that limited processor speed.
  • Accelerate data processing rates to accommodate resource-intensive applications.
  • Mitigate, then eliminate, “bottlenecks” and points of failure in the CPU and RAM.

In short, computer scientists were constantly challenged to do more with less. Although resources have grown enormously since 1975, all modern computer infrastructure is built on those early lessons. Optimizing a system for efficiency allows users to extract peak performance from each new generation of hardware, which in turn accelerates the pace at which innovative technology can be realized.

Let’s look at some computer components and consider their organizing principles.

The Motherboard

In computer parlance, the motherboard is one of the “Big Four” – the devices computer experts should evaluate first when a system is not working as it should. In many ways, the motherboard is the most important device, since it supplies power and maintains connections between all major internal components.

The motherboard is a printed circuit board that contains microchips providing a system’s basic logic. One of the first organizational challenges in modern computer science was to develop a motherboard architecture providing adequate data throughput for devices with increasingly heavy needs, including RAM and video cards.

Intel and AMD, two of the largest global semiconductor firms, took different paths to this goal – in fact, their competing approaches often provided the creative energy for major hardware innovations. Many Intel designs use a single chip to service most major devices, while AMD divides the duties between a northbridge, which manages high-bandwidth connections such as memory and graphics, and a southbridge, which handles the CPU’s communication with peripherals and secondary drives.

The CPU (Central Processing Unit or Processor)

The processor orchestrates the functions of all hardware components. It acts on input from devices – including internal components like the video card and external devices like the keyboard – and performs calculations. It also works with the motherboard’s quartz clock to provide the internal “rhythm” that devices rely on to time their communications.

Under ideal conditions, the CPU is the major determinant of a computer’s speed. These days, its base speed is usually measured in gigahertz (GHz) – a 3-GHz processor completes up to three billion clock cycles per second, with the amount of data handled in each cycle limited by the width of its registers. The first 1-GHz processors for end users were introduced in 2000 by AMD and Intel – today, consumer processors routinely run above 4 GHz.
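
To put those numbers in perspective, here is a minimal Python sketch of the arithmetic. The instructions-per-cycle (IPC) figure is purely an illustrative assumption; real processors complete more or fewer instructions per cycle depending on workload and architecture.

    # Sketch: theoretical upper bound on instruction throughput from clock speed.
    # The instructions-per-cycle (IPC) value is an illustrative assumption; real
    # CPUs vary widely depending on workload and architecture.

    def peak_instructions_per_second(clock_hz: float, ipc: float = 1.0) -> float:
        """Upper bound on instructions completed per second."""
        return clock_hz * ipc

    # A 3-GHz processor retiring one instruction per cycle:
    print(f"{peak_instructions_per_second(3e9):.2e} instructions per second")  # 3.00e+09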

In early computing, the processor was a major point of failure. It is the component that runs hottest, so a system design incorporating a high-end processor must optimize thermal efficiency and provide specialized cooling. At the same time, historical CPUs were prone to bandwidth bottlenecks across the external buses connecting them to other devices. To overcome these issues, scientists created new motherboard configurations, processor architectures, and form factors for key components.

As organization improved, so did processor efficiency, and end users no longer needed to overclock their computers to achieve short bursts of peak performance. In overclocking, a CPU is pushed to run faster than its rated maximum clock speed. For a component that can already reach temperatures of around 140 degrees Fahrenheit, this was a huge risk! Here, finding the right way to organize components was vital to safety.

RAM (Random Access Memory)

RAM, or main memory, is a temporary workspace that allows instructions to be processed while the computer is operating. RAM is volatile, meaning its contents are erased when the device is powered down, in contrast to the non-volatile hard disk. In most RAM, data is stored on capacitors that must be “refreshed” frequently to avoid data loss, limiting the performance of the memory module as a whole.
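
As a back-of-the-envelope sketch of why refreshing matters, the following Python snippet estimates how often a memory controller must issue a refresh. The 64-millisecond retention window and 8,192-row count are typical textbook figures, not properties of any particular module.

    # Sketch: how often a DRAM controller must refresh a row to avoid data loss.
    # The 64 ms retention window and 8192 rows are typical textbook figures.

    RETENTION_MS = 64        # every row must be refreshed within this window
    ROWS = 8192              # illustrative row count for one bank

    refresh_interval_us = RETENTION_MS * 1000 / ROWS
    print(f"Roughly one row refresh every {refresh_interval_us:.1f} microseconds")  # ~7.8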

For most of computing history, RAM limitations were vexing for both scientists and consumers. RAM was a pricey bottleneck that could limit the effectiveness of even the best processor. When Samsung brought synchronous dynamic RAM (SDRAM) into wide circulation in 1993, it completely changed hardware design. SDRAM synchronizes its operations with the system bus; later implementations, known as double data rate (DDR) SDRAM, transfer data on both the rising and falling edge of each clock cycle, doubling the data rate at a given clock speed – hence the name.
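
A quick Python sketch makes the doubling concrete. The 200-MHz clock and 64-bit bus width are illustrative figures (roughly DDR-400-class numbers), not the specification of any particular module.

    # Sketch: peak transfer rate of single data rate vs. double data rate memory.
    # Clock and bus width are illustrative (roughly DDR-400-class figures).

    def peak_bandwidth_mb_per_s(clock_mhz: float, bus_width_bits: int,
                                transfers_per_cycle: int) -> float:
        """Theoretical peak bandwidth in megabytes per second."""
        return clock_mhz * transfers_per_cycle * (bus_width_bits / 8)

    clock_mhz, width_bits = 200, 64                          # 200 MHz bus, 64 bits wide
    sdr = peak_bandwidth_mb_per_s(clock_mhz, width_bits, 1)  # one transfer per cycle
    ddr = peak_bandwidth_mb_per_s(clock_mhz, width_bits, 2)  # both clock edges
    print(f"SDR: {sdr:.0f} MB/s  DDR: {ddr:.0f} MB/s")       # 1600 MB/s vs. 3200 MB/s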

Hard Disk Drive (HDD) and Solid-State Drive (SSD)

Hard disk drives provide the basic nonvolatile storage in non-portable systems. Data is written by magnetizing regions of ferromagnetic material on rotating platters, with the direction of magnetization encoding binary code. While most consumers think of the HDD in terms of capacity, rotational speed and data throughput are key limiting factors in high-performance systems. After the read/write head moves to the correct track, it must also wait for the desired sector to rotate beneath it before the data can be read. The higher a drive’s rotational speed, the lower this rotational latency will be.
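
The relationship between rotational speed and latency is simple arithmetic: on average, the head waits about half a revolution. The short Python sketch below uses only that standard formula; no drive-specific values are assumed.

    # Sketch: average rotational latency is about half of one platter revolution.

    def avg_rotational_latency_ms(rpm: int) -> float:
        """Average wait, in ms, for the target sector to rotate under the head."""
        ms_per_revolution = 60_000 / rpm
        return ms_per_revolution / 2

    for rpm in (5400, 7200, 15000):
        print(f"{rpm} RPM: ~{avg_rotational_latency_ms(rpm):.2f} ms average rotational latency")
    # 5400 RPM: ~5.56 ms   7200 RPM: ~4.17 ms   15000 RPM: ~2.00 ms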

Although solid-state drives have been available since 1976, price prevented their use in all but the most high-end environments. In 1978, an SSD was the size of a filing cabinet and cost $400,000. Today, SSDs significantly reduce a system’s design overhead: built from interconnected flash memory chips, they eliminate the need for spinning platters and read heads. This extends drive life and improves speed: an SSD typically has random read latency under one millisecond, while 5,400-7,200 RPM HDDs often take between six and eight milliseconds per access.
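
As a rough comparison based on the latency figures above – and treating latency as the only cost, which ignores queuing, controller overhead, and transfer time – a few lines of Python show how wide the random-read gap is. The specific latency values are illustrative.

    # Rough comparison: upper bound on random reads per second if access latency
    # were the only cost. Latency figures mirror the text above and are illustrative.

    hdd_latency_ms = 7.0    # mid-range of the 6-8 ms figure for 5,400-7,200 RPM drives
    ssd_latency_ms = 0.1    # a typical sub-millisecond flash read, assumed here

    print(f"HDD: ~{1000 / hdd_latency_ms:.0f} random reads per second")  # ~143
    print(f"SSD: ~{1000 / ssd_latency_ms:.0f} random reads per second")  # ~10000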

Future Challenges in Computer Systems Organization

Today’s systems are designed in an environment of great abundance compared to three decades ago. Still, it is important for computer science students to keep their eyes fixed on the core principles of efficiency and organization. Hardware of the near future will integrate transformative technology, and using it to the fullest will demand creativity and analytical skill.

Tomorrow’s computer science leaders will encounter:

  • Nanotechnology: At the “nanoscale” – dimensions of just a few atoms – many materials exhibit novel properties. The design of new materials or components at this scale may provide the solution to looming limits on processor speed.
  • Biotechnology: Biotechnology is the use of living organisms in technological systems. Future biological computers may store information using chemical reactions; in fact, the first DNA-based transistor has already been developed.
  • Neuromorphics: The vast computational power of the human brain comes from the dense interconnection of its neurons, in contrast to the largely sequential design of conventional computers. Technology that mimics the brain may be the key to future AI and robotics.

Students who can take a holistic view and understand how the challenges of the past relate to those of the present will be poised to lead great innovations in the 21st century. Doing so demands understanding, expanding, and challenging the organizational principles of the past.

