Improving System Performance

Computer system performance is determined by the amount of work that the
system can perform within a given time using a given amount of
resources (Dongarra, 2013). Since the advent of computers, technology
firms have focused on improving the performance of existing systems
through innovation. Consequently, different players in the technology
industry have coined several terms that describe their innovations and
the capacity of those advancements to improve the performance of
existing systems. The innovative concepts coined include reduced
instruction set computing (RISC), pipelining, cache memory, and virtual
memory. The current study will address the historical development of
these four innovations and their effectiveness. Although all four
innovations aim to improve system performance, instruction pipelining is
the most important technological development in the field of computer
technology.
Evolution of Reduced Instruction Set Computing (RISC)
The development of the RISC architecture began in 1975 at the IBM
Research Center and was completed in the 1980s (Oklobdzija, 1999).
However, since information about this development was withheld from the
public, similar projects were initiated in the early 1980s at the
University of California, Berkeley as well as Stanford University.
Although the research projects were carried out by different
institutions, all the players aimed to develop a performance-oriented
architecture based on parallelism via pipelining. These efforts were
initiated following the observation that only about 10 instructions
from the instruction repertoire were actually used 90% of the time. The
target was to favor these selected instructions by reducing their cycle
counts while emulating the rest of the instructions.
The development of the RISC architecture was a significant milestone in
computer technology because most of the mainstream architectures
available today (including SPARC and MIPS) are of the RISC type
(Oklobdzija, 1999). The RISC-I processor designed at Berkeley in 1982
utilized 44,420 transistors and outperformed other single-chip designs.
RISC-II, designed in 1983, was three times faster than RISC-I and
carried 39 instructions. Through the evolution of the RISC
architecture, testing microprocessors has been made easier, smaller
instruction codes can now be developed, and high-level languages can
now produce efficient code. Other architectures (such as SPARC) were
developed from the 1990s until 2011, when the RISC-V project was
initiated to enhance several features of the architecture, such as
heterogeneous multiprocessing, many-core designs, and dense instruction
encoding (Waterman, Lee, Patterson & Asanović, 2011).
Evolution of Instruction Pipelining
Pipelining technology made a major contribution to the development of
supercomputers, including array and vector processors. The X-MP line,
one of the early supercomputers developed by Cray Research, utilized
pipelining for multiply and add/subtract functions. Star Technologies
later added pipeline functions that worked in parallel, an advance
extended by the inclusion of a pipeline divide circuit in 1984. This
paved the way for the integration of pipeline technology into the
microprocessors used in computers today.
The development of pipelining allowed instructions to move to the
processor in a continuous, overlapped manner. The integration of
instruction pipelining in computer architecture allows the next
instruction to be fetched while the processor is carrying out
arithmetic operations; fetched instructions are held in a buffer until
each can be performed. This improves system performance to levels that
non-pipelined processing cannot achieve. However, there are two major
challenges of pipelined architecture. First, the architecture cannot
run at maximum speed as a result of structural hazards, data hazards,
and control hazards (Cheng, 2013). Secondly, the process of detecting
and preventing these hazards makes the system more complicated, thus
reducing its efficiency.
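The performance benefit of the overlap described above can be sketched with a simple cycle-count model. This is an illustrative simplification, not taken from the sources cited: it assumes a classic five-stage pipeline and ignores the structural, data, and control hazards the paragraph mentions.

```python
# Simplified cycle-count model of pipelined vs. non-pipelined execution.
# Assumes an ideal five-stage pipeline (IF, ID, EX, MEM, WB) with no
# hazards -- the best case the text describes.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def cycles_unpipelined(n_instructions, n_stages=len(STAGES)):
    # Each instruction must pass through all stages before the next begins.
    return n_instructions * n_stages

def cycles_pipelined(n_instructions, n_stages=len(STAGES)):
    # After the pipeline fills (n_stages cycles), one instruction
    # completes every cycle.
    return n_stages + (n_instructions - 1)

n = 100
print(cycles_unpipelined(n))  # 500 cycles
print(cycles_pipelined(n))    # 104 cycles, roughly a 4.8x speedup here
```

In practice, hazards force stalls that keep real pipelines below this ideal throughput, which is the limitation noted above.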
Evolution of Cache Memory
Cache memory is a memory system that supplements the main memory and
functions by temporarily storing frequently used instructions in a way
that facilitates faster processing by the central processor (Khatoon &
Mirza, 2011). In the early days of computer technology, memory
technologies included magnetic core, semiconductor, and disc. Caching
was only used to fetch instructions and data into a faster memory
before the processor accessed them. Further research resulted in the
development of larger caches, whose size depended on the type of
programming language. Cache development strategies have focused on cost
and the average speed of instruction execution, but fault tolerance and
energy efficiency are priority issues in ongoing research.
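The principle described above, keeping frequently used items in a small fast memory in front of a slower one, can be sketched with a toy cache. The least-recently-used (LRU) replacement policy, the capacity, and the dictionary standing in for main memory are illustrative assumptions, not details from the sources cited.

```python
# Toy cache in front of slower "main memory", with LRU replacement.
# Capacity and the backing dictionary are illustrative assumptions.
from collections import OrderedDict

class Cache:
    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store   # stands in for slow main memory
        self.capacity = capacity
        self.lines = OrderedDict()     # address -> value, in LRU order
        self.hits = self.misses = 0

    def read(self, addr):
        if addr in self.lines:
            self.hits += 1
            self.lines.move_to_end(addr)           # mark as recently used
        else:
            self.misses += 1
            self.lines[addr] = self.backing[addr]  # fetch from main memory
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)     # evict the LRU line

        return self.lines[addr]

memory = {addr: addr * 10 for addr in range(16)}
cache = Cache(memory, capacity=4)
for addr in [0, 1, 0, 2, 0, 3, 0, 1]:  # address 0 is reused frequently
    cache.read(addr)
print(cache.hits, cache.misses)  # prints: 4 4
```

The repeated accesses to address 0 hit the cache and skip the slow backing store, which is exactly the effect that makes caching improve average access time.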
The major challenge facing researchers is cache optimization, given the
large number of processor cores sharing the processor memory bandwidth.
According to Khatoon and Mirza (2011), improving on-chip memory
utilization will increase overall memory performance and that of other
applications running in the system. This implies that overall
performance will be determined by the throughput of multiple programs
running on multiple cores located within the same chip.
Evolution of Virtual Memory
Virtual memory utilizes OS software and architectural mechanisms to
hide the limited capacity of physical memory from programs (Hennessy &
Patterson, 2002). The initial implementation of virtual memory in
mainstream operating systems faced several challenges, such as the need
for specialized hardware and the risk of memory slowdown. These
challenges were overcome through extensive research that resulted in
virtual memory systems with better performance than manually controlled
systems. However, the introduction of virtual memory in the x86
architecture resulted in poor performance, forcing researchers to use
paging instead of combining paging and segmentation techniques
(Hennessy & Patterson, 2002). The innovation of virtual memory had two
major benefits for computer technology. First, virtual memory stores
excess data when the physical memory is full, especially if the user
opens multiple programs. Secondly, programmers can create large and
complicated programs that utilize both physical and virtual memory.
However, the significance of virtual memory may decline with the
current downward trend in the price of memory chips.
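The paging mechanism mentioned above can be sketched as a simple address translation: a virtual address is split into a page number and an offset, and a page table maps the page number to a physical frame. The page size, table contents, and addresses below are illustrative assumptions, not details from Hennessy and Patterson.

```python
# Toy sketch of paged virtual-to-physical address translation.
# Page size and page-table contents are illustrative assumptions.
PAGE_SIZE = 4096  # 4 KiB pages

# Page table: virtual page number -> physical frame number.
# None marks a page that is not resident; a real OS would service a
# page fault by loading it from disk.
page_table = {0: 5, 1: 2, 2: None}

def translate(virtual_addr):
    vpn = virtual_addr // PAGE_SIZE    # virtual page number
    offset = virtual_addr % PAGE_SIZE  # offset within the page
    frame = page_table.get(vpn)
    if frame is None:
        raise RuntimeError(f"page fault at address {virtual_addr:#x}")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # virtual page 1 -> frame 2: 0x2234
```

Because the mapping is per page, the physical frames backing a program can live anywhere in memory (or on disk), which is what lets programs exceed the size of physical memory.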
In conclusion, efforts to improve system performance are an ongoing
process, and the four innovations considered in this paper (RISC,
pipelining, cache memory, and virtual memory) are still being reviewed
for further development. Although all four innovations contributed to
the development of better-performing systems, pipelining was the most
important. This is because the development of instruction pipelining
paved the way for computer systems that could support the execution of
several operations simultaneously. The innovation enabled researchers
to develop systems with the capacity to overlap instructions without
the need for additional hardware, because instruction pipelining allows
the system to utilize different parts of the same hardware to execute
different instructions. Moreover, instruction pipelining is a
technology that is unlikely to become obsolete in the near future,
unlike techniques such as virtual memory, whose application may be
reduced by modern, inexpensive memory chips.
References
Cheng, C. (2013). Design examples of useful memory latency for
developing a hazard preventive pipeline high-performance
embedded-microprocessor. VLSI Design, 2013, 1-10.
Dongarra, J. (2013). Performance of various computers using standard
linear equations software. Knoxville: University of Tennessee.
Hennessy, J. L. & Patterson, D. A. (2002). Computer architecture: A
quantitative approach. Burlington: Morgan Kaufmann.
Khatoon, H. & Mirza, H. (2011). Improving memory performance using
cache optimization in chip multiprocessors. Sindh University Research
Journal, 43, 57-62.
Oklobdzija, V. G. (1999). Reduced instruction set computers. Berkeley:
University of California.
Waterman, A., Lee, Y., Patterson, D. & Asanović, K. (2011). The RISC-V
instruction set manual, volume I: Base user-level ISA. Berkeley:
University of California.
