Chapter 8
System Software
Chapter 8 Objectives
• System Software and Its Role
– Operating Systems
– Programming Tools
– Database Software
– Transaction Managers
8.1 Introduction
• Computers are not useful if they cannot be
used to perform work.
• Users interact with the computer via programs.
• System software, such as an operating system,
allows application programs to be loaded and
run by users.
8.2 Operating Systems
• The evolution of operating systems has paralleled
the evolution of computer hardware.
– As hardware became more powerful, operating systems
allowed people to more easily manage the power of the
machine.
• In the days when main memory was measured in
kilobytes, and tape drives were the only form of
magnetic storage, operating systems were simple
resident monitor programs.
– The resident monitor could only load, execute, and
terminate programs.
8.2 Operating Systems
• In the 1960s, hardware became powerful
enough to accommodate multiprogramming, the
concurrent execution of more than one task.
• Multiprogramming is achieved by allocating each
process a given portion of CPU time (a timeslice).
• Interactive multiprogramming systems were
called timesharing systems.
– When a process is taken from the CPU and replaced by
another, we say that a context switch has occurred.
8.2 Operating Systems
• Today, multiprocessor systems have become
commonplace.
– They present an array of challenges to the operating
system designer, including the manner in which the
processors will be synchronized, and how to keep their
activities from interfering with each other.
• Tightly coupled multiprocessor systems share a
common memory and the same set of I/O devices.
– Symmetric multiprocessor systems are tightly coupled and
load balanced.
8.2 Operating Systems
• Loosely coupled multiprocessor systems have
physically separate memory.
– These are often called distributed systems.
– Another type of distributed system is a networked system,
which consists of a collection of interconnected,
collaborating workstations.
• Real-time operating systems control computers that
respond to their environment.
– Hard real-time systems have tight timing constraints;
soft real-time systems do not.
8.2 Operating Systems
• Personal computer operating systems are designed
for ease of use rather than high performance.
• The idea that revolutionized small computer
operating systems was the BIOS (basic input/output
system).
– Different computers with different hardware but the same
architecture could run the same operating system.
8.2 Operating Systems
• Operating systems used to be purely text based
with a command line interface (CLI).
– Graphical user interfaces were first brought to market in
the 1980s and were mainly targeted toward desktop
publishing and games, but they are everywhere today.
• Many design concepts from UNIX continue on in
the popular Linux operating system.
– Most servers run a “flavor” of GNU/Linux
– Android phones use the Linux Kernel
8.2 Operating Systems
• Two operating system components are crucial:
The kernel and the system programs.
• As the core of the operating system, the kernel
performs scheduling, synchronization, memory
management, and interrupt handling, and it provides
security and protection.
– Microkernel systems provide minimal functionality, with
most services carried out by external programs.
– Monolithic systems provide most of their services within a
single operating system program.
– Most commercially viable operating systems are
somewhere in between.
8.2 Operating Systems
• Process management lies at the heart of operating
system services.
– The operating system creates processes, schedules their
access to resources, deletes processes, and deallocates
resources that were allocated during process execution.
• The operating system monitors the activities of each
process to avoid synchronization problems that can
occur when processes use shared resources.
• If processes need to communicate with one another,
the operating system provides interprocess
communication (IPC) services.
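As a minimal sketch of IPC (not from the slides), the example below has two processes exchange messages through an OS-mediated queue via Python's multiprocessing module; the process roles and message text are invented for illustration.

```python
# IPC sketch: a producer process sends messages to a consumer
# through a queue managed by the operating system.
from multiprocessing import Process, Queue

def producer(q: Queue) -> None:
    for i in range(3):
        q.put(f"message {i}")   # enqueue; the OS mediates the transfer
    q.put(None)                 # sentinel: tell the consumer to stop

def consumer(q: Queue) -> None:
    while (msg := q.get()) is not None:
        print("received:", msg)

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    c = Process(target=consumer, args=(q,))
    p.start(); c.start()
    p.join(); c.join()
```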
8.2 Operating Systems
• The operating system schedules process execution.
• First, the operating system determines which
processes will be admitted to the system.
– This is long-term scheduling.
• The operating system then determines which one will
have access to the CPU at any particular moment.
– This is short-term scheduling.
• Context switches occur when a process is taken from
the CPU and replaced by another process.
– Information relating to the state of the process is preserved
during a context switch.
• Context switches can be cooperative or preemptive.
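To make short-term scheduling concrete, here is a toy round-robin simulation (a sketch, not a real dispatcher); the process names, burst times, and timeslice are invented, and the remaining burst time stands in for the saved process state.

```python
# Toy round-robin scheduler: each process runs for at most one
# timeslice, then is preempted (a context switch) and requeued.
from collections import deque

def round_robin(burst_times: dict[str, int], timeslice: int = 2) -> None:
    ready = deque(burst_times)           # ready queue of process names
    remaining = dict(burst_times)        # work left per process
    while ready:
        pid = ready.popleft()            # dispatch: this process gets the CPU
        run = min(timeslice, remaining[pid])
        remaining[pid] -= run
        print(f"{pid} runs {run} unit(s)")
        if remaining[pid] > 0:
            ready.append(pid)            # preempted: context switch, requeue
        else:
            print(f"{pid} terminates")

round_robin({"P1": 5, "P2": 3, "P3": 1})
```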
8.3 Protected Environments
• In their role as resource managers and protectors,
many operating systems provide protected
environments that isolate processes, or groups of
processes, from one another.
• The operating system is the sole controller of virtual
memory.
– Processes and subsystems cannot access memory
belonging to another process or subsystem.
– Processes and subsystems cannot access memory-
mapped I/O unless the operating system allocates it.
8.4 Programming Tools
• Programming tools carry out the mechanics of software
creation within the confines of the operating system
and hardware environment.
• Assemblers are the simplest of all programming tools.
They translate mnemonic instructions to machine code.
• Most assemblers carry out this translation in two
passes over the source code.
– The first pass partially assembles the code and builds the
symbol table
– The second pass completes the instructions by supplying
values stored in the symbol table.
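The sketch below shows the two-pass idea for a hypothetical one-address machine; the opcodes, instruction format, and program are invented for illustration.

```python
# Two-pass assembly sketch for a made-up one-address machine.
OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3, "HALT": 0x7}

def assemble(source: list[str]) -> list[int]:
    symtab: dict[str, int] = {}
    stmts: list[list[str]] = []
    # Pass 1: assign an address to every statement; record label addresses.
    for line in source:
        tokens = line.split()
        if tokens[0].endswith(":"):          # leading label, e.g. "X:"
            symtab[tokens[0][:-1]] = len(stmts)
            tokens = tokens[1:]
        stmts.append(tokens)
    # Pass 2: emit words, resolving symbolic operands from the table.
    words = []
    for tokens in stmts:
        if tokens[0] in OPCODES:             # instruction word
            operand = symtab[tokens[1]] if len(tokens) > 1 else 0
            words.append((OPCODES[tokens[0]] << 12) | operand)
        else:                                # labeled data word
            words.append(int(tokens[0]))
    return words

prog = ["LOAD X", "ADD Y", "STORE X", "HALT", "X: 2", "Y: 3"]
print([f"{w:04x}" for w in assemble(prog)])  # 1004 2005 3004 7000 0002 0003
```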
8.4 Programming Tools
• The output of most assemblers is a stream of
relocatable binary code.
– In relocatable code, operand addresses are relative to
where the operating system chooses to load the program.
– Absolute (nonrelocatable) code is most suitable for
device and operating system control programming.
• When relocatable code is loaded for execution,
special registers provide the base addressing.
• Addresses specified within the program are
interpreted as offsets from the base address.
8.4 Programming Tools
• The process of assigning physical addresses to
program variables is called binding.
• Binding can occur at compile time, load time, or run
time.
• Compile time binding gives us absolute code.
• Load time binding assigns physical addresses as
the program is loaded into memory.
– With load-time binding, the program cannot be moved!
• Run time binding requires a base register to carry
out the address mapping.
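A minimal sketch of run-time binding (all addresses invented): the operand offsets in the program stay fixed, while the base register chosen when the program is loaded supplies the physical addresses.

```python
# Run-time binding: effective address = base register + offset, so the
# OS can load (or even move) the program anywhere in memory.
def effective_address(base_register: int, offset: int) -> int:
    return base_register + offset

offsets = [0x0004, 0x0010, 0x0024]      # operand offsets in the program
for base in (0x4000, 0xA000):           # two different load addresses
    print([hex(effective_address(base, o)) for o in offsets])
```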
8.4 Programming Tools
• On most systems, binary instructions must pass
through a link editor (or linker) to create an
executable module.
• Link editors incorporate various binary routines into
a single executable file as called for by a program’s
external symbols.
• Like assemblers, link editors perform two passes:
The first pass creates a symbol table and the
second resolves references to the values in the
symbol table.
[Figure: the two-pass link-editing process, shown schematically]
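The sketch below mimics the linker's two passes over two hypothetical object modules, main.o and mathlib.o; the module sizes, layout, and symbol names are invented.

```python
# Link-editing sketch: pass 1 lays the modules out and builds a global
# symbol table from their exports; pass 2 patches external references.
modules = {
    "main.o":    {"size": 8, "exports": {"main": 0}, "refs": {"sqrt": [3]}},
    "mathlib.o": {"size": 6, "exports": {"sqrt": 2}, "refs": {}},
}

# Pass 1: place modules one after another; globalize exported symbols.
offset, symtab, layout = 0, {}, {}
for name, mod in modules.items():
    layout[name] = offset
    for sym, addr in mod["exports"].items():
        symtab[sym] = offset + addr
    offset += mod["size"]

# Pass 2: resolve each external reference to its absolute address.
for name, mod in modules.items():
    for sym, uses in mod["refs"].items():
        for use in uses:
            print(f"{name}: patch word {layout[name] + use} -> {symtab[sym]}")
```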
8.4 Programming Tools
• Dynamic linking delays link editing until load time
or run time.
• External modules are loaded from dynamic link
libraries (DLLs).
• Load time dynamic linking slows down program
loading, but calls to the DLLs are faster.
• Run time dynamic linking occurs when an external
module is first called, causing slower execution time.
– Dynamic linking makes program modules smaller, but
carries the risk that the programmer may not have control
over the DLL.
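Run-time dynamic linking can be observed from Python with the standard ctypes module; note that the library name below assumes a Linux system (use "libm.dylib" on macOS, for example) and is otherwise platform-specific.

```python
# The shared library is located and loaded only when this code runs,
# not when the program was built.
import ctypes

libm = ctypes.CDLL("libm.so.6")          # load the module at run time
libm.cos.restype = ctypes.c_double       # declare the C signature
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))                     # calls into the DLL: prints 1.0
```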
8.4 Programming Tools
• Assembly language is considered a “second
generation” programming language (2GL).
• Compiled programming languages, such as C,
C++, Pascal, and COBOL, are “third generation”
languages (3GLs).
• Each language generation presents problem
solving tools that are closer to how people think
and farther away from how the machine
implements the solution.
8.4 Programming Tools
Keep in mind that the computer can understand only the 1GL!
8.4 Programming Tools
• Compilers bridge the semantic gap between the
higher level language and the machine’s binary
instructions.
• Most compilers effect this translation in a six-phase
process. The first three are analysis phases:
1. Lexical analysis extracts tokens, e.g., reserved words and
variables.
2. Syntax analysis (parsing) checks statement construction.
3. Semantic analysis checks data types and the validity of
operators.
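As a sketch of the first analysis phase, the toy lexer below extracts and classifies tokens; the token classes and reserved words are a made-up subset chosen for illustration.

```python
# Lexical analysis: split a statement into classified tokens.
import re

TOKEN_SPEC = [
    ("NUMBER",   r"\d+"),
    ("RESERVED", r"\bif\b|\bwhile\b|\breturn\b"),
    ("IDENT",    r"[A-Za-z_]\w*"),
    ("OP",       r"[+\-*/=<>]"),
    ("SKIP",     r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(code: str):
    for m in MASTER.finditer(code):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("if count < 10 return count + 1")))
```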
8.4 Programming Tools
• The last three compiler phases are synthesis
phases:
4. Intermediate code generation creates three address code to
facilitate optimization and translation.
5. Optimization creates assembly code while taking into account
architectural features that can make the code efficient.
6. Code generation creates binary code from the optimized
assembly code.
• Through this modularity, compilers can be written
for various platforms by rewriting only the last two
phases.
[Figure: the six-phase compilation process, shown graphically]
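A sketch of phase 4, translating an expression tree into three-address code, where every statement applies at most one operator; the tree encoding and temporary names are invented for illustration.

```python
# Intermediate code generation: flatten an expression tree into
# three-address statements, one operator per statement.
from itertools import count

def three_address(node, temps):
    """Return (name holding node's value, list of three-address stmts)."""
    if isinstance(node, str):                # a leaf: variable name
        return node, []
    op, left, right = node
    lname, lcode = three_address(left, temps)
    rname, rcode = three_address(right, temps)
    tname = f"t{next(temps)}"
    return tname, lcode + rcode + [f"{tname} = {lname} {op} {rname}"]

expr = ("+", ("*", "a", "b"), ("*", "c", "d"))   # a * b + c * d
name, code = three_address(expr, count(1))
print("\n".join(code))    # t1 = a * b ; t2 = c * d ; t3 = t1 + t2
```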
8.4 Programming Tools
• Interpreters produce executable code from source
code in real time, one line at a time.
• Consequently, interpreted languages are not only
slower than compiled languages but also afford
less opportunity for error checking.
• Interpreted languages are, however, very useful
for teaching programming concepts, because
feedback is nearly instantaneous, and
performance is rarely a concern.
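A minimal sketch of line-at-a-time interpretation (Python's eval is used purely for illustration): each statement is translated and executed before the next one is even read, so feedback is immediate.

```python
# Each line is analyzed and run on the spot; errors surface only when
# the offending line is reached.
def interpret(lines: list[str]) -> None:
    env: dict[str, int] = {}
    for lineno, line in enumerate(lines, 1):
        var, expr = line.split("=", 1)          # "translate" this line only
        env[var.strip()] = eval(expr, {}, env)  # ...and execute immediately
        print(f"line {lineno}: {env}")

interpret(["x = 2", "y = x * 10", "z = x + y"])
```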
8.5 Java: All of the Above
• The Java programming language exemplifies many of
the concepts that we have discussed in this chapter.
• Java programs (classes) execute within a virtual
machine, the Java Virtual Machine (JVM).
– Java compiles into JVM machine code
– The JVM interprets JVM machine code to run it
• Because the JVM is an interpreter, it runs more slowly
than native machine code.
– The JVM will run compiled Java code on any architecture.
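The toy stack machine below is a sketch in the spirit of the JVM, not real Java bytecode: a "compiled" instruction stream that any host architecture could interpret with the same loop.

```python
# A tiny stack-based virtual machine: the instruction stream is the
# portable artifact; the interpreter loop is what each host implements.
def run(bytecode: list[tuple]) -> int:
    stack: list[int] = []
    for op, *args in bytecode:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4, as a portable instruction stream
print(run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))
```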
8.6 Database Software
• Database systems provide a single definition,
the database schema, for the data elements
that are accessed by application programs.
– A physical schema is the computer’s view of the database
that includes locations of physical files and indexes.
– A logical schema is the application program’s view of the
database that defines field sizes and data types.
• Within the logical schema, certain data fields
are designated as record keys that provide
efficient access to records in the database.
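As a sketch using Python's built-in sqlite3 module (the table and field names are invented), a logical schema defines the field types, and the primary key serves as the record key the engine indexes for efficient access.

```python
# Logical schema sketch: CREATE TABLE defines data types; the PRIMARY
# KEY designates the record key used for keyed lookups.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE student (
                    id    INTEGER PRIMARY KEY,   -- record key
                    name  TEXT NOT NULL,
                    gpa   REAL)""")
conn.execute("INSERT INTO student VALUES (1, 'Ada', 4.0)")
print(conn.execute("SELECT name FROM student WHERE id = 1").fetchone())
```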
8.6 Database Software
• Most database systems also include transaction
management components to assure that the database
is always in a consistent state.
• Transaction management provides the following
properties:
– Atomicity – All related updates occur or no updates occur.
– Consistency – All updates conform to defined data constraints.
– Isolation – No transaction can interfere with another transaction.
– Durability – Successful updates are written to durable media as
soon as possible.
• These are the ACID properties of transaction
management.
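A minimal atomicity sketch, again with sqlite3 (the account data is invented): either both transfer updates commit together, or the transaction rolls back and neither occurs.

```python
# Atomicity: commit makes both updates durable together; a failure
# rolls both back, leaving the database in a consistent state.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100.0), (2, 0.0)])
conn.commit()

try:
    conn.execute("UPDATE account SET balance = balance - 50 WHERE id = 1")
    conn.execute("UPDATE account SET balance = balance + 50 WHERE id = 2")
    conn.commit()                    # all related updates occur...
except sqlite3.Error:
    conn.rollback()                  # ...or no updates occur

print(conn.execute("SELECT * FROM account").fetchall())
```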
Chapter 8 Conclusion
• A computer system depends on software to run.
– The operating system is the system software
component upon which all other software rests.
• Operating systems control process execution,
resource management, protection, and security.
• All programming languages above the machine
level must be translated into machine code.
• Database software provides controlled access to data
files through enforcement of the ACID properties.
Chapter 12
Network Organization and Architecture
12.1 Introduction
• The network is a crucial component of today’s
computing systems.
• Resource sharing across networks has taken the
form of multitier architectures having numerous
disparate servers, sometimes far removed from
the users of the system.
• If you think of a computing system as a collection of
workstations and servers, then surely the network
is the system bus of this configuration.
12.2 Early Business Computer Networks
• The first computer networks consisted of a
mainframe host that was connected to one or more
front end processors.
• Front end processors received input over
dedicated lines from remote communications
controllers connected to several dumb terminals.
• The protocols employed by this configuration were
proprietary to each vendor’s system.
• One of these, IBM’s SNA, became the model for an
international communications standard, the
ISO/OSI Reference Model.
12.3 Early Academic and Scientific Networks
• In the 1960s, the Advanced Research Projects
Agency funded research under the auspices of the
U.S. Department of Defense.
• Computers at that time were few and costly. In
1968, the Defense Department funded an
interconnecting network to make the most of these
precious resources.
• The network, DARPANet, designed by Bolt,
Beranek, and Newman, had sufficient redundancy to
withstand the loss of a good portion of the network.
• DARPANet, later turned over to the public domain,
eventually evolved to become today’s Internet.
12.4 Network Protocols I
ISO/OSI Reference Model
• To address the growing tangle of incompatible
proprietary network protocols, in 1984 the ISO
formed a committee to devise a unified protocol
standard.
• The result of this effort is the ISO Open Systems
Interconnect Reference Model (ISO/OSI RM).
• The ISO’s work is called a reference model because
virtually no commercial system uses all of the
features precisely as specified in the model.
• The ISO/OSI model does, however, lend itself to
understanding the concept of a unified
communications architecture.
12.4 Network Protocols I
ISO/OSI Reference Model
• The OSI RM contains seven protocol layers, starting
with physical media interconnections at Layer 1,
through applications at Layer 7.
12.4 Network Protocols I
ISO/OSI Reference Model
• The OSI model defines only the functions of each of
the seven layers and the interfaces between them.
• Implementation details are not part of the model.
12.4 Network Protocols I
ISO/OSI Reference Model
• The Physical layer receives a stream of bits from the
Data Link layer above it, encodes them, and places
them on the communications medium.
• The Physical layer conveys transmission frames,
called Physical Protocol Data Units, or Physical PDUs.
Each Physical PDU carries an address and has delimiter
signal patterns that surround the payload, or contents,
of the PDU.
12.4 Network Protocols I
ISO/OSI Reference Model
• The Data Link layer negotiates frame sizes and the
speed at which they are sent with the Data Link layer
at the other end.
– The timing of frame transmission is called flow control.
• Data Link layers at both ends acknowledge packets
as they are exchanged. The sender retransmits the
packet if no acknowledgement is received within a
given time interval.
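This acknowledge-and-retransmit behavior can be sketched as stop-and-wait logic; the lossy channel below is simulated with an invented loss rate, standing in for a real link.

```python
# Stop-and-wait sketch: retransmit a frame if no ACK arrives in time.
import random

def send_with_ack(frame: str, max_tries: int = 5, loss_rate: float = 0.5) -> bool:
    for attempt in range(1, max_tries + 1):
        print(f"send {frame!r} (attempt {attempt})")
        acked = random.random() > loss_rate    # did the ACK come back?
        if acked:
            print("ACK received")
            return True
        print("timeout: no ACK, retransmitting")
    return False                               # give up after max_tries

random.seed(8)
send_with_ack("frame-0")
```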
12.4 Network Protocols I
ISO/OSI Reference Model
• At the originating computer, the Network layer adds
addressing information to the Transport layer PDUs.
• The Network layer establishes the route and ensures
that the PDU size is compatible with all of the
equipment between the source and the destination.
• Its most important job is moving PDUs across
intermediate nodes.
12.4 Network Protocols I
ISO/OSI Reference Model
• The OSI Transport layer provides end-to-end
acknowledgement and error correction through its
handshaking with the Transport layer at the other
end of the conversation.
– The Transport layer is the lowest layer of the OSI model
at which there is any awareness of the network or its
protocols.
• The Transport layer assures the Session layer that
there are no network-induced errors in the PDU.
12.4 Network Protocols I
ISO/OSI Reference Model
• The Session layer arbitrates the dialogue between
two communicating nodes, opening and closing that
dialogue as necessary.
• It controls the direction and mode (half-duplex or
full-duplex).
• It also supplies recovery checkpoints during file
transfers.
• Checkpoints are issued each time a block of data is
acknowledged as being received in good condition.
12.4 Network Protocols I
ISO/OSI Reference Model
• The Presentation layer provides high-level data
interpretation services for the Application layer
above it, such as EBCDIC-to-ASCII translation.
• Presentation layer services are also called into play
if we use encryption or certain types of data
compression.
12.4 Network Protocols I
ISO/OSI Reference Model
• The Application layer supplies meaningful
information and services to users at one end of the
communication and interfaces with system resources
(programs and data files) at the other end of the
communication.
• All that applications need to do is to send messages
to the Presentation layer, and the lower layers take
care of the hard part.
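To see how the layers cooperate, the sketch below wraps a message in one header per layer on the way down the stack and strips the headers in reverse order on the way up; the header format is invented, only the layer names follow the OSI RM.

```python
# Encapsulation sketch: each layer wraps the PDU it receives from the
# layer above with its own header; the receiver peels them off in
# reverse order.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def send_down(message: str) -> str:
    pdu = message
    for layer in LAYERS:                  # application header innermost,
        pdu = f"[{layer}-hdr|{pdu}]"      # physical header outermost
    return pdu                            # "bits on the wire"

def receive_up(pdu: str) -> str:
    for layer in reversed(LAYERS):        # strip outermost header first
        assert pdu.startswith(f"[{layer}-hdr|") and pdu.endswith("]")
        pdu = pdu[len(layer) + 6 : -1]
    return pdu

wire = send_down("GET /index.html")
print(wire)
print(receive_up(wire))
```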
12.5 Network Protocols II
TCP/IP Network Architecture
• Computer networks are often classified according to
their geographic service areas.
– Personal Area Network
– Local Area Network
– Metropolitan Area Network
– Wide Area Network
• The distance data needs to travel plays a large role in
the type of signal, cable, or medium used and in the
effective speed that can be obtained.
– Signals lose strength over distance; a measure of signal
quality is the signal-to-noise ratio (SNR).
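As a worked example (the power values are invented), SNR is the ratio of signal power to noise power and is usually quoted in decibels, SNR_dB = 10 * log10(S/N):

```python
# Signal-to-noise ratio in decibels.
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    return 10 * math.log10(signal_power / noise_power)

print(snr_db(1.0, 0.001))   # 30.0 dB: strong, clean signal
print(snr_db(1.0, 0.5))     # ~3 dB: signal barely above the noise
```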
12.6 Network Organization
• A hub is a Layer 1 device that connects several
devices on a shared medium.
• A switch is a Layer 2 device that connects links and
forwards frames between them.
• A router is a Layer 3 device that forwards packets
between networks, letting traffic travel longer distances.
• A gateway operates at Layer 4 and above, connecting
networks that use different protocols or architectures.