Types of Computer Operation
Computers vary considerably in size, capability and type of application. Similarly, there is a wide variety of ways in which they can be operated. Each type of computer operation requires a different type of operating system.
Most microcomputers and some minicomputers can only process one program at a time. This is single program operation, and it requires only a simple operating system. The operating system supervises the loading and running of each program, and the input and output of data. Any errors occurring are reported.
Next in complexity is batch processing. A number of programs are batched together, and then run as a group. Although the programs are actually run one at a time, input and output from various programs can overlap to some extent. Programs are normally queued up for batch processing, and the operating system starts the next program in the queue as soon as sufficient computing resources are available for it.
Similar to batch processing, but much more sophisticated, is multiprogramming. At any one time, a number of programs are on the computer at various stages of completion. Resources are allocated to programs according to the requirements of the programs, and in order to maximize the usage of the different resources of the computer.
A particular type of multiprogramming, which is becoming increasingly popular, is transaction processing. Transaction processing is designed for systems which must run large numbers of fairly small programs very frequently, where each program run deals with a single transaction such as a withdrawal from a cash terminal.
The Nature of an Operating System
Like the question 'What is a computer?', the question 'What is an operating system?' can be answered at several levels.
Firstly, an operating system is a program, or set of programs. Operating systems vary in size from very small to very large, but all are pieces of software. In the past, almost all operating systems were written in a low level language. Currently, many operating systems are partly or completely written in a high level language.
Secondly, an operating system is, by virtue of its name, a system. It is a collection of parts, working together towards some common goals. The goals, or objectives, of an operating system are discussed below.
Thirdly, a computer may be regarded as a set of devices, or resources, which provide a number of services, such as input, processing, storage and output. The operating system of the computer may be regarded as the manager of these resources. It controls the way in which these resources are put to work.
Finally, an operating system is the lowest layer of software on a computer. It acts directly on the ‘raw’ hardware of the computer. It supports other layers of software such as compilers and applications programs. Part of the task of an operating system is to 'cushion' users from the complexities of direct use of the computer hardware.
In summary, an operating system is a program, or set of programs, driving the raw hardware of a computer, which manages the resources of the computer in accordance with certain objectives, providing higher layers of software with a simplified computer.
The Development of Operating Systems
Operating systems are as old as electronic computers. It was realized from the start that the hardware of a computer on its own is very difficult to use. Various supervisor, executive or monitor programs were written to make aspects of using a computer easier. As time went by, these programs became larger, more complex, and, unfortunately, more cumbersome and less reliable.
Today, big operating systems face a new challenge from cheap, plentiful microcomputers, which require only the simplest of monitor programs for their operation.
Input and Output Control
The problem with input and output is that different input/output devices have different characteristics, and run at different speeds. For example, a line printer outputs characters one line at a time, whereas a keyboard accepts input one character at a time. A line printer transfers characters more than one hundred times as fast as a keyboard.
The input/output control module of an operating system deals with these problems by making input and output device-independent from the point of view of the programmer. To a programmer, all devices have the same characteristics, and are instructed in exactly the same way. The operating system deals with the special characteristics of each type of device.
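As a hedged sketch (assuming a Pascal system in which a text file such as printer can be associated with an actual output device), the programmer sends output to the screen and to the printer with the same kind of statement, and the operating system handles the differences between the two devices:

program devices(output, printer);
var
  printer: text;                                  { assumed to be bound to a printer by the operating system }
begin
  rewrite(printer);                               { prepare the device for output }
  writeln(output, 'A line for the screen');       { output to the terminal }
  writeln(printer, 'A line for the printer')      { the same form of statement, a different device }
end.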
Computer structure
The definition of a computer is as follows:
A computer is a collection of resources, including digital electronic processing devices, stored programs and sets of data, which, under the control of the stored programs, automatically inputs, outputs, stores, retrieves and processes the data, and may also transmit data to and receive it from other computers. A computer is capable of drawing reasoned conclusions from the processing it carries out.
From the hardware point of view, the essential features of this definition are 'a collection of … digital electronic processing devices'.
Computers vary enormously in size, processing power and cost. Nevertheless, all computers consist of one or more functional devices, each carrying out one or more of the tasks described above. Each device performs a precisely specified task, and connects to other modules via defined interfaces. Modules of the same type of computer may be exchanged, and new modules added, without modification to their internal workings. The phrase plug-compatible describes units which may be connected in this manner.
Mainframes, minis and micros
Very broadly speaking, there are three classes of computers, according to their size and complexity. These classes are known as mainframes, minicomputers (or minis) and microcomputers (or micros).
Mainframes are large computers, comprising a number of free-standing units. Mainframes are generally housed in specially designed, air-conditioned rooms. Connections between the units are made by wires running beneath the floor of the room. Mainframes are very powerful, and support a number of applications running concurrently. Examples of mainframes are the ICL 2900 series, the IBM 3000 series and the Burroughs B6700 series. Very large mainframes are known as supercomputers. These include the Cyber 205 and the Cray 2.
Minicomputers are smaller than mainframes, with several functional devices mounted in a rack in a single unit. Minicomputers do not generally require an air-conditioned environment. They are often to be found in laboratories, factories and offices. Minicomputers can support more than one application running concurrently, though not as many as mainframes. The Digital Equipment VAX series is the most popular minicomputer. Others are made by Prime, Data General and Hewlett Packard.
Microcomputers are the newest addition to the computer family. They are small and cheap, and are (generally) contained in a few small units. Their distinguishing feature is that processing is carried out on a single microprocessor chip. Although they are very versatile, microcomputers can only support one application at any one time. Examples of microcomputers are the IBM PC, the Apple Macintosh and the Research Machines Nimbus.
The classification of computers into mainframes, minis and micros is only very approximate. Computers are getting smaller and more powerful all the time. Micros are being introduced with the capability of minis only a few years old. Minicomputers are incorporating microprocessors to assume the capability of mainframes.
What is a high level language?
A high level language is a problem oriented programming language, whereas a low level language is machine oriented. In other words, a high level language is a convenient and simple means of describing the information structures and sequences of actions required to perform a particular task.
A high level language is independent of the architecture of the computer which supports it. This has two major advantages. Firstly, the person writing the programs does not have to know anything about the computer on which the program will be run. Secondly, programs are portable, that is, the same program can (in theory) be run on different types of computer. However, this feature of machine independence is not always achieved in practice.
In most cases, programs in high level languages are shorter than equivalent programs in low level languages. However, conciseness can be carried too far, to the point where programs become impossible to understand. More important features of a high level language are its ability to reflect clearly the structure of programs written in it, and its readability.
High level languages may be broadly classified as general-purpose or special-purpose. General-purpose languages are intended to be equally well suited to business, scientific, engineering or systems software tasks. The commonest general-purpose languages are Algol 68 and PL/I. The language Ada also falls into this category. Because of their broad capabilities, these languages are large and relatively difficult to use.
The commonest categories of special-purpose languages are commercial, scientific and educational. In the commercial field, Cobol still reigns supreme, while Fortran is still the most widely used scientific language. In the computer education field, Basic is widely used in schools, with Logo and Prolog gaining popularity. Pascal is the most popular language at universities. Pascal is a powerful general-purpose language in its own right.
Another way of classifying high level languages is as procedural and declarative languages. Procedural languages state how a task is to be performed, often breaking programs into procedures, each of which specifies how a particular operation is to be performed. All the early high level languages are procedural, with Algol, Pascal and Ada as typical examples.
Declarative programming languages describe the data structures and relationships between data relevant to a particular task, and specify what the objective of the task is. The process by which the task is to be carried out is not stated explicitly in the program. This process is determined by the language translation system. Prolog is an example of a declarative programming language.
The defining characteristics of a high level language are problem-orientation and machine independence.
The first objective of a high level language is to provide a convenient means of expressing the solution to a problem. There are two other common ways of doing this - mathematics, and natural languages, such as English. Most high level languages borrow, without much modification, concepts and symbols from mathematics. The problem with natural languages is that in their full richness and complexity, they are quite impossible to use to instruct a computer. Nevertheless, high level languages use words from natural languages, and allow these words, and mathematical symbols, to be combined according to various rules. These rules create the structure of programs written in the language. The result, in a good high level language, is a clear structure, not too different from our customary ways of thinking and expressing ourselves.
This discussion leads to the second objective of high level languages - simplicity. Simplicity is achieved by a small set of basic operations, a few clear rules for combining these operations, and, above all, the avoidance of special cases.
The third objective of a high level language is efficiency. Programs in the language must be able to be translated into machine code fairly quickly, and the resulting machine code must run efficiently. This objective almost always conflicts with the first two. Most high level languages reflect a compromise between these objectives.
The final objective is readability of programs. Many languages allow for the inclusion of comments or additional 'noise' words, to make programs easier to read. However, a good high level language should enable programs to be written which are clear to read without additional comments. Regrettably, some high level languages ignore this objective altogether.
Features of High Level Languages
The character set used by a language is the set of all characters which may be used in programs written in the language. Almost all languages use letters and decimal digits.
Most high level languages use reserved words. These are words which have a specific meaning in programs, and may not be used by the programmer for any other purpose. For example, in Pascal, reserved words include if, then and else (names such as read and write, by contrast, are predefined procedures rather than reserved words). Some languages permit abbreviations of reserved words. The size and complexity of a language can be measured by the number of reserved words it uses. For example, Occam has 28 reserved words, while Ada uses more than sixty.
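A small Pascal fragment shows some of these reserved words in use (the variable mark is assumed to have been declared elsewhere as an integer):

read(mark);
if mark >= 50 then
  write('Pass')
else
  write('Fail')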
Perhaps the most important feature of a high level language is the way in which programs in it are structured. The structure of a program is specified by a set of rules, called rules of syntax. Different languages have different ways of expressing these rules. In some, the rules are written in concise English. Others use syntax diagrams, while others (notably Algol) use a notation originally called Backus-Naur form, now known as BNF.
Much attention has been devoted, in the development and use of high level languages, to the way in which programs are split up into blocks or modules, each module doing a specific task. In some languages, notably Fortran, these blocks are called subroutines, in others such as Algol and Pascal, these blocks are called procedures or functions. Because of the careful structuring of programs into blocks which they permit, Algol, Pascal and similar languages are called block-structured languages.
Procedures, functions or subroutines are activated via calls from other parts of the program. For example, if a program contains a function to calculate the square root of a given number, this function is called every time a square root is required in the rest of the program. Most languages permit a procedure or function to call itself, a feature known as recursion. This is an extremely powerful feature for handling such data structures as lists, stacks and trees, and for such tasks as analyzing the structure of arithmetic expressions.
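As a sketch of recursion, the following Pascal function computes the factorial of a whole number by calling itself with a smaller argument until it reaches a case it can answer directly:

function factorial(n: integer): integer;
begin
  if n <= 1 then
    factorial := 1                        { base case: no further calls are needed }
  else
    factorial := n * factorial(n - 1)     { the function calls itself: recursion }
end;

Each call deals with a simpler version of the problem, and the chain of calls unwinds once the base case is reached.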
An important aspect of high level languages is the way in which they handle the data items and data structures used in a program. Broadly speaking, data items fall into two categories: variables, which can change their value during the running of a program, and constants, which keep the same value. In most programming languages, variables are given names, or identifiers. In some languages, such as Fortran and Basic, constants are referred to by their values, while in others, such as Algol and Pascal, constants are also given identifiers.
Some programming languages require that all variables be declared before they are used. Generally, variables are declared by listing them at the start of the procedure or subroutine in which they are to be used. An attempt to use a variable which has not been declared results in an error.
This gives rise to the idea of the scope of a variable. The scope of a variable is the part of a program in which it may be used. Variables which are declared for use in one procedure only are called local variables. Their scope is limited to that procedure. Variables which are declared for use in the whole program are called global variables. Their scope is the whole program. The intention of providing each variable with a scope is to enable a program to be broken up into 'watertight' blocks, or modules. Each block uses only the information it requires. This simplifies the task of designing, writing and testing programs, and limits the effects of errors.
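The following sketch of a Pascal program (the identifiers are invented for illustration) shows the difference in scope between a global variable and a local variable:

program scopedemo(output);
var
  total: integer;                  { global variable: its scope is the whole program }

procedure addfive;
var
  increment: integer;              { local variable: its scope is this procedure only }
begin
  increment := 5;
  total := total + increment       { the procedure may use the global variable }
end;

begin
  total := 0;
  addfive;
  writeln(total)                   { prints 5; increment cannot be referred to here }
end.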
Almost all high level languages include the notion of data types. In Basic, the standard data types are numeric and character strings. These types can be incorporated into arrays, which are tables of items of the same type. In most high level languages, numbers can be integers or real numbers (generally stored in floating point form). PL/I even permits the number of significant figures in a number to be declared. Another common standard data type is Boolean, with the range of values 'true' and 'false'. Data types can contain single elements, or be structures such as arrays, stacks, lists, trees, etc.
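A few Pascal declarations (with invented identifiers) illustrate these standard types, and an array of items of the same type:

var
  count: integer;                       { a whole number }
  average: real;                        { a number stored in floating point form }
  initial: char;                        { a single character }
  finished: boolean;                    { either true or false }
  readings: array [1..10] of real;      { an array: ten items of the same type }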
A pointer is a data type which contains the address of another data item. Pointers can be used to construct such data structures as lists and trees. For example, a list of people's names could be constructed as follows:
[name | pointer] --> [name | pointer] --> [name | pointer] --> ...
Pointer types are only available in certain high level languages, notably Algol and Pascal. The problem with pointers is that careless use of them can result in program errors which are very difficult to detect and correct.
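As a sketch in Pascal (with invented identifiers), a list of names of the kind shown above could be declared and begun like this:

program listdemo(output);
type
  namestring = packed array [1..20] of char;
  link = ^node;                    { a pointer type: a link holds the address of a node }
  node = record
    name: namestring;
    next: link                     { points to the next item in the list }
  end;
var
  first: link;
begin
  new(first);                                 { create a node and make first point to it }
  first^.name := 'Ada Lovelace        ';      { a twenty-character value, padded with spaces }
  first^.next := nil                          { nil marks the end of the list }
end.

Extending the list is a matter of calling new again and adjusting the next pointers, which is exactly where the careless mistakes mentioned above tend to occur.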
Some languages permit the programmer to declare his or her own data types, built up from standard data types. Records can be constructed, containing data of different types. The following section of a Pascal program shows how this can be done.
type
  name = array [1..20] of char;
  day = (mon, tues, wed, thur, fri, sat, sun);
  pay_record = record
    employee_name: name;
    payrate: real;
    hours_worked: integer;
    pay: real;
    payday: day
  end;
In the above example, char is a standard data type. Variables of type char have values consisting of a single character. The data type 'name' is an array of twenty characters. Variables of the data type 'day' can have one of the values listed in the brackets.
The purpose of data types is to make programs more meaningful, and to provide additional checks for errors. For example, if an attempt is made to add an integer variable to a character variable, then an error will be caused.
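Continuing the payroll example (the variable names are invented), a record variable might be used as shown below; the statement in the comment would be rejected by the compiler because it mixes an integer with a character:

var
  clerk: pay_record;
  letter: char;
  number: integer;
begin
  clerk.payrate := 6.50;
  clerk.hours_worked := 38;
  clerk.payday := fri;
  clerk.pay := clerk.payrate * clerk.hours_worked;
  { number := number + letter }               { a type error: an integer cannot be added to a character }
end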
Computers and algorithms
PART 1
We live in the age of the computer revolution. Like any revolution, it is widespread, all-pervasive, and will have a lasting impact. It is as fundamental to our economic and social order as was the industrial revolution. It will affect the thinking patterns and life style of every individual.
The industrial revolution was essentially the augmentation of man's physical powers, the amplification of man's muscle. The pressing of a button could cause a large machine to stamp a pattern in a metal sheet. The movement of a lever could result in a heavy scoop scraping out a mass of coal. Certain repetitive aspects of man's physical activities were replaced by machines.
By analogy, the computer revolution is the augmentation of man's mental powers; the amplification of man's brain. The pressing of a button can cause a machine to perform intricate calculations, to make complex decisions, or to store and retrieve vast quantities of information. Certain repetitive aspects of man's mental activities are being replaced by machines.
What is a computer, that it can have such a revolutionary impact? A first step toward an answer is to say that a computer is a machine which can carry out routine mental tasks by performing simple operations at high speed. The simplicity of the operations (typical examples are the addition or comparison of two numbers) is offset by the speed at which they are performed (about a million a second). The result is that large numbers of operations can be performed, and significant tasks can be accomplished.
Of course, a computer can accomplish only those tasks which can be specified in terms of the simple operations it can execute. To get a computer to carry out a task one must tell it what operations to perform—in other words, one must describe how the task is to be accomplished. Such a description is called an algorithm. An algorithm describes the method by which a task is to be accomplished. The algorithm consists of a sequence of steps which if faithfully performed will result in the task, or process, being carried out.
The notion of an algorithm is not peculiar to computer science—there are algorithms which describe all kinds of everyday processes.
In general, the agent which carries out a process is called a processor. A processor may be a person, a computer, or some other electronic or mechanical device. A processor carries out a process by obeying, or executing, the algorithm which describes it. Execution of an algorithm involves execution of each of its constituent steps.
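For instance, the process of finding the largest of ten numbers can be described by an algorithm; the sketch below expresses it in the programming language Pascal, though any processor which understood the steps could carry it out:

program largest(input, output);
{ An algorithm: read ten numbers and report the largest of them. }
var
  i, number, biggest: integer;
begin
  read(biggest);                   { the first number is the largest seen so far }
  for i := 2 to 10 do
  begin
    read(number);
    if number > biggest then
      biggest := number            { a simple operation: comparing two numbers }
  end;
  writeln('The largest is ', biggest)
end.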
From the discussion above it is apparent that a computer is simply a particular kind of processor. Of course, it is rather a special kind of processor; otherwise computers would not have had such a rapid and significant impact on so many areas of life. The features which make it special are described below. First, however, it is useful to identify the components from which a computer is built:
(1) the central processing unit (CPU), which performs the basic operations,
(2) the memory, which holds:
(a) the algorithm specifying the operations to be performed, and
(b) the information, or data, upon which the operations are to act;
(3) the input and output devices (I/O devices), through which the algorithm and the data are fed into the memory, and through which the computer communicates the results of its activities.
These components comprise the computer hardware: that is, the physical units from which a computer is built.
PART 2
(1) Speed
The CPU of a typical computer can perform between one million and ten million operations a second. Although these operations are very simple, the formidable speed with which they are performed means that even quite complex algorithms, requiring large numbers of operations, can be executed very quickly. By comparison, the human brain is very slow, so it is not surprising that people have been replaced by computers in many activities where speed is a major requirement. Human beings do, however, currently retain significant advantages over computers. For example, it appears that the brain is capable of performing many operations at once, whereas (with minor exceptions) a present-day computer can perform only one operation at a time.
Despite the high speed of computers there remain many processes which are simply too time-consuming to be feasibly carried out. (An example is the formulation of a winning strategy for chess by studying all chess games which could possibly be played.)
(2) Reliability
Contrary to popular mythology, computers seldom make mistakes, though they do occasionally break down. The mistakes which achieve prominence in the news media, such as an electricity bill for a million dollars or a false alert about a nuclear attack, are almost invariably a result of a fault in the algorithm being executed or an error in the input data. On very rare occasions an electronic fault may cause a computer to execute an algorithm incorrectly, but the probability of this is minute, and in any case such malfunctions are usually detected immediately.
A computer is in a sense a totally willing and obedient slave: it will faithfully execute the algorithm it is given, and if necessary it will do so repeatedly without complaint. Such fidelity is of course both a strength and a weakness, since the computer will execute the algorithm quite blindly, whether or not it correctly describes the process intended.
(3) Memory
One of the prime characteristics of a computer is its ability to store vast quantities of information which it can access very quickly. Memory capacities and access speeds vary widely according to the storage medium used; some computers can store several thousand million items of information, and can access some of these items in as little as 100 nanoseconds (a nanosecond is 10⁻⁹ seconds, or one thousand millionth of a second). Impressive though these figures are, they are somewhat deceptive. As we shall see later, computer memory is organized in such a way that an item of information can be retrieved only if its location in the storage medium is precisely known. This means that a lot of effort must be put into keeping track of where information is located—effort which increases both the time to design an algorithm and the time to execute it.