Computers for Nurses Made Easy Karanbir Singh


Chapter One: Introduction to Computers

Let us begin from scratch! COMPUTER…We all know what that word means: a small box that enables us to surf the Internet, play extremely life-like games, listen to music and watch movies. Well, half a century ago the word referred to ‘a person who performs computations’. At that time, the machines we now call computers did not yet exist.
 
THE WORLD OF COMPUTERS
Have you used a computer recently? Chances are you have. If you have made a phone call, ridden in a car, watched TV or played a video game, you have used a computer. Even as you read this page, keep in mind that the words you are reading were put into type by a computer.
Here is a list of statements about computers. Decide whether each statement is true or false.
  1. Computers are smarter than humans.
  2. Computers have brains.
  3. Some computers have feelings.
  4. Computers can solve any problems.
  5. You need to know a lot of science and math to use a computer.
If you decided that each statement is false, you are correct. Then what is a computer? What can it do? It is, basically, a machine that can handle large amounts of information and work with amazing speed. A computer is built to do these four jobs:
  1. Accept information: You can put raw data into it. The information might be a collection of facts and figures or possibly a set of instructions telling the computer what to do.
  2. Store information: It has a device called the memory (described later in this chapter) that holds the information as long as you want it to.
  3. Process the information: This means it does something with the information. It might do an addition problem or compare and sort the information depending upon the users' wish.
  4. Give out the processed information: It gives you the results of the processing which are, indeed, the results of the instructions given to the computer by the user.
Computers are built to follow instructions from humans. They can solve only the problems that people tell them to solve. Since people cannot solve every problem, neither can computers. To tell a computer what to do, you have to know what problem you want to solve and have a plan for solving it.
Since computers can't do anything without instructions from a human, what makes them so special? They can do some things better than humans. Computers calculate faster than humans. They are more accurate than humans. Computers can also store vast amounts of information and they do not “forget” what they store. These qualities make computers wonderful tools to help people solve complex problems.
Hence, a computer is a device that accepts information and manipulates it for some result.
Computers can be confusing and it takes a little time before you gain confidence in using and understanding them. Just like a new TV or VCR, computers can be a bit cumbersome to figure out, but once you achieve a certain level of knowledge, these machines can become powerful allies.
Don't be afraid to tinker around on a computer. Like learning to drive or riding a bike, computing simply takes patience and practice!
 
COMPUTER BASICS
Let's try a simple approach….
Let's visit a library. A library, when it is first built, is simply a big empty building. Similarly, a computer is simply an empty metal box until you put the parts in.
Ok, a library needs someone to run it, a librarian … A computer needs someone to run it, a CPU or processor chip.
A librarian needs a name, Betty, Nancy, Ramu, Shamu, Chimpu … A computer “Chip” needs a name, 486, Celeron, Pentium I, Pentium II, Pentium III, Pentium IV, etc.
No matter what the librarian's name is, the librarian is the person who runs the library and makes all the decisions. No matter what the processor chip's name is, the chip is the part of the computer that runs the computer and makes all the decisions.
The faster the librarian in a library, the faster the work gets done. Similarly, the faster a processor chip, the faster the actions are performed.
The oldest and thus the slowest chip listed here is the 8086; then came the 286, 386, 486, etc., with each one being faster than the chip before it. Finally a new chip was invented that was even faster. It was nicknamed Penny, as in the Pentium I. A few years ago came the fastest chip then available to home PC users, the Pentium IV… And right at this moment, as you read this, there are people working to develop a faster processor still.
With librarians and processor chips, if you are waiting, faster is better. With that in mind, note that each chip is listed with a processing speed. As any child can tell you, the bigger the number, the faster the computer. So if someone tells you that their librarian can stack 100 books an hour and your librarian can stack 200, then you know your librarian is faster.
So a computer that has a speed of 2.1 GHz (gigahertz) is slower than one with 3.2 GHz. This number refers to the processing speed; you can think of it as the “speed of thought” of your librarian.
 
COMPUTERS — HOW THEY WORK AND RELATED TERMS
Why is it important to know how computers work? Easy: if you don't, they will be hard to control. Computers were never built to control us, even though that is how it sometimes appears. Their creation was just another tool to benefit society. What can you do to learn more about computers? There is an easy answer: just read about and use computers more. They are not that hard and, with time, you too can become the master of this tool.
Computers, the ones we know, have not been around for very long. The first home personal computer was not sold until 1977. We have come a long way since then. Did you know that in 1983 there were approximately 2 million personal computers in use in the United States? Just 10 years later, in 1993, the number had jumped to more than 90 million.
Computers today are small, fast, reliable and extremely useful. Back in 1977 that really was not the case. However, then as now, computers operated in basically the same way: they receive data, store data, process data and then output data, similar to the way our own brain functions. Following are the four functions that a computer performs:
  1. Memory
  2. Processing
  3. Input and
  4. Output.
 
MEMORY
Let's look at computer memory first. The function of storage in a computer comes in many different sizes, types and shapes. However, there are two basic categories: short-term memory and long-term memory. A typical computer contains numerous types of memory including RAM, ROM, virtual, cache and various long-term storage devices. Each type of computer memory serves a specific function and purpose.
The computer must be given instructions, in the form of software, which tells it exactly what to do. The instructions that the computer follows are stored in locations known as memory. For simplicity purposes think of memory in two categories:
  1. The computer's Internal memory (e.g. microchips)
  2. The computer's External memory (e.g. diskettes and hard drives).
Computer memory is measured in bytes (refer to Fig. 1.1). A single byte is made up of eight bits, each a 1 or a 0, and these groups of eight 0's and 1's are the way the computer communicates and stores information. With each keystroke or character, a byte of memory is used.
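To see this for yourself, here is a small illustrative sketch in the Python programming language (not something you need in order to use a computer) that prints the eight-bit pattern the computer stores for each character you type:

```python
# Each character typed at the keyboard is stored as one byte:
# eight bits, each a 1 or a 0.
for char in "RN":
    code = ord(char)            # the character's numeric code
    bits = format(code, "08b")  # that number written as eight bits
    print(char, code, bits)
```

Running this shows, for example, that the letter R is held inside the machine as the pattern 01010010.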
The computer's internal memory which is composed of computer chips is divided into two types: RAM (random-access memory) and ROM (read-only memory).
 
 
ROM
ROM or Read-Only Memory is permanent, long-term, nonvolatile memory. Nonvolatile means it does not disappear when the computer is shut off. It also cannot be erased or changed in any way. ROM's primary purpose is to store important instructions that the computer will reuse over and over, such as what to do when the computer is turned on and how to control specific requests made by the computer.
ROM is permanent memory that cannot be changed or erased. This is why it is called Read-Only Memory.
 
RAM
RAM or Random-Access Memory, unlike ROM, works only when the computer is turned on. This memory is vital to the computer because it controls the moment-by-moment processes of the computer. The first thing that goes into RAM is the OS (operating system), which in most cases is some version of Windows. Next into RAM goes an application program, which might be a game, an Internet browser, etc.
RAM's primary purpose is to temporarily store programs given to it by a programmer or operator of the computer. This type of memory is temporary because it is erased when the computer is turned off (powered down). In other words, all the information in RAM is lost when the computer is turned off. It is called random access because the processor can jump directly from one location to another in random order as the program requires.
Multitasking has put more demand on RAM in the past few years. Multitasking is the ability to run more than one program at the same time. For instance, many people like to listen to music along with their word processing software. This means you need lots of RAM to hold both programs.
Other types of temporary memory are cache (pronounced “cash”) and virtual memory. Both supplement the computer's primary RAM and perform the same function as RAM.
 
Storage Devices
RAM and ROM may be very important parts of the computer; however, without storage devices like hard drives and disk drives your computer would not be nearly as useful.
 
DISK DRIVES
Let's go back to our library again…
We have a librarian and she has to put away the books. She needs someplace to store these … there are several rooms in the building and so she gives each one a name so that she knows which room is which. Books simply sit in these rooms until someone wants to read them. If the books are put away neatly, saved and there is a power outage, when the power comes back on, the books are still where she put them.
Okay, the computer is basically a library where you store books. The books are called files, programs, etc. The computer librarian needs someplace to put your books. These storage areas are called disk drives.
Every computer has a main disk drive called your hard drive. It is normally called your “C” drive and is written like this C:\
Once you “Save” something to this drive, it is there whether you turn off the computer or not. When you are typing, things are in a temporary work area and can be lost unless you save them. So it is advisable to save them often!
Your main library is huge. It is rated in megabytes (MB) or gigabytes (GB). The bigger your hard drive, the more storage space you have for files (books), graphics and programs. If you start to run out of space, though, old unneeded files (books) can be deleted (thrown away) to make room for new ones.
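As a rough illustration of how these units relate (using the convention that each unit is 1,024 times the one below it), here is a small Python sketch:

```python
# Illustrative sketch of storage units: each is 1,024 times the last.
KB = 1024        # kilobyte, in bytes
MB = 1024 * KB   # megabyte
GB = 1024 * MB   # gigabyte

# How many 1.44 MB diskettes would it take to hold 2 GB of files?
diskettes = (2 * GB) / (1.44 * MB)
print(round(diskettes))  # about 1,422 diskettes
```

The point of the arithmetic is simply that a gigabyte dwarfs a diskette: a single modest hard drive holds as much as well over a thousand diskettes.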
Your computer has one big hard drive, “C”, which is the main room of the library. But you need some way to store extra books that won't fit in your library or that you want to take to work with you. Think of these as the storage carts that a librarian uses. Without the carts it would be impossible for the librarian to transport all the files from one place to another, and the building might explode if you kept too much information inside.
And the same is true of computers. We need a way to save some things to carry with us to share with friends at work or to keep as an emergency backup in case of a system failure.
This method of storage is referred to as a disk. Disks come in basically three types: diskettes (3.5 inches wide), CDs and DVDs (compact disks), and Zip disks.
Diskettes hold a small amount of information (1.44 MB) and can be used to write information from the computer or to read information from the disk to be used inside the computer. These are also a good way to take your important files to an office computer, etc.
CDs (compact disks) can hold a huge amount of information. There are two types of writeable CDs: CD-R and CD-RW.
  1. CD-R: These are very inexpensive disks that can only be written onto (burned) one time. Whatever is written onto them is permanent. Think of it as though the disk had been written on in ink.
  2. CD-RW: The disks are more expensive, but you can write on them, erase them, write on them again, over and over. In other words, you can use them over and over again for different things. Think of these disks as having been written on in pencil. You can erase them and use them over again.
Either disk can be written (burned) using a CD Burner (CD writer) and so you can copy files, songs, etc. onto these disks to share with your friends, use them for storage, etc.
Zip disks can hold a wealth of information. Many are now portable and can carry anywhere from 100 MB to 3–4 GB of data.
So, if you understand all a disk drive does is store information, then you understand disks.
The most common forms of storage devices found on computers are illustrated in Figure 1.2.
 
PROCESSING
If someone had to find the brains of the computer, they would most certainly point to the microprocessor. The computer chip that receives and carries out instructions is called the processor. All computer systems, regardless of size or manufacturer, have processors (also referred to as central processing units or CPUs). The microprocessor is a chip the size of a postage stamp, and it is the single most important part of the computer. It controls how data is sorted and directs the flow of data.
All computers do processing by following a series of instructions in a software program. The processor performs many different functions. It receives and temporarily stores instructions as well as the data to be processed. It moves and changes stored data. It does arithmetic calculations. It makes decisions of logic, such as determining if two numbers are equal. It directs the action of the input and output devices.
Fig. 1.2: Types of storage media
To a great extent a computer is defined by the power of its microprocessor. Chips with higher processing speed and more recent design offer the greatest performance and access to new technologies. Most microprocessors made for PCs are made by Intel or by companies that clone Intel chips, such as Advanced Micro Devices (AMD) and Cyrix.
 
INPUT
One of the best features of a computer is the ability to give the computer commands and feed it information. Without an input device this would not be possible. A hardware device that enables the computer to accept data is called an input device. The most common examples of input devices are the keyboard and the mouse. Input devices can be built into the computer, like the keyboard in a laptop, or connected to the computer by a cable. There are lots of others, such as trackballs, touch pads, touch screens, pens, joysticks, scanners, bar code readers, video and digital cameras and microphones. In addition, storage devices such as disk drives can serve as input devices.
 
OUTPUT
Input is important, but equally important is the ability to see what the computer is doing. A hardware device that reports information in a form we can understand is called an output device. The most common output device is the monitor or screen. However, most computers come with speakers and a printer, which are also excellent output devices. Storage devices such as disk drives and diskettes also serve as output devices when it is necessary to write new or updated data files to disk or tape.
 
EXPLORING COMPUTERS
 
Hardware and Software
Computers are made up of hardware and software. Hardware is the tangible, physical equipment that can be seen and touched. Examples of hardware are things such as the keyboard, printer, monitor and computer chips.
Software is the intangible set of instructions that tells the computer what to do. Software includes things such as games, MS Word, etc. People who write software (instructions that tell the computer what to do) are called programmers.
Programmers write instructions, or programs, for the computer so that it is able to execute a task or operate properly. A program can be defined as a series of detailed step-by-step instructions that tell the computer precisely what actions to perform.
Many people believe that computers can do just about anything and that their level of sophistication requires a genius to program and run them. In reality, computers are very simple devices that can perform only four basic functions. A computer can:
  1. Store data and programs
  2. Function unattended due to its ability to interpret and follow instructions it is provided
  3. Do arithmetic calculations and
  4. Perform logical comparisons.
What makes the computer such a powerful device, given only these four basic functions, is its tremendous speed, its accuracy and its ability to store vast volumes of data.
 
External or Auxiliary Storage
Nearly all general-purpose computers include the ability to connect to additional storage devices that hold data outside the memory of the computer. These additional storage devices are known as external or auxiliary storage. External storage devices are on-line to the computer; that is, they are connected directly to the computer. They are, therefore, under the control of the processor and can be used at all times. The most common form of external storage is a disk drive. Other forms of external storage include hard drives and CD-ROM drives.
The disk drive records data in a method similar to that used by a cassette tape recorder. The information is actually magnetically encoded onto the floppy disk. External storage is used to hold computer programs so they may be read into the computer's random-access memory (RAM) when they are needed. Large amounts of data may also be stored on external storage. Remember from an earlier section that data stored in RAM is temporary. Therefore, any data that needs to be kept for future use is usually recorded on a magnetic disk before the computer is powered down.
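A small Python sketch makes the difference concrete (the file name notes.txt is just an example): text held in a variable lives only in RAM and vanishes when the computer is powered down, while text written to a file survives on the disk.

```python
# Text held only in a variable lives in RAM and is lost at power-down.
notes = "Patient vitals recorded at 08:00."

# Writing it to a file records it on external storage (the disk).
with open("notes.txt", "w") as f:
    f.write(notes)

# Later, even after a restart, the file can be read back from disk.
with open("notes.txt") as f:
    print(f.read())
```

This is exactly what "saving your work" means: copying the contents of temporary RAM onto permanent storage.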
 
Microchips
Scientists all over the world are trying to overcome the physical limits of chip design technology that began less than three decades ago. Each country is rushing to be the first to develop microcomputer chips that contain one billion transistors.
Transistor technology preceded chip technology. Transistors were smaller, more efficient and more reliable than their predecessors, the vacuum tubes. The major limitation of transistors was their limited number of interconnections. In other words, because of the sophisticated circuitry of computers, an enormous number of connections between transistors were required. Unlike the number of transistor connections needed in a radio, the number of transistor connections in computers extends into the millions. Chip technology solved the interconnection problem by placing several transistors on a tiny silicon surface. Thus, computer power and storage capabilities expanded dramatically.
The number of transistors which can be placed on a chip has increased from fewer than ten in the early chips to thousands in the chips used today. Chip technology has enabled the chip's capacity to double every year since its creation until just a few years ago. However, today's chip designers have run into a problem concerning the physical limitations of a single chip; therefore, the interconnection problem of the transistor technology has resurfaced.
 
THE COMPUTER'S WORLD
To help you understand how the computer works, imagine that each character is represented inside the computer by a series of electronic switches. In many ways, these electronic switches can be compared to the light switches in our home. A light switch can be in only one of two states: on or off. The circuits inside the computer can be thought of in much the same way as the light switches. The electronic switches can be either on or off. Since on and off represent only two conditions, it is impossible to directly store numbers or letters. Instead, they are converted into binary numbers. Binary means “consisting of two things”, so a binary number is made by using only two digits, 0 and 1. Our number system, like much of the world's, is based on 10 (the decimal system), but the computer's is based on the binary system. Therefore the binary number system is the only coding system the computer actually understands.
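To make the idea concrete, here is a small illustrative Python sketch of how a decimal number is turned into binary: divide by 2 repeatedly and keep the remainders.

```python
# Convert a decimal number to binary by repeated division by 2.
def to_binary(n):
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # each remainder is one binary digit
        n //= 2
    return bits or "0"

print(to_binary(13))   # the decimal number 13 in binary
print(int("1101", 2))  # and back again to decimal
```

For example, 13 becomes 1101 in binary: one 8, one 4, no 2 and one 1, which add up to 13.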
But, in order to actually understand and then appreciate the present state of the world of COMPUTERS, we must know how it all started…
 
EVOLUTION OF COMPUTERS
Calculating machines capable of performing the elementary operations of arithmetic (addition, subtraction, multiplication and division) appeared in the 16th century. These were clever mechanical devices constructed from gears, levers and the like. The French philosopher Blaise Pascal invented a mechanical calculator that could add and subtract decimal numbers. Decimal numbers were engraved on the counter wheels, much like those in a car's odometer or in the counter of a petrol-dispensing machine. In Germany, Gottfried Leibniz extended Pascal's design to one that could also perform multiplication and division.
The evolution of computers can be explained mainly in terms of five generations, as follows:
 
GENERATIONS OF COMPUTERS
The history of computer development is often described in terms of the different generations of computing devices. A generation refers to the state of improvement in the development of a product; the term is also used for the different advancements of computer technology. Each generation of computers is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, more efficient and more reliable devices. As a result of miniaturization, the speed, power and memory of computers have increased proportionally. New discoveries are constantly being developed that affect the way we live, work and play.
Read about each generation and the developments that led to the current devices that we use today.
 
 
The Zeroth Generation: Mechanical Computers
In the 19th century Charles Babbage designed the first computer to perform multistep operations automatically, that is, without human involvement at every step. It was entirely mechanical in nature. He called it the Difference Engine.
It was designed to compute and print mathematical tables automatically. The Difference Engine performed only one arithmetic operation, addition. However, using the Method of Finite Differences built into the machine, it could calculate many complex and useful functions by means of addition alone. The most interesting feature of the Difference Engine was its output method: it punched its results onto a copper engraver's plate with a steel die.
 
The ELECTRONIC ERA began…
The first electronic computers appeared in the 1940s. Since then, development has proceeded at an enormous rate. The Second World War brought changes in every walk of life. During the war, precise machines were needed for ballistic calculations and hence the need for ‘something’ was felt.
 
The First Generation (1946–1958): The Vacuum Tube Years
The first computers used vacuum tubes (Fig. 1.3) for circuitry and magnetic drums for memory and were often enormous, taking up entire rooms. They were very expensive to operate and in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions. First generation computers relied on machine language to perform operations and they could only solve one problem at a time. Input was based on punched cards and paper tape and output was displayed on printouts. The first generation computers were huge, slow, expensive and often undependable.
The vacuum tube was an extremely important step in the advancement of computers. Vacuum tubes, which grew out of the same technology as Thomas Edison's light bulb, worked very much like light bulbs. A tube's purpose was to act as an amplifier and a switch. Without any moving parts, vacuum tubes could take very weak signals and make them stronger (amplify them). Vacuum tubes could also stop and start the flow of electricity instantly (switch).
In 1946, based upon these properties, two Americans, Presper Eckert and John Mauchly, built the ENIAC (Electronic Numerical Integrator and Calculator), which used vacuum tubes instead of the mechanical switches of the Mark I. The ENIAC used thousands of vacuum tubes, which took up a lot of space and gave off a great deal of heat, just as light bulbs do. The ENIAC led to other vacuum tube computers like the EDVAC (Electronic Discrete Variable Automatic Computer) and the UNIVAC I (UNIVersal Automatic Computer).
The UNIVAC (UNIVersal Automatic Computer) and the ENIAC are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the US Census Bureau, in 1951.
ENIAC was the first generation of today's modern computers. But, by today's standards for electronic computers the ENIAC was a grotesque monster. Its thirty separate units, plus power supply and forced-air cooling, weighed over thirty tons. Its 19,000 vacuum tubes, 1,500 relays and hundreds of thousands of resistors, capacitors and inductors consumed almost 200 kilowatts of electrical power.
ENIAC could discriminate the sign of a number, compare quantities for equality, add, subtract, multiply, divide and extract square roots. ENIAC stored a maximum of twenty 10-digit decimal numbers. Its accumulators combined the functions of an adding machine and a storage unit. No central memory unit existed; storage was localized within the functioning units of the computer.
The primary aim of the designers was to achieve speed by making ENIAC as all-electronic as possible. The only mechanical elements in the final product were actually external to the calculator itself. These were an IBM card reader for input, a card punch for output and the 1,500 associated relays.
It is interesting that the creation of the first real computers is linked to a Hungarian scientist, Neumann János (John von Neumann). After graduating from the University of Budapest he was invited to Princeton University in 1930. Three years later he became one of the six mathematics professors at the Institute for Advanced Study (IAS). He worked out the basic concepts of modern computers.
The ENIAC gave off so much heat that it had to be cooled by gigantic air conditioners. However, even with these huge coolers, vacuum tubes still overheated regularly. It was time for something new.
 
The Second Generation (1959–1964): The Era of the Transistors
Just as the evolution of televisions, radios and amplifiers during this time revolved around the shift from vacuum tube to transistor, so did the evolution of computers. The use of transistors (refer to Fig. 1.4) allowed radios, TVs, amplifiers and computers to become much smaller, faster, less energy draining, etc.
The transistor computer did not last as long as the vacuum tube computer had, but it was no less important in the advancement of computer technology. In 1947 three scientists, John Bardeen, William Shockley and Walter Brattain, working at AT&T's Bell Labs, invented what would replace the vacuum tube forever. This invention was the transistor, which functions like a vacuum tube in that it can be used to relay and switch electronic signals.
There were obvious differences between the transistor and the vacuum tube. The transistor was faster, more reliable, smaller and much cheaper to build than a vacuum tube. One transistor replaced the equivalent of 40 vacuum tubes. These transistors were made of solid material, some of which is silicon, an abundant element (second only to oxygen) found in beach sand and glass. Therefore they were very cheap to produce. Transistors were found to conduct electricity faster and better than vacuum tubes. They were also much smaller and gave off virtually no heat compared to vacuum tubes. Their use marked a new beginning for the computer. Without this invention, space travel in the 1960s would not have been possible. However, a new invention would even further advance our ability to use computers.
Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 50s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second generation computers still relied on punched cards for input and printouts for output.
Second generation computers moved from cryptic binary (0s and 1s) machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. Thanks to this rapid development, new types of careers (programmer, analyst and computer systems expert) began with these second generation computers. Throughout the early 1960s, there were a number of commercially successful second generation computers used in business, universities and government from companies such as Burroughs, Control Data, Honeywell, IBM, Sperry-Rand and others. These computers also contained all the components we associate with modern computers: printers, tape storage, disk storage, memory, operating systems and stored programs. The first computers of this generation were developed for the atomic energy industry.
Main memory (RAM) shifted from revolving magnetic drums to tiny wire-wrapped magnetic donuts called magnetic core memory. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology. This also allowed computers to become much smaller and more efficient.
The main advantage of second generation computers was the capability of storing programs and the use of programming languages. The stored program concept meant that instructions to run a computer for a specific function (known as a program) were held inside the computer's memory and could quickly be replaced by a different set of instructions for a different function. A computer could print customer invoices and minutes later design products or calculate paychecks.
Although transistors were an improvement over the vacuum tube, overheating was still a major drawback. A new component built on quartz rock eliminated this problem: Jack Kilby, an engineer with Texas Instruments, developed the integrated circuit (IC) in 1958.
 
The Third Generation (1965–1970): Integrated Circuits — Miniaturizing the Computer
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were a tremendous breakthrough in advancing the computer. However, no one could predict that thousands, even now millions, of transistors (circuits) could be compacted into such a small space. The integrated circuit (refer to Fig. 1.5), or semiconductor chip as it is sometimes called, packs a huge number of transistors onto a single wafer of silicon. Robert Noyce of Fairchild Corporation and Jack Kilby of Texas Instruments independently discovered the amazing attributes of integrated circuits. Placing such large numbers of transistors on a single chip vastly increased the power of a single computer and lowered its cost considerably.
Since the invention of integrated circuits, the number of transistors that can be placed on a single chip has doubled every two years, shrinking both the size and cost of computers even further and further enhancing their power. Most electronic devices today use some form of integrated circuits placed on printed circuit boards, thin pieces of bakelite or fiberglass that have electrical connections etched onto them, sometimes called a motherboard.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
The IC combined three electronic components onto a small silicon disk, which was made from quartz. Scientists later managed to fit even more components on a single chip, called a semiconductor. As a result, computers became ever smaller as more components were squeezed onto the chip.
Still in the sixties, in 1965 precisely, the mouse was invented. However, it was not introduced to the public until the mid-eighties. Another important event was the founding of the Intel Corporation by Robert Noyce and a few colleagues in 1968. They developed the first Random Access Memory (RAM) chip in 1970. Called the 1103, it had a capacity of 1 K-bit, i.e. 1024 bits.
These third generation computers could carry out instructions in billionths of a second. The size of these machines dropped to that of small file cabinets. However, the single biggest advancement of the computer era was still to come.
 
The Fourth Generation (1971–Present): Microprocessors and Very Large Scale Integration (VLSI)
This generation can be characterized by both the jump to monolithic integrated circuits (millions of transistors put onto one integrated circuit chip, Refer to Fig. 1.6) and the invention of the microprocessor (a single chip that could do all the processing of a full-scale computer). By putting millions of transistors onto one single chip, more calculations at faster speeds could be achieved. Because electricity travels about a foot in a billionth of a second, the smaller the distances inside the chip, the greater the speed of the computer.
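The "a foot in a billionth of a second" figure can be checked with a quick calculation. This sketch uses the vacuum speed of light as an upper bound; signals in real wires travel somewhat slower:

```python
# Distance an electrical signal can cover in one nanosecond,
# using the vacuum speed of light as an upper bound.
SPEED_OF_LIGHT_M_PER_S = 299_792_458   # metres per second
NANOSECOND = 1e-9                      # one billionth of a second

distance_m = SPEED_OF_LIGHT_M_PER_S * NANOSECOND
distance_ft = distance_m / 0.3048      # metres to feet

print(f"{distance_m:.3f} m ({distance_ft:.2f} ft)")  # about 0.3 m, roughly one foot
```

So even at the absolute physical limit, a signal covers only about a foot per nanosecond, which is why shrinking the chip shrinks the time each operation takes.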
The microprocessor brought in the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer — from the central processing unit and memory to input/output controls — on a single chip.
In 1981 IBM introduced its first computer for the home user and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs (graphical user interfaces), the mouse and handheld devices.
In the mid-seventies, people started to realize how diverse the uses of computers could be. They could be used not only for business but also for entertainment at home.
Pioneers in this field were Commodore, Radio Shack and Apple Computers. In the early 1980s, arcade video games such as Pac Man and home video game systems such as the Atari 2600 ignited consumer interest for more sophisticated, programmable home computers.
Soon everyday household items such as microwave ovens, television sets and automobiles with electronic fuel injection incorporated microprocessors.
This demand led to the introduction of the first Personal Computers in 1981. The biggest firm was IBM, which spread the new machines all over the world. The number of personal computers in use more than doubled from 2 million in 1981 to 5.5 million in 1982. Ten years later, 65 million PCs were in use. Computers continued their trend toward a smaller size, working their way down from desktops through laptop computers (which could fit inside a briefcase) to palmtops (able to fit inside a breast pocket). IBM's biggest opponent was Apple's Macintosh, introduced in 1984. Its main advantage was its user-friendly interface: the use of a mouse instead of typed commands.
 
The Fifth Generation (Present and Beyond)—Artificial Intelligence (AI)
Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.
The development of computers has arrived at its latest stage, the fifth generation. These machines are referred to as the ‘computers of the future’. Two important engineering advances underpin them: parallel processing, which replaces von Neumann's single central processing unit design with a system harnessing the power of many CPUs working as one, and superconductor technology, which allows the flow of electricity with little or no resistance, greatly improving the speed of information flow. However, this generation is still in its infancy.
No one knows exactly when the improvements will stop. Companies are launching faster and faster computers, the latest running at around 3.4 GHz and counting. Development proceeds at such a rate that no processor can be called modern for more than a month, a fact that makes this one of the most profitable businesses in the world.
It is interesting, or rather confusing, that it took 1200 years to develop a more modern counting device than the abacus, yet our latest computers become out-of-date in just around six months. Who can keep up the pace with these improvements? And who sets the pace? Is it essential to keep it? Do we actually need a 3.2 GHz Pentium 4 with 1024 MB of RAM and a 100 GB hard disk? Not necessarily. But if we want to find our way in the 21st century, a computer is essential. However, dependence on these machines could be hazardous to all mankind. Televisions, radios, hi-fi systems, microwave ovens, Walkmans, Discmans, cameras, mobile phones, digital watches, credit cards, etc. Let us imagine just one day without all these electronic gadgets.
Let us take a look at all the generations simultaneously to have an overall view (See Table 1.1).
 
IN SHORT, THE GENERAL TREND IS:
  • Smaller, Faster, Lower Cost, More Reliable, More Complex
  • Larger Capacity Memories, Easier Inputs and Outputs
  • Easier to Use, More Flexible, Smarter, Easier Access
“…With the advent of everyday use of elaborate calculations, speed has become paramount to such a high degree that there is no machine on the market today capable of satisfying the full demand of modern computational methods. The most advanced machines have greatly reduced the time required for arriving at solutions to problems which might have required months or days by older procedures. This advance, however, is not adequate for many problems encountered in modern scientific work and the present invention is intended to reduce to seconds such lengthy computations…”
 
CLASSIFICATION OF COMPUTERS
There used to be two fundamentally different types of computers:
  • Analog
  • Digital
Table 1.1   Generations of computers

Generation:                 First | Second | Third | Fourth | Fifth
Years:                      1946–58 | 1959–64 | 1965–70 | 1971–? | Soon–?
Typical machine:            UNIVAC | IBM 1400 | IBM 360 | Micros | Super micros?
Size:                       Room | Closet | Desk | Breadbox | Tablet?
Circuit device:             Vacuum tubes | Transistors | Integrated circuits | LSI¹ | VLSI²
Circuit density (#/unit):   One | 100s | 1,000s | Hundreds of 1,000s | Millions?
Speed (inst/sec):           100s | 1,000s | Millions | Tens of millions | Billions?
Reliability (time to fail): Hours | Days | Weeks | Months | Years?
Main memory device:         Magnetic drum | Magnetic core | Magnetic core | LSI¹ circuits | VLSI², superconductors
Memory size:                1,000s | Tens of 1,000s | Hundreds of 1,000s | Millions | Billions?
$/million instructions:     $10 | $1 | $0.10 | $0.001 | $0.00001?
Input devices:              Cards and paper tape | Cards | Key to tape/disk | Key, direct, optical | Speech, touch
Output devices:             Cards and print | Cards and print | Print and video | Print and audio/video | Graphics and voice
Software:                   Machine language | Symbolic languages | High-level languages | DBMS, 4GLs, SW packages | Natural language, GP packages
Some features:              Batch processing | Real-time, datacomm, overlaps | Timesharing, multiprogramming, multiprocessing | Virtual memory, distributed processing | Parallel processing, AI, robots

¹ LSI = Large Scale Integration: chips with several thousand transistors.
² VLSI = Very Large Scale Integration: produced a chip containing a microprocessor, which made the development of the microcomputer possible.
(Hybrid computers combine elements of both types.) In everyday use, the term “computer” refers to digital computers, a typical example being the common personal computer (PC). Digital computers are essentially simple machines that can understand and manipulate only series of elementary symbols: 0's and 1's (yes or no, true or false). The real power of a digital computer lies in the blinding speed with which it can check and manipulate these symbols, outperforming any human being.

In computer lingo, a bit (binary digit) is the term used to represent a 0 or a 1. A nibble is a series of four bits, a byte is a series of eight bits and a word is used to represent multiple bytes. The trick is to arrange these series of 0's and 1's in such a manner that they represent whatever needs to be symbolized. For example, one could assign these 0's and 1's to represent a special code instructing the machine what to do next and with what data. These designated sequences of 0's and 1's are called computer instructions.

It is these instructions, arranged in a structured sequence, i.e. a computer program, that distinguish the digital computer from a fast calculator. The sequence in which the instructions are executed can be altered depending on the outcome of previous instructions within the machine or on input from the outside world. Instructions arrange and shuffle the bits, nibbles and bytes in such a manner that the computer can perform complex calculations and then, minutes later, help us with word processing or enable us to record variables from a monitoring device we use during anesthesia.
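The bit, nibble and byte vocabulary above can be made concrete with a short sketch (Python is used here purely for illustration; the bit pattern chosen is arbitrary):

```python
# A bit is a single 0 or 1; a nibble is 4 bits; a byte is 8 bits.
# The very same 8 bits can stand for a number or, by convention,
# for a character -- meaning depends on how we choose to read them.
byte = "01000001"                              # eight bits = one byte

high_nibble, low_nibble = byte[:4], byte[4:]   # two nibbles per byte
value = int(byte, 2)                           # read the bits as a binary number
character = chr(value)                         # read the same bits as an ASCII code

print(high_nibble, low_nibble)                 # -> 0100 0001
print(value, character)                        # -> 65 A
```

The same eight symbols yield the number 65 or the letter A; it is the surrounding program that decides which interpretation applies, exactly as the paragraph above describes.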
Computers are available in different shapes, sizes and weights, and because of this they perform different sorts of jobs. They differ in size, speed of operation, amount of data that can be stored and the number of simultaneous users they support.
They can also be classified in different ways. All computers are designed by qualified computer architects, who design these machines to meet specific requirements.
A computer that is used in a home differs in size and shape from the computer being used in a hospital. A computer acting as a server in a large building differs again in size and shape from one performing its job as a weather forecaster.
A student carrying a laptop to college uses a computer different in shape and size from all the computers mentioned above.
Here we introduce the different classifications of computers one by one and discuss what falls into each class and what jobs those computers perform.
 
 
Super Computer
The biggest in size and the most expensive of all is the super computer. It can process trillions of instructions in seconds. This computer is not used as a PC in a home, nor by a student in a college.
Governments especially use this type of computer for their complex calculations and heavy jobs. Different industries also use these huge computers for designing their products.
In many Hollywood movies it is used for animation purposes. This kind of computer is also helpful in forecasting weather worldwide.
 
Mainframes
Another giant in computers after the super computer is the mainframe, which can also process millions of instructions per second and is capable of accessing billions of data items. Mainframes allow many simultaneous users, typically handle huge databases and can perform complex mathematical operations. We find them mainly in industry, research and university computing centers.
This computer is commonly used in big hospitals and airline reservation companies; many other huge companies prefer the mainframe because of its capability of retrieving data on a huge scale.
It is normally too expensive and out of reach for a salaried person who wants a computer for his home.
This kind of computer can cost up to thousands of dollars.
 
Minicomputer
This computer is next in line but offers less than the mainframe in work and performance. These are the computers mostly preferred by small businesses, colleges, etc. Minicomputers can support a smaller number of simultaneous users, typically 50 to 100. These machines are primarily used by larger businesses to handle accounting, billing and inventory records.
 
Microcomputers
The microcomputer is essentially a personal or desktop computer. These desktop PCs, which dwarf the capabilities of the huge early computers, are used extensively in the home (entertainment, communication, personal databases and spreadsheets) and in all types of businesses (word processing, accounting, inventory control, research).
 
Personal Computers
Almost all computer users are familiar with personal computers. They normally know what a personal computer is and what its functions are.
This is the computer mostly preferred by home users. These computers cost less than the computers described above and are also smaller in size; they are called PCs for short.
This computer is small in size and you can easily fit it, with all its accessories, in a single bedroom. Today it is thought to be the most popular computer of all.
 
Notebook Computers
Having a small size and low weight, the notebook is easy to carry anywhere. A student can take it to school in a bag along with the books.
It is easy to carry around and is preferred by students and business people for their assignments and other necessary tasks.
The approach of this computer is the same as that of the personal computer. It can store the same amount of data and has a memory of the same size as a personal computer. One could say it is a replacement for the personal desktop computer.
 
Palmtop/Hand-held Computers
Now computers that can be held easily in a single hand are available. They too have a screen, generally a touch screen, and are fitted with a pen-like stylus for input instead of a keyboard or mouse. Handhelds are usually popular with people whose work involves pointing rather than typing. These pen-based computers are known as personal digital assistants (PDAs). Being fitted with an electronic pen-like stylus, they can accept handwritten input as well.