Digital Computers: History: Contemporary Systems

For articles on related subjects see APPLE COMPUTER, INC.; ATLAS; COMPUTER ARCHITECTURE; COMPUTER INDUSTRY; CONTROL DATA CORPORATION COMPUTERS; DIGITAL EQUIPMENT CORPORATION VAX SERIES; IBM 1400 SERIES; IBM 360/370/390 SERIES; IBM PC AND PC-COMPATIBLES; LIVERMORE AUTOMATIC RESEARCH COMPUTER; MICROSOFT; MINICOMPUTERS; NAVAL ORDNANCE RESEARCH CALCULATOR; NCR COMPUTERS; PERSONAL COMPUTING; STRETCH; SUPERCOMPUTERS; UNIVAC I; VON NEUMANN MACHINE; WORKSTATION.

Since 1950, computers have advanced at a pace unparalleled in the history of technology. Processing speed and memory capacity have increased; size and cost have decreased by orders of magnitude. The pace has not been steady on all fronts, but has always been rapid, and it continues.

This phenomenal growth has led many to describe what happened in computing before some arbitrary date in the recent past as irrelevant "prehistory" or prologue. For some that date is 1945, before which there existed a primitive world of mechanical and electromechanical systems with little or no programmability. For others that date is 1974, before which computers were batch-programmed, inaccessible, and too big and expensive to be used as personal devices. For others that date is 1993, before which computers were isolated islands unable to communicate with one another over the World Wide Web. About all that is certain is that we have not seen the last of these transformations.

One can chronicle each dramatic advance, each "milestone" of computing that marks the passing of a certain threshold, e.g., from mechanical to electronic, from mainframe to personal, from isolated to networked. Such listings are valuable, but do little to aid one's understanding of the subject. Still, it appears impossible to make general statements about computing since 1945, as each new development threatens to render any such statement obsolete.

The notion that there were three major generations of computers, based on device technology (vacuum tubes, discrete transistors, and integrated circuits), served well to characterize machines for the beginning of the electronic era. But nearly all computers have used silicon integrated circuits since the 1970s. Yes, it is true that IC technology has advanced in those years, and the microprocessor was a significant milestone, but computers today still use a descendant of the IC technology invented by Robert Noyce and Jack Kilby around 1959. Computing has thus been in the "third generation" for as long as it took to progress from the ENIAC to the PDP-11, an early minicomputer.

The notion of generations is nevertheless useful if interpreted more broadly. All machines, especially those tested by the rigors of the marketplace, tend to be improved or modified by their designers in incremental ways. Periodically, designers introduce more radical improvements, and when they do, it is appropriate to speak of a new generation of product. Introducing a new device technology is one of several ways this can happen; also common is a thorough redesign of the machine's architecture. Thus, the history of computing is characterized not by three or four but by many generations. Present generation cycles in the computer business can last as little as 2 or 3 years. (See GENERATIONS, COMPUTER.)

Given this context, the question remains: Are there general characteristics one can use to understand the evolution of computing since 1950? A closer look reveals that at least a few such trends are present.

The von Neumann Architecture

First among these trends is the persistence of the von Neumann machine model of computer architecture through successive waves of hardware and software advances. That model, originally conceived by J. Presper Eckert q.v., John Mauchly q.v., and John von Neumann q.v. in the mid-1940s, emerged in response to the need for a practical design for the EDVAC q.v., a machine they were proposing as a follow-on to the ENIAC q.v., then under construction. But the von Neumann model's influence was to be much greater. Its persistence has come from its ability to organize and unify what otherwise would be a bewildering range of options about computer design. It has persisted also because it could be extended and radically modified without altering its basic structure. Despite limitations, the model has served as the foundation upon which the edifice of computer science and engineering has been built, and shows signs of remaining so into the future.

Modern computers hardly resemble those sketched out by the EDVAC team in the 1940s. Yet, just as one can see in a modern automobile decisions made by Henry Ford seven decades ago, the ancestral lineage is there. Today the term "von Neumann Architecture" implies a rigid division between memory and processing units, with a single channel between the two. Instructions as well as data are stored together in the primary memory, which is configured to be large, random-access, and as fast as practical. The basic cycle of a computer is to transfer an instruction from memory to the processor, decode that instruction, and execute it with respect to data that is also retrieved from memory. Despite all that has happened, these patterns, especially the last, remain. (The late Alan Perlis q.v. once remarked, "Sometimes I think the only universal in the computing field is the fetch-execute cycle.") From time to time, designers propose computers that radically deviate from the von Neumann model; since about 1990 these machines have found a small but secure niche in a few specialized areas. Still, the simpler structure outlined in the EDVAC Report remains the starting point even in the case of massively parallel, "non-Von machines" (see PARALLEL PROCESSING).
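
To make the basic cycle concrete, the following sketch (in Python) simulates a hypothetical single-accumulator machine; the opcodes, word encoding, and memory layout are invented for illustration and correspond to no historical design.

    # A minimal sketch of the fetch-decode-execute cycle described above.
    # The instruction encoding (opcode * 1000 + address) and the opcodes
    # themselves are invented and correspond to no historical machine.

    def run(memory, pc=0):
        """Run a toy single-accumulator machine whose program and data
        share one memory, as in the von Neumann model."""
        acc = 0
        while True:
            instruction = memory[pc]                     # fetch
            opcode, address = divmod(instruction, 1000)  # decode
            pc += 1
            if opcode == 1:                              # LOAD
                acc = memory[address]
            elif opcode == 2:                            # ADD
                acc += memory[address]
            elif opcode == 3:                            # STORE
                memory[address] = acc
            else:                                        # HALT
                return memory

    # Program and data occupy the same memory: add the words at
    # addresses 10 and 11 and leave the sum at address 12.
    memory = [1010, 2011, 3012, 0] + [0] * 6 + [25, 17, 0]
    print(run(memory)[12])                               # prints 42

The essential point is that the instructions and the data they operate on sit in the same memory, and the processor does nothing but repeat this fetch-decode-execute loop.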

The ideas contained in von Neumann's 1945 report were not his alone, nor was that report the definitive statement of what has become the accepted architecture of the modern computer. A full understanding came with the cooperative effort of many persons, working on different projects, between 1945 and about 1950. The EDVAC report described a machine that economized on hardware by doing everything, including register addition, one bit at a time. When von Neumann moved from the EDVAC project to one at the Institute for Advanced Study at Princeton, that concept was modified to allow for parallel arithmetic on each 40-bit "word." That required more hardware but simplified the design of the logical control unit and yielded faster arithmetic speeds. Data were likewise transferred to and from memory a word, not a bit, at a time, but like the sequential execution of program steps, memory transfer remained a serial activity, which has since become famous as the "von Neumann bottleneck."

The notion of having the word and not the bit as the basic unit of processing emerged among other one-of-a-kind computer projects in the late 1940s, as did the related notion of having a large, reliable, random access memory that could transfer a full word at a time. Most first-generation computers used serial memories, however, until reliable magnetic core memory became available in the mid-1950s.

What is most remembered about the EDVAC Report is its description of the stored program principle: the notion of storing a program's instructions in the same memory device as the data they acted on. As initially conceived, it had three features. First, it meant that the processor could fetch instructions at the same high speeds as it fetched data. Second, it meant that a computer could solve a variety of problems, in which the ratio of instructions to data would vary. Third, it meant that the processor could operate on and modify instructions as if they were data, especially by computing new addresses for operands required by an instruction.

The first two features were obvious advantages, but the last was not, and to some at the time it seemed unnecessary and even radical. By the mid-1950s people recognized that a program's ability to modify itself, if not in precisely the way von Neumann and his colleagues envisioned, was the most profound innovation of all. Indeed, by allowing computers to be programmed at levels far higher than individual processor instructions, this innovation is as much responsible for the present-day "computer age" as any advance in hardware. Although the EDVAC group hardly foresaw this, it is testimony to the originality of their thinking that the concept has proved so adaptable and seminal.
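
The third feature can be illustrated with a similar toy machine (again a hypothetical sketch with an invented encoding): the program below sums a three-element array by repeatedly adding 1 to the address field of its own ADD instruction, much as early programs stepped through arrays before index registers became widespread.

    # A hypothetical illustration of instruction modification: the program
    # steps through an array by adding 1 to the address field of its own
    # ADD instruction on every pass. Opcodes and encoding are invented.

    def run(memory, pc=0):
        acc = 0
        while True:
            opcode, address = divmod(memory[pc], 1000)  # fetch and decode
            pc += 1
            if opcode == 1:                             # LOAD
                acc = memory[address]
            elif opcode == 2:                           # ADD
                acc += memory[address]
            elif opcode == 3:                           # STORE
                memory[address] = acc
            elif opcode == 4:                           # SUB
                acc -= memory[address]
            elif opcode == 5:                           # JGZ: jump if acc > 0
                if acc > 0:
                    pc = address
            else:                                       # HALT
                return memory

    program = [
        1030,  #  0: LOAD 30   running total
        2020,  #  1: ADD  20   add an array element (this address field is modified)
        3030,  #  2: STORE 30
        1001,  #  3: LOAD 1    fetch the ADD instruction itself, as data
        2031,  #  4: ADD  31   add the constant 1: bumps its address field
        3001,  #  5: STORE 1   write the modified instruction back
        1032,  #  6: LOAD 32   loop counter
        4031,  #  7: SUB  31
        3032,  #  8: STORE 32
        5000,  #  9: JGZ  0    repeat while elements remain
        0,     # 10: HALT
    ]
    # Data: array of three values at addresses 20-22, total at 30,
    # the constant 1 at 31, loop count at 32.
    memory = program + [0] * 9 + [3, 4, 5] + [0] * 7 + [0, 1, 3]
    print(run(memory)[30])     # prints 12, the sum of the array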

Classes of Computers

One may classify computers into a few rough categories: supercomputer, mainframe, mini, workstation, and personal. These terms did not come into common use until the 1970s (and for the workstation the 1980s), but today they have fairly precise meanings. A look back reveals a functional as well as price differentiation in computers almost from the beginning of commercial computing. The ENIAC, with its emphasis on numerical processing for classified military customers, was the ancestor of the supercomputer, while the UNIVAC I, optimized for business data processing, was an early mainframe. Small, inexpensive drum-based computers such as the Bendix G-15, the Librascope LGP-30, and the Alwac III-E found a decent market in the late 1950s, although their use of vacuum tubes and their architecture differentiate them from the minicomputers of the 1960s.

Besides price, architecture is the principal way of assigning these classifications, but memory capacity, processing speeds, packaging, intended market, software, and other factors also come into play. At any given moment, the classes are distinct and represent a descending order of computing power, but over the years each category ratchets upward. Thus, today's personal computer has the power of yesterday's mini (and the day before yesterday's mainframe), but is still called a personal computer.

In the past each new category seemed to bubble up from a lower level, due mainly to advances in device technology. They often began as modest offerings designed to take advantage of a small niche poorly served by an established class, but soon grew out of that to become a full-fledged class of general-purpose computers of their own. New classes did not arise from the reduction in cost and size of the machines of a higher category. For example, in 1975, Digital Equipment Corporation introduced the LSI-11, a single-board, low-cost version of its popular PDP-11 minicomputer, but the LSI-11 did not inaugurate the personal computer era. The PC came instead from an upward evolution of simple 4-bit processor chips that were developed for cash registers, adding machines, and other modest devices. As these increased in power, they took on more and more properties of general-purpose computers. In the mid-1980s, a similar phenomenon occurred as companies introduced machines ("mini-supercomputers") that reached toward the performance of the supercomputer, but at far lower cost.

Software-Compatible Families of Computers

A third pattern has emerged, and it, too, is likely to persist: the emergence of not just single products optimized for scientific, process-control, or business applications, but families of general-purpose computers that offer upward compatibility of software. This gives customers an easy path to upgrade with the same vendor as their needs increase, and allows the manufacturer to lower costs by broadening the customer base.

A major portion of the costs of any computing system is the software developed for it. With a family of products, a vendor can amortize these costs over a longer period of time. That in turn can justify higher initial development costs, and thus produce better software. Alternatively, a successful family of machines allows a vigorous third-party software industry to flourish. This offsets the principal disadvantage of having such families, namely that it prevents one from taking advantage of advances in architecture or design and producing the "best" machine of the moment. Likewise, offering a family of machines based on a general-purpose architecture compensates for the fact that special-purpose architectures might work better for specific customers.

Philco marketed a series of software-compatible computers, the Transac S-2000 models 210, 211, and 212, between 1958 and 1964. Philco sold out to Ford, which subsequently left the computer business. The IBM System/360, introduced in April 1964, was the first commercial system based on a family of upward compatible processors, all of which were announced on the same day. Other notable families include the Univac 1100 series, the Burroughs B5000 and its successors, the CDC Cyber series, and the Digital Equipment Corporation VAX series. Since about 1980 Intel has maintained software compatibility with its 80x86 line of microprocessors. The company typically markets its latest chip for personal computers, while relegating the older chips (now at reduced prices) to embedded or other less-visible applications.

The need to maintain compatibility has retarded the adoption of advances in architecture or instruction set design. But if improvements in device technology can be incorporated without destroying compatibility, a manufacturer will do so as soon as practical. The overall result is short generational cycles of device technology, but less frequent cycles of change in architecture. Some advances in circuit technology require modifications to a system's architecture to take full advantage of them. A good initial design, though, can and should be robust enough to absorb advances and incorporate them while still maintaining software compatibility. The IBM System/360 used hybrid circuits, magnetic core memory, and a batch-oriented operating system. Over the years, IBM introduced integrated circuits q.v., semiconductor memory, virtual memory q.v., time-sharing q.v., and a host of other technical innovations, all while preserving software compatibility. The result kept this architecture commercially competitive into the 1990s.

Whether to drop a proven architecture and adopt a new one is a decision manufacturers constantly face. Given the relentless march of device technology, a company may feel it must take that step, although one can keep an obsolete design viable for a long time. When a company adopts a new architecture, its managers "bet the company" on the future design. The history of computing is full of examples of those who waited too long or who plunged too early into a new design.

Following are brief descriptions of representative machines that reflect the patterns described above. Machines up to and including the IBM System/360 are classified by the traditional generations; those following by their type: super, mainframe, mini, etc. In these descriptions the emphasis is on both the device technology and the overall system architecture.

The First Generation, 1950-1960

The first generation began around 1950 with the introduction of commercial computers manufactured and sold in quantity. Computers of the first generation stored their programs internally and used vacuum tubes as their switching technology. Beyond that they had little else in common. Each design used a different mix of registers, addressing schemes, and instruction sets. The greatest variation was found in the devices used for memory, and this affected the processor design. Each of the memory technologies available at the time had a drawback, which led to a variety of machine designs that favored one approach over another.

The reports describing the Institute for Advanced Study computer, written by Arthur Burks, Herman Goldstine, and John von Neumann, emphasized the advantages of a parallel memory device that could read and write a full word at a time. The device they favored, the RCA Selectron tube, took longer than expected to appear, and only the RAND Corporation's "Johnniac" used it. America's first commercial machine, the UNIVAC, used a mercury delay line, which accessed data one bit at a time. The only parallel devices available at the time were cathode-ray tubes (Williams tubes q.v.). These tubes, originally intended for other commercial applications, were notoriously unreliable. By far the most popular memory device for first-generation machines was the rotating magnetic drum. It was slow, but its reliability and low cost made it suitable for small-scale machines like the IBM 650, Bendix G-15, Alwac III-E, and Librascope LGP-30.

By the end of this period, machines were introduced that incorporated magnetic core memory. With the advent of ferrite cores - and techniques for manufacturing and assembling them in large quantities - the memory problem endemic to the first generation was effectively solved.

UNIVAC

The UNIVAC was designed by J. Presper Eckert and John Mauchly, and first delivered in 1951 (by which time their company had been acquired by Remington Rand). It was the first American computer to be produced as a series and sold to commercial customers. Eventually, over 40 were built. Customers included the U.S. Census Bureau, the Lawrence Livermore Laboratory, the U.S. Army and Air Force, the General Electric Corporation, and several insurance companies. Most customers used the UNIVAC for accounting, statistical, and other applications that would later be known as data processing q.v.

The UNIVAC used binary-coded-decimal arithmetic performed in four general-purpose accumulators. Word length was 45 bits; each word could represent 11 decimal digits plus a sign, or 6 alphabetic characters (6 bits per character plus 1 parity bit). Basic clock speed was 2.25 MHz, and the multiplication time was about 2 msec. Mercury delay lines stored 1,000 words in high-speed memory, while magnetic tape units stored up to 1 million characters on reels of metal tape.

The UNIVAC was ruggedly designed and built. Its central processor contained over 5,000 tubes, installed in cabinets that were arranged in a 10-foot by 14-foot rectangle. Inside this rectangle were the delay-line tanks. Many design features that later became commonplace first appeared with the UNIVAC: alphanumeric as well as numeric processing, extra bits for error checking, magnetic tapes for bulk memory, and buffers that allowed high-speed data transfer between internal and external memories without CPU intervention.

IBM 701, 650

At the time of the UNIVAC's announcement, IBM was not committed to electronic computation and was vigorously marketing its line of punched card tabulators. In response to the UNIVAC, IBM entered the computer market with several machines.

In 1952, IBM announced the 701 computer, originally called the Defense Calculator after its perceived market. True to that perception, most of the 19 models installed went to U.S. Defense Department or aerospace customers. Initial rental fees were $15,000 a month; IBM did not sell the machines outright. For primary memory, the machine used Williams tubes that could store up to 4,096 36-bit words. Oxide-coated plastic tape was used for back-up memory, and a magnetic drum provided intermediate storage. It could perform about 2,000 multiplications/second, but, unlike the UNIVAC, the 701's central processor handled control of the slow input/output facilities directly. IBM also developed a similar-sized, but character-oriented machine, the 702, for business customers.

IBM also initiated development of a smaller machine, whose origins lay in proposals for extensions of punched card equipment. In the course of its development, its nature shifted to that of a general-purpose, drum-based, stored program computer. IBM's acquisition of drum memory technology from Engineering Research Associates in 1949 was a key element in this shift. The machine, now called the IBM 650, was delivered in 1954, rather later than planned. It proved to be very successful; eventually, there were over a thousand 650 installations at a rental of about $3,500 per month.

By the time of its announcement, the 650 had to compete with a number of other inexpensive, drum-memory machines. It outsold them all, partly because of IBM's reputation and existing customer base of punched card users, and partly because the 650 was perceived to be easier to program and more reliable than its competitors. The 650's drum had a faster access time (2.4 msec) than other drum machines, although that was still slow - a limitation that precluded the use of drum-based machines for many important applications. The 650 may have had less impact among the business customers, for whom it was intended, than at universities, which were able to acquire the computer at a deep discount. There, 650s helped shape the emergence of the new discipline of academic Computer Science.

ERA 1103

Another important first-generation computer was the ERA 1103, developed by Engineering Research Associates, the Minnesota firm that Remington Rand bought in 1952. This machine was geared toward scientific and engineering customers, and thus represented a different design philosophy from Remington Rand's other large machine, the UNIVAC.

The 1103 used binary arithmetic, a 36-bit word length, and parallel arithmetic operation. Internal memory (1K words) was supplied by Williams tubes, with an ERA-designed drum for backup. It employed a two-address instruction scheme, with the first six bits of a word used to encode a repertoire of 45 instructions. Arithmetic was performed in an internal 72-bit accumulator. In late 1954, the company delivered to the National Security Agency and to the National Advisory Committee for Aeronautics an 1103 with magnetic core in place of the Williams tube memory - perhaps the first use of core in a commercial machine. (Core had by that time already been installed in the Whirlwind q.v. at M.I.T. and in a few other experimental computers.) For the NACA, ERA modified the instruction set to include an interrupt facility for its I/O, another first in computer design. Interrupts and core memory were later marketed as standard features of the 1103-A model.

IBM 704, 709

In late 1955, IBM began deliveries of the 704, its successor to the 701. The 704's most notable features were core memory (initially 4K words, up to 32K by 1957) and a rich instruction repertoire. The 704's processor had hardware floating-point arithmetic and three addressable index registers - both major advances over the 701. Partly to facilitate the use of floating point, an IBM team led by John Backus developed the programming language Fortran. Backus has said that he had not envisioned Fortran's use much beyond the 704, but Fortran became and has remained, with Cobol, one of the most successful programming languages of all time. IBM installed over a hundred 704s between 1955 and 1960.

In January 1957, IBM announced the 709 as a compatible upgrade to the 704, but it did not enjoy the same success. As it was being introduced, it became clear that transistors were finally becoming a practical replacement for vacuum tubes. Indeed, the transistorized Philco Transac S-2000 and Control Data 1604 were just being announced. IBM withdrew the 709 from the market and replaced it with the transistorized 7090. The new machine was architecturally identical to the 709, so IBM engineers used a 709 to write software for the as-yet-unbuilt 7090. The first delivery of the 7090 in late 1959 marked the beginning of IBM's entry into the solid-state era and serves as a marker for computing's "Second Generation."

The first-generation computers established a beachhead among commercial customers, but even considering the success of the IBM 650, they did little more than that. Punched card accounting equipment still did most of the work for businesses, while engineering and scientific calculating was done with slide rules, desk calculators, or analog computers. Machines like the ERA 1103 were too big, too expensive, and required too much specialized programming skill to be found anywhere but at the largest aerospace firms or government research laboratories. People still spoke of the total world market for large computers as being limited to very small numbers, much as one might speak of the demand for particle accelerators or wind tunnels.

The Second Generation, 1960-1965

The second generation of computing lasted from about 1960 to 1965, and was characterized by discrete transistors for switching elements and planes of ferrite magnetic cores for internal memory. In software, this era saw the acceptance of high-level programming languages like Fortran and Cobol, although assembly language programming remained common.

From the perspective of the late 1990s, these generations appear more like transitional periods than major discrete eras in computing. The term "revolution," as applied to the invention of the integrated circuit, obscures the fact that the IC's inventors saw their work as an evolutionary outgrowth of the materials, circuits, and packaging techniques pioneered in the discrete transistor era. This evolutionary approach hastened the acceptance of the otherwise exotic technology of the IC among computer designers. It was during the second, not the third, generation that some of the toughest challenges were faced, especially regarding the serial production of reliable devices with consistent performance. It took from 1949 to 1959 to bring transistors from the laboratory to commercial production in computers, but the basic knowledge gained during that decade hastened the advent of the IC, which went from invention to commercial use in half that time.

Transistors, replacing vacuum tubes on a one-to-one basis, solved the problems of a tube's unreliability, heat, and power consumption. As they solved those problems, they exposed another, which proved to be more fundamental. That was the complexity of interconnecting many thousands of simple circuits to obtain a complete system. Some manufacturers labored under the burden of hiring and training workers to hand-wire and solder the components to one another. Others built sophisticated assembly lines, adapting machines supplied by the shoe industry to insert the components into the proper places, after which automated wire-wrap machines (supplied by the Gardner-Denver Corporation) wired the backplane. Still, this "tyranny of numbers" would only be solved when the integrated circuit put the interconnections onto the same piece of silicon as the devices.

IBM 1401

One of the most important transistorized computers was the IBM 1401, introduced in 1960. This machine employed a character-oriented, variable-length data field, with one bit of each character code reserved to delimit the end of a field. As with the 650, the 1401's design evolved from a plug-wired, punched card calculator to a stored-program, general-purpose computer that utilized magnetic media (tape) as well as punched cards for its input/output. Magnetic cores provided a central memory of 1,400 to 4,000 characters, while transistorized circuits supported a multiplication speed of about 500 numbers/second. With the 1401 IBM also introduced the Type 1403 printer, a rugged and fast printer that carried type on a moving chain. This printer played an equally important role in effecting the transition from tabulators to computers for data processing.

IBM engineers took pains to make the 1401 easy to program by those trained to work with punched card equipment. A simple language called "Report Program Generator" (RPG) made it easy to automate routine processes, and to print results on standard tabular forms. The system's relatively small size meant that a customer could install a 1401 in the same room that was already used for punched card accounting equipment. This combination of features made the 1401 attractive to many small- and medium-sized businesses. Eventually, over 10,000 were installed - ten times as many as the 650. Its success marked the ascendancy of IBM over Univac as the dominant computer supplier.

Concurrently with the 1401, IBM also offered the 1620, a small machine intended for scientific applications. And in 1962 the company introduced the 7094, a version of the 7090 that added a set of index registers to its CPU. It, too, sold well and became the standard large-scale scientific computer of the time.

By the mid-1960s, the IBM Corporation had seized and was vigorously defending a dominant share of the U.S. computer market. Univac, Burroughs, NCR, RCA, Control Data, Philco/Ford, General Electric, and Honeywell were its chief competitors. Each produced machines that were comparable in price and capability to the IBM machines. By 1970, GE, Philco, and RCA had left the computer business, their places taken by new companies offering computers of a different nature than the classic mainframes of this era.

LARC, Stretch, Atlas, B5000

Several architectural innovations first appeared in second-generation computers, but they were premature. That is, the features saw only limited use until the next generation when they became commonplace.

In 1955, Remington Rand Univac contracted with the Lawrence Livermore Laboratory to produce a high-performance computer for weapons design. Design and development of the LARC (Livermore Automatic Research Computer) were beset with problems, but in 1960 the first model was completed and accepted by Livermore, with a second model delivered to the Navy's David Taylor Model Basin. The LARC achieved high processing speeds by having a separate processor whose only job was to handle I/O. Logic circuits used Surface Barrier Transistors, developed by Philco in 1955, but already obsolete by 1960. The LARC was an impressive performer, but after delivering two models for a total price of $6 million, Univac stopped production and absorbed a $20 million loss.

IBM undertook a similar project called "Stretch," implying that it would dramatically extend the state of the art. Work began in 1956, with the first delivery (to Los Alamos Laboratory) in 1961. Like the LARC, the Stretch introduced a number of innovations in architecture and device technology. Among the former was its use of a pipelined processor; among the latter was its use of very fast transistors and Emitter-Coupled Logic (ECL). A total of seven Stretch computers, under the name IBM 7030, were delivered before IBM withdrew the product. As with Univac's experience with the LARC, IBM absorbed a huge financial loss on the project.

The Atlas computer, introduced in 1962 by the British firm Ferranti, Ltd., employed virtual memory with paging, and provision for multiprogramming q.v. Whereas most first- and second-generation computers had at best only a rudimentary job control facility, Ferranti provided the Atlas with a "Supervisor" program that foreshadowed the operating systems q.v. common after 1965. In 1962, Burroughs introduced the 5000 series of computers that incorporated some of these innovations. This series was further designed for optimal execution of programs written in a high-level language (Algol - q.v.). Its processor architecture was also novel in its use of a stack-oriented addressing scheme. Neither of these features prevailed in the marketplace, but multiprogramming and virtual memory became common a generation later.

The Third Generation, 1965-1970

The IBM System/360, announced on 7 April 1964, inaugurated the third generation of computers. This series did not use integrated circuits, but rather small modules consisting of discrete devices laid onto a ceramic substrate. IBM had considered using the newly invented IC for the 360, but went instead with what they called Solid Logic Technology, in part because they had a better grasp of its manufacture in large quantities than they had with ICs.

The initial announcement was for a series of six computers, offering compatibility over a range of 25:1 in performance. System/360 computers were intended to be applicable to the full circle of applications (hence the name): specifically, to character-based data processing as well as number-oriented scientific problems. Eventually, over ten models were offered, plus additional models announced but not delivered or else withdrawn soon after initial delivery. The series eventually offered a several hundred-fold range in computing power.

The 360's designers achieved compatibility over that range by adopting several design innovations. The first was the use of base-register addressing, whereby an instruction referred to a short address. This address was added to a base address (stored in a register) to yield the actual location in core of the desired data. This kept the cost of address-decoding circuits low for the low-end models.
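
A minimal sketch of this address calculation follows; the 12-bit displacement limit reflects the general System/360 scheme, but the register contents and the surrounding code are invented for illustration.

    # A sketch of base-register address formation as described above.
    # The 12-bit displacement limit follows the general System/360 scheme;
    # the register values below are invented.

    def effective_address(base_reg, displacement, registers):
        """Add a short displacement to the contents of a base register
        to obtain the actual location in core."""
        assert 0 <= displacement < 4096       # only 12 bits in the instruction
        return registers[base_reg] + displacement

    # Suppose a program has been loaded at address 200000 and register 12
    # holds that base address (an assumed convention for this example).
    registers = [0] * 16
    registers[12] = 200000

    # An instruction naming base register 12 and displacement 0x4A0 reaches
    # location 200000 + 1184, wherever the program happens to be loaded.
    print(effective_address(12, 0x4A0, registers))   # prints 201184

Besides keeping address decoding cheap, the scheme let a program be loaded anywhere in core, since only the contents of its base register had to change.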

A second innovation was the use of microprogramming q.v. to achieve compatibility. Except for the Model 75, at the top of the line, each model of the 360 obtained its instruction set from a read-only memory (ROM - q.v.) containing a microprogram. That allowed designers of each model to aim for optimum cost/performance without being unduly constrained by the specifics of the 360 instruction set. The concept of microprogramming was first suggested by Maurice Wilkes in 1951, and had been implemented in the design of the Ferranti Atlas. Another British computer, the KDF-9, used microprogramming; 360 engineers later acknowledged that this machine inspired their decision to adopt it. The 360 established microprogramming firmly in the mainstream of computing, and led the way for its use in the minicomputer and microcomputer classes that followed.
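
The idea can be sketched with a toy interpreter in which each opcode indexes a read-only control store of register-transfer micro-operations; the micro-operation names and the three-instruction set are invented for illustration.

    # A toy sketch of microprogramming: each machine-level opcode indexes a
    # read-only control store holding a sequence of register-transfer
    # micro-operations. Names, encodings, and data paths are invented.

    CONTROL_STORE = {                 # the microprogram held in ROM
        "LOAD":  ["mar<-addr", "mdr<-mem[mar]", "acc<-mdr"],
        "ADD":   ["mar<-addr", "mdr<-mem[mar]", "acc<-acc+mdr"],
        "STORE": ["mar<-addr", "mdr<-acc", "mem[mar]<-mdr"],
    }

    MICRO_OPS = {                     # what each micro-operation does to the data path
        "mar<-addr":     lambda s: s.update(mar=s["addr"]),
        "mdr<-mem[mar]": lambda s: s.update(mdr=s["mem"][s["mar"]]),
        "acc<-mdr":      lambda s: s.update(acc=s["mdr"]),
        "acc<-acc+mdr":  lambda s: s.update(acc=s["acc"] + s["mdr"]),
        "mdr<-acc":      lambda s: s.update(mdr=s["acc"]),
        "mem[mar]<-mdr": lambda s: s["mem"].__setitem__(s["mar"], s["mdr"]),
    }

    def execute(opcode, address, state):
        """Carry out one machine instruction by stepping through its microprogram."""
        state["addr"] = address
        for micro_op in CONTROL_STORE[opcode]:
            MICRO_OPS[micro_op](state)

    state = {"acc": 0, "mar": 0, "mdr": 0, "addr": 0, "mem": [7, 35, 0]}
    execute("LOAD", 0, state)         # acc <- mem[0]
    execute("ADD", 1, state)          # acc <- acc + mem[1]
    execute("STORE", 2, state)        # mem[2] <- acc
    print(state["mem"][2])            # prints 42

A different model of the family could ship a different control store, with micro-operation sequences matched to its own data paths, while presenting exactly the same instruction set to the programmer - which is what allowed the 360 models to span such a wide range of cost and performance.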

The System/360 used channels q.v. for I/O - independent processors that handled the transfer of data between primary memory and peripheral devices. This allowed IBM to market a common set of I/O equipment to all customers, regardless of model. (The proliferation of incompatible peripherals for previous lines of products was one of the main forces behind the decision to develop the 360.)

By all accounts the 360 series was very successful. IBM sales personnel recorded over a thousand orders for systems within a month of the April 1964 announcement, and by 1970 there were over 18,000 installations worldwide. The architecture did, however, have serious shortcomings that were later corrected to varying degrees. Chief among them was its lack of dynamic address translation (the ability to associate program data with memory locations at run-time), which, among other things, made the System/360 unsuitable for time-sharing. When IBM upgraded the 360 series to the System/370 in 1970, its architecture was extended to provide this feature and virtual memory as well. A further extension of the 360 architecture was made in 1981, when the number of addressing bits was increased from 24 to 31. The basic architecture, much extended, was still being used in the 1990s in two lines of IBM products, the 43xx series and the 30xx series, which together were marketed as a "System/390" series.

The success of the 360 spawned competitors. In 1965, RCA began delivering four computers, the Spectra series, that were software compatible with the equivalent 360 models. These had the distinction of being built with true integrated circuits, but RCA was unable to sustain the line and sold its computer business to Univac in 1971. By that time other companies were offering computers with integrated circuit logic and semiconductor memory instead of magnetic core. IBM countered - some say it was forced to counter - with its System/370 series that used integrated circuits for both logic and memory.

Because semiconductor memory, unlike core, loses its information when power is switched off, the 370 needed a way to store its microprogrammed instructions in a non-volatile fashion. IBM engineers invented the floppy disk q.v. for this purpose. The floppy became the pivotal technology for establishing the personal computer class later in that decade.

The notion of a compatible family of machines was not the only 360 innovation that later became widely copied. The 360 adopted the 8-bit byte as the standard for representing characters, and it used multiple-spindle disk systems with removable disk packs. Microprogramming soon became the most common way to implement architectures. From the marketing of the system came the acceptance of many terms now used in computing: "byte," "architecture," and "generation," among others.

Minicomputers

The term "minicomputer" was coined in the mid-1960s by a Digital Equipment Corporation salesman to describe the PDP-8. The term really has two meanings, one informal and the other specific. Informally, a minicomputer is low in cost, small in size, and intended for use by a single individual, small department, or for a dedicated application. That concept was expressed as early as 1952, when several companies introduced computers aimed at such a market.

Producing such a machine with adequate performance was another matter. First-generation computers like the Bendix G-15, Alwac III-E, or Librascope LGP-30 achieved low cost by using a drum memory, which was incapable of high-speed random access to data. The low processing speeds meant that these computers were ill-suited for process control, laboratory instrumentation, or other similar applications where minicomputers first found a market.

A more specific definition recognizes the technical constraints that have to be overcome for a compact and inexpensive computer to be useful. By this definition, a mini is a compact, solid-state computer, with random-access memory, whose internal structure is characterized by a short word length and a variety of memory addressing modes. This definition requires that a minicomputer be small and rugged enough to fit in a standard equipment rack and thus serve as an embedded controller for other systems. Minis were much smaller and more rugged than what many people previously thought practical; their realization had to await advances in circuit technology as well as circuit board fabrication, power supply design, and packaging techniques.

This definition makes sense only in the context of the era in which the machines appear. Minicomputers, with microcomputers following close behind, have evolved to mainframe-class word lengths of 32 bits, and they eventually included models big enough to require a full-sized computer room. But the category has persisted. (For a time, 32-bit minicomputers were called "superminis," but the differences were not enough to constitute a separate class.)

The M.I.T. Whirlwind, completed in the early 1950s, used a 16-bit word length, and was envisioned for real-time simulation and control applications. It was housed in several rooms of a building on the M.I.T. campus, and in its initial configuration used fragile and sensitive electrostatic tubes for memory. It was hardly a minicomputer, but it was used like one. Many of the M.I.T. students and faculty who worked on it later founded the minicomputer industry located around the Boston suburbs.

In 1960, Control Data Corporation introduced a transistorized, 12-bit machine called the CDC 160. The 160 was intended as an input/output controller for the 48-bit CDC model 1604. The 160 could also be used as a computer on its own, and as such, was one of the first machines to fit the definition of a mini. The 160 was very compact - in fact, it was built into an ordinary office desk. Both the 160 and the 1604 sold well and helped establish CDC as a major computer manufacturer. The company continued building small machines, but concentrated on very fast, long-word computers - later called supercomputers - for which the 160 was designed as an I/O channel. Thus CDC failed to establish a minicomputer niche.

The Digital Equipment Corporation PDP-8, a 12-bit computer announced in 1965, made the breakthrough. Up to that time, DEC had produced and sold a variety of machines with varying word lengths, including the 36-bit PDP-6, and its PDP-10 successor, a full-size mainframe widely used in a time-sharing environment. The PDP-8's success established the minicomputer class of machines, with DEC the leading supplier, and spawned competitors: Varian, Hewlett-Packard, Computer Automation, and others.

Data General, formed by ex-DEC employees, brought out the 16-bit Nova in early 1969, and the company quickly became DEC's main competitor. The Nova had a simple but powerful instruction set and was the first to use medium-scale-integrated (MSI) circuits. Its word length set a standard for minis from then on. Just as influential was the Nova's packaging, especially for a model introduced in 1971. For both logic and memory the "Super" Nova used ICs housed in Dual In-line Packages (DIP), which were soldered onto a large printed circuit board. That was plugged into a bus along with other boards, and the whole computer was housed in a low rectangular metal box. A front panel contained a row of switches that gave access to individual bits of the CPU's registers. Modern computers no longer have the front panel, but in every other respect the Nova's physical packaging has been the standard ever since - so much so that one forgets there ever were alternatives.

DEC countered the Nova with its 16-bit PDP-11 in 1970, which kept the company competitive with Data General. These two computers, along with the HP-2000 series offered by Hewlett-Packard from the West Coast, may be said to define the minicomputer's "second generation." The PDP-11, in particular, redefined the role of minicomputers. The first minis like the PDP-8 were programmed mainly in machine code and typically embedded into other systems, but with the PDP-11 one could program in a high level language like Fortran and, with a full set of peripheral equipment, build a general-purpose computing facility around it instead of around a mainframe.

The mini's low cost, ruggedness, and compact packaging made it attractive to "original equipment manufacturers" (OEMs - q.v.), who purchased minis and embedded them into specialized systems for typesetting, process control, and a host of other applications. Having others develop the specialized software and interfaces was well-suited to small, entrepreneurial minicomputer firms that did not have the resources to develop specialized applications in-house. Several of the mainframe companies, including IBM, introduced minicomputers at this time, but the smaller firms propelled the industry.

A typical mini was microprogrammed and transferred data internally over a high-speed channel called a bus q.v. To gain access to more memory than could be directly addressed by a short word, its central processor contained sets of registers for base-offset, indirect, indexed, and other types of addressing, as sketched below. These designs made optimum use of the medium-scale integrated memory and logic circuits then becoming available.
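
The sketch below shows, for a hypothetical 16-bit machine whose instructions carry only an 8-bit address field, how indexed, base-offset, and indirect addressing reach a 64K-word memory; the mode names and field widths are invented, not those of any particular mini.

    # A rough illustration of the addressing modes mentioned above, for a
    # hypothetical short-word mini. Mode names and field widths are invented.

    def effective_address(mode, field, registers, memory):
        """Compute an operand address from an 8-bit address field."""
        if mode == "direct":           # the field is the address itself (0-255)
            return field
        if mode == "indexed":          # field + index register
            return field + registers["X"]
        if mode == "base_offset":      # field + base register
            return field + registers["B"]
        if mode == "indirect":         # the field names a word holding the address
            return memory[field]
        raise ValueError(mode)

    memory = [0] * 65536               # 64K words, far more than 8 bits can name
    memory[200] = 40000                # a pointer kept in low memory
    registers = {"X": 3, "B": 32768}

    print(effective_address("direct", 200, registers, memory))       # 200
    print(effective_address("indexed", 200, registers, memory))      # 203
    print(effective_address("base_offset", 200, registers, memory))  # 32968
    print(effective_address("indirect", 200, registers, memory))     # 40000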

The result was considerable processing power for the money. It was not long before customers began using them for general-purpose computation. As they did, the need for more address bits soon became pressing, in spite of the innovative addressing techniques the machines employed. Interdata, Systems Engineering Laboratories, and Prime all introduced machines with a 32-bit word length in the mid-1970s. These machines quickly became popular with NASA and other aerospace customers, who needed that power for computer-aided design and manufacture (CAD/CAM - q.v.) and real-time data reduction. DEC responded to this trend in 1978 with its VAX-11, a 32-bit "Virtual Address Extension" to the PDP-11. Data General announced its 32-bit Eclipse MV/8000 in 1980. Although these "super minicomputers" had the same word length, 32 bits, as mainframes, there were still differences in their instruction sets and use of buses instead of I/O channels.

The VAX soon began outselling the other 32-bit minis and went on to become one of the most successful computers of all time. Part of the reason was DEC's existing market position, but success was also due to the software compatibility the VAX had with the large installed base of PDP-11s. Internally, the VAX was a different machine, but it had an emulation mode that executed PDP-11 programs (eventually this feature was dropped). Also crucial to success was the VAX's ability to be networked through Ethernet, the Xerox-developed networking system that DEC chose in 1980. The VAX was further blessed with having available not one but two good operating systems: Digital's own VMS (Virtual Memory System) and Unix q.v., developed by AT&T and originally offered on a PDP-11. The combination of inherently good design, an adequate supply of semiconductor memory chips, networking, and software support enabled the VAX to compete with all but the largest mainframe computers, whose designs were beginning to look dated by 1980.

The VAX's success thus followed that of the IBM 360, in which a microprogrammed architecture allowed a range of models all running the same software. DEC continued supporting the system by offering a range of VAX machines that merged into the mainframe at the high end and the micro at the low end. The machine continued to be popular into the 1990s, until it was driven out of the marketplace by the 32-bit workstations, to be described later.

Supercomputers

On several occasions throughout the history of digital computing, there has been a desire to push the state of the art to obtain the highest performance possible. Indeed, one sees this force driving Charles Babbage, who in 1834 abandoned work on his promising Difference Engine q.v. to attempt a far more powerful Analytical Engine q.v., which he never was able to complete. The various "Giant Brains" of the late 1940s and early 1950s reflect this desire as well.

In 1954, IBM built a fast computer called the Naval Ordnance Research Calculator (NORC - q.v.) for the Naval Proving Ground in Dahlgren, Virginia. At its dedication, John von Neumann spoke of the tremendous advances in computer speeds, ending his talk with the hope that computer companies would continue from time to time "... to write specifications simply calling for the most advanced machine which is possible in the present state of the art."

IBM's Stretch and Univac's LARC fit that category. In the late 1960s, Burroughs built the ILLIAC-IV, a parallel-processing machine based on a design by Daniel Slotnick of the University of Illinois. These were well regarded by the customers who used them, but they usually incurred financial losses for the companies that manufactured them, even with the government subsidies each of these machines enjoyed. It remained for Control Data Corporation to find a way, not only to make reliable and practical supercomputers, but to sell them profitably as well. The machine that brought the term "supercomputer" into common use was its 6600, designed by Seymour Cray (1925-1996) and delivered in 1964.

The 6600's architecture employed a central processor with a 60-bit word, around which were arranged ten logical 12-bit peripheral processors, each having a memory of 4K words. Within the central processor were ten "functional units," which contained specialized circuitry that performed the operations of fixed- or floating-point arithmetic and logic. Logic circuits, taking advantage of the high-speed silicon transistors then becoming available, were densely packed into modules called "cordwood" from the way they looked.

The functional units permitted a measure of parallel processing, since each could be doing a different specialized operation at the same time. Added parallelism was provided through "lookahead," a process (pioneered on the Stretch) by which the CPU examined the instruction stream and determined to what extent operations could be fetched in advance of the time the functional units needed them. (Interestingly, this made a branch instruction that actually branched the most time-consuming operation on the machine.) Likewise, the peripheral processors could each be busy handling I/O, while the central processor was executing program steps that did not require connection with the outside world.

The 6600 went against the trend of using microcode to build up an instruction repertoire. It more closely resembled the approach taken by the first digital computers, including the electromechanical Harvard Mark I (1944) and the ENIAC (1946). In the Mark I, for example, there was no operation to "multiply." Instead, lines of paper tape were punched to route numbers to a multiplying unit. While doing the multiplication, the Mark I could be coded to do something else as long as it did not need that product (or the multiplying unit). The 6600 had two floating multiply units, each of which could perform a multiplication in 1 microsecond. It had no integer multiply command. Seymour Cray believed in a very sparse instruction repertoire, and his ideas presaged in many ways the current trend toward reduced instruction set computers (RISCs - q.v.).

CRAY-1

Control Data upgraded the CDC 6600 with the 7600 in 1969 and produced an incompatible supercomputer called the STAR in 1972. The latter machine was capable of parallel operations on vector data - a feature also used in the design of the Texas Instruments Advanced Scientific Computer (1972). Around that time, Seymour Cray left CDC and formed Cray Research, whose goal was to produce an even faster machine.

In 1976, Cray Research announced the CRAY-1, with the first delivery in March to the Los Alamos National Laboratory. Preliminary benchmarks showed it to be ten times faster than the 6600. The CRAY-1 had 12 functional units and extensive buffering between the instruction stream and the central processor. Memory options ranged from 250K words to 1 million 64-bit words. The chief difference between the 6600 and the Cray was the latter's ability to process vector as well as scalar data.

The CRAY-1 also achieved high speeds through innovative packaging. The computer used only four types of chips, each containing only a few circuits that used emitter-coupled logic (ECL). The circuits were densely packed and arranged in a three-quarter circle to reduce interconnection lengths. Circuit modules were interconnected by wires, laboriously soldered by hand. The modules were cooled by liquid Freon, which circulated through aluminum channels that held the circuit cards. Large power supplies located at the base of each column supplied power. These design decisions resulted not only in a fast machine, but also one that had a distinctive and deceptively small size and shape.

Prices for a CRAY-1 were on the order of $5 million and up. The CRAY-1 sold well and the company prospered. Control Data continued offering supercomputers for some time but eventually withdrew from the business. IBM had countered the announcement of the 6600 with its own 360 Model 91 (1967), which, however, was a commercial failure. Other machines based on the 360/370 architecture established IBM as a competitor in the class in the late 1980s. Cray Research announced the X-MP, a multiple-processor version of the CRAY-1, in 1982, the CRAY-2 in 1985, and the Y-MP in 1988. Several Japanese firms, including NEC and Fujitsu, entered the arena with machines in the supercomputer class in the mid-1980s. In the U.S., several start-up companies entered the field in the late 1980s with machines whose performance approached the CRAYs' at a much lower price.

By the 1970s, the supercomputer was established as a viable class, rather than as a collection of specialized, one-of-a-kind experimental machines. The persistence and ingenuity of one man, Seymour Cray, had a lot to do with that. Although the class is well established, the design of these machines tends to be idiosyncratic, with the personal preferences of individual designers playing a much larger role than they do in other classes. Each designer seeks the fastest device technology and pays close attention to packaging, but various architectural philosophies are followed. In contrast to Cray's approach, for example, Thinking Machines, Inc. of Cambridge, Massachusetts introduced a computer in the mid-1980s called the Connection Machine, which was characterized by a massively parallel architecture. Meanwhile, Seymour Cray left Cray Research and founded Cray Computer Corporation in 1989, where he continued to pursue fast performance using innovative packaging and materials. All agree that a degree of vector processing and other parallelism is necessary, but just how much is far from settled: whether to harness multiple von Neumann architectures in parallel, or to find a more radical alternative to the von Neumann fetch-execute cycle that some regard as a bottleneck.

With the end of the Cold War, the nuclear weapons labs no longer had the financial resources or desire to push supercomputer technology along as they once had. Many of the suppliers ran into financial difficulty. Cray Computer and Kendall Square Research went bankrupt in 1995-96; Cray Research was bought by Silicon Graphics in 1996; and Thinking Machines was reorganized as a software house in 1995. The demand for supercomputing is strong and growing for commercial applications, such as commercial aircraft and automobile design, chemical engineering, weather forecasting, and many others. What has changed is that since the end of the Cold War, this segment of the industry has had to deal with the issue of cost. If this segment can provide high performance at low cost, as other segments of the computer industry have figured out how to do, it will not only survive but grow, with or without Federal support.

Personal Computers

Many in the computer business saw the trend toward lower prices and smaller packaging occurring through the 1960s. They also recognized that lowering a computer's price and making it smaller opened up the market to new customers. With the hindsight of two decades of furious growth, it seems inevitable that a computer company would introduce a "personal computer." The truth is more complex. The personal computer's invention was not inevitable; if anything its viability was unforeseen by those in the best position to market one. The personal computer was the result of a conscious effort by individuals whose vision of the industry was quite different from that of the established companies.

To understand the transformation of computing brought about by the personal computer, one must begin with an understanding of such a machine's technical and social components. Some of the first electronic computers of the late 1940s were operated as personal computers, in that all control and operation of a machine was turned over to one user at a time. Prospective users took their place in line with others waiting to use it, but there were no supervisory personnel or computer operators between them and the machine. This mode of operation is one of the defining characteristics of what constitutes a personal computer.

In the mainframe world of the late 1960s, batch operation prevailed. But an alternate style of access arose that became known as a "computer utility": computing power made accessible to individuals through remote terminals accessing a centralized, time-shared mainframe. The physical location, maintenance, and operation of the mainframe were the concern of computer specialists and technicians, not the user. The user had the illusion that the full resources of the mainframe were available to him or her. That illusion, created by complex systems programming on a mainframe with lots of disk or drum memory, was crucial. Nearly all the pioneers of the personal computer era had such experiences on a time-shared system, and it was that illusion's appeal as an alternative to batch operation that they sought to recreate on a small system.

Ironically, while some used the time sharing model as the inspiration for their work on personal systems, others were blinded by the structure of time shared systems. For this latter group, time sharing's analogy to an electric power utility, with the implication that one needed a complex, expensive, centralized system to provide computer power, created a mental block that prevented them from recognizing how advances in semiconductors were rendering that model obsolete.

Throughout the late 1960s, the semiconductor manufacturers were continuing to place ever more circuits on single chips of silicon. Around 1970, these developments led to the first consumer products: digital watches, games, and calculators. Four-function pocket calculators, priced near $100, appeared around 1971, and the following year Hewlett-Packard introduced the HP-35, which offered floating-point arithmetic and a full range of scientific functions. The HP-35 sold for $395 and was an immediate success for Hewlett-Packard, a company that had not been part of the consumer electronics business.

Consumer sales of these products led to very long production runs of the chips that powered them. That in turn led to a stunning drop in price: within a few years, watches and calculators containing more circuits than the ENIAC were being given away as promotional trinkets. The computer industry had also enjoyed price reductions as sales increased, but nothing on this scale. To build and sell a general-purpose computer that way seemed impractical: the chips would be too specialized, and they would become obsolete too quickly to generate enough production volume to reach a consumer price level.

That changed in late 1971, when Intel introduced the microprocessor (the 4004), a chip on which much of the architecture of a minicomputer was implemented and whose functions could be modified by programming a read-only memory (the 4001). Intel designed this chip set for a customer, Busicom, that wanted to build calculators. When Busicom dropped the project, Intel was free to find another market for what was, in effect, a set of chips providing general-purpose computing functions. Compare Intel's experience with that of IBM and the first-generation 650: IBM started with a special-purpose design for specific punched card applications and ended up with a low-cost, general-purpose computer that served the initial application through software rather than hardware. The result was a very successful product that found applications across the computing spectrum.

Some individuals within DEC, Xerox, HP, and IBM proposed to build and market an inexpensive, general-purpose personal computer around this time, but their proposals were either turned down or only weakly supported. Meanwhile, Intel designed developer's kits, which it sold or even gave away to potential customers to familiarize them with the nuances of designing with a microprocessor. Rockwell, Texas Instruments, and others had all announced microprocessors by 1973. Intel followed the 4-bit 4004 with the 8-bit 8008 in 1972 and the more powerful 8080 in April 1974, priced at $360.

While that was going on among the large electronics and computer firms, other forces were pushing up from below. Radio and electronics hobbyist magazines began publishing articles on how to build modest digital devices using the TTL chips then becoming available at low prices. The space that the personal computer would eventually fill was being nibbled at from above, by cheaper and cheaper minicomputers, and from below, by programmable pocket calculators and hobbyists' kits.

In January 1975, Popular Electronics published a cover story on a computer kit that sold for less than $400. The "Altair" was designed for the magazine by MITS, a company of about ten employees located in Albuquerque, New Mexico. Despite its many shortcomings, this kit filled the hitherto empty niche of the "personal computer." It cost less than an HP-35 calculator. It was designed around the Intel 8080 microprocessor, with its rich instruction set, flexible addressing, and 64 Kbyte address space. Ed Roberts, the head of MITS, designed the Altair along the lines of the best minicomputers, with a bus architecture and plenty of slots for expansion. There were many things the Altair lacked, however, including decent mass storage and I/O. As delivered, it represented the minimum configuration of circuits that one could legitimately call a "computer."

But hobbyists were tolerant. In fact, hobbyists were the key to the launching of the personal computer. Their energy, enthusiasm, and talent had not been recognized by Intel, DEC, or the other established companies. Without that talent and energy, it was perfectly reasonable to predict, as one executive reportedly did, that the "personal computer will fall flat on its face." The personal computer established itself by tapping into that community and exploiting its labor to overcome the machine's severe technical deficiencies. Those who bought the Altair did so not because they had a specific computing job to do, but because they understood the potential of owning a general-purpose, stored-program computer. They understood, as the mini and mainframe makers did not, the social implications of the word "personal." The personal computer's social appeal was that its owner could do with it as he or she wished. Obviously that was not true of batch-operated mainframes, nor of minicomputers or even time-shared systems, even though the latter superficially resembled personal computers in their interactive capabilities.

Between the Altair's announcement and the end of 1977, the personal computer field witnessed an unprecedented burst of creativity and talent that transformed the device into something truly practical. This drama was played out in three arenas: hardware, software, and the social community of users.

The social community was perhaps the most important. Computer users' groups had been present ever since SHARE, founded in 1955 by users of the IBM 701. Digital Equipment Corporation had a good relationship with DECUS, and the mini companies also developed close ties with the Original Equipment Manufacturers (OEMs) who added value to the basic machine supplied by a manufacturer. For personal computers this community, and the work it did, was even more critical. On the west coast, the legendary Homebrew Computer Club was founded in March 1975, with the early meetings devoted to getting Altairs and comparable kits working. Newsletters and magazines sprouted, the most famous survivor being Byte, founded in September 1975. Byte was a fairly "normal" magazine, while Doctor Dobb's Journal of Computer Calisthenics and Orthodontia [sic] was typically filled with machine language code written in hexadecimal. Over a hundred such periodicals appeared in the decade following the Altair's announcement.

Some critical design decisions made by Ed Roberts and his small group at MITS set the course for the early hardware evolution of personal computers. The first was his choice of the Intel 8080, a decision that would reverberate through the computer industry for the next 25 years. The second was to design the machine along the lines of the Data General Nova and the advanced DEC minicomputers, with their open bus architecture. That allowed entrepreneurs to come out with circuit boards adding capabilities, such as better memory and I/O, that the original Altair sorely needed. Other companies, such as IMSAI, designed machines that were "clones" of the Altair. The result was that personal computers established a beachhead in the market without being tied to the fortunes of the tiny MITS (which was bought by Pertec and vanished from sight after a few years anyway).

Similar standards rapidly emerged in software. Not long after seeing the Popular Electronics article describing the Altair, William Gates III contacted Ed Roberts and told him that he would have a version of the BASIC programming language for the Altair by July 1975. Gates, with Paul Allen and Monte Davidoff, wrote and delivered the language on a paper tape as promised. Gates never became a MITS employee but instead retained the rights to the language for his company "Micro Soft," later "Microsoft." After MITS fell into financial trouble, Microsoft marketed the language to all the others who were making 8080-based machines. There were other versions of BASIC available, but Microsoft's was regarded as the best; it combined the ease of use of the original BASIC developed at Dartmouth with the ability to reach machine-level code (through commands like PEEK and POKE) that Gates and Allen took from versions of BASIC developed at Digital Equipment Corporation.
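
To make the flavor of those machine-access commands concrete, here is a minimal sketch in C; it is not drawn from the article itself, and the peek/poke function names, the byte array standing in for the 8080's 64 Kbyte address space, and the address 1024 are all illustrative assumptions rather than details of any actual Altair BASIC implementation.

    /* A hedged C analogue of BASIC's PEEK and POKE, which let a programmer
       read and write single bytes at absolute memory addresses.
       The array below merely stands in for a 64 Kbyte address space;
       the address 1024 is arbitrary and hypothetical. */
    #include <stdio.h>
    #include <stdint.h>

    static uint8_t memory[65536];          /* stand-in for the 64 Kbyte address space */

    static uint8_t peek(uint16_t address)  /* BASIC: PEEK(address)      */
    {
        return memory[address];
    }

    static void poke(uint16_t address, uint8_t value)  /* BASIC: POKE address, value */
    {
        memory[address] = value;
    }

    int main(void)
    {
        poke(1024, 65);                    /* BASIC: POKE 1024, 65        */
        printf("%d\n", peek(1024));        /* BASIC: PRINT PEEK(1024), prints 65 */
        return 0;
    }

The point of such commands was that, unlike the sandboxed Dartmouth original, a hobbyist's BASIC program could inspect and modify the machine directly, which mattered on hardware as spare as the Altair.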

Another critical piece of software was an operating system that allowed the newly invented floppy disk to serve as the personal computer's mass storage device. The system was called CP/M, for "Control Program (for) Microcomputers"; it was written by Gary Kildall (1942-1994), almost as an afterthought. CP/M, like Microsoft BASIC, was strongly influenced by work done at Digital Equipment Corporation and even used many of DEC's cryptic acronyms, such as "PIP," "TECO," and "DDT." Like the DEC minicomputer systems, it took up very little memory of its own and had none of the bloat that was characteristic of mainframe operating systems. Kildall sold it for under $100, and it made the floppy an integral part of the PC.

By 1977 computers were being packaged and sold as appliances, with three models introduced that year by Apple, Radio Shack, and Commodore setting the trend. None of the three used the Altair bus or architecture, but the Apple II used Microsoft BASIC, and one could plug in a card that let it run CP/M. The Apple II, though more expensive, was by far the superior machine, with very good color graphics, tight integration with floppy disk storage (the others relied on unreliable audio cassettes), and attractive packaging.

The field matured in 1981, when IBM introduced a machine called simply the IBM Personal Computer, which combined the best of the features described above with the respectability of the IBM name. The IBM PC was not IBM's first attempt in this market, but it quickly set a new standard. In many ways it was a descendant of the Altair. Its microprocessor was the Intel 8088, a 16-bit successor to the 8080. It had a bus architecture that invited others to provide cards to expand its abilities. It used Microsoft BASIC, supplied in a ROM. Like the Apple II, it could be configured with a color monitor to play games. Early versions had a cassette port, but most came with at least one floppy drive. Customers were given the option of one of three disk operating systems, but the cheapest, simplest, and first to market was PC-DOS, supplied by Microsoft and based in part on CP/M.

Apple, by 1981 one of the industry's fastest growing companies, did not feel threatened by IBM's entry into the field. The company considered its Apple II a superior machine anyway, and it thought that IBM's entry would make its own products more accepted by business customers. When IBM quickly took the lead in sales, Apple responded with the Apple III, which suffered from reliability flaws, and then, in 1984, with the Macintosh. The "Mac" was a closed machine and a philosophical opposite of the Altair/IBM approach, but its graphical user interface (GUI) was, once again, revolutionary. Sales were slow at first, but Apple had set a new standard.

Microsoft grew on the sales of its DOS operating system, which, like Microsoft BASIC, it was free to sell to others besides the company for which it had developed the software. As with the Altair, a vigorous clone market for IBM-compatible PCs emerged, led by the Texas firms Compaq and Dell. Compaq grew even faster than Apple had, going from the delivery of its first clone in 1983 to a place among the top 100 computer firms by 1985. As long as they could prove that they had not copied the code in the proprietary ROMs of IBM's PC, these companies were free to buy a copy of MS-DOS from Microsoft and sell a machine that ran any software developed for the IBM computer. Competition soon forced the prices of very capable personal computers to low levels. Like Apple, Microsoft had also developed a windows-and-icons interface, but its "Interface Manager" was no match for the Macintosh's elegant design. Later versions, renamed "Windows," were better, and by the early 1990s Microsoft had reoriented the IBM-compatible market from MS-DOS to Microsoft Windows, culminating in version 3.1 in 1992.

Workstations

In the mid-1980s, a number of companies introduced personal workstations, and since the late 1980s their architecture has reversed the trend set by the 360, the VAX, and the 80x86 series. Instead of using a complex, microcoded instruction set, these workstations use Reduced Instruction Set Computer (RISC) designs, which apply small instruction sets to many fast registers. These computers are intended for use by a single individual and provide high-resolution graphics, fast numerical processing, and networking capability. As such they combine the attributes of the personal computer with those of the higher classes. Their performance reaches into the low end of the supercomputer range, while prices for the simpler models touch the high end of the personal computer class. At present the more advanced personal computers, using the latest Intel microprocessor and running Microsoft's Windows 95 or NT, overlap the cheapest workstations, which typically use a SUN SPARC or other RISC microprocessor and run under a version of Unix. But the two classes, for the moment, remain distinct.

Several Boston-area companies were the pioneers in introducing workstations, but the lead was soon taken by the Silicon Valley company SUN Microsystems, founded in 1982. Part of the reason for SUN's success was its ability to make use of the "free" research being done at nearby universities: the hardware design was based on the Stanford University Network workstation project (hence the name), while Bill Joy, a student at Berkeley, developed, with ARPA funding, a version of Unix that SUN adopted. Joy, who became a SUN employee in 1982, was the final carrier of AT&T's Unix as it made its journey from New Jersey to Urbana, Illinois, to Berkeley, and finally to Mountain View, California, where Berkeley Unix became the operating system of choice not only for most workstations but also for the emerging Internet world.

IBM researcher John Cocke had done the pioneering work on RISC in the mid-1970s. After some false starts, IBM introduced the successful RS/6000 line of computers in 1990. In 1992 Digital Equipment Corporation "bet the company" on a RISC chip called Alpha, which achieved processing speeds that few thought possible for a microprocessor. The following year IBM, Motorola, and Apple joined forces to introduce a RISC processor for personal computers. The trio hoped that the superior architecture of their "PowerPC" chip would give it overwhelming advantages over Intel's 80x86 line.

To the surprise of many, neither the DEC Alpha nor the PowerPC has been able to take many customers away from SUN workstations or from the Intel-Microsoft personal computers, though both have achieved respectable market niches. There appear to be two reasons. Commercial software developers tend to write software for machines that already have a big market, such as those with Intel chips and Microsoft operating systems. Academic users rely heavily on university-produced free software, which until recently has most often been developed on SUN workstations and hence has been available to run on them first. Clearly the conditions that governed earlier decisions to move to a new architecture, as IBM customers did when the System/360 was announced, have changed.

Networking

The biggest change in computing since 1990 has been the emergence of networking as an integral part of what it means to have "a computer." Outside the home, not only workstations but also personal computers are linked into local networks. Typically some form of Ethernet is used, although for personal computers the leader has been a proprietary network offered by Novell. On the national and even global scale, the Internet has moved rapidly from something available only to a few to a necessary feature bundled into every installation.

The potential of such an interconnection was long recognized by those with access to its predecessor, ARPANET, and to early versions of the Internet. What caused the breakout to a mass market was the development of software called the World Wide Web q.v. around 1992, by researchers at the European particle physics laboratory CERN who wanted to communicate the results of their work to their colleagues. Shortly after that, a program called "Mosaic" was developed by Marc Andreessen and colleagues at the University of Illinois supercomputer center. Mosaic allowed one to "browse" the World Wide Web through a graphical interface similar to the Macintosh's. Both the World Wide Web and Mosaic were developed at government-funded scientific research centers: not only was this yet another example of how government funding pushed computing to a mass market, it also meant that the products of that research would be available free or at low cost. One may say that the Internet came of age in April 1994, when Jim Clark left Silicon Graphics and, with Marc Andreessen, founded Netscape, a company whose goal was to commercialize Mosaic.

Conclusion

"What is Past is Prologue" - the phrase inscribed on the facade of the U.S. National Archives building in Washington - applies to computing with a vengeance. It is hard to avoid the impression that whatever happened in computing from 1945 to 1995 was "mere" prologue to the present culture of the World Wide Web. Although computers continue to be designed, manufactured, and sold as discrete entities, it seems no longer right to discuss the history of computing without focusing as much on the way the machines are networked. That implies that the patterns of stability that have guided the history just told may no longer apply.

The von Neumann architecture still reigns, but it no longer seems to be as central an organizing principle. Supercomputers with massively parallel designs are now the norm, while the Internet has blurred the distinction between the computing that goes on inside the box and what goes on "out there." New programming languages such as Java point to a future in which that distinction may become meaningless.

As far as human-computer interaction goes, many of the basic choices appear quite stable. The rectangular box, filled with silicon integrated circuits, with an attached keyboard and monitor, has now been a physical standard for about 20 years, as long as anything has remained stable in computing. There are as many advantages now as ever to smaller machines, but designers now face the limits of the human body: the size of the fingers and hands, and the visual acuity of the human eye. "Laptop" q.v. computers have emerged as a subclass, but these are plagued by short battery life and keyboards that are hard to type on.

An even smaller class of shirt-pocket machines known as "personal digital assistants" has also emerged. These use a stylus and handwriting recognition for input, but they are not displacing the more standard configuration. Many people prefer the old "QWERTY" keyboard left over from mechanical typewriter days, even if using one exposes a person to repetitive-motion injury. As for output, LCD screens can be made smaller and lighter, but users often prefer CRTs, which, though not as good as the printed page, have crisp, easy-to-read screens. The tiny LCD screens of personal digital assistants may look fine to their young developers, but as people age their eyes can no longer focus on such tiny type. Other I/O methods, such as voice, are being developed and may soon become commonplace. Researchers at Xerox PARC are working on a concept of wearable computers: devices embedded in eyeglasses, credit cards, ID badges, and clothing. The idea holds great promise, although such devices will more likely supplement, not replace, the general-purpose machine on one's desk.

Software compatibility, which began as a relatively simple notion with the IBM System/360, continues to be as important as ever, but in a different way. For personal computers the issue has been compatibility with the latest version of Microsoft's operating system; for workstations it has been compatibility with a version of Unix. The Internet has established the networking standard TCP/IP, while the World Wide Web has been built around a standard called HTML (see MARKUP LANGUAGE). Some argue that these last two standards make adherence to operating system standards less critical. How that will play out remains to be seen, but in one form or another software compatibility will remain an issue in the future.

In the 1992 version of this article, written for the previous edition of the Encyclopedia, the author identified two needs that computers at that time were not meeting: ease of use and communications. With the emergence of the World Wide Web, along with cellular digital telephones, pagers, and new-generation satellite systems, the second need has been met (though the great success of the Internet has the potential to create new problems). The first need, ease of use, has not been met. The Macintosh graphical screen, rightly heralded in 1984 as a breakthrough in ease of use, has evolved into a baroque clutter of icons whose meanings are by no means obvious and whose functions often contradict or overlap one another. The situation has gotten worse, not better. Computing cries out for a new generation of designers who can, as those at Xerox PARC and Apple once did, cut through the Gordian knots of software complexity. Until that day, what the previous version of this essay said still, unfortunately, holds true: "computers remain difficult to use, frustrating, and overly complex in the way they present software to their owners." Perhaps in the next decade, with the other problems of performance, reliability, memory capacity, and communications well under control, this problem will once again be attacked and solved.

PAUL E. CERUZZI National Air & Space Museum Washington, DC 20560


