IBM Mainframes -- Operator's console

The IBM z Systems mainframe has never stopped evolving throughout its history. The world's most competitive businesses trust the mainframe for the industry's best security, 99.999% reliability, the fastest insights and lower TCO than public cloud. Read about the history of the mainframe below. Then, learn what's new with the mainframe.

"Mainframe" defined

The IBM Dictionary Of Computing defines "mainframe" as "a large computer, in particular one to which other computers can be connected so that they can share facilities the mainframe provides (for example, a System/370 computing system to which personal computers are attached so that they can upload and download programs and data). The term usually refers to hardware only, namely, main storage, execution circuitry and peripheral units."

Evolution of the mainframe

The first general purpose automatic digital computer built by IBM dates back to 1944. It was an electromechanical machine developed in conjunction with Harvard University and was known as the Automatic Sequence Controlled Calculator. It performed additions in one-third of a second and multiplications in six seconds.

In 1948, IBM introduced the Selective Sequence Electronic Calculator which contained 21,400 electrical relays and 12,500 vacuum tubes, enabling it to do thousands of calculations in seconds.

The Korean War sped the development of large-scale computers. In 1952, IBM announced its first fully electronic data processing system, the IBM 701. The four years between the Selective Sequence Electronic Calculator and the 701 produced great advances in information technology. The 701 was only one-quarter the size of the SSEC and 25 times faster.

During the next few years, even faster and more versatile vacuum tube machines were developed. The IBM 650 was among the best known, with nearly 2,000 units produced. In fact, the 650 was the most popular computer of the 1950s.

Along with the improvement in vacuum tube machines came the introduction of the IBM RAMAC 305 in 1956. This system utilized a vertical stack of 50 aluminum disks coated with iron oxide. It permitted information to be magnetically coded on these revolving disks and entered and retrieved from the storage file on a completely random basis. Prior to this development, information had to be "batched" or sorted into sequence before processing. RAMAC, and developments which evolved from it, greatly increased the scope of data processing.

By the mid-1950s, transistors had begun to replace vacuum tubes in computers. In 1958, IBM announced the 7070 Data Processing System which incorporated solid-state technology offering several advantages over vacuum tube machines. Solid-state devices, such as the transistor, were generally smaller, more reliable and generated less heat than comparable vacuum tube components.

In 1959, IBM introduced two of its most important computers. These were the 1401 Data Processing System, widely used for business applications, and the 1620 Data Processing System, a small scientific and engineering computer used for such diverse applications as automatic typesetting, highway design and bridge building.

IBM 1401 Data Processing System

The following year saw the introduction of the large-scale 7000 series, the 1410 and Stretch (IBM 7030), the most powerful scientific computer designed up to that time.

These were the years when the range of systems greatly increased. Much smaller and much larger systems became available. The compact, low-cost 1440 Data Processing System, a machine designed for small and medium-sized businesses, was introduced in 1962. At the other end of the scale, IBM made available the 7094, a powerful system widely used in the aerospace industry for such jobs as simulation of rocket engines and for scientific computing in research laboratories around the world.

The usefulness of computers was greatly expanded by the introduction of IBM data transmission terminals enabling far-flung locations to communicate with a central computer to enter or retrieve information. This ability to communicate with the computer meant that information stored in the system could be automatically updated as transactions occurred and made available upon request to headquarters management as well as field personnel. IBM "Tele-processing" terminals were used, for example, by airlines to provide instant passenger reservation service, banks to update customer files, insurance companies to speed claims processing, factories to report production status and assure quality control, and retailers to speed ordering from wholesalers.

Yet, with all the advances in computer technology, programming and applications in the late-1950s and early-1960s, there were still obstacles to overcome in making maximum use of electronic data processing capabilities.

To provide an expandable system that would serve every data processing need, IBM redesigned its entire product line. The result was the new generation System/360, combining new electronic techniques with advanced computer concepts.

System/360 -- announced in April 1964 -- represented the first basic reorganization of the electronic computer by IBM since the development of the 701 in 1952. More than any other computer development, it tied together the "loose ends" of electronic data processing and offered users a total system capability at a price they could afford.

Specifically, the new system enabled companies to integrate all of their data processing applications into a single management information system. Virtually unlimited storage and instant retrieval capabilities provided management with up-to-the-minute decision-making information.

System/360 included in its central processors 19 combinations of graduated speed and memory capacity. Incorporated with these were more than 40 types of peripheral equipment. Built-in communications capability made the system available to remote locations, regardless of distance.

A System/360 installation

Until the advent of the System/360, large-scale storage had been costly, and a certain amount of reprogramming had been necessary to use added core units providing additional memory. With System/360, limited storage capacity was no longer an obstacle to the maximum use of a computer. System/360 processors provided a central memory capacity of from 8,000 to 524,000 characters. Additional low-cost storage of up to eight million characters was available with any of the larger configurations.

With System/360, it was no longer necessary to match a user's problem to a specific piece of equipment because of differences in machine design and problem-solving capacity. System/360's units could be combined in an almost infinite variety of ways so that the system was literally tailored to a customer's job.

The built-in communications capability of System/360 allowed the user to greatly increase the scope of computer usefulness. Up to 248 data transmission terminals could communicate with the computer simultaneously -- even when it was busy on a batch processing job.

The System/360 also ended the distinction between commercial and scientific computers. Each System/360 processing unit had the ability to process work through small binary, decimal or floating point arithmetic centers. This meant that the same System/360 configuration could handle commercial work, scientific work or a combination of the two, with equal effectiveness.

Starting with the System/360, the mainframe's circuits were closely combined on half-inch ceramic modules. With the new Solid Logic Technology (SLT), the smallest of the five System/360 processors originally announced could perform 33,000 additions a second; the largest, three-quarters of a million. Statistically, an SLT module averaged 33 million hours before failure. SLT provided not only a technological building block for the System/360; it also solidified IBM's commitment to supplying much of its own component technology. In addition to developing this technology, IBM also built the equipment to make and test it.

The era of micro-miniaturization had begun. But it was still a substantial step from SLT to integrated circuits, in which all of the same elements -- resistors, capacitors and diodes -- were fabricated on a single slice of silicon. The resulting monolithic technology was an industry-wide development spanning the 1970s that opened the door to large-scale integration. In 1970, IBM rolled out a 128-bit bipolar chip that was used in the industry's first all-monolithic main memory. Introduced in the IBM System/370 Model 145 that year, the chip measured less than 1/8-inch square. It launched IBM into a promising new technology.

A System/370 Model 145 installation

Scarcely had the computer lexicon digested "monolithic" when it was embellished by "RAM" -- an expression coined some years before for random access memory. The low power requirement and low cost of the chip helped make it the choice for main memory, where data is constantly in motion.

With the whole computer world trying to cram more and more circuits onto a chip, IBM led the industry when, in 1978, it became the first to mass-produce and use a RAM chip storing more than 64,000 bits of data.

The 64K chip was the first of a whole family of progressively larger capacity chips produced by a unique process known as SAMOS -- short for Silicon and Aluminum Metal Oxide Semiconductor. In 1982, IBM announced an experimental chip capable of storing more than 288,000 bits of information -- equivalent to four copies of the Declaration of Independence.

In April 1984, the most recent addition to the SAMOS family -- a one-megabit chip -- arrived. "Mega" means million but the chip actually held 1,048,576 bits of information in a space smaller than a child's fingernail.
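The gap between "mega" in everyday speech and the chip's actual 1,048,576 bits comes from powers of two: memory capacities double with each address line, so a "megabit" is 2^20 bits rather than a decimal million. A quick illustrative check (modern Python, not anything from the era):

```python
# In memory sizing, "mega" means 2**20, not 10**6.
megabit = 2 ** 20
print(megabit)            # 1048576 -- the figure quoted for the 1984 chip
print(megabit - 10 ** 6)  # 48576 "extra" bits beyond a decimal million
```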

Improvements over the years in the electronic devices used in mainframes could not have been realized without equally ingenious advancements in packaging. Because increasing the number of logic circuits on a chip increases the number of connections that must be made between them, IBM devised new multilayer ceramic packaging technology to create fine three-dimensional networks linking thousands of devices.

In the IBM 3081 processor, for example, the length of wiring between chips was about one-eighth that in the previous large-scale mainframe -- the IBM 3033 processor -- reducing the time it took for electric pulses to pass between components. The result was a twofold reduction in processor cycle time.

But greater densities created another challenge. Components jammed together give off a fair amount of heat. If not dissipated, the heat is enough to destroy the chips. The solution in the 3081 was to draw off the heat through a plunger surrounded by helium gas into a "hat," which, in turn, was cooled by chilled water circulating inside an attached conduit. The whole assembly was called the Thermal Conduction Module (below).

Thermal Conduction Module
Over the years, advances in mainframe performance have rested not only on revolutionary developments in microelectronics. Innovations in processor architecture and in programming have also played a significant role.

For example, computer scientists long sought to break down complicated tasks into simpler ones so that different parts of a problem could be worked on in parallel. A common name for the technique is "pipelining." The pioneering Stretch computer was among the first to overlap operations so that it could start processing a second set of numbers while the first was still in the "pipeline."
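The overlap that Stretch pioneered can be sketched with a small modern analogy (illustrative Python, not IBM code): two stages connected by a queue, so the second stage works on one result while the first stage is already producing the next -- the essence of a pipeline.

```python
import queue
import threading

def pipeline(items, stage1, stage2):
    """Two-stage pipeline: stage2 consumes stage1's output while
    stage1 keeps producing, in the spirit of Stretch's overlap."""
    q = queue.Queue(maxsize=1)  # hand-off buffer between the stages
    results = []

    def producer():
        for x in items:
            q.put(stage1(x))    # stage 1: e.g. fetch/decode
        q.put(None)             # sentinel: no more work

    def consumer():
        while (y := q.get()) is not None:
            results.append(stage2(y))  # stage 2: e.g. execute

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results

# Both stages run concurrently on different items.
print(pipeline(range(4), lambda x: x + 1, lambda y: y * 10))
```

The stage functions here are placeholders; in a real processor pipeline the stages would be fetch, decode and execute hardware, not Python callables.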

Employment of more than one processor was another departure to make mainframes more productive. Guided by a sophisticated operating system, the IBM 3084 processor complex, for example, could keep four processors busy at work -- all able to dip into the same pool of data and instructions. If one or more processors were shut down for maintenance, the rest could stay on the job.
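The 3084's arrangement -- several processors drawing from one shared pool of data and instructions -- is loosely analogous to what a modern worker pool does. A hedged sketch in Python (purely illustrative; the names and workload are invented, not from the 3084):

```python
from concurrent.futures import ThreadPoolExecutor

# Four workers share one pool of tasks, loosely analogous to the
# 3084's four processors dipping into the same data and instructions.
tasks = list(range(8))
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda n: n * n, tasks))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

As with the 3084, if one worker stalls, the others keep draining the shared pool; the analogy breaks down for maintenance shutdowns, which the 3084 handled at the hardware level.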

In a September 1990 blockbuster announcement, IBM introduced System/390 -- the company's most comprehensive roll-out of products, features and functions in more than a quarter of a century. Encompassing a family of 18 new IBM Enterprise System/9000 processors (10 air-cooled models and eight water-cooled models), System/390 drew on such technologies as high-speed fiber optic channels with IBM's new ESCON architecture, ultra-dense circuits and circuit packaging for higher performance, integrated encryption/decryption for sensitive data, extended supercomputing capabilities, and twice the processor memory previously available.

System/390 processor

It was around that same time that some industry observers were declaring the impending death of the mainframe. One such analyst wrote in the March 1991 issue of InfoWorld, for example, "I predict that the last mainframe will be unplugged on March 15, 1996."

To be fair, the "mainframe," circa 1991, was a dead end. But IBM believed (along with a lot of its customers) that this way of computing -- serious, secure, industrial-strength -- would always be in demand. Hence, the System/390. With the 390, IBM stuck with "big iron" but reinvented it from the inside -- infusing it with an entirely new technology core, reducing its price, and building support for open standards and operating environments like Linux. That new technology was expressed by Complementary Metal Oxide Semiconductor (CMOS)-based processors, which used far less electricity, took up much less space and cost less than bipolar processors.

Those advantages and the other benefits of near-constant availability, ironclad security and massive computing power stimulated increased demand for IBM large-scale computers. After 1992, shipments of mainframe computing capacity increased more than 30 percent annually. Indeed, that same InfoWorld pundit, who in 1991 predicted the death of the mainframe, wrote in February 2002: "It's clear that corporate customers still like to have centrally controlled, very predictable, reliable computing systems -- exactly the kind of systems that IBM specializes in." In other words, the king is dead ... long live the king.

One of the main drivers of mainframe acceptance and growth in recent years has been the proliferation in network computing and e-business users and applications. To help customers better harness the power of this pervasive medium, IBM unveiled in October 2000 the IBM eServer zSeries 900 (below), the first mainframe built from scratch with e-business as its primary function.

eServer zSeries 900

The reinvented mainframe was built to handle the unpredictable demands of e-business, allowing thousands of servers to operate within one box. The first in a new class of e-business servers, the z900, which works hand-in-hand with z/OS -- the z900's flagship operating system -- was designed for high speed connectivity to the network and to data storage systems, scalability in the face of unpredictable spikes in workload or traffic, and near zero downtime when clustered. In other words, the z900 allows customers to push performance and connectivity to the outer limits without any concessions to reliability and security. The ability to run thousands of virtual servers within one physical box makes the z900 the ideal platform for users with intensive e-business operations, such as application service providers, Internet service providers and technology hosting companies.

The mainframe has enabled many of today's most groundbreaking innovations and is leading clients into the future. Technology, innovation and research have produced over 7,000 active mainframe-related "function" patents and pending patent applications worldwide, including well over 3,500 active US patents and pending patent applications. The IBM z13 mainframe was introduced in 2015 following a multi-year project and more than $1 billion in investment. This mainframe system was designed from the ground up to help companies and governments address the mega trends around mobile, analytics and cloud. Ideal for hybrid cloud, the z13 offers unprecedented capacity and processing power, speeds real-time insight, and protects transactions to minimize client exposure and risk of cyber threats. The z13s, announced in 2016, delivers these capabilities in a smaller footprint and at a more affordable price.

eServer zSeries 990
For over 50 years, the mainframe has served as the backbone of large-scale computing. It has adapted to new requirements and adopted and exploited new technology. It is being used today in ways that were unimaginable back in the era of the IBM 701. To see what's new and how the mainframe is helping the world's most complex and dynamic businesses compete in the cognitive era today, visit the IBM z Systems mainframe site.

For detailed information about and images of many of IBM's mainframes down through the years, visit our Mainframes reference room.


Source: https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_intro.html