On April 7, 1964, IBM announced the System/360 family of mainframes. “System/360 represents a sharp departure from concepts of the past in designing and building computers,” said IBM’s then chairman and CEO Thomas J. Watson Jr. “It is the product of an international effort in IBM’s laboratories and plants and is the first time IBM has redesigned the basic internal architecture of its computers in a decade. The result will be more computer productivity at lower cost than ever before. This is the beginning of a new generation, not only of computers, but of their application in business, science and government.”
In April of 1964 I was a second-year college student at the University of Chicago, working part-time at the university’s computation center, which used IBM computers. I still remember attending a presentation on the announcement given by a visiting IBM executive. Over the next several years I used high-end models of S/360 for the physics calculations I was doing as part of my doctoral studies. My thesis sponsor, University of Chicago professor Clemens Roothaan, one of the early leaders in the computational sciences, was consulting with IBM on the design of future versions of S/360, and I was also involved in some of this work. This relationship with IBM researchers led to my joining the computer sciences department at IBM’s Watson Research Center once I finished my studies in 1970.
I was closely associated with mainframes through most of my 37 years at IBM. I was involved in a number of research initiatives on the future of large systems. After moving to the large systems products divisions in the mid-1980s, I worked on the evolution of mainframes to CMOS-based microprocessors and parallel architectures. Later, in the 1990s and 2000s, I worked closely with the mainframe teams as they supported the Internet, Linux and other emerging initiatives I was involved in.
Having lived through the near demise of mainframes in the early 1990s, which would inevitably have led to IBM’s own demise, I find their survival after all these years truly impressive. A look at the IT industry over the past several decades reveals the large number of once great products and companies that are no longer around. Few computer families can trace their lineage to the 1980s, let alone the 1960s. There is something quite remarkable about the mainframe being not only alive but doing so well after all these years.
Why is the mainframe still around, celebrating its 50th birthday last week? What enables it to keep reinventing itself while embracing the latest technologies, including the Mobile Internet, Cloud Computing and Big Data? In a world where product life-cycles are measured in web years, what can we learn from the mainframe’s rather unique longevity?
These are questions I’ve long been thinking about. Over the years, many have predicted that the end was near. Some were competitors hoping to replace mainframes with their own product. Others just couldn’t imagine that anything this old could still have value in such a fast changing industry. In March of 1991, for example, VC and journalist Stewart Alsop famously wrote in InfoWorld: “I predict that the last mainframe will be unplugged on March 15, 1996.” A decade later, with mainframes still alive and going strong, Alsop metaphorically ate his words.
A few days ago, The Register published an article which asked in its title: Why won’t you DIE? IBM's S/360 and its legacy at 50. “Fifty years after the first S/360 was announced and 30 years after the rise of distributed systems that were supposed to replace them, the mainframe is smaller in market share, but its principles are being embraced once again.” Let’s take a look at some of these principles.
To a large extent, the mainframe’s longevity is the result of two major architectural innovations first introduced with S/360. The first was the notion of a family of computers, from low to high performance, all based on the same instruction set, which allowed customers to upgrade to larger systems, as well as to future models, without having to rewrite their applications. The second was OS/360, a common operating system that supported the various members of the S/360 family, except for the smallest models, which ran a subset with more limited capabilities. Today’s z/Architecture and z/OS are direct descendants of the original S/360 and OS/360.
“The System/360 ushered in a whole new way of thinking about designing and building computer systems, a perspective that seems so fundamental to us today that we may not realize it was rather radical 50 years ago,” notes this recent article in Computerworld.
“Before the System/360 introduction, manufacturers built each new computer model from scratch. Sometimes machines were even built individually for each customer. Software designed to run on one machine would not work on other machines, even from the same manufacturer. The operating system for each computer had to be built from scratch as well…”
“The idea [the S/360 design team] came up with was to have a common architecture shared among the lower-end, less expensive machines and the priciest high-speed models. The top-end models would perform 40 times as fast as the low-end models. Keep in mind that applying the word ‘architecture’ to the design of a computer was all but unheard of in the early 1960s.”
“IBM saved a lot of resources on the hardware as well. No longer would components, such as processors and memory, need to be designed for each machine. Now different models could share general-purpose components, allowing IBM to enjoy more economies of scale.”
Today, these architectural and business ideas are well accepted and widely taught in engineering and management schools. They’re embodied in the concept of platforms, which make it possible to implement an architectural design across different technologies and models. In addition, platforms are generally accompanied by an ecosystem of complementary products and services, as has been the case from the original S/360 to today’s System z.
These platform innovations have enabled mainframes to incorporate major advances in technology and keep evolving over the years. This was the case with the transition to CMOS microprocessors and parallel architectures in the early 1990s. A few years later, mainframes embraced TCP/IP and just about all Internet standards, enabling the platform to integrate seamlessly with the Internet and World Wide Web, and enabling its customers to leverage their transaction and database applications in all kinds of e-business solutions. Then came zLinux, which made it possible to easily port just about any Linux application to mainframe virtual machines.
Through all these years, mainframes have had to provide good value to customers, who would otherwise have long abandoned the platform. They’ve been designed to support the most critical applications of a business, including banking, inventory management and airline reservations. They’ve thus had to support high volumes of transactions and very fast response times, as well as be up and running just about all the time with minimal outages. Moreover, these mission-critical applications require the highest degrees of security, availability, systems management and other so-called “ilities.”
What about the future? The ability to support huge volumes of transactions is all-important given the explosive growth of mobile devices, an area where new mainframe offerings were announced last week. Another potential growth area is cloud computing, where mainframes can leverage their industrial strength and their ability to run large numbers of simultaneous virtual machines. So is Big Data, not surprisingly given the mainframe’s data processing roots.
It’s nice to see a system that’s been so intertwined with my own career still going strong after all these years. It’s a remarkable tale of longevity in one of the most competitive industries in the world. It shows that given the proper investments, commitment to innovation and management focus, a company’s core assets can be a source of competitive advantage and financial returns well into the future.