At the end of August, IBM announced the latest member of its mainframe family, the zEnterprise EC12. As pointed out in the announcement, the new system is the result of more than $1B in R&D investments over the past four years, and it brings major improvements in performance, security, availability and other key enterprise features. But perhaps what is most impressive about this announcement is the longevity of the IBM mainframe, which is now in its 48th year. Few computer families with major announcements in 2012 can trace their vintage to the 1980s, let alone the 1960s. There is something quite remarkable about the mainframe being not only alive but well after all these years.
I have had a long association with mainframes. I was a second-year college student at the University of Chicago when the System/360 was first announced in April of 1964. At the time, I was also working part time at the university’s computation center, which used IBM computers. I still remember attending a presentation on the announcement given by a visiting IBM technical executive. Later, as a physics graduate student, I used high-end S/360 models for my thesis research. At IBM, I was closely associated with mainframes through major portions of my 37-year career, in particular the period from 1977 to 1992, when I was involved in a number of R&D initiatives on the future of the mainframe.
To a large extent, the mainframe’s longevity is the result of two major architectural innovations first introduced with the S/360. The first was the notion of a family of computers, from low to high performance, all based on the same instruction set, which allowed customers to upgrade to larger systems, and to future system models, without having to rewrite their applications. The second was OS/360, a common operating system that supported the various members of the S/360 family, except for the smaller models, which ran DOS/360, a subset with more limited capabilities. Today’s z/Architecture and z/OS are direct descendants of the original S/360 and OS/360.
Through the 1960s, 1970s and 1980s, mainframes were built using highly sophisticated and expensive technologies. It was important to get the maximum efficiency and price-performance out of each machine, so the hardware architecture and operating system were carefully designed together to achieve high processor utilization and fast response times for high-volume transaction workloads. Over the years, IBM has continued to design the mainframe hardware and operating system software together to optimize performance and industrial strength, e.g., security, systems management, availability and the other -ilities.
The advent of simpler, less expensive PC and UNIX servers caused mainframes to lose the unique leading position they had previously enjoyed. They could no longer command their previous high prices and profit margins. The new client-server systems became the fast-growing platforms where most of the innovation took place. A few industry experts started to predict the impending demise of mainframes, which would inexorably have led to IBM’s own demise. In March of 1991, for example, VC and journalist Stewart Alsop famously wrote in InfoWorld: “I predict that the last mainframe will be unplugged on March 15, 1996.” A decade later, with mainframes still alive and going strong, Alsop metaphorically ate his words.
To survive, mainframes had to transition to microprocessors and parallel architectures. IBM’s R&D labs had been developing such designs. We had built prototypes in the labs, so we knew the designs worked and were compatible with the huge investment in mainframe software made over the previous few decades by IBM, software vendors and clients. The move to microprocessor technologies would enable IBM to profitably sell the new mainframes at the lower prices and margins the market now commanded. But to survive, the company had to go through a massive restructuring to significantly lower its costs and expenses, including closing factories and sales offices around the world and laying off large numbers of employees. While it was close, both the mainframe and IBM survived their near-death experiences of the early 1990s.
Through the 1990s and 2000s, client-server systems grew rapidly. Over the years, many companies ended up with very large numbers of relatively small servers distributed across various departments in the organization. Because the servers were not shared among multiple applications or large enough groups of users, they often ran at relatively low utilizations. These factors eventually led to significantly increased management complexity and costs.
Companies started to consolidate their distributed systems into data centers managed by central IT organizations. A number of vendors, including IBM, introduced scalable servers based on PC and UNIX technologies but with mainframe-like capabilities, such as virtualization, to improve their overall utilization and industrial strength. At the same time, mainframes were adding support for industry standards and open interfaces, culminating with the introduction of Linux on the mainframe in 2000, which made it possible to use mainframes as a platform for consolidating client-server workloads onto thousands of native Linux virtual machines.
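To make the utilization argument concrete, here is a minimal, purely illustrative sketch in Python. All of the numbers in it (server counts, capacities, utilization levels) are hypothetical assumptions chosen for illustration, not figures from this post; the point is simply how much aggregate work a fleet of lightly loaded servers actually does, and how few well-utilized, virtualized hosts are needed to carry it.

```python
# Hypothetical back-of-the-envelope consolidation sketch (all numbers are assumptions):
# many lightly loaded departmental servers vs. a shared, virtualized host run at
# much higher utilization.

distributed_servers = 200        # assumed number of small departmental servers
capacity_per_server = 100.0      # arbitrary capacity units per server
avg_utilization = 0.10           # assumed low utilization of unshared servers

# Total useful work actually being done across all the small servers.
total_work = distributed_servers * capacity_per_server * avg_utilization

host_capacity = 4000.0           # assumed capacity of one consolidated, virtualized host
target_utilization = 0.80        # utilization a shared host is assumed to sustain

# Ceiling division: how many consolidated hosts are needed to carry the same work.
hosts_needed = int(-(-total_work // (host_capacity * target_utilization)))

print(f"Useful work across {distributed_servers} small servers: {total_work:.0f} units")
print(f"Consolidated hosts needed at {target_utilization:.0%} utilization: {hosts_needed}")
```

Under these made-up assumptions, the useful work of 200 under-utilized servers fits comfortably on a single shared host, which is the economic logic behind the consolidation wave described above.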
Over the past several years, IT has been going through another major architectural transition. A new Internet-centric computing model is emerging based on the rise of cloud computing and the proliferation of mobile devices and smart sensors. The center of innovation in the IT industry has moved from PCs and client-server systems to clouds and smart mobile devices.
In this emerging world, data centers are being transformed into services distribution factories, providing all kinds of apps, information and support to consumers, businesses, governments, healthcare and educational institutions, and so on. Cloud-based data centers are also the brains behind the Internet of Things, which is embedding digital technologies in the physical world all around us and giving rise to all kinds of smart applications based on real-time information analysis.
This emerging cloud-based model of computing requires systems that can provide very fast response times to huge volumes of requests. And mission-critical services in healthcare, finance, transportation, electric utilities and other industries require very high levels of availability, security and other industrial-strength capabilities. The new zEC12 mainframes are specifically designed to support such mission-critical cloud-based workloads.
Given my long association with mainframes, it’s nice to see the system alive and strong after all these years, and continuing to aggressively pursue new markets. It is a remarkable tale of longevity in one of the fastest-changing industries in the world. It shows that, given the proper investment, commitment to innovation and management focus, a company’s core legacy assets (infrastructures, platforms, products and services) can be a major source of competitive advantage and solid financial returns well into the future.
Fascinating discussion on the evolution of Servers into Enterprise Servers and why the mainframe --- which started it all --- may well be even more relevant to a client's business than ever before.
As always, thanks for the insight and clarity!!! (And, on a personal note, I really do apologize for the Mets this year!)
Posted by: Rickfuchs | September 12, 2012 at 03:00 PM
Well, www.z390.org emulates a System z (z186) on any J2SE or later Java platform: x86/x64, Win/Linux/Mac, via emulation in Java, on a PC, tablet, server or cloud. Rewrite your IBM HLASM rather than running it as is, with a little modification here and there. Over 52,000 downloads so far.
Migrate your HLASM/Assembler to the cloud: 100% free, 100% open source. www.z390.org is sort of like the missing link.
Posted by: Zhiteam | September 12, 2012 at 07:15 PM