On July 22, I attended IBM’s announcement in New York City of its new generation of mainframes, the zEnterprise System. As you would expect with any product announcement, this new mainframe is significantly better and faster than its predecessors. The new z196 platform has faster processors, more processors per box and significantly greater energy efficiency.
But what I found most compelling about the announcement is its role in the continuing re-invention and evolution of the mainframe, which was first announced as the S/360 in 1964. With this announcement, IBM is positioning the mainframe as an architecture for integrating and managing heterogeneous systems in the data center, not just platforms based on the z processor architecture, the descendant of the original S/360.
IBM’s engineers have deconstructed the mainframe architecture into two major parts: the processor architecture, including its instruction set, which is mostly implemented in the hardware of the machine; and the management architecture, including virtualization, which is mostly implemented in firmware and software although it is often supported with some hardware assists. The major new part of this announcement is that the mainframe management architecture is now being supported not just on systems based on the z processor architecture, but also on systems with totally different processor architectures.
IBM is now extending the benefits of its mainframe management architecture, including high efficiency, scalability, security and reliability, to other platforms in the data center, starting with Intel-based Linux systems and Power-based Unix systems. This means that you can now integrate heterogeneous systems in the data center and manage them using the mainframe management architecture, which is widely regarded as the most advanced in the industry. This is important because the ability to operate highly efficient, reliable, secure data centers with different platforms and workloads is an absolutely critical requirement in the age of cloud computing.
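To make the idea of a single management architecture spanning different platforms a bit more concrete, here is a minimal, purely illustrative sketch in Python of one management domain applying a common set of workload policies to nodes with different processor architectures. The Node, Policy and ManagementDomain classes are hypothetical illustrations of the concept, not IBM’s actual zEnterprise management interfaces.

```python
# Hypothetical illustration only: a single "management domain" that applies
# one set of workload policies to nodes with different processor architectures.
# None of these classes correspond to actual zEnterprise interfaces.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    architecture: str          # e.g. "z", "x86/Linux", "POWER/Unix"
    capacity: int              # abstract capacity units

@dataclass
class Policy:
    workload: str
    priority: int              # higher number = more important
    min_capacity: int          # capacity the workload must be guaranteed

@dataclass
class ManagementDomain:
    nodes: list[Node] = field(default_factory=list)
    policies: list[Policy] = field(default_factory=list)

    def place(self):
        """Assign workloads to nodes, highest priority first,
        regardless of each node's underlying architecture."""
        free = {n.name: n.capacity for n in self.nodes}
        placement = {}
        for p in sorted(self.policies, key=lambda p: -p.priority):
            for n in self.nodes:
                if free[n.name] >= p.min_capacity:
                    free[n.name] -= p.min_capacity
                    placement[p.workload] = n.name
                    break
        return placement

domain = ManagementDomain(
    nodes=[Node("z196-frame", "z", 100),
           Node("blade-1", "x86/Linux", 40),
           Node("blade-2", "POWER/Unix", 60)],
    policies=[Policy("payments", priority=10, min_capacity=80),
              Policy("web-front-end", priority=5, min_capacity=30),
              Policy("analytics", priority=3, min_capacity=50)],
)
print(domain.place())
```

The point of the sketch is simply that the policies, not the processor architectures, drive where work runs; that is the essence of managing heterogeneous systems under one architecture.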
I was closely associated with mainframes for a significant portion of my IBM career. I was directly involved in mainframe R&D initiatives from about 1977 to 1992. Later, I also worked closely with the mainframe teams as they supported the Internet, Linux and other emerging initiatives I was part of.
Having lived through the near demise of mainframes about twenty years ago, which would have inexorably led to IBM’s own demise, I am fascinated that they have survived all these years. A look at the IT industry over the past several decades reveals the large number of once great products that are no longer around. Why is the mainframe not only still around, but also the subject of a major new strategic announcement in 2010?
I think that mainframes owe their continued survival to two key qualities. The first is that they must provide good value to their customers; otherwise, customers would have long since abandoned the platform. Mainframes were designed to support the most critical applications of a business, including banking, inventory management and airline reservations. They thus had to provide very good response times for business transactions, as well as be up and running just about all the time with minimal outages. And because they were used to manage the critical resources of the enterprise, security was absolutely essential.
In the 1960s, 1970s and 1980s, mainframes were built using highly sophisticated and expensive bipolar technologies. It was important to get the absolute maximum efficiency out of each machine, and so the hardware architecture, firmware capabilities like virtualization, and the operating system were carefully designed to achieve very high processor utilization and fast response times for high-volume transactions, as well as very high reliability, availability and security.
The essence of the mainframe, in my opinion, lies in these management capabilities and the management architecture needed to support them, much more so than in its processor architecture or specific instruction set. This management architecture was the result of the very well engineered, carefully designed interplay between the hardware, firmware and operating system. It continues to be the best in the industry for large information and transaction workloads that must be managed with a very high degree of efficiency, reliability and security. A recent analysis of the economics of computing shows that mainframes continue to be the most cost-effective platforms for large, critical, commercial workloads.
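A simple back-of-the-envelope calculation helps explain why such high utilization translates into cost effectiveness. The numbers below are purely hypothetical, not taken from the analysis cited above; the point is only that cost per transaction scales inversely with how much of the installed capacity is actually used.

```python
# Back-of-the-envelope illustration with hypothetical numbers:
# why utilization dominates cost per transaction.
def cost_per_transaction(annual_cost, peak_tps, utilization):
    """annual_cost in dollars, peak_tps = peak transactions per second,
    utilization = average fraction of peak capacity actually used."""
    seconds_per_year = 365 * 24 * 3600
    transactions = peak_tps * utilization * seconds_per_year
    return annual_cost / transactions

# A highly utilized consolidated system vs. a lightly utilized server farm
# with the same total annual cost and the same aggregate peak capacity.
print(cost_per_transaction(annual_cost=2_000_000, peak_tps=1_000, utilization=0.85))
print(cost_per_transaction(annual_cost=2_000_000, peak_tps=1_000, utilization=0.15))
```

With identical cost and capacity, the system running at 85 percent utilization delivers a per-transaction cost several times lower than one idling at 15 percent, which is the basic economic argument behind the mainframe's design point.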
The second major quality that has enabled mainframes to survive through the years is their ability to evolve. Customers would long since have stopped buying mainframes if they had not been able to incorporate the major advances in technology over the years. Mainframes have shown that, given the proper R&D investments, they are able to keep reinventing themselves through a series of architectural transformations, not unlike the latest one in the new zEnterprise System.
Twenty years ago, the bipolar technologies we had been using to build mainframes were no longer competitive with the increasingly powerful and inexpensive CMOS microprocessors. Many thought that the end of bipolar technologies meant the end of mainframes. But by successfully developing mainframes based on CMOS microprocessors, we showed that the overall architecture was independent of the underlying technologies on which it was implemented.
Because the new CMOS processors were initially slower than the bipolar processors they were replacing, the hardest part of this transition was the introduction of Parallel Sysplex, so that many more processors could be combined into a single mainframe system. We had to adapt parallel architectures from the world of supercomputing to the commercial world of mainframes, which turned out to be quite complicated because of the much higher degree of data sharing inherent in large business applications.
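A rough way to see why that sharing matters is Amdahl's law: whatever fraction of the work must be serialized around shared data, through locks or shared records, caps the benefit of adding processors. The small sketch below is just that textbook formula with illustrative numbers, not a model of Parallel Sysplex itself.

```python
# Rough illustration (not a model of Parallel Sysplex): the more of a workload
# that is serialized by contention for shared data, the less benefit you get
# from adding processors -- Amdahl's law in effect.
def speedup(n_processors, shared_fraction):
    """shared_fraction: portion of the work serialized around shared data."""
    return 1.0 / (shared_fraction + (1.0 - shared_fraction) / n_processors)

# A partitionable scientific workload (little sharing) vs. a commercial
# workload where many transactions touch the same accounts and inventory.
for s in (0.02, 0.20):
    print(f"shared={s:.0%}: 8-way={speedup(8, s):.1f}x, 32-way={speedup(32, s):.1f}x")
```

With only 2 percent of the work serialized, 32 processors buy you nearly a twentyfold speedup; at 20 percent, you barely exceed fourfold, which is why making commercial workloads scale across many processors was the hard engineering problem of that transition.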
Another major mainframe transition was the introduction of open Internet standards in the mid 1990s and Linux in 2000. For years, the mainframe was the quintessential proprietary architecture. In the 1980s, it was difficult to connect and integrate mainframes with the client-server systems sprouting all around them. But later, by supporting TCP/IP and just about every other Internet standard, mainframes were able to integrate seamlessly with the Internet and World Wide Web infrastructures, enabling IBM clients to leverage their mainframe transaction and database applications to develop all kinds of exciting new e-business solutions. Similarly, a few years later, we brought the rapidly growing world of Linux to the mainframe. Many of us were frankly surprised at how quickly Linux was ported to mainframe virtual machines and how well it was accepted in the marketplace.
Which brings me to the zEnterprise announcement and cloud computing.
Mainframes have continued to be widely used for transaction- and database-oriented workloads, including business applications like enterprise resource planning (ERP). They have been designed and optimized for such complex, data-intensive workloads.
But the fastest-growing workloads in the past several years have been simpler, market-facing, Internet-based applications, including Web access and collaboration, as well as computationally intensive analytics applications such as search and data mining. In the coming years, we expect even more explosive growth in such Internet-based and analytics workloads, as we connect and provide services to billions of people around the world via their mobile devices, as well as to the huge numbers of sensors and digital technologies being embedded in the physical world around us, giving rise to all kinds of information-based smart applications.
Cloud computing has emerged in response to this need for massive scalability. These cloud workloads run mostly on very large clusters of Intel-based processors with Linux or Windows, or on high-performance Unix systems for the more computationally demanding applications. The characteristics of the workloads drive the choice of platform.
The basic digital technology components - microprocessors, memory, storage, networking - have advanced so much over the past few decades that, aggregated in sufficient numbers, they should be able to handle just about any workload thrown at them. The future holds more of the same. The technology is all there. Not so the engineering and management discipline.
Most companies are trying to support these fast-growing cloud-based workloads using their distributed, client-server systems, now consolidated in the data center. Many such data centers have been relatively lax in architecting and integrating their disparate systems into a more coherent whole. Such ad hoc, custom data centers will not be able to achieve the volumes, costs and quality needed to compete successfully in the emerging cloud computing marketplace. They will need to become much more disciplined in every aspect of their operations and embrace a much more industrialized approach to services management and delivery.
These heterogeneous cloud-based data centers need something like the well-engineered management architectures and disciplines that have long been the hallmarks of mainframes across all their platforms and workloads. The zEnterprise announcement is introducing such capabilities. That is why I believe that this announcement is not only an important step in the evolution of the mainframe, but an equally important step in the evolution of cloud data centers.