The buzz and excitement around cloud computing have been steadily building over the last year. There is general agreement that something big and profound is going on out there, although we may not be totally sure what it is yet. "There is a clear consensus that there is no real consensus on what cloud computing is," was one of the key conclusions at a recent conference on the subject.
I believe that one of the major reasons for both the excitement and lack of consensus is that we are basically seeing the emergence of a new model of computing in the IT world. For the IT industry, a new computing model is a very big deal. In the fifty years or so since there has been an IT industry, this would be only the third such model, centralized and client-server computing being the two previous ones.
What characterizes a computing model? There is no single dimension around which to define a computing model, which I believe accounts for the variety of definitions of cloud computing. It’s like the fable of the blind men and the elephant. Each one touches a different part of the elephant. They then compare notes on what they felt, and learn that they are in complete disagreement.
Similarly, different people focus on the particular aspect of computing they are interested in. The vast majority are involved with computers as users. They don’t care about the hardware, software or the details of the underlying technology. They are just using an application to get something done, whether it is sending an e-mail, making an on-line purchase, checking an item in the inventory or simulating the crash-worthiness of a car.
Then you have the people making acquisition decisions about computers and computer services. Large acquisition decisions are typically the responsibility of the IT organization, working with their procurement and finance departments. Smaller such decisions are often delegated to individual departments.
The last group I want to mention consists of all the people and companies selling add-on products and services on top of the basic computer systems, such as application developers. Unlike users, such members of what we sometimes refer to as the IT ecosystem are quite involved with the details of the computing model so they can design and program their applications. They care about what programming languages are available, what user interfaces they can design to, what system services their applications can invoke, and what middleware and/or operating systems are supported.
Let me briefly review a bit of history before getting to our new computing model.
In its early decades, the '50s, '60s, and '70s, just about all computing was centralized, typically consisting of mainframes and supercomputers located behind the glass walls of the data center. Everything and everyone worked off this centralized computing model. The computers with all their software, storage devices, network controllers, printers, and all other gear were quite expensive, typically costing millions of dollars.
A central IT organization managed the operations, made the purchase and leasing decisions, and tightly controlled all the software that ran on them, including applications, systems management and development tools. Users interacted directly with the applications on the central machines via text-based terminals connected over networks. Airline reservation and banking systems were among the best known applications on large, central computers.
Mainframes were built using fairly sophisticated, powerful and expensive technologies. They were designed to be shared by multiple applications and many users simultaneously. A lot of attention had to go into reliability, since a failure would impact all the users sharing the computer and its various applications. The hardware architecture and software were carefully designed so the machines could provide very good response times to high volumes of transactions while operating very efficiently, that is, at high utilizations.
Over time, the IBM S/360 family, announced in 1964, emerged as the leading product family of the central computing model. A fairly large ecosystem of companies developed around S/360, providing a wide variety of hardware and software add-on products, applications and services.
The 1980s saw the emergence of increasingly powerful and inexpensive microprocessors, personal computers and Unix-based workstations. These technologies paved the way for the new, distributed client-server model of computing. The architecture of these client-server systems was quite different from the architecture of the mainframes of the central computing model. The designs were optimized for low costs and simplicity, rather than sharing and reliability.
Typically, each PC and Unix-based server was dedicated to a single application. High utilization, efficiency and reliability were not major design objectives because they would add to the cost and complexity of the machine. The underlying assumption in the client-server model was that because the individual systems were fairly inexpensive, you could add as many systems as you needed to support the various applications and users in the installation.
In addition to lower costs, the client-server model offered a lot more flexibility. Different departments could buy their own systems. It was easier and faster to develop and operate new applications because there was no central IT organization to slow you down. This flexibility, coupled with the much improved graphical user interfaces (GUIs) made possible by the use of PCs and workstations as client devices, ushered in an explosion of innovative new applications in the '80s and '90s, including the exciting new class of web-based applications which any user could access with a PC, a browser and an Internet connection.
Microsoft and Intel emerged as the dominant companies in the client-server model, and a very large ecosystem developed around their products.
Over time, companies ended up with very large numbers of relatively small servers, distributed over various departments in the organization, each dedicated to a single application. Because the servers were not shared among multiple applications or a large enough group of users, they often ran at low utilizations, between 10% and 20%. These factors eventually led to significantly increased management complexities and costs.
As the Internet and World Wide Web exploded into the general computing world in the mid 1990s, you saw the beginnings of a new Internet-based computing model beyond the centralized and client-server models of earlier decades. While web-based applications generally followed a client-server model, the much larger number of users now able to access these web applications required servers that were significantly more scalable and reliable, as well as offering significantly better systems management.
Many of these large web servers were housed in the central data centers and managed by the IT organizations. They now co-existed right alongside mainframes, which in the intervening years had been significantly transformed with new technologies, open interfaces and Internet capabilities. A number of the mainframe capabilities, including the ability to share workloads, operate at high utilizations and offer sophisticated systems management tools, began to migrate to this growing family of large servers built around PC technologies and distributed architectures.
Around that time, roughly a decade or so ago, we launched a number of initiatives in IBM to better support the new requirements for highly scalable Internet-based computing, including Linux and open source, service oriented architecture (SOA), Grid computing, pervasive computing, autonomic computing and utility computing. By 2002, it had become pretty clear that these were not independent initiatives, but part of the Next Big Thing, which we announced in October of 2002 and called On Demand computing. Other companies announced similar initiatives around the same time.
Over the next few years, additional pieces of the puzzle fell into place. Whereas On Demand and similar initiatives were oriented toward enterprises, these new initiatives came from the consumer world. We saw the rise of very large web sites from Google, Amazon, Yahoo and others, that provided all kinds of consumer services to huge numbers of users, including search, maps, shopping and news. We also saw the rise of Web 2.0 concepts like blogs and wikis, and social networking sites like MySpace and Facebook, which rapidly grew to support large numbers of users communicating and sharing information with each other.
In my opinion, the key piece of the puzzle that has brought it all together, and is giving us unmistakable signals that a new computing model is indeed emerging, is the explosive rise of intelligent mobile devices, such as Blackberrys, iPhones, Web-capable cell phones of all kinds, e-book devices and netbooks. Beyond them is the even larger number of sensors and other digital technologies that are being embedded into myriads of things in the physical world, like cars, appliances, medical equipment, cameras, roadways, pipelines, pharmaceuticals or livestock. These are bringing together the world's digital and physical infrastructures and giving rise to all kinds of new, smart applications.
I believe that the transition from dumb, text-based terminals to PCs as the primary client devices went hand in hand with the transition from the central to the client-server computing model, and enabled orders of magnitude more people to begin using computers. Similarly, I see the transition from PCs to mobile devices and sensors as signaling the transition from the client-server to this new, Internet-based computing model, once more increasing by orders of magnitude the number of people, and now physical things, accessing computing.
As is often the case, it has taken a number of years for the marketplace to reach consensus and finally give this new computing model a name around which everyone can rally. Cloud computing has emerged as the name most people have settled on for this new computing model. It is not really new, which is why so many of its ingredients feel familiar. Rather, it represents a confluence of forces that have been building over the past decade, that finally reached a critical enough mass to be declared as something different enough from the previous model to be given its own name.
Let me conclude this long entry by briefly mentioning a few of the key characteristics of cloud computing that I believe truly differentiate it from the centralized and client-server models.
First of all, cloud computing is essentially an Internet of services, offering a huge variety of services to all those billions of people and trillions of things connected to clouds. You could not possibly support these volumes of services with the relatively custom, ad-hoc architectures that characterize most IT applications. Cloud computing thus requires standardized, mass-customized services, much like those that have been offered for years by telecommunications companies.
In addition, you cannot possibly support all those services and devices unless data centers embrace a much more standardized, process-oriented, industrialized approach to services management and delivery. Many companies have not exercised the needed discipline in their IT operations, and have allowed different departments in their organization to architect their own systems and applications, so that their various systems do not work well with each other, and applications take a long time to move from development to deployment. Larger enterprises have often grown through mergers and acquisitions, and have been lax in architecting and integrating their disparate systems into a more coherent whole.
Such ad-hoc, custom data centers will not be able to achieve the kinds of volumes, costs and quality needed to successfully compete in the emerging cloud computing marketplace. They will need to become much more disciplined in every aspect of their operations, as was the case with manufacturing plants around 25 years ago or so.
Finally, clouds offer companies and individuals a much more flexible acquisition model than has been the case with centralized or client-server computing. An enterprise can decide to run its data centers as cloud delivery centers for those applications it wants to offer as cloud services, that is, with the proper scalability, quality, costs and discipline. Alternatively, it can choose to acquire IT or business services from the growing number of companies offering such services in the marketplace. Or the enterprise can adopt a hybrid model, delivering some of its services from its own data centers and acquiring others from service providers.
I suspect that in the foreseeable future, most larger enterprises will support such a hybrid acquisition model, while most smaller businesses will start getting the majority of their services from cloud service providers.
We should view computing models much more like forests than trees. These computing model forests have a variety of different trees, and the transition between them is gradual, not abrupt. With the passage of time, as you walk around them you begin to see new trees, but the old ones are still around. But one day you realize that the forest you are now walking through is markedly different from the one you were in twenty years ago. It is therefore now time to give this forest a new name, while you keep on walking and looking for new trees.
Please check out my paper on a cost-benefit analysis of one of IBM's Computing on Demand Cloud Offering...
http://www.ciofinancesummit.com/media/pdf/confronting_data_center_crisis.pdf
Posted by: Srini Chari | June 04, 2009 at 01:58 PM
What are the implications of offering certain functions within an organization in the cloud when the company gets acquired by an organization that is in a different cloud or is still in the client-server model?
Posted by: Sanju | February 28, 2010 at 03:23 AM