The March 12 issue of The Economist includes a special report on the future of computing after the very impressive 50-year run of Moore’s Law.
In his now legendary 1965 paper, Intel co-founder Gordon Moore first made the empirical observation that the number of components in integrated circuits had doubled every year since their invention in 1958, and predicted that the trend would continue for at least ten years, a prediction he subsequently changed to a doubling every two years. The semi-log graphs associated with Moore’s Law have since become a visual metaphor for the technology revolution unleashed by the exponential improvements of just about all digital components, from processing speeds and storage capacity to networking bandwidth and pixels.
The 4004, Intel’s first commercial microprocessor, was launched in November 1971. The 4-bit chip contained 2,300 transistors. The Intel Skylake, launched in August 2015, contains 1.75 billion transistors, which collectively deliver about 400,000 times the computing power of the 4004. Moore’s Law has had quite a run, but like all good things, especially those based on exponential improvements, it must eventually slow down and flatten out.
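As a rough sanity check on those figures, a few lines of Python, using only the dates and transistor counts quoted above, show that the implied doubling period comes out close to Moore’s revised two-year cadence:

```python
# Back-of-the-envelope check using the figures quoted above:
# Intel 4004 (1971): ~2,300 transistors; Skylake (2015): ~1.75 billion.
from math import log2

t_4004, t_skylake = 2_300, 1_750_000_000
years = 2015 - 1971

doublings = log2(t_skylake / t_4004)    # doublings actually observed
print(f"{doublings:.1f} doublings over {years} years "
      f"(~{years / doublings:.1f} years per doubling)")
# -> roughly 19.5 doublings in 44 years, i.e. about 2.25 years per doubling
```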
In its overview article, The Economist reminds us that Moore’s Law was never meant to be a physical law like Newton’s Laws of Motion, but rather “a self-fulfilling prophecy - a triumph of central planning by which the technology industry co-ordinated and synchronised its actions.” It also reminds us that its demise has been long anticipated: for a while now, the number of people predicting the death of Moore’s Law has also been doubling every two years.
So what happens now that the end is in sight? I’ve been thinking about this question for a while. And, as is often the case when it comes to highly complex systems, I find myself turning to biology as a source of inspiration, in particular, to evolutionary biology.
Cells, the basic building blocks of all life, first emerged on Earth around 4 billion years ago. Evolution continued to perfect the cell over the next few billion years, giving rise to a variety of single-celled organisms, followed later by multicelled organisms of various types. Then, around 550 million years ago, a dramatic change took place in life on Earth: the Cambrian Explosion. Evolution took off in a totally different direction. Cells were now good enough; it was no longer worth devoting evolution’s precious energies to their continuing optimization. The time had come to use cells as building blocks and, essentially, go up the stack.
Over the next 70 to 80 million years, evolution accelerated by an order of magnitude, ushering in a diverse set of organisms far larger and more complex than anything that had existed before, and organizing living organisms into mutually dependent ecosystems. By the end of the Cambrian geological period, the diversity and complexity of life began to resemble that of today.
Something similar has been taking place in the world of IT. For the past 50-60 years, we’ve been perfecting our digital components - microprocessors, memory chips, disks, and so on. Initially, we used these components to develop what in retrospect were not-very-powerful, stand-alone computer systems. High component costs and a lack of industry standards made it difficult to cluster and/or network large numbers of individual computers into larger, more powerful systems.
But this all started to change about 20-25 years ago. First, as the cost of components continued to drop, we were able to develop far more powerful computers. At the same time, the Internet brought a culture of standards to just about all aspects of the IT industry. Our Internet-based systems were now increasingly made up of a diverse set of interconnected computers of all sizes, most notably highly parallel systems in centralized data centers supporting huge numbers of distributed mobile and IoT devices.
Once IT started moving up the stack, we became less dependent on the exponential improvements in components, relying more on innovations in systems architecture, algorithms and applications. The slowdown and potential end of Moore’s Law is no longer as big a deal as it once would have been because, as The Economist points out, “the future of computing will be defined by improvements in three other areas, beyond raw hardware performance:” software, specialized architectures, and cloud computing.
Software
As Marc Andreessen noted in his 2011 essay, Software is Eating the World: “My own theory is that we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy. More and more major businesses and industries are being run on software and delivered as online services - from movies to agriculture to national defense.” Entrepreneurial companies all over the world are disrupting established industries with innovative software-based solutions.
“Why is this happening now?” he asked. “Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.”
Software has long been applied to digitize and automate an increasing number of everyday activities, from back- and front-office business processes to personal payments and navigation. And, as digitization increasingly permeates every nook and cranny of society, including our personal lives, it’s generating vast amounts of data.
All this data is now enabling us to better understand many aspects of the world that have never been quantified before. A whole new round of tools and applications has been emerging to augment our intelligence by analyzing vast amounts of information. Software is now being increasingly applied to activities requiring intelligence and cognitive capabilities that not long ago were viewed as the exclusive domain of humans.
We were all wowed when, in 1997, Deep Blue won a celebrated chess match against then-reigning champion Garry Kasparov, and when, in 2011, Watson won the Jeopardy! Challenge against the two best human Jeopardy! players. But AI wowing has now become commonplace. A few weeks ago, AlphaGo claimed victory against Lee Sedol, one of the world’s top Go players, in a best-of-five match, winning four games and losing only one. Go is a much more complex game than chess, with more possible board positions than there are particles in the universe.
“As a result, a Go-playing system cannot simply rely on computational brute force, provided by Moore’s law, to prevail,” notes The Economist. “AlphaGo relies instead on deep learning technology, modelled partly on the way the human brain works. Its success… shows that huge performance gains can be achieved through new algorithms. Indeed, slowing progress in hardware will provide stronger incentives to develop cleverer software.”
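To put the brute-force point in perspective, here is a rough order-of-magnitude sketch; the branching factors and game lengths are commonly cited approximations, not figures from the article:

```python
# Approximate game-tree sizes for chess and Go, using commonly cited
# (and deliberately rough) branching factors and game lengths.
from math import log10

def tree_size_log10(branching_factor: int, plies: int) -> float:
    """log10 of branching_factor ** plies, the naive search space."""
    return plies * log10(branching_factor)

print(f"chess: ~10^{tree_size_log10(35, 80):.0f} positions")
print(f"go:    ~10^{tree_size_log10(250, 150):.0f} positions")
# chess: ~10^124, go: ~10^360 -- a gap that no amount of hardware
# scaling can close, hence AlphaGo's reliance on learned evaluation.
```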
Specialized architectures
Tissues, organs and organ systems have evolved in living organisms to organize cells so that together they can better carry out a variety of common biological functions. In mammals, organ systems include the cardiovascular, digestive, nervous, respiratory and reproductive systems, each of which is composed of multiple organs.
General-purpose computers have long included separate architectures for their input/output functions. Supercomputers have long relied on vector architectures to significantly accelerate the performance of numerically intensive calculations. Graphics processing units (GPUs) are used today in a number of high-performance PCs, servers, and game consoles. Most smartphones include a number of specialized chips for dealing with multimedia content, user interactions, security and other functions. Neural network architectures are increasingly found in advanced AI systems.
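The payoff from handing work to specialized, parallel hardware is easy to illustrate even without a GPU. In this small sketch (the exact speedup will vary by machine), the same dot product is computed with a plain Python loop and then dispatched in bulk to NumPy’s vectorized routines, the same idea that vector units and GPUs push much further:

```python
# The same dot product computed two ways: one scalar operation at a time,
# and handed off in bulk to optimized, vectorized code (NumPy/BLAS).
import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

start = time.perf_counter()
total = 0.0
for x, y in zip(a, b):              # general-purpose path: one multiply-add per step
    total += x * y
loop_time = time.perf_counter() - start

start = time.perf_counter()
total_vec = float(np.dot(a, b))     # specialized path: the whole array at once
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s  vectorized: {vec_time:.5f}s  "
      f"speedup: ~{loop_time / vec_time:.0f}x")
```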
As in evolution, innovations in special-purpose chips and architectures will be increasingly important as Moore’s Law fades away.
Cloud computing
“When computers were stand-alone devices, whether mainframes or desktop PCs, their performance depended above all on the speed of their processor chips,” observes The Economist. “Today computers become more powerful without changes to their hardware. They can draw upon the vast (and flexible) number-crunching resources of the cloud when doing things like searching through e-mails or calculating the best route for a road trip. And interconnectedness adds to their capabilities: smartphone features such as satellite positioning, motion sensors and wireless-payment support now matter as much as processor speed.”
As is the case in biology, most computing now takes place in mutually dependent ecosystems. User-oriented and IoT devices get the bulk of their services over the Internet, from clouds of all sorts out there. In the digital economy, clouds are essentially the production plants of services, supporting billions of mobile devices and tens of billions of IoT devices. To achieve the required scalability, security and efficiency, cloud-based data centers have had to become much more disciplined in every aspect of their operations.
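A minimal sketch of that thin-device, heavy-cloud pattern might look like the following; the routing endpoint is purely hypothetical, standing in for whatever service the device actually calls:

```python
# Hypothetical example of a device offloading number-crunching to the cloud:
# the phone sends a tiny request, and the route is computed in a data center.
import json
import urllib.request

def best_route(origin: str, destination: str) -> dict:
    """Ask a (hypothetical) cloud routing service for the best route."""
    payload = json.dumps({"origin": origin, "destination": destination}).encode()
    req = urllib.request.Request(
        "https://cloud.example.com/v1/routes",   # placeholder endpoint, not a real API
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)                   # the device only parses the answer

# route = best_route("Boston, MA", "New York, NY")
```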
“The twilight of Moore’s law… will bring change, disorder and plenty of creative destruction,” writes The Economist in conclusion. “An industry that used to rely on steady improvements in a handful of devices will splinter. Software firms may begin to dabble in hardware; hardware makers will have to tailor their offerings more closely to their customers’ increasingly diverse needs.”
But, in the end, consumers do not care about Moore’s Law, transistors, or computers per se. “They simply want the products they buy to keep getting ever better and more useful. In the past, that meant mostly going for exponential growth in speed. That road is beginning to run out. But there will still be plenty of other ways to make better computers.”
“As in evolution, innovations in special-purpose chips and architectures will be increasingly important as Moore’s Law fades away.”
I agree, Irving. When I was an analyst, I saw all these specialized architectures largely fail, basically because why bother when Moore's Law would get you there in a year or two anyway? I'm not sure the implications of losing the CMOS scaling lever are as widely appreciated as they should be. (The former head of DARPA microelectronics pegs them at about a 3,500X improvement over the past couple of decades; you don't lose a lever like that and just go on with business as usual.)
Posted by: Gordon Haff | April 06, 2016 at 10:09 AM
Turning to biology, as Irving points out, is an inspiration and a guide. Before we possibly move on to quantum computing (in the end, we live in a quantum reality), bio-inspired system architectures are trending today. “Memcomputing” (classical) approaches apparently overcome the von Neumann bottleneck between CPU and memory by computing where the memory is located, as our brain does. As Irving rightly points out, specialized architectures and chips will be increasingly important, and I think this will ensure a bright future for the industry. See, e.g., IBM’s TrueNorth, the developments within the European Human Brain Project’s Neuromorphic platform, or Nvidia’s P100.
Posted by: Pasquale Di Cesare | April 16, 2016 at 06:38 AM