In his now-legendary 1965 paper, Intel co-founder Gordon Moore first made the empirical observation that the number of components in integrated circuits had doubled every year since their invention in 1958, and predicted that the trend would continue for at least ten years. He later revised the prediction, in 1975, to a doubling every two years. The semi-log graphs associated with Moore’s Law have since become a visual metaphor for the technology revolution unleashed by the exponential improvements of just about all digital components, from processing speeds and storage capacity to networking bandwidth and pixels.
The 4004, Intel’s first commercial microprocessor, was launched in November 1971. The 4-bit chip contained 2,300 transistors. The Intel Skylake, launched in August 2015, contains 1.75 billion transistors, which collectively deliver about 400,000 times more computing power than the 4004. Moore’s Law has had quite a run, but like all good things, especially those based on exponential improvements, it must eventually slow down and flatten out.
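A quick back-of-the-envelope check puts these figures in perspective. The sketch below compares the raw transistor-count ratio between the two chips with what a strict doubling every two years would predict over the same 44-year span; the 400,000× figure above refers to computing power, which is a different (and here unverified) measure.

```python
# Transistor counts from the two chips discussed above.
transistors_4004 = 2_300             # Intel 4004, November 1971
transistors_skylake = 1_750_000_000  # Intel Skylake, August 2015

# Actual growth in transistor count over the period.
ratio = transistors_skylake / transistors_4004
print(f"Transistor-count ratio: {ratio:,.0f}x")

# What one doubling every two years would predict for 1971-2015.
years = 2015 - 1971
predicted = 2 ** (years / 2)
print(f"Doubling every two years predicts: {predicted:,.0f}x")
```

The two numbers differ by a few orders of magnitude, which is a reminder that Moore's Law describes a broad trend rather than a precise schedule, and that transistor count is only one of several exponentials (clock speed, power, cost per transistor) wrapped up in the popular usage of the term.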
In its overview article, The Economist reminds us that Moore’s Law was never meant to be a physical law like Newton’s Laws of Motion, but rather “a self-fulfilling prophecy - a triumph of central planning by which the technology industry co-ordinated and synchronised its actions.” It also reminds us that its demise has been long anticipated: for a while now, the number of people predicting the death of Moore’s Law has also been doubling every two years.
So what happens now that the end is in sight? I’ve been thinking about this question for a while. And, as is often the case when it comes to highly complex systems, I find myself turning to biology as a source of inspiration, in particular, to evolutionary biology.