From the early days of the industry, supercomputers have been pushing the boundaries of IT, identifying the key barriers to overcome and experimenting with technologies and architectures that are then incorporated into the overall IT market a few years later. While we generally focus on their computational capabilities as measured in FLOPS (floating-point operations per second), supercomputers have been at the leading edge in a number of additional dimensions, including the storage and analysis of massive amounts of data, very high bandwidth networks, and highly realistic visualizations.
Through the 1960s, 1970s and 1980s, the fastest supercomputers were based on highly specialized, powerful technologies. But by the late 1980s, these complex and expensive technologies ran out of gas, and parallel computing became the only realistic path to scaling up performance.
Instead of building machines with a small number of very fast and expensive processors, the early parallel supercomputers ganged together tens, hundreds, and over time thousands of far less powerful but inexpensive CMOS microprocessors, similar to the micros used in the rapidly growing personal computer and workstation industry. A similar evolution toward microprocessor components and parallel architectures took place a few years later in the mainframes used in commercial applications.
The transition to parallel supercomputing was seismic in nature. Everything changed, from the underlying computer architectures to the operating systems, programming tools, mathematical methods and applications. It took considerable research and experimentation to learn how to use these new kinds of machines effectively. Moreover, there were widely different parallel architecture designs, some coming from universities and others from industry. It wasn't at all clear which designs worked well for different kinds of applications and would thus be commercially viable.
The Department of Energy (DOE) national labs have long been among the world's leading users of advanced supercomputers and played a leading role in the transition to parallel architectures. In 1983, the DOE's Argonne National Lab established the Advanced Computing Research Facility (ACRF), an experimental parallel computing lab that brought together computer scientists, applied mathematicians, and supercomputer users and vendors to learn how to best use this new generation of parallel machines.
This past May, Argonne convened a Symposium to mark the 30th anniversary of the ACRF. The Symposium looked both at the progress made in parallel computing over the past 30 years and the major trends for the future. I attended the Symposium and led a panel on The Impact of Parallel Computing on the World.