
February 15, 2010

Comments

Jon Duke

Some quick thoughts. The number of processors being talked about here is getting ever closer to human neuronal levels, so we should expect to take leads from knowledge in that area. Our brains work slowly with very high interconnectivity. Thus, harnessing really low-power processors in their billions seems to be something worth exploring. Perhaps the nodes need to store little apart from state information, and the explicit representations of data that we are used to in today's digital systems can give way to a more distributed 'system' memory. One can also envisage hybrid situations with a bit of both. The organisation and reorganisability of such machines itself needs exploration, and perhaps genetic algorithms have an important role here. They can drive solution methods from the simple to the complex without humans being directly involved, although explaining how results were obtained could be interesting. One would think that the reproducibility of results, and their sensitivity to variations in input, might be important criteria for a system's utility value.
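The genetic-algorithm idea above can be made concrete with a toy sketch. Everything here is hypothetical for illustration: the bitstring "genome" stands in for some encoding of a machine organisation, and the fitness function, population size, and mutation rate are arbitrary choices, not anything from the post.

```python
import random

random.seed(0)

# Toy genetic algorithm: evolve bitstrings toward all-ones.
# Genome, fitness, and all rates are illustrative assumptions.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 60, 0.02

def fitness(genome):
    return sum(genome)  # count of 1-bits

def mutate(genome):
    return [b ^ (random.random() < MUTATION_RATE) for b in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Keep the fitter half, refill by crossover plus mutation.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness: {fitness(best)}/{GENOME_LEN}")
```

The point of the sketch is Jon Duke's: the loop improves solutions "from the simple to the complex" with no human in it, but nothing in the final genome explains *how* it was reached.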

Tomforemski

Interesting comments from Jon Duke. But human neurons are in the multi-billion range rather than hundreds of millions.

But it is an interesting model to consider. Neurons aren't very fast. Yet what our brains can do cannot be matched by the largest numbers of the fastest supercomputers.

Are we pursuing the wrong goal? Your excellent post talks about developing faster processors that are very low power consuming.

Should we instead be focusing on the software rather than the speed of the individual processors?

We could build a "neuron-scale" supercomputer running on just 2,000 calories a day that is 1,000 times as powerful as an exascale supercomputer. That's a big power saving :)
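A back-of-the-envelope check of that "2,000 calories a day" figure: dietary Calories are kilocalories, so the whole-body energy budget converts to a surprisingly small wattage. The 20 MW figure below is a commonly cited design target for an exascale machine of that era, used here only for rough comparison.

```python
# Convert a 2,000 kcal/day energy budget to average power in watts.
KCAL_TO_JOULES = 4184
SECONDS_PER_DAY = 86_400

daily_kcal = 2_000
watts = daily_kcal * KCAL_TO_JOULES / SECONDS_PER_DAY
print(f"{watts:.0f} W")  # roughly 97 W for the whole body

# Rough comparison against a ~20 MW exascale power target.
exascale_watts = 20e6
print(f"ratio: {exascale_watts / watts:,.0f}x")
```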

Bernard Finucane

There are about 100 bn neurons in a human brain and maybe a quadrillion synapses. I doubt that the brain is a very good model for a supercomputer, any more than a horse is a good model for a sports car. The design constraints for a brain -- such as being part of a mobile self-reproducing system -- are very different than for a supercomputer.

Tomforemski

Clearly, a horse is nothing like a sports car, but a brain is very much like a computer. Surely, we can learn a lot from hundreds of millions of years of evolution/experimentation.

Ravindra Bachalli

Science is all about quantities and measures, and the brain is poor at both. However, if a problem needs exascale computing, only a human brain can find a smarter work-around to avoid it. Some of the numerical techniques invented during the infancy of computing are being given the go-by in the rush to increase computing power.

dbonacin

Our ideas are too often over-connected with our jobs. It is very hard to set that aside while still keeping the connection between the two domains. In any case, we are talking here about two different things: one is supercomputing, and the other is artificial intelligence (AI). Although the two concepts are close in many details, they are not the same thing. Supercomputing is a domain where we can achieve an extreme number of symbolic (mathematical) operations, while AI is a domain where we look for creativity and new ways of processing information. If we compare any computer with the human brain, we can easily find that the human brain is not a computer (that is, it does not operate at the level of formal symbolic mnemonics). So a possible integration, or convergence, of the two concepts will arrive, but only if we accept that speed is not the absolute criterion of functioning; there are many other criteria. For example, humans are not the fastest biological beings (one particular fly is), yet a human can move faster than any other biological creature. Think about it.

Maximilian

"Consumer electronics, mobile devices and embedded sensors are now the new partners of the extreme scale supercomputing community, because they share the same requirements for plentiful, powerful and inexpensive components that consume little power."

I wonder whether this community, aiming to perform embarrassingly parallel workloads at the exascale before the end of the decade, will focus its efforts on mobile chips rather than graphics chips in the first place. While graphics boards consume a considerable amount of power and cost in the range of $600 per board, they can perform an amazing number of FLOPS.

The AMD Radeon HD 5970 has a peak processing power of 4.64 TFLOPS in single precision and 0.93 TFLOPS in double precision, with a maximum power consumption of 294 W.
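Those figures translate into a performance-per-watt number, which is the metric the exascale community actually cares about. A quick sketch using only the values quoted above (the board price is the commenter's estimate, and scaling peak numbers linearly to an exaflop is of course wildly optimistic):

```python
# Performance per watt for the dual-GPU Radeon HD 5970,
# using the peak figures quoted in the comment above.
peak_sp_tflops = 4.64  # single precision
peak_dp_tflops = 0.93  # double precision
board_watts = 294

sp_gflops_per_watt = peak_sp_tflops * 1000 / board_watts
dp_gflops_per_watt = peak_dp_tflops * 1000 / board_watts
print(f"{sp_gflops_per_watt:.1f} SP GFLOPS/W, {dp_gflops_per_watt:.1f} DP GFLOPS/W")

# Naive linear scaling: power draw of an exaflop (1e18 FLOPS)
# machine built purely from boards at this DP efficiency.
exaflop_watts = 1e18 / (dp_gflops_per_watt * 1e9)
print(f"~{exaflop_watts / 1e6:.0f} MW")
```

The result, a few hundred megawatts at peak double precision, is exactly why the power efficiency of mobile-class chips was attracting the exascale community's attention.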

Reading quotes like

"It is completely clear that GPUs are now general purpose parallel computing processors with amazing graphics, and not just graphics chips anymore" (Jen-Hsun Huang, co-founder and CEO of NVIDIA)

I wonder how ARM chipsets (do they even support double precision?) will be able to come anywhere close to these numbers in terms of power efficiency and cost.

Dr. Robert R. Tucci

Dr. Wladawsky-Berger,
I'm no expert in this field, but maybe that is an advantage. Here is a blog post of mine that was largely inspired as a response to yours:
"ExaFLOPS Computing, Is it a Foolhardy Pursuit Headed for a Painful Belly-FLOPS?"
http://qbnets.wordpress.com/2010/06/24/exaflops-computing-is-it-a-foolhardy-pursuit-headed-for-a-painful-belly-flops/

Rich

It makes one wonder whether the current pace of scaling will ever level off. There have been such vast advancements over the past 20 years that it almost seems impossible to keep that pace up. Do you think it will level off at some point?

Irving Wladawsky-Berger

As long as market demand for computing continues, I think that the hardware and software technologies will be able to keep up. The architectures will likely evolve, more centralized or more distributed depending on the problems and the economics. But for the foreseeable future, I believe that one way or another the current pace of scaling will continue.

sto credits

What is at least as exciting, however, is using exascale computing to tackle problems that are highly complex and beyond our current capacity, not only because of their large scale but because of their inherent uncertainty and unpredictability. One way to deal with these uncertainties is to run many copies of the same application with many different parameters, which lets us explore solution spaces that cannot be predicted. This would let us find the best solutions to many problems in science and engineering, and enable us to calculate the probability of extreme events.
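The "many copies with many different parameters" idea is essentially ensemble simulation. A minimal Monte Carlo sketch, where the model function, parameter range, and threshold are all hypothetical stand-ins for a real application:

```python
import random

def simulate(param):
    """Stand-in for one run of a complex, noisy application (hypothetical model)."""
    return param + random.gauss(0, 1)

random.seed(42)

# Run many copies with perturbed input parameters, then estimate the
# probability of an extreme outcome from the resulting ensemble.
ensemble = [simulate(random.uniform(0.0, 2.0)) for _ in range(10_000)]
threshold = 3.0
p_extreme = sum(x > threshold for x in ensemble) / len(ensemble)
print(f"estimated P(outcome > {threshold}) = {p_extreme:.3f}")
```

At exascale, each `simulate` call would itself be a large parallel job; the ensemble layer on top is what turns raw compute into probability estimates for rare events.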

Shawn Montgomery

I have to agree that computing has become very complex. It's what makes the world go round, and society has become dependent on it. The complex task is left to those who can keep up with the demand for higher technologies as we become more dependent than we have ever been. Look at 10 years ago and where we are today, and just imagine what 10 years from now will deliver.

tera gold

The Extreme Scale Systems Center's (ESSC) primary goal is to help enable the best and most productive use possible of emerging peta-/exa-scale high-performance computers. Of particular interest are the systems expected from the DARPA High Productivity Computing Systems (HPCS) program. The ESSC is intended to foster long-term collaborative relationships and interactions between DoD, DoE, DARPA, NRL and ORNL technical staff that will lead to improved and potentially revolutionary approaches to reducing time to solution of extreme-scale computing and computational science problems. The ESSC will support the major thrust areas required to accomplish this goal.

The comments to this entry are closed.