Isaac Newton laid down the foundations of what we now call classical mechanics with the publication of his Principia Mathematica in 1687, where his Laws of Motion were first articulated. Ever since, our scientific understanding of the world around us has been based on classical mechanics: “a set of physical laws governing and mathematically describing the motion of bodies and aggregates of bodies geometrically distributed within a certain boundary under the action of a system of forces.”
Classical mechanics works exceptionally well for describing the behavior of objects that are more or less observable to the naked eye. It accurately predicts the motion of planets as well as the flight of a baseball. It formed the scientific basis for the technology and engineering underlying the Industrial Revolution.
The elegant mathematical models used in classical mechanics depict a world in which objects exhibit deterministic behavior. The same objects, subject to the same forces, will always yield the same results. Within the accuracy of human-scale measurements, these models make essentially perfect predictions.
But this stable world, which could be perfectly described given enough information and scientific knowledge, began to fall apart in the early 20th century. Classical mechanics could not explain the counter-intuitive and seemingly absurd behavior of energy and matter at atomic and subatomic scales. Neither could it explain the behavior of bodies traveling near the speed of light, or phenomena at the vast scales of the universe.
To do so, physics had to wait for quantum mechanics and relativity. Compared to the pastoral worlds that classical physics had been traveling through, the new theories that quantum mechanics, special relativity and general relativity were now spinning often had the feel of some sort of 20th century magical mystery tour.
At the atomic and subatomic scales that quantum mechanics deals with, the notion of a predictable, deterministic world went out the window. You cannot specify the path of an electron, because the very notion of a path is a classical concept that makes no sense in quantum mechanics. The behavior of particles at this scale is explained by their wave functions, which mathematically describe the complete behavior of the particle but can only tell you the probability of finding the particle at a specific position and time.
The laws of quantum mechanics, concisely embodied in Schrödinger’s equation, let you map out the behavior of electrons and other particles, but only as probability distributions over all the possible states they could be in. In fact, the Heisenberg uncertainty principle tells you that it is impossible to simultaneously determine the exact position and velocity of an electron, no matter how good your measurement tools are. The more precisely you know the position, the less precisely you will know its velocity and where it is heading next. The more precisely you know its velocity, the less you will know about where it is at any particular point in time.
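For reference, the standard textbook forms of those two statements can be written as follows; the notation is the conventional one and is not tied to any particular system:

```latex
% Time-dependent Schrödinger equation for a particle of mass m in a potential V(x):
% the wave function \Psi evolves deterministically, but |\Psi(x,t)|^2 only gives the
% probability density of finding the particle at position x at time t.
i\hbar\,\frac{\partial \Psi(x,t)}{\partial t}
  = \left[-\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}}{\partial x^{2}} + V(x)\right]\Psi(x,t)

% Heisenberg uncertainty relation: the uncertainties in position and momentum
% (mass times velocity) can never both be made arbitrarily small.
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```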
Moreover, while in classical mechanics something has either the properties of a particle (e.g., a planet, a baseball) or of a wave (e.g., light, sound), in quantum mechanics all objects exhibit both kinds of properties. The concept of wave-particle duality explains that it all depends on what question you are asking and what experiment you perform to answer the question. Sometimes a wave model will give you the better answer, sometimes a particle model. There is no such thing as absolute reality. It all depends on the mechanisms used to observe and measure that reality.
But the worlds of the very small and the very large are not the only ones that exhibit counter-intuitive, somewhat bizarre behavior. So does the world of highly complex systems, especially those whose components and interrelationships are themselves quite complex, as is the case with systems and evolutionary biology, as well as with organizational and socio-technical systems whose main components are people. In such systems, the dynamic nature of the components, as well as their intricate interrelationships, renders them increasingly unpredictable and accounts for their emergent behavior.
We have long known about complex systems and emergent behavior. The butterfly effect has been used for decades as a metaphor for the idea that small variations in a complex, dynamical system may produce large variations in its longer-term behavior. “Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?” was the title of a 1972 talk by meteorologist Edward Lorenz. Similar phrases are often used around the world to illustrate the difficulties of large-scale weather prediction.
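A minimal illustration of this sensitivity to initial conditions, using the logistic map rather than a weather model (the starting values and parameter below are arbitrary choices for the sketch):

```python
# Sensitivity to initial conditions in a simple chaotic system: the logistic map
# x_{n+1} = r * x_n * (1 - x_n), iterated in the chaotic regime r = 4.0.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

baseline  = logistic_trajectory(0.200000000)  # reference trajectory
perturbed = logistic_trajectory(0.200000001)  # shifted by one part in a billion

for n in range(0, 51, 10):
    print(f"step {n:2d}: |difference| = {abs(baseline[n] - perturbed[n]):.9f}")
# The gap grows from ~1e-9 to order one within a few dozen steps --
# the butterfly effect in miniature.
```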
The study of complex systems is now receiving a lot more attention for a variety of reasons. As a result of the Internet, we are now dealing with fast changing, globally integrated systems in a number of areas, from finance to supply chains, from the propagation of news items to the marketing of products. Before the Internet brought the world much closer together, changes traveled relatively slowly and often became attenuated and lost intensity before reaching other parts of the system. Not so today. In our increasingly integrated world, changes propagate around the world in milliseconds, and will often lead to non-linear, highly unpredictable behaviors. The Internet has become our ultimate butterfly effect amplifier.
The ongoing financial crisis is a case in point. As we know, this was a gathering storm that we totally missed. While there are many reasons why we failed to predict the crisis, one of the major ones is arguably that, for the last several decades, economic thinking has been dominated by neoclassical economics, which some describe as an “idealized vision of an economy in which rational individuals interact in perfect markets.”
The financial crisis may have done for this idealized view of people and markets what the atomic spectrum and speed-of-light experiments did for classical mechanics at the end of the 19th century. Both acted as turn-of-the-century indicators that the world out there is far more complex than previously imagined, and that new theories and models are needed.
New terms like long tails, Freakonomics and black swans, every bit as fanciful as quarks, charm and strangeness, have begun to enter our lexicon. Black swan theory, for example, refers to the consequences of high-impact, hard-to-predict events beyond the realm of normal expectations, yet capable of significantly altering the course of history, such as our financial meltdown, the 9/11 attacks and the rise of the Internet. These events, while catastrophic when they do happen, seem so far outside the boundaries of normal probability that most models miss them, because no one expects them to happen within their lifetimes. But they do happen, and given our increasingly integrated, fast-changing systems, we should expect such large, improbable disturbances to occur more frequently in the years ahead.
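One way to see why models built on “normal” assumptions miss such events: a Gaussian distribution rates extreme deviations as vanishingly rare, while a heavy-tailed distribution, a common stand-in for black-swan-prone processes, rates them orders of magnitude more likely. A small sketch, with illustrative numbers that are not fitted to any real data:

```python
# Probability of an extreme move (5 or more units on a standardized scale) under a
# Gaussian model versus a heavy-tailed Student-t model with 3 degrees of freedom.
# Both distributions and the threshold are illustrative, not fitted to real data.
from scipy.stats import norm, t

threshold = 5.0

p_normal = norm.sf(threshold)      # survival function: P(X > 5) under a Gaussian
p_heavy  = t.sf(threshold, df=3)   # the same tail probability under Student-t(3)

print(f"P(X > 5), normal model    : {p_normal:.2e}")
print(f"P(X > 5), Student-t(df=3) : {p_heavy:.2e}")
print(f"Ratio (heavy-tailed / normal): {p_heavy / p_normal:,.0f}x")
```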
How do you then deal with these highly unpredictable complex systems in our midst? The good news is that advances in information technologies can be a huge help if properly applied. First of all, you need to know what is going on, which requires access to vast quantities of information. Fortunately, we now have the ability to gather huge amounts of information about the real-time behavior of organizations, markets, and other highly complex systems, which we can then quickly analyze with powerful supercomputers. This should help us make better informed and much smarter decisions, including real-time corrections to the system to try to prevent or ameliorate whatever problems the analysis might have uncovered.
How about modeling these complex systems, so we can better understand their behavior and try to anticipate future catastrophic conditions? Since these systems are inherently unpredictable, there is no one model that can accurately tell us about the future. But, reminiscent of quantum mechanics, we can map out the future states of the system, and compute the probabilities of the different states at different points in time. We do something similar today with hurricane prediction, where we are not able to tell precisely where the incoming hurricane will hit a few days from now, but we can calculate cones of probabilities reflecting the possible paths the hurricane might take.
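In the same spirit as those hurricane forecasts, here is a toy version of an ensemble prediction: run the same noisy model many times from slightly different starting points, then summarize the spread of outcomes at each horizon as a cone of likely states. The drifting random walk below simply stands in for a real simulation model:

```python
# Toy ensemble forecast: many runs of a noisy process, summarized as a "cone" of
# likely states at a few horizons. The drifting random walk is a placeholder model.
import random
import statistics

def simulate(start, steps=30, drift=0.5, noise=2.0):
    """One run of the toy model: deterministic drift plus a random shock each step."""
    state, path = start, []
    for _ in range(steps):
        state += drift + random.gauss(0.0, noise)
        path.append(state)
    return path

runs = [simulate(start=random.gauss(0.0, 0.1)) for _ in range(5000)]

# At selected horizons, report the median and a central 90% interval -- the cone.
for step in (4, 14, 29):
    values = sorted(run[step] for run in runs)
    low, high = values[len(values) // 20], values[-len(values) // 20]
    mid = statistics.median(values)
    print(f"step {step + 1:2d}: median {mid:6.2f}, "
          f"90% of runs between {low:6.2f} and {high:6.2f}")
```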
Doing so requires massive supercomputers, because we have to simultaneously run thousands of copies of the same complex applications, using many different combinations of parameters, so we can explore the solution space of these otherwise unpredictable problems over time. This will let us map out what the overall solution space for these systems looks like, e.g., their cones of probabilities, and in particular, it will enable us to calculate the probabilities of extreme events.
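At desktop scale, that idea looks roughly like the sketch below: sweep combinations of parameters, run many copies of the model in parallel for each combination, and estimate how often an extreme outcome occurs. The parameter grid and threshold are arbitrary placeholders, and a real exercise would replace the toy model with the actual application:

```python
# Parameter sweep over many copies of the same toy model, run in parallel, to
# estimate the probability of an extreme outcome under each parameter setting.
# The model, parameter grid, and threshold are illustrative placeholders.
import itertools
import random
from concurrent.futures import ProcessPoolExecutor

EXTREME = 40.0  # an "extreme event": final state above this arbitrary threshold

def final_state(args):
    """Run one copy of the toy model and return its parameters and final state."""
    drift, noise, seed = args
    rng = random.Random(seed)
    state = 0.0
    for _ in range(30):
        state += drift + rng.gauss(0.0, noise)
    return drift, noise, state

def main():
    drifts, noises, copies = (0.2, 0.5, 1.0), (1.0, 2.0, 4.0), 2000
    jobs = [(d, n, s) for d, n in itertools.product(drifts, noises) for s in range(copies)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(final_state, jobs, chunksize=500))
    for d, n in itertools.product(drifts, noises):
        finals = [state for dd, nn, state in results if (dd, nn) == (d, n)]
        prob = sum(state > EXTREME for state in finals) / len(finals)
        print(f"drift={d:.1f} noise={n:.1f}: P(final state > {EXTREME:.0f}) ~ {prob:.3f}")

if __name__ == "__main__":
    main()
```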
This new style of predictive modeling is one of the key objectives of the Extreme Scale Computing initiative that is being launched by the US Department of Energy (DOE) to address major energy and environmental problems, from climate studies to the design of safe nuclear reactors. But, it is clear that many disciplines will benefit from such powerful predictive capabilities, from economics and medicine to city planning and business management.
I truly find this fascinating. Once more, our classical understanding of the world needs to be expanded to include a set of more fanciful, somewhat counter-intuitive, seemingly absurd explanations for events we cannot otherwise grasp. Our 21st century magical mystery tour promises to be every bit as exciting and productive as the one we embarked on over a century ago.
It's good to see the DoE being tasked with doing something commercially useful; 'beating their swords into ploughshares', some might say.
I'm not sure you have it quite right about running thousands of copies of the same application with slightly different initial conditions. That you can do with thousands of Personal Computers, connected by fairly-low-bandwidth networking. You could do it (for example) by setting it as a homework project in a moderately-large school. A good start, and I wish my children's schools would try it, but there is a further level that needs reliable, high-density infrastructure servers.
Extreme scale computing (to my mind) has more to do with how you tie large numbers of processors together to work on a single problem. Here is Intel Research's contribution so far http://techresearch.intel.com/articles/Tera-Scale/1826.htm ; they can build hardware (as can IBM) but the question of how to deploy it productively is a research topic, and one where they are seeking collaborators. It's an Emerging Technology, and (as far as I can tell) no business or academic organization is sure of the way forward.
One of the IBM ventures in the field is of course Blue Gene http://www-03.ibm.com/systems/deepcomputing/bluegene/ . Should we view this as the prototype of things to come? Some miniaturization needed, and some reduction in the power consumption, maybe; but it's available now for early technology adopters to use; and it's likely to be the early adopters who will figure out how to exploit it to make commercially-useful solutions, and that's where the money will be.
Posted by: Chris Ward | April 04, 2010 at 03:32 PM
Irving Wladawsky-Berger's insights on relative quantum physics are to-the-point, and lead to the central issue of the atomic model in Schrodinger terms of probability calculations. Recent advancements in quantum science have produced the picoyoctometric, 3D, interactive video atomic model imaging function, in terms of chronons and spacons for exact, quantized, relativistic animation. This format returns clear numerical data for a full spectrum of variables. The atom's RQT (relative quantum topological) data point imaging function is built by combination of the relativistic Einstein-Lorenz transform functions for time, mass, and energy with the workon quantized electromagnetic wave equations for frequency and wavelength.
The atom labeled psi (Z) pulsates at the frequency {Nhu=e/h} by cycles of {e=m(c^2)} transformation of nuclear surface mass to forcons with joule values, followed by nuclear force absorption. This radiation process is limited only by spacetime boundaries of {Gravity-Time}, where gravity is the force binding space to psi, forming the GT integral atomic wavefunction. The expression is defined as the series expansion differential of nuclear output rates with quantum symmetry numbers assigned along the progression to give topology to the solutions.
Next, the correlation function for the manifold of internal heat capacity energy particle 3D functions is extracted by rearranging the total internal momentum function to the photon gain rule and integrating it for GT limits. This produces a series of 26 topological waveparticle functions of the five classes; {+Positron, Workon, Thermon, -Electromagneton, Magnemedon}, each the 3D data image of a type of energy intermedon of the 5/2 kT J internal energy cloud, accounting for all of them.
Those 26 energy data values intersect the sizes of the fundamental physical constants: h, h-bar, delta, nuclear magneton, beta magneton, k (series). They quantize atomic dynamics by acting as fulcrum particles. The result is the exact picoyoctometric, 3D, interactive video atomic model data point imaging function, responsive to keyboard input of virtual photon gain events by relativistic, quantized shifts of electron, force, and energy field states and positions. This system also gives a new equation for the magnetic flux variable B, which appears as a waveparticle of changeable frequency.
Images of the h-bar magnetic energy waveparticle of ~175 picoyoctometers are available online at http://www.symmecon.com with the complete RQT atomic modeling manual titled The Crystalon Door, copyright TXu1-266-788. TCD conforms to the unopposed motion of disclosure in U.S. District (NM) Court of 04/02/2001 titled The Solution to the Equation of Schrodinger.
Posted by: Dale B. Ritter | April 05, 2010 at 04:28 AM
Quantum mechanics seems to apply at small scales; nobody has seen evidence of it on a large scale, where outside influences can more easily destroy fragile quantum states.
Posted by: Quantum Mechanics | April 20, 2010 at 04:54 AM