Super FLOPS
Computers are digital devices that operate on a finite number of bits. The early Intel chips (through the Pentium III) operate on 32-bit data, and the latest Intel chips (the Pentium D and recent Pentium 4 models) operate on 64-bit data. A 32-bit word can represent about 4 x 10^9 distinct values, which allows signed integers from about -2 billion to +2 billion. The largest representable number is not even as large as the number of stars in our Milky Way Galaxy (200-400 billion). Zero and one are represented, but there's nothing in between. Most real-world calculations are not possible using integer math alone. That's why floating point notation was invented.
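A minimal sketch of these 32-bit integer limits, using Python's ctypes module to emulate the wraparound that fixed-width hardware arithmetic produces:

```python
import ctypes

# 32-bit signed integers span -2^31 .. 2^31 - 1 (about +/- 2 billion)
INT32_MAX = 2**31 - 1
print(INT32_MAX)  # 2147483647

# Adding 1 overflows; ctypes.c_int32 emulates the hardware wraparound
wrapped = ctypes.c_int32(INT32_MAX + 1).value
print(wrapped)  # -2147483648
```

Since 2,147,483,647 is well short of 200 billion, even a tally of the Galaxy's stars would overflow a 32-bit integer.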
In the IEEE Floating Point Standard, a 32-bit number is divided into a sign bit, an eight-bit exponent, and a 23-bit mantissa. This allows representation of numbers between about 10^-38 and 10^38 with about seven digits of precision. 64-bit floating point numbers have a sign bit, an eleven-bit exponent, and a 52-bit mantissa, allowing numbers between about 10^-308 and 10^308 with about fifteen digits of precision. This appears to allow for anything from the smallest elementary particle to the size of the universe. However, computers can "think" only in integers, so there's some complex shuffling of bits in the background that allows floating point computations to proceed. All this takes time, and a measure of how well it's done is a figure of merit called "floating point operations per second," or FLOPS (sometimes written FLOP/s).
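The sign/exponent/mantissa layout can be inspected directly. This sketch uses Python's struct module to pack a value as a 32-bit IEEE float and pull the three fields back out (the function name is just for illustration):

```python
import struct

def float32_fields(x):
    """Split a 32-bit float into its sign, exponent, and mantissa bits."""
    bits, = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31              # 1 bit
    exponent = (bits >> 23) & 0xFF # 8-bit biased exponent
    mantissa = bits & 0x7FFFFF     # 23-bit fraction
    return sign, exponent, mantissa

# 1.0 is stored as sign 0, exponent 127 (the bias for 2^0), mantissa 0
print(float32_fields(1.0))   # (0, 127, 0)
print(float32_fields(-2.0))  # (1, 128, 0)
```

The exponent is stored with a bias of 127, so an exponent field of 128 means 2^1, which with a negative sign bit gives -2.0.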
A skilled human takes about fifteen minutes to do a long division to ten significant digits, so humans calculate in milliFLOPS. A hand calculator performs at the ten-FLOPS level. A 3.6 GHz Pentium 4 achieves a peak computation speed of 14.4 gigaFLOPS, with an average rating of 7.2 gigaFLOPS. The Pentium is a general-purpose CPU, so it's not optimized for floating point calculation. Computer game systems, however, require extensive calculation ability for realistic graphics, so they have specialized computation architectures that achieve higher FLOPS. The Xbox 360 (Microsoft) can do a teraFLOP, and the PlayStation 3 (Sony) has a claimed speed of more than two teraFLOPS.
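A crude FLOPS measurement is easy to sketch: time a loop of floating point operations and divide the operation count by the elapsed time. Note that in an interpreted language like Python the loop overhead dominates, so this measures the interpreter's throughput, far below the hardware's peak:

```python
import time

def estimate_flops(n=1_000_000):
    """Rough FLOPS estimate from timing n iterations of a multiply-add.
    Interpreter overhead dominates, so the result is orders of magnitude
    below the CPU's hardware peak."""
    x = 1.0000001
    acc = 0.0
    t0 = time.perf_counter()
    for _ in range(n):
        acc = acc + x * x  # two floating point operations per iteration
    dt = time.perf_counter() - t0
    return 2 * n / dt

print(f"about {estimate_flops():.1e} FLOPS")
```

Published FLOPS ratings use carefully tuned compiled kernels (the LINPACK benchmark is the standard for supercomputer rankings), not naive loops like this one.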
A teraFLOP-capable computer on a tabletop is quite an achievement, but many computations require more computing power. Weather forecasting and other scientific simulations often require more than a teraFLOP to get answers in a reasonable time. After all, it would be nice to predict the weather before it happens. For tasks like these, there are supercomputers. Supercomputers take advantage of the fact that most computations can be broken into many smaller processes that can be done in parallel. Supercomputers are typically massively parallel processors built from many smaller computers working together.
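The divide-and-conquer idea behind parallel processing can be sketched in miniature with Python's multiprocessing module: split one big computation (here, a sum of squares, chosen only as an example) into chunks, hand each chunk to a separate worker process, and combine the partial results:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """One worker's share: sum of squares over a half-open range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Split [0, n) into one chunk per worker
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    # Each worker computes its chunk; the partial sums are then combined
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(1_000_000))
```

A supercomputer applies the same pattern at vastly larger scale, with thousands of processors and fast interconnects to pass the partial results around.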
In December 2006, the US Defense Advanced Research Projects Agency (DARPA) announced that it would fund a $500 million program for the development of a next-generation supercomputer. In this final phase of DARPA's High Productivity Computing Systems Program, IBM and Cray Inc. will develop hardware and software to increase computing capability by an order of magnitude. Today's fastest computer, the IBM Blue Gene/L at Lawrence Livermore National Laboratory, runs at a peak speed of about 280 teraFLOPS. It's built from more than 130,000 processor chips. DARPA's desired ten-fold increase in speed would lead to the first petaFLOP computer. Some scientists believe that computers will become sentient somewhere between the petaFLOP and exaFLOP (1,000 petaFLOPS) level.