
May 31, 2007

Super FLOPS

Computers are digital devices that operate on a finite number of bits. The early Intel chips (through the Pentium III) operate on 32-bit data, and the latest Intel chips (the Pentium D and later Pentium 4 models) operate on 64-bit data. A 32-bit word can represent roughly 4 x 10^9 distinct values, which allows signed integers from about -2 billion to 2 billion. The largest such number is not even as large as the number of stars in our Milky Way Galaxy (200-400 billion). Zero and one are represented, but there's nothing in between. Most real world calculations are not possible using integer math alone. That's why floating point notation was invented.
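For readers who like to check the arithmetic, here is a minimal Python sketch (my own, not from any reference below) that prints the number of distinct 32-bit values and the signed two's-complement range:

BITS = 32
print(2**BITS)                            # 4294967296 distinct values, roughly 4 x 10^9
print(-2**(BITS - 1), 2**(BITS - 1) - 1)  # signed range: -2147483648 to 2147483647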

In the IEEE Floating Point Standard, a 32-bit number is divided into a sign bit, an eight-bit exponent, and a 23-bit mantissa. This allows representation of numbers between about 10^-38 and 10^38 with about seven digits of precision. 64-bit floating point numbers have a sign bit, an eleven-bit exponent, and a 52-bit mantissa, allowing numbers from about 10^-308 to 10^308 with about 15 digits of precision. This appears to allow for anything from the smallest elementary particle to the size of the universe. However, computers can "think" only in integers, so there's some complex shuffling of bits in the background which allows floating point computations to proceed. All this takes time, and a measure of how well this is done is a figure of merit called "floating point operations per second," or FLOPS (sometimes written as FLOP).
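The bit fields described above can be inspected directly. Here is a small illustrative Python sketch (my own, using only the standard struct module) that unpacks a 32-bit IEEE float into its sign, exponent, and mantissa fields:

import struct

def decompose_float32(x):
    # Pack the value as a 32-bit IEEE 754 float, then reinterpret the bytes as an integer.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31                  # 1 sign bit
    exponent = (bits >> 23) & 0xFF     # 8-bit biased exponent
    mantissa = bits & 0x7FFFFF         # 23-bit mantissa (fraction)
    return sign, exponent, mantissa

sign, exponent, mantissa = decompose_float32(6.022e23)
print(sign, exponent - 127, hex(mantissa))   # sign, unbiased exponent, fraction bits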

A skilled human takes about fifteen minutes to do a long division with ten significant digits, so humans calculate in milliFLOPS. A hand calculator performs at the ten FLOPS level. A 3.6 GHz Pentium 4 achieves a peak computation speed of 14.4 gigaFLOPS, with an average rating of 7.2 gigaFLOPS. The Pentium is a general purpose CPU, so it's not optimized for floating point calculation. Computer game systems, however, require extensive calculation ability for realistic graphics, so they have specialized computation architectures that achieve higher FLOPS. The Xbox 360 (Microsoft) can do a teraFLOP, and the PlayStation 3 (Sony) has a claimed speed of more than two teraFLOPS.

A teraFLOP-capable computer on a tabletop is quite an achievement, but many computations require still more computing power. Weather forecasting and other scientific simulations often need far more than a teraFLOP to get answers in a reasonable time. After all, it would be nice to predict the weather before it happens. For tasks like these, there are supercomputers. Supercomputers take advantage of the fact that most computations can be broken into many smaller processes that can be done in parallel. Supercomputers are typically massively parallel processors built from many smaller computers working together.

In December, 2006, the US Defense Advanced Research Projects Agency (DARPA) announced that it would fund a $500 million program for the development of a next generation supercomputer. In this final phase of DARPA's High Productivity Computing Systems Program, IBM and Cray Inc. will develop hardware and software to increase computing capability by an order of magnitude. Today's fastest computer, the IBM Blue Gene/L at Lawrence Livermore National Laboratory, runs at a peak speed of about 280 teraFLOPS. It's built from more than 130,000 processor chips. DARPA's desired ten-fold increase in speed would lead to the first petaFLOPS computer. Some scientists believe that computers will become sentient somewhere between the petaFLOPS and exaFLOPS (1,000 petaFLOPS) level.

References:
1. Heidi Ledford, "Better, faster - and easier to use," Nature, vol. 444, no. 7122 (21/28 December 2006), p. 993.

May 30, 2007

Speed Dating

In the days of print journals, before the internet, when our Morristown research site had enough scientists to justify a library staff of six, we placed our names on routing lists for various journals. For those younger scientists who have never heard of a routing list, these were lists of names affixed to new journals as they arrived. The first person on the list got a journal in his mailbox immediately. He was given the opportunity to read it for a few days before putting it back into the interdepartmental mail for the next person on the list. If a person didn't have quite enough time to browse the issue, he would note a "send after" date and place it in the mail for the next person on the list. It was a self-policing system that seemed to work well most of the time, although I once got about twenty issues of Nature in my mailbox all at once from someone who was ignoring his mail stack. The nice part about this system, and the part I miss most, was the ease with which one could scan through a journal for interesting articles, whether or not they were in your particular field. Even today, it's hard to do this on your PC desktop, since titles, and even abstracts, don't give the full flavor of the underlying work. I was on the routing list for the Proceedings of the National Academy of Sciences of the United States of America (PNAS), a multidisciplinary scientific journal that rarely had an article in any of my specialty areas, but it still had a plethora of interesting articles in other areas of science and mathematics.

A recent paper [1] in PNAS from a multidisciplinary, multi-institution team analyzed how the pace of life in cities has been increasing, and how this pace may not be sustainable. The research is important, since more than half of the world's population now resides in cities. Their essential conclusion is that many metrics of urban life, including wealth and R&D jobs, scale superlinearly with population (a power law with exponent greater than one), while most infrastructure metrics, such as fuel consumption and road surface area, scale sublinearly (exponent less than one). As if to emphasize the term "pace," even the walking speed of city residents increases with city size. The authors state that "Cities have long been known to be society's predominant engine of innovation and wealth creation, yet they are also its main source of crime, pollution, and disease." Wealth creation and innovation have exponents greater than one (typically 1.1 to 1.3), while infrastructure items have exponents of about 0.8, reflecting economies of scale. Here's a list of some of the exponents that the study found. All items, except the last two from Germany, are from US statistics.

• New patents 1.27
• Inventors 1.25
• Private R&D employment 1.34
• Super-creative employment 1.15
• R&D establishments 1.19
• Total wages 1.12
• Total bank deposits 1.08
• New AIDS cases 1.23
• Serious crimes 1.16
• Length of electrical cables 0.87
• Road surface area 0.83
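The exponents listed above are the slopes of power-law fits of the form Y = c·N^β, where N is city population. The short Python sketch below shows how such an exponent is estimated from a log-log fit; the population and patent numbers here are invented for illustration and are not the study's data:

import numpy as np

# Hypothetical (population, metric) pairs, chosen only to illustrate the fit.
population = np.array([1e5, 5e5, 1e6, 5e6, 1e7])
new_patents = np.array([120, 930, 2300, 17800, 43000])

# Fit Y = c * N**beta by linear regression in log-log space.
beta, log_c = np.polyfit(np.log(population), np.log(new_patents), 1)
print(round(beta, 2))   # an exponent greater than 1 indicates superlinear scaling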

The authors predict that the "social life" in a city, which is not exactly a measurable quantity, increases with population size. They also state that population growth will require faster innovation to "sustain growth and avoid stagnation or collapse." Perhaps something such as New York City's proposed "Congestion Pricing" scheme, which calls for an $8 fee for automobiles ($21 for trucks) entering mid-town Manhattan from 6:00 AM to 6:00 PM on weekdays, counts as innovation. If it pays for technical improvements to public transportation, then it might make sense.

References:
1. Luís M. A. Bettencourt, José Lobo, Dirk Helbing, Christian Kühnert, and Geoffrey B. West, "Growth, innovation, scaling, and the pace of life in cities," Proc. Natl. Acad. Sci. vol. 104, no. 17 (April 24, 2007), pp. 7301-7306.
2. Mathematical Models Explain Walking Speed in Cities (Wired News, April 20, 2007).

May 29, 2007

Pierre-Gilles de Gennes

In the mid-1970s, I bought my wife one of the first digital watches. Her co-workers called it the "atomic watch," because they had never seen anything like it. It showed time in hours and minutes, but only at the push of a button. The reason it didn't display the time continuously is that its Light-Emitting Diode (LED) display consumed considerable battery power. Even so, we needed to replace the batteries every six months. Soon thereafter, digital watches appeared with a continuous time display because they used a new type of material called a liquid crystal. Pierre-Gilles de Gennes, the scientist responsible for major advances in the development of such materials, died May 18 at age 74.

Pierre-Gilles de Gennes was born on October 24, 1932, in Paris, France, and he was home-schooled by his parents, a physician and a nurse, until age twelve. He attended the École Normale Supérieure, and he received his Ph.D. in 1957. He spent some time as a postdoctoral researcher at the University of California, Berkeley, with Charles Kittel, author of a popular textbook on solid state physics. After an extended period in the French Navy, he started a group for the study of superconductors at Orsay. He developed an electron correlation function for superconductors which is now called the "de Gennes function." Like many who begin by studying superconductors, he changed fields, starting his studies of liquid crystals in 1968 and branching out (pun intended) into polymer mechanics in 1971. From 1976 to 2002 he was Director of the École Supérieure de Physique et de Chimie Industrielles de la Ville de Paris. He was awarded the 1991 Nobel Prize in Physics for "discovering that methods developed for studying order phenomena in simple systems can be generalized to more complex forms of matter, in particular to liquid crystals and polymers".

De Gennes' fundamental work on liquid crystals was not protected by patents. These would have been worth many millions of dollars in display applications. In an interview with Le Monde, he stated that research scientists "were not taught to think about applications." Having learned his lesson, he looked towards applications of physical phenomena later in life. France is a major wine producer and exporter, and de Gennes noted that application of fungicide and other sprays to grapes was inefficient. The grapes were not wetted by the liquids, and droplets formed on the surface. His research in this area led to sprays with an order of magnitude greater coverage for the same volume. After retirement, much like Erwin Schrödinger, he became interested in the working of the human brain.

References:
1. Martin Weil, "Obituary: LCD and Wine Researcher Pierre-Gilles de Gennes," (Washington Post, May 23, 2007).
2. Physicist Pierre-Gilles De Gennes Dies (The Guardian, May 22, 2007).
3. Pierre-Gilles de Gennes, 74; won Nobel Prize in physics for liquid crystal work (LA Times, May 23, 2007).

May 28, 2007

Memorial Day

Monday, May 28, 2007, is the Memorial Day holiday in the US. It occurs each year on the last Monday in May. Its purpose is to commemorate those killed in time of war. It's existed as a holiday since the American Civil War.

The Memorial Day weekend is considered to be the unofficial start of summer in the US. The official start of summer is the Summer Solstice, which occurs this year on June 21, 2007, at 18:06 UTC.

The Indianapolis 500 automobile race is conducted each year on the Memorial Day weekend. It's a 500-mile race that is as much a test of the endurance of man and machine as it is a speed race. Most importantly, women are allowed by etiquette to wear white shoes from now until Labor Day [1].

Many people travel during the Memorial Day weekend, and there are about 500 deaths and 20,000 injuries caused by traffic accidents in this period [2,3].

References:
1. "Why aren't you supposed to wear white after Labor Day?" (Yahoo).
2. Memorial Day Holiday Period Traffic Fatality Estimate, 2005 (National Safety Council).
3. This Blog, "Unsafe at Any Speed."

May 25, 2007

The Other Kind of Digit

Phrenology (from the Greek φρην, mind, and λογος, knowledge) is a discredited science, first developed in the nineteenth century, which correlated the shape of the head with personality and other traits. This idea originated with Aristotle, and it was taken quite seriously in the Victorian era. Today, the idea that shape of a person's skull can predict the working of the brain inside is classified as a pseudoscience. In fact, phrenology is cited often as a prime example of a pseudoscience. Today, 2300 years after Aristotle, we live in an age when statistics give credence to many things, and people often ascribe causality to correlation.

A soon-to-be-published paper in the British Journal of Psychology, a journal of The British Psychological Society, claims that finger length is a predictor of SAT test scores. Psychologists at the University of Bath looked at the finger lengths of seventy-five children. They claim to have measured these to an accuracy of 0.01 mm (10 μm), which should temper our belief in their results. They then calculated the ratio of the length of the index finger to that of the ring finger. In brief, a smaller ratio correlated with a larger difference between the math and verbal portions of the SAT test. A small ratio in boys favored a higher math score, and a larger ratio in girls favored a higher verbal score.

The psychologists believe that the causal link relates to pre-natal levels of testosterone and estrogen in the womb. The relative concentrations of these hormones are thought to affect both finger length and brain development. Testosterone may promote development of spatial and mathematical skills, whereas estrogen is thought to associate with verbal ability. The subtext to all this is that women are genetically disadvantaged in mathematics. Apparently, the authors are careful not to state this explicitly for fear of the consequences.

My left index finger is 90 mm long, measured from the notch between it and the adjacent finger to its tip (excluding the fingernail). My ring finger is 79 mm, giving a ratio of 1.139, but I can't compare my ratio to others, since the paper is not yet published. If you measure your ratio, you can compare it with mine. I received the highest score in my secondary school on a national test administered by the Mathematical Association of America. I can't remember my SAT math score exactly, but it was somewhat higher than 700 out of a possible 800. Hopefully, my blog reflects the fact that my verbal score was similarly high.

Reference:
1. Andrew McLaughlin, "Finger length helps predict SAT exam results, study shows" (University of Bath Press Release).

May 24, 2007

Prime Factors

What's special about the following 307-digit number?

1159420574072573064369807148876894640753899791702017724986868353538822483859966756608000
6095408005179472053993261230204874028604353028619141014409345351233471273967988850226307
5752809379166028555105500425810771176177610094137970787973806187008437777186828680889844
712822002935201806074755451541370711023817

First, it's more simply stated as 2^1039 - 1. Second, it's the largest number of its type to be factored. Its three prime number factors are

• 5080711

• 55853666619936291260749204658315944968646527018488637648010052346319853288374753

and

• 207581819464423827645704813703594695162939708007395209881208387037927290903246
7938234314388414483488253405334476911222302815832769652537609141018910524199389
9334109711624358962065972167481161749004803659735573409253205425523689

Factoring of this number was a brute force exercise involving computer clusters at three institutions, the École Polytechnique Fédérale de Lausanne (Switzerland), the University of Bonn (Germany), and Nippon Telegraph and Telephone (Tokyo, Japan) [1, 2]. The calculation took eleven months, and the same calculation on a typical personal computer would take about a hundred years. Because of the particular form of the number, a power of two minus one, the calculation was aided by the "special number field sieve." This sieve was invented in the 1980s by Arjen Lenstra, his brother, Hendrik Lenstra, Mark Manasse, and John Pollard. Not-so-special numbers are harder to factor. A 200-digit non-special number (the RSA-200 challenge number) was factored in 2005 in an eighteen-month effort. That number took the equivalent of about seventy-five years of time on a typical personal computer.

Why is factoring important? It's easy to multiply large prime numbers, but difficult to decompose a large number into its prime factors. This fact was developed into a method for secure communications, and it's used in nearly all secure internet transactions [3]. Factoring numbers is equivalent to breaking this code, so the numbers used have been getting larger and larger. Presently, for 1,024-bit encryption, two prime numbers of about 150 digits each are multiplied to produce a 300-digit modulus. No one need worry yet, since the 307-digit number just factored is a special number that's somewhat easier to factor, and factoring it still took a lot of computer time. Still, we will soon see the day when 1,024-bit encryption is not sufficiently secure, and we'll need to use 2,048-bit keys.
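As a toy illustration of this asymmetry (my own sketch, with six-digit primes rather than 150-digit ones): multiplying two primes is a single machine operation, while even the simplest factoring method, trial division, already takes noticeable work, and it becomes utterly hopeless at cryptographic sizes.

import time

def smallest_factor(n):
    # Trial division: fine for a 12-digit number, hopeless for a 300-digit one.
    if n % 2 == 0:
        return 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n

p, q = 999983, 1000003       # two small primes, for illustration only
n = p * q                    # building the "public" modulus is instantaneous

start = time.time()
print(smallest_factor(n), "found in", round(time.time() - start, 3), "seconds")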

References:
1. Florence Luy, "A Mighty Number Falls" (Press Release, Ecoles Polytechniques fédérales, May 21, 2007).
2. Iain Thomson, "Clock ticking on 1024-bit encryption safety" (vnunet.com, 23 May 2007).
3. D. M. Gualtieri, "Keeping Secrets," Phi Kappa Phi Forum, vol. 83, no. 2, pp. 6-7 (Spring 2003). Copy available via e-mail request.

May 23, 2007

Light Activated Micro-Actuator

How fast can you say "1,2-bis(5-methyl-2-phenyl-4-thiazolyl)perfluorocyclopentene?" The organic chemists among you can do this in less than four seconds. Other chemists would take about six seconds. Materials scientists like me would do it in about ten seconds. Upon exposure to ultraviolet light, this molecule can change shape in about 25 microseconds, several hundred thousand times faster than you can say its name, and it can do useful work as well. More surprisingly, exposure to visible light can bring the molecule back to its original shape, so that it can be triggered again by ultraviolet light. It's a light-activated molecular motor.

Piezoelectric materials (which change shape in response to an electrical charge), magnetostrictive materials (which change shape in response to a magnetic field), and shape-memory alloys (which change shape in response to temperature) are well-known and useful materials. These materials change shape reversibly, but light activated materials generally change shape irreversibly. Recently [1], Masahiro Irie, a professor of Applied Chemistry and Materials Science at Kyushu University, Japan, and his colleagues, synthesized the aforementioned material and demonstrated its photoisomerization in response to ultraviolet light. This molecule has a "Y" shape, and UV irradiation excites an electron, which causes a bond to shift to a location near the branch of the "Y." A carbon ring is formed, and this twists the molecule's shape. Visible light, tuned to the energy of the new bond, breaks the bond, and the molecule springs back to its original shape.

Irie and his group prepared crystalline rods of this material in which the concerted motion of many individual molecules produces a macroscopic response. They found that the rods expand and contract by as much as 7%. A 250-micrometer rod, irradiated on one side, had an end movement of 50 micrometers. There was enough force generated at the tip to move a gold sphere about a hundred times heavier than the rod. Applications for this effect are manifold, and it will be interesting to monitor progress in this research area in the near future. Nanobot propulsion would be one application.

References:
1. Seiya Kobatake, Shizuka Takami, Hiroaki Muto, Tomoyuki Ishikawa, and Masahiro Irie, "Rapid and reversible shape changes of molecular crystals on photoirradiation," Nature vol. 446 (12 April 2007), pp. 778-781.
2. Mason Inman, "Crystals twist about in response to light" (New Scientist Online).

May 22, 2007

The Blues

The Inuit apparently have twenty-seven names for ice [1]. The reason is obvious: the quality of ice is important to the Inuit, who live in a landscape dominated by water, ice, and snow. They are keen observers of ice, and they are able to place ice into twenty-seven categories. Non-Inuit speakers lack this rich vocabulary. When they see ice, they may categorize it with an adjective, such as "melting," but they don't "see" ice as well as the Inuit do. The idea that language influences the way its speakers think is called the Sapir-Whorf hypothesis, after the American linguists Edward Sapir and Benjamin Whorf.

This hypothesis was put to the test in a recent study published in the Proceedings of the National Academy of Sciences [2]. A research team led by Jonathan Winawer of the Massachusetts Institute of Technology, with members from the University of California at Los Angeles, the Smith-Kettlewell Eye Research Institute (San Francisco), and Stanford University, tested whether a slightly richer linguistic palette translates into enhanced sensory perception. Their experiment was based on the fact that the Russian language has two words for the color blue, one for light blue, and the other for dark blue. The team recruited about twenty-five native Russian speakers and twenty-five native English speakers in the MIT area and presented them with squares of various shades of blue on a screen. The object was to match a particular square to one of two other squares. Twenty shades of blue were presented.

Russian speakers were better at distinguishing the shades when they straddled the boundary between light and dark blue, and they were ten percent faster at discriminating light from dark blue than between blues in the same category. When the Russian speakers were mentally distracted by the additional task of memorizing an eight-digit number (verbal interference), their superiority in color discrimination over English speakers vanished.

Why is this finding important to technologists? Louis Pasteur said, "chance favors the prepared mind." Unless your mind is preprogrammed (by formal education, keeping up with the scientific literature, etc.), you may not recognize certain clues in your experiment's data.

References:
1. Arctic Alive Web Site.
2. Jonathan Winawer, Nathan Witthoft, Michael C. Frank, Lisa Wu, Alex R. Wade, and Lera Boroditsky, "Russian blues reveal effects of language on color discrimination," Proc. Natl. Acad. Sci. USA, 10.1073/pnas.0701644104 (April 30, 2007, Online Prepublication Article).
3. Roxanne Khamsi, "Russian speakers get the blues" (New Scientist Online).
4. Michael Hopkin, "Seeing the blues: Having different words for light and dark blue may change how you see them" (Nature Online (Subscription only), 30 April 2007).

May 21, 2007

High Temperature Nano-Glue

Very often there's a need in my laboratory for some high temperature "glue." One vendor, Aremco, has many useful products in this area, my favorite being their No. 668 alumina-silica paste. I've used this to bond alumina ceramics for several projects. Of course, one problem I faced, and the key problem with all "glues," is the finite thickness of the glue layer itself. It would be nice to be able to bond materials with a bonding layer of vanishingly small thickness. Nowadays, the expression "vanishingly small" has been replaced by "nano," so it's no wonder that we now have a nano-glue.

A group of materials scientists and physicists from Rensselaer Polytechnic Institute (a school I attended for two undergraduate years in the 1960s and never really liked, although I hear it's much better now), IBM, and Technion-Israel Institute of Technology, have developed a nano-glue formed from a self-assembled organosilane molecular nanolayer that acts as a high temperature bonding agent [1]. The layer is actually an organic nano-sandwich. In the middle are molecules functionalized with oxygen at one end, and sulfur at the other. The outsides of the sandwich contain copper and silica. The functionalized ends of the molecule bond well to surfaces, and when the layer is heated to about 400 °C, the copper and silica react to form the bonding material.

The group leader, Ganapathiraman Ramanath, says that this nano-glue was discovered by accident. Ramanath had worked for many years with similar organic materials without the copper and silica. These are good glues at low temperatures, but they decompose and detach from surfaces when heated above about 350 °C. The copper and silica layers were added originally to promote mechanical strength of the bonding layer, but they were found to stabilize the assemblage at high temperature. The bonding layers achieve most of their strength at 400 °C, but they continue to strengthen up to 700 °C. This nano-glue is five times stronger than the bonding afforded by a copper-silica structure alone.

Ramanath says the material could sell for much less than a dollar a gram, which is quite a low price by most standards. Patent protection has been sought by the inventors, but the same fundamental approach could be used in the development of new families of nano-glues. An obvious route would be substitution of other metals for copper. This research was supported by the National Science Foundation.

References:
1. Darshan D. Gandhi, Michael Lane, Yu Zhou, Amit P. Singh, Saroj Nayak, Ulrike Tisch, Moshe Eizenberg, and Ganapathiraman Ramanath, "Annealing-induced interfacial toughening using a molecular nanolayer," Nature, vol. 447 (17 May 2007), pp. 299-302.
2. Michael Mullaney, "Inexpensive 'nanoglue' can bond nearly anything together" (RPI Press Release).
3. Julie Steenhuysen, "Move over Elmer's: Nanoglue is thinner, stickier" (Reuters, Washington Post)

May 18, 2007

Sound-Assisted Reactions

Many reactions must overcome an activation energy barrier to proceed. One good example is the thermite reaction, in which aluminum metal reduces iron oxide,

2Al + Fe2O3 → 2Fe + Al2O3.

This reaction is very energetic, releasing about 850 kJ per mole of aluminum oxide produced, but overcoming the activation barrier requires considerable energy, usually supplied by igniting a magnesium strip.

Electricity is a good method of overcoming activation energy. One needs only think of what happens when you spark a mixture of hydrogen and oxygen. Lightning is nature's chemist, producing nitrogen oxides and ozone from nitrogen and oxygen in the air. Lightning was once thought to have been responsible for production of chemicals that started life on Earth, but the field of astrobiology has proposed other alternatives. Kenneth Suslick, a Professor of Chemistry at the University of Illinois at Urbana-Champaign, and a graduate student, Nathan Eddingsaas, have been encouraging chemical reactions through the use of electricity produced while crushing and breaking crystals of certain compounds [1, 2].

Sir Francis Bacon, who was an advocate of applied science in the sixteenth and seventeenth centuries, noticed in 1605 that scratching sugar with a knife in a dark room produced light [3]. This effect, called fractoluminescence, or more generally mechanoluminescence, arises from the recombination of electric charges separated by the cleavage of crystal planes and the attendant excitation of gas molecules in the air. A dramatic demonstration of this involves crushing wintergreen flavor Lifesavers in the dark [4]. In this case, the wintergreen acts as a phosphor to transform the ultraviolet light emitted by ionization of air to visible light.

Not content with just crushing crystals, Suslick and Eddingsaas subjected slurries of resorcinol crystals in paraffin oil to intense sound waves. They bubbled nitrogen and oxygen through the mixture, and the resulting mechanoluminescence from sonication proved to be several orders of magnitude more intense than that caused by mechanical grinding. Acoustic cavitation, which is the formation and implosion of bubbles and the bane of many mechanical fluid-handling systems, generates intense pressures that shatter the slurry crystals with great force. Optical spectra indicated the occurrence of gas phase reactions.

References:
1. Nathan C. Eddingsaas and Kenneth S. Suslick, "Mechanoluminescence: Light from sonication of crystal slurries," Nature vol. 444 (9 November 2006), p.163. A copy is available here.
2. Nathan C. Eddingsaas and Kenneth S. Suslick, "Intense Mechanoluminescence and Gas Phase Reactions from the Sonication of an Organic Slurry," J. Am. Chem. Soc. (To Be Published).
3. F. Bacon, "Of the Advancement of Learning," (G. W. Kitchin, Editor; Dent, London, 1915). Full text copies are available here, and here.
4. Linda M. Sweeting, "Light Your Candy," ChemMatters (October 1990), pp. 10-12.
5. Diana Yates, "Mechanoluminescence event yields novel emissions, reactions" (University of Illinois Press Release).
6. Summary of Research at Suslick's Laboratory.

May 17, 2007

Internal Combustion Engines

The internal combustion engine has been with us longer than most of us realize. So, why is it still so inefficient? The efficiency of a typical automotive engine is only about 25%. According to thermodynamic principles established by Sadi Carnot, the factor limiting the efficiency of any heat engine is the temperature difference between its heat source and its heat sink. Of course, entropy prevents 100% efficiency in any engine, but some come close: combined-cycle gas turbine plants can reach about sixty percent efficiency.
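Carnot's limit is easy to state: for source and sink temperatures Th and Tc (in kelvin), no heat engine can exceed an efficiency of 1 - Tc/Th. Here is a minimal Python sketch, with temperatures chosen only for illustration:

def carnot_efficiency(t_hot, t_cold):
    # Ideal limit set only by the source and sink temperatures, in kelvin.
    return 1.0 - t_cold / t_hot

# Roughly 1200 K combustion gas against a 350 K exhaust gives an ideal limit near 70%;
# real engines fall well short of this bound.
print(round(carnot_efficiency(1200.0, 350.0), 2))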

Early internal combustion engines did not incorporate compression into their cycles. An explosive charge was introduced into a chamber, and then exploded to produce mechanical work. As early as the seventeenth century, the Englishman Sir Samuel Morland built water pumps in which gunpowder was the explosive agent. Alessandro Volta (whom I mentioned in a previous article) built a cork gun powered by a mixture of hydrogen and air around 1780. Advancements in this powered-piston concept continued, but compression of the fuel-air mixture wasn't considered until Carnot's fundamental work on thermodynamics, published in 1824. Just a few years later, in 1838, the Englishman William Barnett patented the first engine with cylinder compression.

The Wankel engine was introduced into automobiles by Mazda when I was still a student. This engine was invented many years earlier by the German inventor Felix Wankel (1902-1988), who never had a driver's license. The principal advantages of the Wankel engine are its mechanical simplicity and light weight, but its efficiency isn't as good as that of a piston engine, and its rotary seals are difficult to maintain. More relevant today is the Atkinson-cycle engine. This engine, invented by James Atkinson in 1882, is short on power, but high in efficiency. Its innovation is making the power stroke of the piston longer than the compression stroke. The Toyota Prius hybrid electric vehicle uses this type of engine, as does the Toyota Camry Hybrid. The compression ratio for these is nearly thirteen to one.
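The benefit of a higher effective ratio can be seen in the ideal air-standard Otto-cycle relation, efficiency = 1 - r^(1-γ), where r is the ratio and γ ≈ 1.4 for air. The sketch below is the textbook ideal, not the measured efficiency of any production engine:

GAMMA = 1.4   # ratio of specific heats for air

def otto_efficiency(r):
    # Ideal air-standard efficiency; real engines achieve much less.
    return 1.0 - r ** (1.0 - GAMMA)

for r in (10, 13):
    print(r, round(otto_efficiency(r), 3))   # about 0.60 at 10:1, 0.64 at 13:1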

Today's surfeit of sensor and electronics technologies has led to an advancement in the piston engine called variable valve actuation. In traditional engines, the timing of the valves is locked in synchrony with the engine cycle: a camshaft, geared to the crankshaft, opens the intake and exhaust valves at fixed points in the piston cycle. With variable valve actuation, the mass flow of reactant gases can be fine-tuned to allow more efficient combustion. One group at Stanford University has been using computer modeling [1,2] to investigate this concept to increase efficiency and decrease pollutants, such as nitrogen oxides. Variable valve actuation makes possible the technique of "Homogeneous Charge Compression Ignition" (HCCI), in which a portion of the exhaust is fed back into the cylinder, increasing the fuel mixture temperature and allowing ignition without a sparkplug. This "auto-ignition" occurs during compression, but at a lower compression than in a diesel. HCCI could improve engine efficiency by fifteen to twenty percent.

References:
1. Emil Venere, "Radical engine redesign would reduce pollution, oil consumption" (Stanford University Press Release, May 9, 2007).
2. Gregory M. Shaver, J. Christian Gerdes, Matthew J. Roelle, Patrick A. Caton, and Christopher F. Edwards, "Dynamic Modeling of Residual-Affected Homogeneous Charge Compression Ignition Engines with Variable Valve Actuation," Journal of Dynamic Systems, Measurement, and Control, vol. 127, issue 3 (September, 2005), pp. 374-381.

May 16, 2007

De Moivre's Formula

Electrical engineers enjoy imaginary numbers, although they refer to the square root of negative one as j, and not the mathematician's i. They don't want to confuse it with the symbol for electrical current. I don't know why i is the symbol for electrical current, but it may have been used first in a paper by one of the early investigators of electricity, such as Alessandro Volta, Luigi Galvani, or Georg Ohm, and everyone followed his lead. My bet is on a paper by Ohm, for whom Ohm's law is named, since his publications were more mathematical in nature [1].

One of the most important formulas involving imaginary numbers (possibly second only to Euler's formula) is De Moivre's formula, named after the French mathematician Abraham de Moivre,

(cos(z) + i sin(z))^n = cos(nz) + i sin(nz)


where n is an integer and z is a real or complex number (a real number being a complex number whose imaginary part is zero). Like Euler's formula, De Moivre's formula links complex numbers and trigonometry. The expression "cos(x) + i sin(x)" occurs often in mathematics, so it is abbreviated "cis(x)." This formula is useful for finding the complex roots of unity; that is, the complex numbers z for which z^n = 1.
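Here is a quick numerical check of the formula, and of its use for roots of unity, using only Python's math module (my own sketch):

import math

def cis(x):
    # cos(x) + i*sin(x) for a real angle x.
    return complex(math.cos(x), math.sin(x))

n, z = 5, 0.7
print(abs(cis(z) ** n - cis(n * z)) < 1e-12)         # De Moivre's formula holds

# The n complex nth roots of unity are cis(2*pi*k/n), k = 0, 1, ..., n-1.
roots = [cis(2 * math.pi * k / n) for k in range(n)]
print(all(abs(r ** n - 1) < 1e-9 for r in roots))    # each root satisfies z^n = 1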

De Moivre led an interesting life. Apparently, he did not receive a college degree, and he was forced to leave France for England because of his Calvinist religion. He had limited financial resources, and he would make some money by playing chess at a London coffee house. His major achievement is his work on probability theory, for which he was elected a Fellow of the Royal Society. He was a friend of Isaac Newton and Edmund Halley. De Moivre used a simple linear extrapolation to predict his own death. At one point, he noted that he was sleeping a quarter hour longer each day. Extrapolating to a 24-hour sleep, he obtained November 27, 1754, which was indeed the date of his death.

References:
1. Georg Ohm, "Die galvanische Kette: mathematisch bearbeitet" (The Galvanic Circuit Investigated Mathematically), Riemann (Berlin, 1827). A copy of the entire book is here (4.7 MB PDF File).
2. Howard Wainer, "The Most Dangerous Equation," American Scientist, Vol. 95, No. 3 (May-June 2007) pp. 249-256.
3. Donald E. Knuth, "The Art of Computer Programming," (Second Edition), Addison-Wesley Publishing Company (Reading, Massachusetts, 1973), p 82.

May 15, 2007

Swiss Cheese Integrated Circuits

Many years ago, I accompanied my wife on an interview to a medical research laboratory in Pittsburgh. I spent my time there in their library, where I discovered an interesting periodical. It was a record of enforcement actions by the US Food and Drug Administration. One of these was the seizure and destruction of some Swiss cheese that had been augmented in appearance by drilling holes through it. This government action seemed to me to be an unnecessary waste of wholesome, albeit artistically-altered, food. Recently, IBM has developed a Swiss cheese process for production of integrated circuits.

A major problem with dense integrated circuits is the dielectric constant of the materials that surround key circuit elements, such as field effect transistor gates and the conductors that connect disparate parts of the circuitry. A high dielectric constant leads to power loss and propagation delays in fast computer chips, so the ideal materials are those with the lowest dielectric constant. The dielectric constant of vacuum (ε = 1) is as low as you can get. Fortunately, air is not that much higher, so structures such as "air-bridge" conductors, in which the conductor wiring is like a bridge over a silicon valley, have been used. The IBM research team, composed of members of its Almaden Research Center (San Jose, CA) and the Thomas J. Watson Research Center (Yorktown Heights, NY), took a similar approach by fabricating a sea of mini bridges on the surface of a chip. They coat a surface oxide on silicon wafers with a polymer that naturally forms 20-nanometer holes when it is heated. After plasma etching, the holes are replicated in the glass layer. When a subsequent glass layer is applied by vacuum deposition, the holes remain because the deposited material can't penetrate them; the holes are simply capped by the glass. What remains is a layer of smaller dielectric constant with a flat, uniform surface.
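The payoff can be estimated with a back-of-the-envelope scaling argument: interconnect capacitance, and hence RC delay, is roughly proportional to the dielectric constant of the insulator. The numbers below are my own illustrative assumptions, not IBM's figures:

K_DENSE_OXIDE = 3.9    # dense silicon dioxide
K_POROUS = 2.7         # assumed value for the hole-riddled ("Swiss cheese") glass

# Delay scales linearly with k, so the porous layer cuts the RC delay to about 70%
# of the dense-oxide value, consistent with a roughly 30% speed improvement.
print(round(K_POROUS / K_DENSE_OXIDE, 2))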

Integrated circuits built with this material run about 30% faster, or they use less power when operated at the typical speed. The IBM material is being developed into a commercial process at its East Fishkill, New York, facility, and at the State University of New York at Albany.

References:
1. Scott Hillis, "IBM uses self-assembling material in chip advance" (Yahoo News, May 3, 2007).
2. Peter Svensson, "A better computer chip comes with ... holes?" (Associated Press).

May 14, 2007

Theodore Maiman

Theodore Maiman, who experimentally demonstrated the first laser, died on May 5, 2007, at age 79.

Maiman was born in Los Angeles on July 11, 1927. Maiman had a simultaneous interest in physics and electrical engineering, so in this respect his life and my life are similar. His father was an electronics engineer. Maiman received a bachelor's degree in engineering physics from the University of Colorado in 1949 and subsequently attended Stanford University. At Stanford, Maiman received both a master's degree in electrical engineering (1951), and a Ph.D. in physics (1955). Working at Hughes Research Laboratories (Malibu, California), Maiman achieved laser operation in a ruby crystal prepared by his colleague, Ralph Hutcheson [1]. In this under-funded (as he stated), nine-month research effort, he bested all other laboratories working towards the same goal, including Bell Labs and RCA.

Ruby is an aluminum oxide crystal doped with a small quantity of chromium. The chromium atoms (Cr3+) replace some of the aluminum atoms. Ruby is a three-level laser medium in which the chromium atoms absorb short wavelength light from a flash lamp and their electrons are excited from a ground state to a higher energy level. These electrons lose energy by first dropping quickly to a slightly lower energy level, and then releasing photons after a final drop back to the ground state. The laser light is a deeper red (694 nm) than that of a helium-neon laser (632.8 nm).
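For reference, the two wavelengths convert to frequency and photon energy as follows (a minimal sketch of my own, using rounded values of the physical constants):

H = 6.626e-34     # Planck constant, J*s
C = 2.998e8       # speed of light, m/s
EV = 1.602e-19    # joules per electron-volt

for name, wavelength_nm in (("ruby", 694.0), ("helium-neon", 632.8)):
    wavelength = wavelength_nm * 1e-9
    frequency = C / wavelength
    print(name, f"{frequency:.3e} Hz", round(H * frequency / EV, 2), "eV")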

Maiman established Korad Corporation in 1962 to develop and manufacture lasers. Korad was sold to Union Carbide Corporation in 1968, and Maiman formed a consulting firm, Maiman Associates. Maiman was nominated for the Nobel Prize, but he never received one. He was a member of both the National Academy of Sciences and the National Academy of Engineering, and he was inducted into the National Inventors Hall of Fame.

Although Maiman demonstrated the first laser, he wasn't the inventor of the laser. The invention of the laser is usually credited to Charles Townes and Arthur Schawlow, whose work grew out of Townes' invention of the maser. The maser is a "microwave laser," but as far as the physics is concerned, one electromagnetic wave is as good as another. Indeed, Maiman stated that he was guided in his research by an article by Arthur L. Schawlow and Charles H. Townes in a 1958 issue of Physical Review [2]. Schawlow shared the 1981 Nobel Prize in Physics with Nicolaas Bloembergen for "their contribution to the development of laser spectroscopy." No Nobel Prize was awarded for the laser, per se. Perhaps one reason for this is the controversy surrounding who was the true inventor. This is a story in itself, involving one of Townes' students, Gordon Gould. Gould had a notarized laboratory notebook containing the first description of a workable laser, and he fought a life-defining battle to collect patent royalties on the laser.

References:
1. Theodore H. Maiman, "Ruby Laser Systems," U.S. Patent No. 3,353,115 (Nov 14, 1967).
2. This Month in Physics History - December 1958: Invention of the Laser.
3. Arthur L. Schawlow and Charles H. Townes, "Infrared and Optical Masers," Physical Review, vol. 112, no. 6 (15 December 1958), pp. 1940-1949.
4. Theodore H. Maiman (IEEE).

May 11, 2007

Cosmological Constant

In yesterday's post, I reviewed the "fifth force," a supposed addition to the four fundamental forces recognized by physicists. The four forces are gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. This fifth force is still only conjectured, and there are many experiments that seem to deny its existence. However, the fifth force can explain some anomalies we see in the structure of the universe that are not explained by current theory. Werner Heisenberg once said, "Not only is the Universe stranger than we think, it is stranger than we can think." A case in point is the cosmological constant.

Albert Einstein proposed the cosmological constant as a means to obtain a static universe from his theory of general relativity. Without it, his equations would not describe the unchanging universe that was assumed at the time, so he added the cosmological constant to his gravitation equations as a "fudge factor." After the universe was shown to be expanding, Einstein removed the constant from his equations and called the cosmological constant his biggest mistake. Now it appears that the universe is not just expanding, the expansion is accelerating, so there has been renewed interest in the cosmological constant as a means to explain this acceleration.

If physics is indeed unified, the same laws must hold from the smallest elementary particle to the universe in general, but current quantum field theory predicts a huge cosmological constant, about 10^120 times larger than observations indicate. This "cosmological constant problem" is presently one of the most important unsolved problems in contemporary physics. Steven Weinberg, who shared the 1979 Nobel Prize in Physics with Abdus Salam and Sheldon Glashow, suggested in 1987 that there exist multiple universes with a range of cosmological constants, but only those with a specific structure would favor the emergence of life. In effect, we see our particular universe only because we're here to see it. It's not surprising that many physicists object to this reasoning.

One physicist, Philip Mannheim of the University of Connecticut, has embraced the idea of a huge cosmological constant [1]. In a manner not unlike the reasoning behind the fifth force, Mannheim proposes that there are two types of gravity - one that operates in near space, and another, an anti-gravity, operating in the universe at large. The first type of gravitational force holds our solar system together, but anti-gravity pushes the universe apart. Mannheim hasn't violated the principle of Occam's razor [2]; instead, he's substituted his anti-gravity force for the concept of "dark energy," an unseen constituent of the universe that somehow causes its expansion to accelerate. Unlike the dark energy concept, Mannheim's anti-gravity has some heavy mathematics behind it.

References:
1. Zeeya Merali, "Two constants are better than one," New Scientist vol. 194, no. 2601 (28 April 2007), p. 8.
2. Entia non sunt multiplicanda praeter necessitatem (Entities should not be multiplied beyond necessity).

May 10, 2007

A Pound of Feathers

There was a riddle circulating when I was a child, "What weighs more, a pound of lead or a pound of feathers?" This was often followed by another riddle, "How many animals did Moses take with him in the ark?" The first question has a deeper significance than you may think. It touches on experiments to find the "Fifth Force."

Physicists recognize four fundamental forces. These are gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. All of mechanics seems to be explained by invoking just these forces. Of course, physicists are curious people, and the idea that there may be other forces is sure to incite their interest. The first to examine the possibility of a fifth force was Baron Loránd von Eötvös of Hungary, who used torsion balances to measure the attraction of massive spheres to see whether gravitational mass is the same as inertial mass. Inertial mass is the mass of Newton's force equation, F = ma, whereas the gravitational mass is the mass of the formula

F = G m1 m2 / r^2

Eötvös had the idea that the composition of the test masses might cause a difference in the measurements, so he used not only lead spheres, but spheres of other metals, and spheres of wood. His observations did not contradict the equivalence principle - the idea that inertial mass is the same as gravitational mass.
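A small numerical illustration of what equivalence means (my own sketch, with standard values for G and the Earth): if the gravitational mass that sets the force equals the inertial mass that resists acceleration, the test mass cancels and everything falls at the same rate.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # kg
R_EARTH = 6.371e6      # m

def free_fall_acceleration(m_grav, m_inert):
    force = G * M_EARTH * m_grav / R_EARTH**2   # gravitational mass sets the force
    return force / m_inert                      # inertial mass resists acceleration

for mass in (0.001, 1.0, 1000.0):               # a pellet, a brick, a boulder
    print(mass, round(free_fall_acceleration(mass, mass), 3))   # all about 9.82 m/s^2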

The equivalence principle was unquestioned for almost a hundred years after Eötvös, until 1986, when a group of physicists reexamined Eötvös' data [1]. They found a correlation of an intermediate-range force with a material property known as hypercharge; in this case, intermediate range meant a force that operates over a range of tens, or hundreds, of meters, and not over cosmological distances, where gravity reigns. This observation set off a new flurry of experiments that confirmed the equivalence principle to a new precision, so there seems to be no need to invoke a fifth force. As I mentioned in a previous post, such measurements are still uncertain at the ten ppm level, while other fundamental constants are known to eleven significant figures, so there's still the possibility of a fifth force, and experiments continue.

One experiment, in which the gravitational constant was measured in a deep mine shaft, found a gravitational force that was two percent smaller than what was expected [2]. This result can be explained by a repulsive fifth force operating in an intermediate range of distances. I'm skeptical about such a large disagreement with established theory. One US group, the Eöt-Wash Group, a name derived from Eötvös and their location at the University of Washington, is using sensitive torsion balances to continue testing the equivalence principle.

References:
1. Ephraim Fischbach, Daniel Sudarsky, Aaron Szafer, Carrick Talmadge, and S. H. Aronson, "Reanalysis of the Eötvös experiment," Phys. Rev. Lett., vol. 56, pp. 3-6 (1986).
2. Fifth force (Wikipedia).
3. Eöt-Wash Group.

May 09, 2007

Coffee Rings and Nano Things

Coffee drinkers are familiar with coffee rings, especially the kind that mar your desk calendar and other important papers. The key observation is that they are rings. Why a ring, and not a uniformly stained circle? The first thought is that the ring is formed by the edge of the coffee mug, but that's just one familiar type of coffee ring. Individual coffee droplets and spills also appear as rings after the liquid evaporates. The actual cause of coffee rings is capillary flow, and this phenomenon is also present in printing, washing, and coating processes.

About ten years ago, a group of physicists from the University of Chicago (and its associated James Franck Institute) decided to study coffee rings [1]. The mechanism is as follows: the contact line between the drop and the substrate is pinned, so liquid evaporating at the exterior edge must be replenished by liquid from the interior. There is a constant flow of liquid from the interior to the exterior, and this flow carries nearly all dispersed solids towards the edge. Of course, physicists never stop at just a qualitative explanation, so they determined that the mass of the ring grows as a particular power law in time. This power law is independent of the substrate type, carrier fluid, or the dispersed solids.

Can coffee rings be useful? A group of chemists at Rice University has used the effect to produce ring patterns of gold nanorods. This is not merely an extension of the Chicago work, since two liquid phases were at work to produce extremely small rings. The nanorods were coated with a hydrophobic polymer and dispersed in chloroform. As the chloroform evaporated, water vapor from the air condensed as small droplets on the surface of the chloroform, which was at a low temperature because of evaporative cooling. The nanorods assembled as rings around the small water droplets. When all the liquid was gone, the rings remained. A micrograph of such rings appears on the inside cover of a recent issue of Angewandte Chemie [2]. Unfortunately, the image is not available online. The lead investigator, Eugene Zubarev, says these arrays of gold rings should have interesting optical and electromagnetic properties. His work is supported by the National Science Foundation.

References:
1. Robert D. Deegan, Olgica Bakajin, Todd F. Dupont, Greg Huber, Sidney R. Nagel and Thomas A. Witten, "Capillary flow as the cause of ring stains from dried liquid drops," Nature vol. 389 (23 October 1997), pp. 827-829.
2. Bishnu P. Khanal, Eugene R. Zubarev, "Innentitelbild: Rings of Nanorods," Angewandte Chemie, vol. 119, issue 13 (March 19, 2007), p. 2172.

May 08, 2007

Nano Catalysts

In heterogeneous catalysis, the surface is everything, so nanotechnology is an obvious avenue of research. Since the ratio of surface area to volume for a spherical particle scales as the inverse of its diameter, small particles offer an immediate advantage, but there are other benefits of small size. An international team of chemists and materials scientists from China and the Georgia Institute of Technology has shown that the particular shape of platinum nano-crystals increases the oxidation rate of ethanol [1].
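A quick check of that scaling (a minimal sketch of my own): for a sphere, surface area divided by volume is exactly 6/d, so halving the particle diameter doubles the catalytic surface per unit volume of platinum.

import math

def surface_to_volume(d):
    r = d / 2.0
    return (4.0 * math.pi * r**2) / ((4.0 / 3.0) * math.pi * r**3)   # equals 6/d

for d_nm in (200.0, 100.0, 50.0):
    print(d_nm, "nm:", round(surface_to_volume(d_nm), 3), "per nm")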

When I studied catalysis many years ago, it was still a "black art." [2] One of the guiding principles was that reactivity is enhanced at atomic steps on a surface, presumably because of "dangling bonds" (as chemists would say), or additional quantum energy levels (as the physicists would say). Shi-Gang Sun, along with his research team at Xiamen University in China, and Zhong Lin Wang of Georgia Tech, prepared platinum nano-crystals that formed as perfect 24-sided geometrical solids called tetrahexahedra, or tetrakis hexahedra (from the Greek tetra (four) and hex (six); i.e., 4 x 6 = 24). Crystals like these are nearly all steps, so if the classical catalysis rule of thumb that I learned still holds, their catalytic activity will be greater. Since platinum has the face-centered cubic structure, the facets of these tetrahexahedra are high index faces, such as {730}, {210}, and {520}.

Sun and Wang prepared their crystals on a glassy carbon support using an electroplating process. They were able to produce platinum nano-crystals with sizes ranging from 50 to 200 nanometers, which are generally large by nano standards. The formation of these crystals was aided by a pulsed deposition at about ten hertz. The pulsed deposition likely allows time for surface diffusion of the platinum atoms to a stable configuration before additional atoms are deposited on the crystal surface. Electron microscopy of the crystals revealed steps that were one atom high and two to three atoms deep. Oxidation of ethanol by the nano-crystals was at least twice as fast as that by platinum nano-spheres when normalized to surface area. The rate was less by weight, since the nano-crystals in this study are much larger than the nano-spheres. Work is underway to reduce the nano-crystal size. More reactive catalysts might arise from nano-crystals made from metal oxides [3].

References:
1. Na Tian, Zhi-You Zhou, Shi-Gang Sun, Yong Ding, and Zhong Lin Wang, "Synthesis of Tetrahexahedral Platinum Nanocrystals with High-Index Facets and High Electro-Oxidation Activity," Science vol. 316, no. 5825 (4 May 2007), pp. 732-735.
2. "A collection of arcane, unpublished, and (by implication) mostly ad-hoc techniques developed for a particular application or systems area (compare black magic). VLSI design and compiler code optimization were (in their beginnings) considered classic examples of black art; as theory developed they became deep magic, and once standard textbooks had been written, became merely heavy wizardry." From the Free Online Dictionary of Computing.
3. Mason Inman, "Nano-crystal 'gems' are powerful chemical catalysts" (New Scientist Online, 03 May 2007).

May 07, 2007

Radio Day

The history of radio began with Maxwell's 1861 publication in the Philosophical Magazine in which he proposed that light is an electromagnetic wave and presented what are now called Maxwell's equations [1]. Quite a while later, in 1888, Heinrich Hertz demonstrated radio waves by producing and detecting radio waves in what we now call the UHF band. The unit of frequency, formerly just cycles-per-second, is now called Hertz in his honor. Hertz, and most of the world, did not know that he was three years late. On May 7, 1895, in Saint Petersburg, Alexander Stepanovich Popov demonstrated radio at a meeting of the Russian Physical and Chemical Society. In Russia, and some countries of the former Soviet Union, May seventh is celebrated as "Radio Day," to honor Popov as the inventor of radio.

Popov's accomplishment is largely unknown. The invention of radio is usually attributed to Guglielmo Marconi. Indeed, Marconi shared the 1909 Nobel Prize in Physics with Karl Ferdinand Braun for their contributions "...to the development of wireless telegraphy." The Nobel Prize is never awarded posthumously, and Popov had died by 1909. This may explain why Popov is just a footnote in history books. It is also possible that a non-disclosure agreement Popov signed as a condition of his employment at the Marine Engineering School of the Russian Navy may have inhibited wider publication of his work.

Popov began his experiments with radio early in the 1890s. He built a coherer, a primitive detector of radio waves, in 1894, and he used it to detect the radio static from lightning storms. His May 7, 1895, demonstration of this lightning detector involved transmission and detection of radio signals over a distance of up to 64 meters. By March 1896, Popov was transmitting radio waves between university buildings. After learning of Marconi's experiments, he achieved ship-to-shore communications at a distance of more than six miles (1898), and then thirty miles (1899).

Also on this date in history (May 7, 1952), Geoffrey W. A. Dummer, an English engineer, presented a paper at a conference in Washington, D.C. This paper introduced the concept of the integrated circuit. This idea was reduced to practice several years later by Jack Kilby of Texas Instruments. Kilby was awarded the Nobel Prize in Physics in 2000 for his achievement.

Radio Days (1987) is a film written and directed by Woody Allen.

References:
1. Maxwell's original electromagnetics paper (1.2 MB PDF File).
2. Popov's Contribution to the Development of Wireless Communication (IEEE).
3. An extended account of Popov's work.
4. The Invention of Radio (Wikipedia).
5. Ten kopek stamp (1989) commemorating Popov's invention of radio.

May 04, 2007

Bubbles, Continued

In yesterday's post, I summarized some of the many ways in which bubbles have attracted the interest of physicists. One nice aspect of bubble experiments is that they can be small in scale but still yield interesting results. One such experiment appeared a few months ago in Physical Review Letters [1]. Nathan C. Keim, a graduate student of physics at the University of Chicago, along with his colleagues, Peder Møller, Wendy W. Zhang, and Sidney R. Nagel, used high speed video capture to investigate the shape of air bubbles as they detached from a submerged nozzle.

Scientists believed bubbles, or their complementary counterparts, drops, would all detach in the same way; that is, the neck would pinch down to a circular cross-section until it broke. Initial conditions, such as the shape of the flow orifice, would not matter. Nagel's University of Chicago group has studied this phenomenon before [2], looking at water drops breaking apart in viscous oil. They found that the detachment of drops varied with the size of the nozzle; that is, the process was not scale-invariant.

The most recent experiments involved the video capture of air bubbles in water at 130,000 frames per second, which was enough to capture every nuance of the process. Their surprising result is that some bubbles will tear away from the nozzle, rather than just pinching off at a point. The cross-sectional shape of the neck depends on several initial conditions, such as the shape of the orifice. Even small asymmetries in the nozzle cross-section are preserved in the neck. Most surprisingly, a nozzle tilt of just a tenth of a degree will change the necking cross-section. The change in buoyancy forces acting on the surface of the bubble from a tenth degree nozzle tilt is extremely small.

Nagel says that this tabletop experiment has implications at the cosmic scale (sounds like one of my research proposals!). Stars often explode asymmetrically, and small differences in the initial conditions of the explosion may be the cause; slightly denser material on one side of the star may drive the final asymmetry.

References:
1. Nathan C. Keim, Peder Møller, Wendy W. Zhang, and Sidney R. Nagel, "Breakup of Air Bubbles in Water: Memory and Breakdown of Cylindrical Symmetry," Phys. Rev. Lett. vol. 97 (3 October 2006), p. 144503. An online version appears here.
2. P. Doshi, I. Cohen, W. W. Zhang, M. Siegel, P. Howell, O. A. Basaran, and S. R. Nagel, "Persistence of memory in drop break-up: The breakdown of universality," Science vol. 302 (14 November 2003), p. 1185ff.
3. Tabletop experiment yields bubbly surprise (University of Chicago Press Release).

May 03, 2007

Double, double, toile and trouble...

"... Fire burne, and Cauldron bubble." So chanted the three witches in Shakespeare's Macbeth as they prepared their potion around a bubbling pot [1]. Over the years, bubbles have been very useful to physicists. Early in the last century, Sir William Lawrence Bragg, who shared the 1915 Nobel Prize in Physics with his father, Sir William Henry Bragg, was inspired by the surface bubbles he saw while adding oil to his lawn mower. Bragg, who was a founder of the field of x-ray crystallography, noticed that the bubbles formed a regular pattern that resembled a particular plane of a face-centered cubic crystal. Along with a colleague at the Cavendish Laboratory of Cambridge University, J.F. Nye, Bragg was able to generate rafts of surface bubbles in a glycerine-water-oleic acid-triethanolamine solution [2]. These bubble rafts contained about a hundred thousand sub-millimeter bubbles, and their mechanics mirrored many of the conjectured activities of actual atoms in a crystal lattice, such as grain boundaries and dislocations.

A little later, another physicist, Donald A. Glaser, let his mind wander while staring at the bubbles in his beer. In a moment of inspiration, he realized that charged particles passing through a superheated liquid would produce a stream of bubbles that define their paths. A few quick experiments led to an apparatus, the bubble chamber, which advanced the field of elementary particle physics [3]. Glaser was only 25 when he had that famous beer, and the invention of the bubble chamber earned him the 1960 Nobel Prize in Physics. It's good he wasn't a wine drinker, though it's my experience that most physicists prefer beer. Science advances, and bubble chambers have since been replaced by wire chambers that allow easy computer analysis of particle tracks.

Bubbles also appear as the central feature of sonoluminescence, the emission of light from bubbles excited by sound waves [4]. Sonoluminescence was discovered in the 1930s during investigations of sonar, but interest was piqued by the experiments of two physicists, Felipe Gaitan and Lawrence Crum, who were able to study the sonoluminescence of isolated bubbles in 1989 [5]. The sonically-induced collapse of bubbles is thought to generate tremendous temperatures (of the order of 20,000 °C), and there are unconfirmed reports that nuclear fusion has occurred in bubble implosions that reach temperatures of millions of degrees.

Since beer is important in its own right, there has been a recent study of how beer froth evolves over time [6]. Robert MacPherson of the Institute for Advanced Study in Princeton, New Jersey, and David Srolovitz of Yeshiva University in New York are developing mathematical models of how a beer head evolves over time. This research is not as frivolous as it seems. The coarsening and merger of beer bubbles is the same behavior seen in the growth of grains in metals. One thing they discovered is that the rate of beer head decay can be controlled by changing the surface tension [7].
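For readers who want the mathematics, the two-dimensional law that MacPherson and Srolovitz generalized is the von Neumann-Mullins relation, which (quoting from memory, so check the paper [7] for the exact statement) reads

dA/dt = (π/3) M γ (n - 6),

where A is the area of a two-dimensional bubble or grain, n is its number of sides, M is the boundary mobility, and γ is the surface tension. Cells with fewer than six sides shrink, cells with more than six sides grow, and hexagons are stationary. The three-dimensional version replaces the side count with purely geometric quantities of each domain, its mean width and the total length of its triple-line edges, so an exact growth law of the same flavor applies to metal grains and beer bubbles alike.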

References:
1. Text version of Macbeth.
2. L. Bragg and J. F. Nye, "A Dynamical Model of a Crystal Structure," Proc. R. Soc. Lond. vol. A-190, no. 1023 (1947), pp. 474-481.
3. Donald Glaser and the Bubble Chamber
4. Putterman, S. J. "Sonoluminescence: Sound into Light," Scientific American, Feb. 1995, p.46.
5. D. F. Gaitan, L. A. Crum, R. A. Roy, and C. C. Church, J. Acoust. Soc. Am. 91, 3166 (1992)
6. Head researchers turn their attention to beer (New Scientist Online).
7. Robert D. MacPherson and David J. Srolovitz, "The von Neumann relation generalized to coarsening of three-dimensional microstructures," Nature vol. 446 (26 April 2007), pp. 1053-1055.

May 02, 2007

Inch-Long Nano-Fibers

Carbon nanotubes have extremely small diameters. Single-walled carbon nanotubes are about one nanometer across, and multi-walled carbon nanotubes are nested tubes of graphene layers with an interlayer spacing of about 0.33 nanometer. A tensile strength of up to 63 GPa has been reported in experiments on individual multi-walled carbon nanotubes, along with a Young's modulus of up to 950 GPa [1]. This strength, combined with the very low density of about 1.35 g/cc, gives the material a specific strength of about 48 MN·m/kg (mega-newton-meters per kilogram); for comparison, the specific strength of high-carbon steel is 0.154 MN·m/kg. There is one mechanical catch: the properties are superb in tension, but in compression these high-aspect-ratio structures are limited by column buckling, as are other slender structures. Still, carbon nanotubes have been added to polymers to create composites with improved mechanical and electrical properties.
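As a quick check on this arithmetic, here's a minimal Python sketch (my own illustration, not taken from any of the cited papers). The nanotube figures are the ones quoted above; the steel figures (roughly 1.2 GPa strength and 7.8 g/cc density) are assumed values that reproduce the quoted 0.154 MN·m/kg.

def specific_strength(strength_gpa, density_g_per_cc):
    # Specific strength in MN·m/kg: convert GPa to Pa and g/cc to kg/m^3,
    # divide, then scale newton-meters per kilogram down to mega.
    return (strength_gpa * 1e9) / (density_g_per_cc * 1000.0) / 1e6

print(specific_strength(63.0, 1.35))  # multi-walled nanotube: about 47
print(specific_strength(1.2, 7.8))    # assumed high-carbon steel: about 0.15

The nanotube value comes out near 47 MN·m/kg, consistent with the "about 48" above once rounding of the density is taken into account.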

Vesselin Shanov and Mark Schulz, scientists at the University of Cincinnati Smart Structures Bio-Nanotechnology Laboratory, have developed a process for producing multi-walled carbon nanotubes nearly an inch in length [2]. They've created aligned assemblages of multi-walled carbon nanotubes 20 nanometers in diameter and up to 18 millimeters long. These nanotubes are grown using a typical chemical vapor deposition (CVD) process that decomposes acetylene or ethylene gas at about 1,000 °C, but their trick is an undisclosed chemical catalyst. Shanov says that the type of catalyst, and also its support, prevents the formation of other carbon compounds that would "poison" the nanotube growth. They are in the process of applying for a patent on the catalyst.

References:
1. Min-Feng Yu, Oleg Lourie, Mark J. Dyer, Katerina Moloni, Thomas F. Kelly, and Rodney S. Ruoff, "Strength and Breaking Mechanism of Multiwalled Carbon Nanotubes Under Tensile Load," Science vol. 287. no. 5453 (28 January 2000), pp. 637-640.
2. Tom Simonite, "Lengthy nanotube crop may mean super-strong fibres" New Scientist Online (27 April 2007).
3. Firstnano, manufacturer of nanotube synthesis equipment.
4. The Nanotube Site.

May 01, 2007

yadda-yadda-yotta

The common SI prefixes are well known to all scientists. At the start, these prefixes were derived from Latin (e.g., deci, centi, and milli) and Greek (e.g., kilo, mega, giga). When words in these languages were in short supply, some were added from Danish (femto and atto). As our instruments image both the very large and the very small, the range of prefixes in common use keeps growing. How many of you recognize yotta, which signifies 10^24? That prefix derives from a word for eight, since 10^24 is the eighth power of 1000. Here's a table of the recognized SI prefixes [1], with a short formatting sketch after it:

• 10^24 yotta (Y)
• 10^21 zetta (Z)
• 10^18 exa (E)
• 10^15 peta (P)
• 10^12 tera (T)
• 10^9 giga (G)
• 10^6 mega (M)
• 10^3 kilo (k)
• 10^2 hecto (h)
• 10^1 deca, deka (da)
• 10^0 (none)
• 10^-1 deci (d)
• 10^-2 centi (c)
• 10^-3 milli (m)
• 10^-6 micro (μ)
• 10^-9 nano (n)
• 10^-12 pico (p)
• 10^-15 femto (f)
• 10^-18 atto (a)
• 10^-21 zepto (z)
• 10^-24 yocto (y)
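Here's the formatting sketch promised above, a minimal Python example (my own illustration, not anything standardized) that picks the nearest power-of-1000 prefix for a value. It clamps to the range of the table and skips hecto, deca, deci, and centi, which rarely appear in scientific notation anyway.

import math

PREFIXES = {24: 'Y', 21: 'Z', 18: 'E', 15: 'P', 12: 'T', 9: 'G', 6: 'M',
            3: 'k', 0: '', -3: 'm', -6: 'μ', -9: 'n', -12: 'p',
            -15: 'f', -18: 'a', -21: 'z', -24: 'y'}

def si_format(value):
    # Choose the largest power-of-1000 prefix that keeps the mantissa >= 1.
    if value == 0:
        return "0"
    exponent = 3 * math.floor(math.log10(abs(value)) / 3)
    exponent = max(-24, min(24, exponent))  # clamp to the table's range
    return "%g %s" % (value / 10**exponent, PREFIXES[exponent])

print(si_format(1.3e-9))  # prints "1.3 n"
print(si_format(2.5e13))  # prints "25 T"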

Computer scientists work in powers of two. They've usurped some of these prefixes to give the following values [2]; a quick sketch of the resulting discrepancy follows the list:

• 2^10 kilo (k) 1,024
• 2^20 mega (M) 1,048,576
• 2^30 giga (G) 1,073,741,824
• 2^40 tera (T) 1,099,511,627,776
• 2^50 peta (P) 1,125,899,906,842,624
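As a quick illustration of how far this binary usage drifts from the decimal table above, here's another short Python sketch (again, my own illustration):

for power, name in [(10, 'kilo'), (20, 'mega'), (30, 'giga'),
                    (40, 'tera'), (50, 'peta')]:
    binary = 2 ** power
    decimal = 10 ** (3 * power // 10)
    drift = 100.0 * (binary - decimal) / decimal
    print("%s: 2^%d = %s, %.1f%% larger than 10^%d"
          % (name, power, format(binary, ','), drift, 3 * power // 10))

The discrepancy grows from 2.4 percent at kilo to almost 13 percent at peta, which is one reason the IEC binary prefixes (kibi, mebi, gibi, and so on) described in reference [2] were introduced.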

There are proposals to expand these prefixes in both directions. Jim Blowers, a computer scientist, saw a pattern in the prefixes, so he expanded them, as follows [3]:

• 10^27 xona (X)
• 10^30 weka (W)
• 10^33 vunda (V)
• 10^36 uda (U)
• 10^39 treda (TD)
• 10^42 sorta (S)
• 10^45 rinta (R)
• 10^48 quexa (Q)
• 10^51 pepta (PP)
• 10^54 ocha (O)
• 10^57 nena (N)
• 10^60 minga (MI)
• 10^63 luma (L)

• 10^-27 xonto (x)
• 10^-30 wekto (w)
• 10^-33 vunkto (v)
• 10^-36 unto (u)
• 10^-39 trekto (td)
• 10^-42 sotro (s)
• 10^-45 rimto (r)
• 10^-48 quekto (q)
• 10^-51 pekro (pk)
• 10^-54 otro (o)
• 10^-57 nekto (nk)
• 10^-60 mikto (mi)
• 10^-63 lunto (l)

Time will tell whether the International Bureau of Weights and Measures (Bureau international des poids et mesures) agrees with his prefixes.

Yadda-Yadda-Yadda (or, Yada-yada-yada) is an expression for vacuous chatter, or a filler for unstated material, that dates at least from the 1960s [4, 5]. Some believe that the expression derives from Yakety Yak, a song by the Coasters in that same period. This expression figured prominently in the one-hundred and fifty-third episode of Seinfeld, which aired on April 24, 1997 [6].

References:
1. NIST SI Prefixes.
2. NIST Binary Prefixes.
3. Jim Blowers' Extended SI Prefixes.
4. Word of the Day, Origin of yadda-yadda-yadda (Random House).
5. "Yadda yadda yadda" (Microsoft Encarta).
6. The Yada Yada Seinfeld Episode.
7. SI prefixes.
8. "Yocto," New Scientist (13 January 2007), page 52.