
January 31, 2007

Special K

The dielectric constant k, also called the relative permittivity, specifies the ability of a material to concentrate an electric field. It's analogous to the permeability of a magnetic material, which concentrates magnetic lines of flux. Dielectric strength is the ability of a material to withstand voltage without breaking down. A high k and a high dielectric strength are desirable properties in a capacitor.

The metal-oxide-semiconductor field effect transistor (MOSFET) is the workhorse of most integrated circuits. MOSFETs act as amplifiers and switches in computer chips. The Pentium 4 has more than forty million transistors. The increasing circuit density of chips is accomplished by shrinking the transistors. At present, the densest chips are defined by a 65 nm lithographic process in which the minimum feature size, the width of the transistor "gate," is 65 nm. Dielectric materials are an important part of this process because they insulate the conductive gate from the rest of the transistor. The "field" part of a MOSFET is the electric field in this insulator. Silica (SiO2), with a dielectric constant of 3.9, is presently used as the gate insulator. One problem with the very thin silica used in MOSFETs is that it isn't a perfect insulator, and some electric current flows between the gate and other parts of the transistor. These leakage currents have begun to dominate the power budget in dense computer chips. Not surprisingly, high-k materials have caught the attention of the semiconductor manufacturers.
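
To see why a higher k helps, note that gate capacitance per unit area scales as k/t, where t is the dielectric thickness, so a high-k film can be made physically thicker than silica, suppressing tunneling leakage, while presenting the same capacitance. Here's a minimal sketch of that arithmetic in Python; the value of k for hafnia (about 25) is a typical literature figure, not one from these announcements:

```python
# Equivalent oxide thickness (EOT): the silica thickness that would give
# the same gate capacitance per unit area as a high-k film.
K_SIO2 = 3.9    # dielectric constant of silica (from the post)
K_HFO2 = 25.0   # assumed dielectric constant of hafnia (typical literature value)

def eot_nm(physical_thickness_nm, k):
    """Silica-equivalent thickness of a gate dielectric of constant k."""
    return physical_thickness_nm * K_SIO2 / k

# A 3 nm hafnia film is capacitively equivalent to ~0.5 nm of silica,
# yet it is physically thick enough to suppress tunneling leakage.
print(f"EOT of 3 nm HfO2: {eot_nm(3.0, K_HFO2):.2f} nm")
```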

In simultaneous announcements [1-5], IBM and Intel said that they will use high-k dielectrics in their next generation of chips, built with a 45 nm process. Intel announced that it will replace silica with a hafnia (hafnium oxide, HfO2) material, which it says will reduce leakage by a factor of ten. The high-k process will require a change in the gate material from polysilicon to a metal, but Intel did not reveal the metal composition. IBM did not reveal the materials used for either its dielectric or gate, but it would be surprising if they were not similar to Intel's materials. Advanced Micro Devices, a competitor to Intel, is a research partner with IBM, along with Sony and Toshiba, and these companies are expected to implement this change as well. NEC announced earlier that it will use a high-k process.

Atomic Layer Deposition (ALD) has been proposed as one method for deposition of hafnia materials. Common hafnia precursors are hafnium tetrachloride, tetrakis(ethylmethylamino) hafnium, tetrakis(dimethylamino) hafnium, and tetrakis(diethylamino) hafnium. As all chemists know, zirconium is chemically similar to hafnium and notoriously difficult to separate from it, so making these materials in a pure form is difficult.

References:
1. Rhonda Ascierto, "It's silicon, Jim, but not as we know it." (Computer Business Review Online)
2. "Breakthrough spurs microchip arms race." (Reuters)
3. Tony Smith, "Intel 45nm CPUs to use metal gates, high-k dielectric." (The Register)
4. Mark LaPedus, "Intel tips high-k, metal gates for 45-nm." (EE Times)
5. Mark LaPedus, "IBM and partners tip high-k, metal gates." (EE Times)

January 30, 2007

A Matter of Style

One advantage of being at a large university is the opportunity to attend many events not related to your specialty. When I was a graduate student at Syracuse University, I attended a seminar in the philosophy department at which a prominent Harvard philosopher presented his latest paper. What I learned was that philosophers do this by actually reading their paper, verbatim, to the audience. They entertain questions after this formal reading. This is probably the way scientific papers were presented at meetings of the Royal Society in Newton's age, but now scientists and engineers give informal speeches that amplify a few bullet points on a series of slides. It's a difference in style.

One thing that interested this Harvard philosopher was the "quality of redness;" that is, what makes the color red, red. I was likely the only member of the audience who "knew" that red was a specific range of wavelengths of light. More importantly, I could measure something and tell you with great precision whether or not it was red. The philosopher went on to talk about the "quality of chairness," something that always amused my children whenever I would use the expression as a talking point. In the philosopher's mind, there is a deeper meaning to the words "red" and "chair" than the formal written specification of measurable quantities an engineer would have. The eighteenth-century philosopher George Berkeley had the idea that something is red only because someone perceives its color as red. A spectrometer doesn't see red; it measures something else. This idea is summarized in the question I heard as a child, "If a tree falls in a forest, and no one is around, does it make a sound?"

Peter Rodgers, the chief editor of Nature Nanotechnology, writes about this difference in style in the latest issue of Nature Physics [1]. He recounts the following joke that's found in various forms on the internet [2,3]. A mathematician, a physicist, and an engineer are riding a train into Scotland, which is known for sheep (and the Loch Ness Monster). The engineer looks out the window, sees a black sheep, and says, "All Scottish sheep are black." The physicist looks out the window and corrects the engineer, "No, some Scottish sheep are black." The mathematician looks out the window and corrects the physicist, "There is at least one field, containing one sheep, of which at least one side is black." Rodgers amends this joke by saying that if a philosopher accompanied them, he would correct them all, stating, "There is at least one field, containing one sheep, of which at least one side is black at the instant you looked at it, although there is no way of knowing that it is still black - whatever "black" means. And what is a sheep, anyway?"

Here's another [2]. An engineer thinks that equations are an approximation to reality. A physicist thinks reality is an approximation to equations. A mathematician doesn't care.

Another indictment of physics style is the following [4]. After taking a course in mathematical physics, a student wanted to know the real difference between mathematicians and physicists. His professor summarized it nicely. "A physicist is someone who averages the first 3 terms of a divergent series."

The computer scientists among you will like this definition. A physicist is a programmer who writes a database application in Fortran.

When I made the transition from physicist to materials scientist, I coined my own definition. A physicist is the type of scientist who tries to find "steel" in the Periodic Table.

References:
1. Peter Rodgers, "A Touch Too Much," Nature Physics, vol. 3, no. 1 (Jan., 2007), p. 9
2. Susan Stepney, "Engineer, Scientist, Mathematician."
3. Mathematical Jokes (mathmos.net)
4. Benjamin Jones, "Difference between a mathematician and a physicist."

January 29, 2007

The Mathematics of Class Division

Barron's, in its January 15, 2007, issue, reported that the five major New York brokerage firms (Lehman Brothers, Merrill Lynch, Goldman Sachs, Morgan Stanley, and Bear Stearns) made bonus payments to their employees totaling 36 billion dollars. As a calibration point, this is about the same as the gross domestic product of Vietnam.

The Gini coefficient is a statistical measure of the inequality of a distribution. It is named after the Italian statistician, Corrado Gini, who published it in 1912. Formally, it's a ratio of areas: the area between the actual cumulative distribution (Lorenz) curve and the line of perfectly uniform distribution, divided by the entire area under that line. It's easier to describe what it means. When everyone shares equally in some measure (income, wealth), the Gini coefficient is zero. When one person has everything, and the rest have nothing, the Gini coefficient is one.
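
An equivalent formulation that's handy for computation is the mean absolute difference between all pairs of values, normalized by twice the mean. Here's a minimal sketch in Python; the example incomes are mine, purely for illustration:

```python
# Gini coefficient as normalized mean absolute difference.
# For a finite sample, "one person has everything" gives (n-1)/n,
# which approaches 1.0 as the group grows.
def gini(values):
    n = len(values)
    mean = sum(values) / n
    diff_sum = sum(abs(x - y) for x in values for y in values)
    return diff_sum / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))  # 0.0  - everyone shares equally
print(gini([0, 0, 0, 4]))  # 0.75 - one of four people has everything
```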

Charles Wheelan, a Ph.D. economist, recently posted an article [2] that summarizes the Gini coefficient for various countries:

• Japan 0.25
• Sweden 0.25
• India 0.33
• The United States (1970) 0.39
• The United States (2005) 0.47
• Brazil 0.58

Wheelan, who was in Brazil at the time the Wall Street bonuses were announced, waxes philosophical about these values. He fears that if there is too large a gap between the rich and the poor, and the poor see no likely route to the good life, social order will break down and criminal activity will increase. Wheelan saw this in Brazil. Just before he arrived, a tourist bus, en route from Rio de Janeiro's airport to upscale hotels, was hijacked and robbed in broad daylight. The murder rate in Brazil is five times that of New York City.

Wheelan references a survey by Cornell University economist Robert Frank showing that we care more about how much money we have relative to others than about the actual amount of money we have. Frank found that most Americans would prefer to earn $100,000 a year relative to everyone else's $85,000 a year, rather than earning $110,000 a year while everyone else earns $200,000 a year. I think there's actually more economics than envy at work here. With a fixed supply of goods and services, inflation will quickly erode your $110,000 a year lifestyle relative to those earning $200,000 a year.

References:
1. Gini coefficient (Wikipedia).
2. Charles Wheelan, "Why Income Inequality Matters." (Yahoo Finance).

January 26, 2007

Geothermal Energy

Geothermal energy is responsible for less than one percent of US electrical generation, but a recent report by a panel of experts says that this can be increased to ten percent (100 gigawatts) by 2050. Of course, since this was a federal report, sponsored by the U.S. Department of Energy, this eighteen-member panel needed to say this in 400 pages [1]! In its simplest implementation, geothermal energy is harvested by pumping water into fractured rock deep underground. The water is heated by the rock, and the resulting steam or hot water is extracted to the surface to generate electricity. Cooled water is then pumped back underground to complete the cycle.

Jefferson Tester, the H.P. Meissner Professor of Chemical Engineering at the Massachusetts Institute of Technology, and chairman of the panel, says that "The federal government hasn't supported broad national assessments of Earth-based resources since the late 1970s." [2] Tester also stated that "... Heat mining can be economical in the short term, based on a global analysis of existing geothermal systems, an assessment of the total U.S. resource, and continuing improvements in deep-drilling and reservoir stimulation technology." [3]

The panel recommended an investment of about a billion dollars from public and private sources in the next fifteen years in order to make an impact by 2050. This is roughly the price of only one clean coal plant. Chuck Kutscher, a Principal Engineer and Group Manager at the Center for Buildings and Thermal Systems of the US National Renewable Energy Laboratory, says that the emission-free aspect of geothermal energy is one of its selling points [3]. Kutscher was not a panel member.

What physical properties of the earth are important to this technology? Foremost among these is the temperature gradient of about 25 °C per kilometer. This gradient, of course, arises from the Earth's molten core, which has an estimated temperature of about 6000 °C. At the heat extraction point, the thermal conductivity and specific heat of the surrounding rock are important. The average thermal conductivity of the Earth's crust is 1.7 W/m/K, and the specific heat of typical rock is 1260 J/kg/K.
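
These numbers support a quick back-of-the-envelope calculation. A minimal sketch in Python; the 15 °C mean surface temperature and the 5 km well depth are my own illustrative assumptions:

```python
# Back-of-the-envelope geothermal numbers from the post's crust properties.
GRADIENT = 25.0 / 1000.0   # temperature gradient, C per meter
CONDUCTIVITY = 1.7         # average crust thermal conductivity, W/m/K
SURFACE_T = 15.0           # assumed mean surface temperature, C

# Steady conductive heat flux through the crust: q = k * dT/dz
flux = CONDUCTIVITY * GRADIENT
print(f"Conductive heat flux: {flux * 1000:.1f} mW per square meter")  # ~42.5

# Rock temperature at the bottom of an assumed 5 km well
depth_km = 5.0
print(f"Rock temperature at {depth_km:.0f} km: {SURFACE_T + 25.0 * depth_km:.0f} C")  # ~140
```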

Are there any hidden environmental issues? One commentary on the report asks the following questions:

• How much will this project, generating 100 gigawatts, cool the Earth's core?
• What sustains the heat in the core?
• What happens if the core is cooled 1%, or even a tenth of a percent?
• What happens if one side of the Earth's core is significantly cooled while the other side is left untouched?

The Core (2003, Jon Amiel, Director) is a film about a manned expedition into the Earth's molten core. This film makes no scientific sense at all, but I still bought a copy of it. Perhaps Hilary Swank had something to do with it.

References:
1. "The Future of Geothermal Energy: Impact of Enhanced Geothermal Systems (EGS) on the United States in the 21st Century" (400 page PDF file).
2. Lucy Odling-Smee, "Hot rocks could help meet US energy needs," Nature Online (23 January 2007).
3. "Nearly Forgotten, Geothermal Could Erupt" (IEEE Spectrum Online, January 23rd, 2007).

January 25, 2007

Gravitation

The force of gravity is very small. For example, the repulsive electrical force between two electrons is nearly 10^43 times stronger than their gravitational attraction. The entire principle of gravity is summarized in one equation

F = G (m1 m2)/r^2

where F is the force between two masses (m1 and m2), r is the distance between the masses, and G is the Newtonian gravitational constant, which has the value 6.6742 × 10^-11 m^3 kg^-1 s^-2.
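
The 10^43 figure above is easy to check, since both the electric and gravitational forces fall off as 1/r^2, so their ratio is independent of distance. A quick sketch in Python, using standard CODATA values for the constants:

```python
# Ratio of electric to gravitational force between two electrons.
# Both forces fall off as 1/r^2, so the ratio is independent of distance.
G = 6.6742e-11    # gravitational constant, m^3 kg^-1 s^-2 (from the post)
KE = 8.9875e9     # Coulomb constant, N m^2 C^-2
E = 1.6022e-19    # electron charge, C
ME = 9.1094e-31   # electron mass, kg

ratio = (KE * E**2) / (G * ME**2)
print(f"Electric/gravitational force ratio: {ratio:.2e}")  # ~4.2e42, nearly 10^43
```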

Gravity is fundamental to the structure of the universe, so physicists have tried to measure G to high accuracy. Unfortunately, since gravitation is such a small force, G is known to only five significant figures [1]. Compare this to the fine structure constant, mentioned in a previous post, which is known to eleven significant figures. When I was an undergraduate, I measured G using a Cavendish balance, a torsional balance that measures the small gravitational attraction between heavy lead spheres. The measurement of G has stagnated for many years at about the level of this undergraduate experiment. Sure, you could use heavier spheres, but there's a limit on how accurately you can do this type of measurement.

Four physicists have published a better technique for measurement of G in a recent issue of Science. J. B. Fixler and M. A. Kasevich of Stanford University, along with G. T. Foster of Hunter College, the City University of New York, and J. M. McGuirk of Simon Fraser University, report on "Atom Interferometer Measurement of the Newtonian Constant of Gravity" in the January 5, 2007, issue [2]. Interferometry has improved many other measurements, so they built an atom interferometer using laser-cooled cesium atoms. A cesium beam is split, with one path passing near a movable 540 kilogram lead weight. The atomic interference pattern changes as the weight is moved.

At this point, their measurements are more uncertain than traditional measurements (6.693 × 10^-11 ± 0.027 × 10^-11 m^3 kg^-1 s^-2), but their technique shows promise for higher accuracy.

References:
1. Fundamental Physical Constants (NIST)
2. J. B. Fixler, G. T. Foster, J. M. McGuirk, and M. A. Kasevich, "Atom Interferometer Measurement of the Newtonian Constant of Gravity," Science, vol. 315, no. 5808 (January 5, 2007), pp. 74-77. Abstract available here.
3. Zeeya Merali, "Quantum technique could pin down gravitational constant" (New Scientist Online, 04 January 2007)

January 24, 2007

Copper Damascene Process

Aluminum has been the conductor of choice for integrated circuits. It has excellent electrical conductivity, and it is oxidation resistant after formation of a thin surface oxide. It evaporates at a relatively low temperature, and this allows a simple method of depositing aluminum onto the surfaces of silicon wafers, or onto silica (silicon dioxide) insulating layers formed on the surfaces of silicon wafers. As chip designers push circuitry to greater and greater density, they've reached a point where they need a better electrical conductor.

The resistivity of copper (16.78 nano-ohm-meters at 20 °C) is lower than that of aluminum (26.50 nano-ohm-meters at 20 °C), but the transition from aluminum to copper metallization is not easy. The first problem is that diffusion of copper into silicon forms deep-level traps for electrons, and this degrades the properties of the transistors in the integrated circuit. Another problem involves patterning the conductors on the chip. Aluminum is patterned by a process in which aluminum is deposited on the entire chip, and then removed from where it isn't needed by a plasma etch through a masking layer. Plasma etching does not work for copper.
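
To put those resistivities in perspective, here's a minimal sketch comparing the resistance of the same interconnect geometry in copper and aluminum; the 1 mm x 100 nm x 200 nm line is my own illustrative choice, not a figure from the paper:

```python
# Resistance of the same interconnect geometry in copper and aluminum.
RHO_CU = 16.78e-9   # resistivity of copper, ohm-meters (from the post)
RHO_AL = 26.50e-9   # resistivity of aluminum, ohm-meters (from the post)

def wire_resistance(rho, length, width, height):
    """R = rho * L / A for a rectangular conductor (SI units)."""
    return rho * length / (width * height)

# A 1 mm long line with a 100 nm x 200 nm cross section
for name, rho in (("Cu", RHO_CU), ("Al", RHO_AL)):
    print(f"{name}: {wire_resistance(rho, 1e-3, 100e-9, 200e-9):.0f} ohms")
# Cu: ~839 ohms, Al: ~1325 ohms - a ~37% improvement, before the
# diffusion barrier eats into the copper cross section.
```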

The copper damascene process is a much more complicated process for formation of copper conductors on a chip. First, trenches that define conductor paths are cut into a silica layer. Then a conductive diffusion barrier layer, typically a metal nitride, is deposited to inhibit diffusion of copper into the silicon. Next, copper is deposited such that it fills these trenches and covers the rest of the chip surface. A chemical-mechanical polish then removes the copper from the chip surface, but it leaves the copper in the trenches untouched. Five to ten metal layers are typically required to define all the conductors on a chip.

Of course, we've glossed over the principal material that makes this process possible; namely, the conductive diffusion barrier layer. It must inhibit copper diffusion and offer good electrical conductivity at the same time. Many metal nitrides (e.g., TaN) are good electrical conductors, and they have been found to inhibit copper diffusion. Recently, an interdisciplinary team from the University of Florida, Gainesville, has investigated bilayers of germanium and hafnium nitride for conductive barrier layers in the copper damascene process. This team had members from the university's Materials Science and Engineering, Chemistry, and Chemical Engineering departments. Their research, published in a recent issue of Applied Physics Letters [1], showed that bilayers of germanium and hafnium nitride performed better than hafnium nitride layers alone.

As you can see, the copper damascene process is quite complicated. Ironically, because of the required diffusion barrier layer, the improvement in conductivity of copper conductors over aluminum is not that large. There is, however, one other advantage of copper. Electromigration is less pronounced in copper than aluminum, so it's possible to have higher current density in copper than aluminum. This makes copper damascene worthwhile.

References:
1. S. Rawal, D. P. Norton, KeeChan Kim, T. J. Anderson, and L. McElwee-White, "Ge/HfNx diffusion barrier for Cu metallization on Si," Applied Physics Letters, vol. 89, 231914 (4 December 2006).
2. Copper-based chips (Wikipedia).

January 23, 2007

A Cubic Mile of Oil

A cubic mile is rather small by geological standards. After all, the volume of the Earth is more than a quarter trillion (0.25 × 10^12) cubic miles. Surprisingly, one cubic mile of oil is the world's annual oil consumption. A cubic mile of oil doesn't seem like much until you consider its energy equivalent. Two SRI International scientists, Ed Kinderman and Hewitt Crane, decided to normalize energy units around a cubic mile of oil for a book to be published by Oxford University Press. Their work is summarized in an article [1] in IEEE Spectrum magazine that presents a number of alternative technological equivalents to one cubic mile of oil:

• Building 104 coal-fired power plants each year for fifty years.
• Building 4 Three Gorges Dams each year for fifty years.
• Building 32,850 wind turbines each year for fifty years.
• Building 52 nuclear power plants each year for fifty years.
• Building 91,250,000 solar panels each year for fifty years.

In making these comparisons, the following assumptions were made:

• A coal-fired power plant produces 500 megawatts.
• The Three Gorges Dam produces 18 gigawatts, which is its full design capacity.
• A wind turbine with a 100-meter blade span produces 1.65 MW (when there's wind).
• A nuclear power plant produces 1.1 gigawatts.
• A solar panel is a residential 2.1 kilowatt system.

These energy equivalents take into account the actual energy production factor of the system; i.e., winds available for wind turbines, and sun available for solar. These data are summarized in this graphic. Building more than 91 million solar panels each year for fifty years is not a small undertaking!
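
These equivalences can be roughly cross-checked. Here's a minimal sketch in Python for the nuclear case; the 160 exajoule energy content of a cubic mile of crude oil, the 90% capacity factor, and the one-third heat-to-electricity conversion are my own assumptions, not figures from the article:

```python
# Rough cross-check of the nuclear equivalence. Assumptions are mine,
# not Kinderman and Crane's: one cubic mile of crude oil holds roughly
# 160 exajoules of thermal energy, and a plant's electricity displaces
# oil that would have been burned at ~33% conversion efficiency.
CMO_JOULES = 160e18
PLANT_WATTS = 1.1e9           # nuclear plant output (from the article)
CAPACITY_FACTOR = 0.9         # assumed availability
SECONDS_PER_YEAR = 3.156e7

electricity_per_plant_year = PLANT_WATTS * CAPACITY_FACTOR * SECONDS_PER_YEAR
oil_displaced_per_plant_year = electricity_per_plant_year / 0.33
plants = CMO_JOULES / oil_displaced_per_plant_year
print(f"Plants to displace one cubic mile of oil per year: {plants:.0f}")
# ~1700 plants, the same order as the 2600 (52 per year for fifty years)
# accumulated under the article's scheme.
```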

In the same article, Ripudaman Malhotra, an SRI chemist, listed the technological equivalent of the world's total energy requirement (not just oil) in a single second:

• 150 tons of coal.
• 37,000 gallons of oil.
• 3.2 million cubic feet of natural gas.

Reference:
1. Harry Goldstein and William Sweet, "Joules, BTUs, Quads - Let's Call the Whole Thing Off" (IEEE Spectrum, January, 2007).

January 22, 2007

Milestone - 100 Entries

This is my hundredth entry in this blog. The first entry was on August 10, 2006. For those of you who've just tuned in, here are a few of the more unusual articles of 2006.

The Erdos Number (August 17, 2006)
Math Burn-Out (August 23, 2006)
Pancake Flipping with Bill Gates (September 14, 2006)
Hairy Smoking Golf Ball (September 21, 2006)
Small Values of Two (September 22, 2006)
A Starbucks on Every Corner (October 03, 2006)
Ice-Nine and Metallic Water (October 18, 2006)
Exponential Vampires (October 30, 2006)
The Smell of Metal (November 03, 2006)
Ants in My Computer (November 08, 2006)
Tom Swift and His Electric Thermostat (November 14, 2006)
All your base are belong to us (December 08, 2006)
All Your Base (Continued, December 11, 2006)
Holiday and Vacation (December 15, 2006)

While on the topic of hundreds,

• The Hundred Years' War actually lasted 116 years.
• A hundred is a geographical division used in England and some parts of the USA. Historically, it was considered to be a land area sufficient to sustain one hundred families.

January 19, 2007

'Twere Well It Were Done Quickly

So said Macbeth while contemplating his assassination of King Duncan of Scotland. A recently published study on procrastination [1] says that 15-20% of people are chronic procrastinators who wouldn't follow Macbeth's advice.

Piers Steel, a professor at the University of Calgary's Haskayne School of Business, spent ten years studying procrastination. He analyzed his data using meta-analysis, a statistical technique that combines the results of many separate studies to deduce correlations that no single small data set could support. One of his findings, that New Year's resolutions are likely to be broken, should come as a surprise to no one. Another of his findings, that perfectionism is not the root cause of procrastination, strips me of my primary excuse. Says Steel, "... perfectionists actually procrastinate less, but they worry about it more." Steel further adds, "Essentially, procrastinators have less confidence in themselves, less expectancy that they can actually complete a task."

Steel's meta-analysis revealed 691 possible correlations. He concludes that neuroticism, rebelliousness, and sensation seeking have a very weak correlation with procrastination. Strong correlations were found for task aversion, task delay, and self-efficacy (the belief that you can do something if you really want to). Furthermore, since impulsive people value the present more than the future, they aren't motivated until just before a deadline. Then they work at fever pitch to complete the job, sometimes working at more than ten times their average rate [2].

Steel's advice to managers is to meet with their procrastinating employees on a daily basis to set daily goals. People who are bored with their jobs, or who dislike their jobs, will tend to delay finishing tasks. On the other hand, employees who find their jobs interesting will complete their tasks in a timely fashion.

Steel was also able to describe procrastination by a mathematical equation. As can be expected, a psychologist's equations are very simple.

U = (EV)/(AD)

where U is the utility, or desirability, of the task; E is the person's expectation that the task can be done; V is the value of completing the task; A is the delay before the task's reward is realized; and D is the person's sensitivity to delay. You can see that long-term tasks (large A) have reduced utility.
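
Here is the equation as a quick numerical sketch; the parameter values are mine, purely for illustration:

```python
# The procrastination equation, with illustrative numbers of my own.
def utility(expectancy, value, delay, delay_sensitivity):
    """U = (E * V) / (A * D)"""
    return (expectancy * value) / (delay * delay_sensitivity)

# Same task and person; only the distance to the payoff differs.
print(utility(0.8, 10.0, delay=1.0, delay_sensitivity=2.0))   # due tomorrow: 4.0
print(utility(0.8, 10.0, delay=30.0, delay_sensitivity=2.0))  # due in a month: ~0.13
```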

References:
1. Piers Steel, "The Nature of Procrastination: A Meta-Analytic and Theoretical Review of Quintessential Self-Regulatory Failure," Psychological Bulletin, vol. 133, no. 1, pp. 65-94 (Jan., 2007).
2. Jeanna Bryner, "Why We Procrastinate".

January 18, 2007

Breaking Moore's Law

Moore's Law is an observation made in 1965 by Gordon E. Moore, a co-founder of Intel, that the number of transistors on commercially viable integrated circuits doubles every 18-24 months. The law is still true, because integrated circuit manufacturers have used it as a technology road map for the last forty years. Today, the problem of following this law is not just the size of a transistor, but the increasing area needed for the wires that connect the transistors together.

A recent paper [1] by Gregory S Snider and R Stanley Williams of Hewlett Packard Laboratories (Palo Alto, CA) addresses this problem through use of a mesh of nano-wire interconnects on the top layer of a silicon chip. They speculate that their approach could increase transistor density by a factor of eight, which is three chip generations. This is an advance of Moore's law by five to six years.

Snider and Williams' work is on an increasingly important chip architecture, the field programmable gate array (FPGA). Presently, almost 90% of FPGA chip area is consumed by interconnects. Their approach, which they call "Field Programmable Nanowire Interconnect (FPNI)," replaces traditional routed interconnects with nano-wires (in their experiments, 17 nm wires) arranged in a mesh on a layer above the transistors. The mesh is a crossbar electrical circuit in which signals on an "X" line can be routed to a "Y" line through a connection where they cross. The HP researchers say that 4.5 nm wires should be possible by the year 2020, and that would increase circuit density by a factor of 25 over today's designs. HP thinks a commercially viable 15 nm crossbar is possible by 2010 [3]. There are about 3 to 6 metal atoms in a nanometer.

The HP-FPNI process can be implemented in existing facilities. This is an important consideration, since a modern integrated circuit fabrication facility costs almost $10 billion. However, there are a few problems. First, the nano-wires, because of their higher resistance, cannot transmit signals as fast as standard interconnects. Also, these nano-wires can't be formed using photolithography. A possible approach would be imprint lithography, but alignment with the underlying silicon circuitry will be a challenge.

HP does not manufacture integrated circuits, so this research was aimed at generating a royalty stream [4]. As for invoking the nano-gods, Williams admits that "Waves of hype have washed across (nanotechnology) but along the way we've been making steady progress." [5]

References:
1. Gregory S Snider and R Stanley Williams, "Nano/CMOS architectures using a field-programmable nanowire interconnect," Nanotechnology, vol. 18, 035204 (January 24, 2007)
2. Stan Beer, "HP beats Moore's Law with new chip architecture" (itwire.com.au).
3. Dan Nystedt, "HP Develops New Chip Technology" (IDG News Service).
4. Jack Uldrich, "HP Promises to Supersize Chips" (msn.com).
5. "HP claims advance in semiconductor nanotechnology" (Yahoo News)

January 17, 2007

Number Tricks

Last week, our Administrative Assistant, Vivian, sent me the following number trick:

• Take the first three digits of your seven digit telephone number.
• Multiply by 80.
• Add one.
• Multiply by 250.
• Add the last four digits of your seven digit telephone number.
• Add the last four digits of your seven digit telephone number, again.
• Subtract 250.
• Divide by two.

The result is your telephone number. Amazing? Well, let's check it out algebraically:

• Your telephone number is 10000x + y, where x is the first three digits, and y is the remaining four digits.
• 80x.
• 80x + 1.
• 250(80x + 1).
• 250(80x + 1) + y.
• 250(80x + 1) + 2y.
• 250(80x + 1) + 2y - 250.
• [250(80x + 1) + 2y - 250]/2.
• [20000x + 250 + 2y - 250]/2.
• [20000x + 2y]/2.
• 10000x + y.

Not only does this give us insight into how the trick works, it allows us to modify the trick. Through inspection, we can rewrite the equation as

• [a(bx + 1) + cy - a]/c

where c = (a × b)/10000. For example, if we set a = 2000 and b = 25, then c = 5. We are now able to rewrite this number trick as follows:

• Take the first three digits of your seven digit telephone number.
• Multiply by 25.
• Add one.
• Multiply by 2000.
• Add the last four digits of your seven digit telephone number, five times.
• Subtract 2000.
• Divide by five.

which is

• [2000(25x + 1) + 5y - 2000]/5.
or
• 10000x + y.
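
For the skeptical, here's a short Python check of both versions of the trick over random seven-digit telephone numbers (the test harness is mine):

```python
import random

# Brute-force check of both versions of the trick (my test harness).
def original_trick(x, y):
    return (250 * (80 * x + 1) + 2 * y - 250) // 2

def new_trick(x, y):
    return (2000 * (25 * x + 1) + 5 * y - 2000) // 5

for _ in range(10000):
    x = random.randint(100, 999)  # first three digits
    y = random.randint(0, 9999)   # last four digits
    assert original_trick(x, y) == new_trick(x, y) == 10000 * x + y
print("Both versions recover the telephone number every time.")
```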

Here's another trick that's so simple, I won't explain it (it's for children).

Ask somebody to think of a number, then double it, and multiply by 5. When they tell you the total, remove the last digit, and you'll have their number.

Reference:
1. Richard W. O'Donnell, "Some Number Tricks" (Games Magazine).

January 16, 2007

To Trip the Light Fantastic

Coaxial cables are extremely efficient transmission lines. A coaxial cable is essentially just a wire inside a metal sheath. The space between the wire and the sheath can be air or an insulator. A balancing of the inductance of the inner wire and the capacitance between this wire and the outer sheath allows coaxial cables to carry radio waves over a broad frequency range. Cable television systems are networks of coaxial cables, and they are able to transmit broadband television signals and high-speed internet many miles from a central station to your house. Coaxial cables work well up to about the frequencies used by cell phones. After that point, waveguides are used. Waveguides are essentially pipes that propagate radio waves in much the same fashion as fiber optic cables.

Transmission of light signals has been the domain of fiber optic cables for almost fifty years, but now there is a coaxial cable for light. A group of physicists at Boston College, along with collaborators from other institutions, have published a paper [1] in a recent issue of Applied Physics Letters that describes a coaxial cable for light. This coaxial cable is able to propagate light because its dimensions are so small; so small, in fact, that the research team calls it a nanocoax. The inner wire is carbon, surrounded by an aluminum oxide insulator, and the outer sheath is aluminum. The diameter of this nanocoax cable is just 300 nanometers.

The diameter must be smaller than a wavelength of light for propagation of light to occur. The velocity factor of this cable, the ratio of its propagation speed to the speed of light, is about 0.9, in the same range as a radio frequency coaxial cable. The propagating mode is transverse electric (TE), and the light propagates in the aluminum oxide layer. This research is an extension of the team's earlier work on nanoscale antennas that capture visible light [2, 3]. The motivation for this earlier work was an increase in the efficiency of solar cells. Possible uses for this coaxial cable include optical computation and miniature electrical circuitry.

The phrase, to trip the light fantastic, is an archaic expression for dancing. It originated in the poem L'Allegro (1645) by the English poet, John Milton.

Come, and trip it as ye go,
On the light fantastick toe.
And in thy right hand lead with thee,
The Mountain Nymph, sweet Liberty;

References:
1. J. Rybczynski, K. Kempa, A. Herczynski, Y. Wang, M. J. Naughton, Z. F. Ren, Z. P. Huang, D. Cai, and M. Giersig, "Subwavelength waveguide for visible light," Appl. Phys. Lett. 90, 021104 (8 January 2007).
2. Y. Wang, K. Kempa, B. Kimball, J. B. Carlson, G. Benham, W. Z. Li, T. Kempa, J. Rybczynski, A. Herczynski, and Z. F. Ren, "Receiving and transmitting light-like radio waves: Antenna effect in arrays of aligned carbon nanotubes," Applied Physics Letters, Vol. 85, Issue 13, pp. 2607-2609 (September 27, 2004).
3. Physics Team Sees Potential for 'Perfect' Solar Cell.
4. Will Dunham, "Tiny new cable may spur big technological advances" (Yahoo News).
5. "BC physicists transmit visible light through miniature cable" (Boston College Press Release).
6. Jeff Hecht, "Nanoscopic 'coaxial cable' transmits light" (New Scientist).

January 15, 2007

Dead Media

The Denver Public Library is currently displaying a 120-foot-long scroll valued at $2.4 million. No, it isn't an Egyptian papyrus. It's the original manuscript of Jack Kerouac's "On the Road," published by Viking Press in 1957. This book is considered to be a landmark in "Beat" literature. It was purchased by Indianapolis Colts owner Jim Irsay at auction in 2001.

This manuscript, and many Egyptian papyri, are still quite readable. However, if Kerouac had written the book just a little later on an IBM Magnetic Tape Selectric Typewriter, and kept it only as a magnetic tape, his original manuscript would be unreadable. The magnetic tape used for data storage in these typewriters is an example of "dead media."

Many of us have a cabinet somewhere at home stuffed with various dead media. I don't have any eight-track tapes, but I do have a shelf of vinyl record albums, unplayable because my phonograph lacks a needle. I also have a box of eight-inch floppy disks, each containing one-eighth of a meg of previously precious data readable by a CP/M computer. These data will never again be read, and they may have degraded to the point that reading would be difficult. I have a box of Apple Macintosh disks in my laboratory that store data from an instrument I once used. These are likewise not readable by any disk drive at Honeywell's Morristown site. I can read an Egyptian papyrus more easily than the electronic version of a project report I wrote twenty years ago.

There is a real possibility that our civilization will leave no written historical record of itself to future generations. All our degrading magnetic bits and tarnishing CDs and DVDs will be gone in just a few decades if they are not imprinted onto more permanent media. At this time, the only reliable archive is microfilm. The chemical film process has been mostly perfected, and image magnification is a technology that should always be available.

In 1995, Bruce Sterling, a founding father of cyberpunk science fiction, proposed The Dead Media Project. The project was not directed at preserving old media, but rather at documenting all historical media technologies. The result would be called The Handbook of Dead Media. Sterling thought this would guide technologists in the development of more permanent media.

A group of enthusiasts launched a mailing list and a web site to document media technologies, and a large list can be found here. However, the Dead Media Project seems to be dead, since the web site is no longer updated. Perhaps it's because all the known media have been documented, but I think it's because the emphasis was only on documenting old media, not preserving it.

The US National Institute of Standards and Technology has a Digital Data Preservation Program with guidelines for the care and handling of CDs and DVDs. However, there are no guidelines for preservation of your phenakistoscope disks.

References:
1. Erika Gonzalez, 'On the Road' manuscript on display at Denver Public Library.
2. The Dead Media Handbook on the Web.
3. Dead Media Project (Wikipedia).
4. Joe Kissell, "Dead Media: Preserving past communication for the future."
5. Care and Handling Guide for the Preservation of CDs and DVDs (1.24MB PDF File)

January 12, 2007

How Many Scientists Does it Take...

... to change a light bulb? There's both a joke here, and a serious answer; but you'll need to read to the end of this post to get to them (no jumping ahead!).

When Thomas Edison was defending his patent on the light bulb in 1890, he had a colleague at the Edison Electric Light Company produce a few working bulbs fabricated according to his 1880 patent as evidence of his process. Edison prevailed in court, and this box of bulbs was forgotten until recently, when it went up for auction at Christie's as part of its first science-themed auction [1]. This box of bulbs joined more than 200 other auction lots. About 40% of the items did not meet price expectations, and they were not sold, but the auction still brought in about $2.6 million.

The box of twenty light bulbs, valued by Christie's at about a half million dollars, was among the unsold items. The highest bid, by Oxford University's Museum of the History of Science, was less than half of what was expected. What was essentially a high school essay by Albert Einstein, "On the Investigation of the State of Ether in a Magnetic Field," sold for $677,000. It was written when he was sixteen, and it was autographed. A first edition of Darwin's "Origin of Species" sold for $153,000.

How many scientists does it take to change a light bulb? The serious answer is "one," and that would be Shuji Nakamura, inventor of the materials technology for production of blue, ultraviolet, and white light emitting diodes. Nakamura did his original research in Japan, but he's now a professor in the Materials Department at the University of California, Santa Barbara [2]. Light emitting diodes are efficient and reliable replacements for incandescent and other conventional light sources, but they were never available in blue or white until Nakamura's breakthrough research. Now they can be applied to residential lighting. Nakamura effectively changed the light bulb. The LED market was $2.42 billion in 2001, and it is expected to reach $7.34 billion in 2010 [3].

And now, the joke [4]. How many scientists does it take to change a light bulb? Five: One to write the grant proposal, one to do the mathematical modeling, one to type the research paper, one to submit the paper for publication, and one to hire a student to do the work.

References:
1. Edison's bulbs fail to light up auction (Nature Online; now offline)
2. Shuji Nakamura's Home Page.
3. Lydia Polzer, "LED sales mushroom."
4. Science Jokes

January 11, 2007

Furlongs per Fortnight

In the 1980s, I was a member of an Allied-Signal (now Honeywell) team looking into conversion of company operations from US Customary Units to the Metric System, also called the International System of Units, or SI in its French abbreviation. There was a government effort at that time to have US companies achieve metrication through voluntary programs. Some speed limit signs on interstate highways had dual legends of miles-per-hour and kilometers-per-hour. In 1988, the Omnibus Foreign Trade and Competitiveness Act designated SI as "the preferred system of weights and measures for U.S. trade and commerce." I published some papers using amperes per meter, the SI unit of magnetic field strength, instead of oersteds, much to the consternation of my colleagues, who kept asking me what that was in "real" units.

Twenty years later, the kilometer-per-hour signs are gone, I say Oersted, and carpet is still sold by the square foot. The United States, Liberia and Burma (Myanmar) are the only countries that still use customary units instead of SI units. This may have been acceptable when the US was the economic powerhouse of the early and mid-twentieth century, but thinking in one set of units and selling in another is becoming a problem. The customary vs metric problem in the US was underscored by the 1999 loss of NASA's Mars Climate Orbiter. A contractor gave NASA thrust in pound-seconds, and NASA thought it was in newton-seconds. These differ by more than a factor of four.

Seven years after the Mars Climate Orbiter, NASA has finally learned its lesson. NASA announced on January 8, 2007, that metric units will be used for all operations on the moon. This will be "hard" metric; that is, there will not be quarter-inch bolts relabeled as 6.35-millimeter bolts. Five-millimeter bolts will be used instead. This may make it easier to get spare parts when you're on the lunar surface, a quarter of a million miles from home, and your next door neighbor, the European Space Agency, is metric.

A furlong is a customary unit equal to 660 feet (1/8 of a mile, or 201.168 meters). A fortnight is 14 days (from "fourteen nights," a useful unit when travel consumed weeks or months). A furlong per fortnight is only 0.166 millimeters per second, so the speed of light is 1.803 × 10^12 furlongs per fortnight.
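
The conversion is easy to verify. A minimal sketch in Python (my arithmetic, not from the references):

```python
# Checking the furlongs-per-fortnight figures (my arithmetic).
FURLONG_M = 201.168            # 660 feet in meters
FORTNIGHT_S = 14 * 24 * 3600   # 1,209,600 seconds
C = 299792458                  # speed of light, m/s

fpf = FURLONG_M / FORTNIGHT_S  # one furlong per fortnight, in m/s
print(f"1 furlong/fortnight = {fpf * 1000:.3f} mm/s")  # ~0.166 mm/s
print(f"c = {C / fpf:.3e} furlongs per fortnight")     # ~1.803e+12
```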

References:
1. NASA's "metric moon" press release.
2. NASA Finally Goes Metric (Yahoo News).
3. Web site of the U.S. Metric Association (USMA);includes USMA's Guide to the Use of the Metric System.
4. Nice online conversion calculator.

January 10, 2007

Mathematics, Physics, and Reality

It's true that physics is mathematical. It's so mathematical that some have dared to eliminate the middle man and say that physics is math. A current example of this is Stephen Wolfram, physicist and creator of the computer program, Mathematica, who published a book [1] that speculates that physical reality is a type of computer program called a cellular automaton.

Pythagoras (c. 582 BC - 507 BC) was the first to claim that everything is math. His followers, known as the Pythagoreans, believed that numbers are the true nature of things. The Pythagorean theorem was to them a revelation of this truth. In more recent times, Sir Arthur Eddington (1882-1944) approached a unification of relativity, gravitation and quantum mechanics through an analysis of the dimensionless ratios of fundamental constants. As I mentioned in a previous post, physics contains many constants, and physicists wonder whether all these constants are necessary. They may be just a manifestation of our ignorance of the true reality. Eddington had similar misgivings about constants, and he found that ratios of certain physical constants are dimensionless - they're pure numbers.

Eddington was particularly interested in the fine structure constant, a dimensionless number constructed from the charge of the electron, the speed of light, Planck's constant, and the vacuum permittivity. This number was at first measured to be 1/136, and Eddington had a good reason why it should be exactly 1/136; that is, until it was found to be closer to 1/137, at which time Eddington developed a revised set of reasons why alpha should be 1/137. The measured value today is 1/137.03599911. For those of you who strive for accuracy, Eddington once stated that there are 15 747 724 136 275 002 577 605 653 961 181 555 468 044 717 914 527 116 709 366 231 425 076 185 631 031 296 protons in the universe.
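
For the record, here's how the modern value follows from the constants named above. A minimal sketch in Python, using CODATA values; this is the standard definition, not Eddington's derivation:

```python
import math

# The fine structure constant: alpha = e^2 / (4 * pi * eps0 * hbar * c)
E = 1.60217653e-19      # electron charge, C
EPS0 = 8.854187817e-12  # vacuum permittivity, F/m
HBAR = 1.05457168e-34   # reduced Planck constant, J s
C = 299792458           # speed of light, m/s

alpha = E**2 / (4 * math.pi * EPS0 * HBAR * C)
print(f"1/alpha = {1 / alpha:.8f}")  # ~137.03600, not 136
```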

Traditional physicists ridiculed Eddington's approach. Hans Bethe, who won the 1967 Nobel Prize in Physics, authored a paper with two other physicists, Guido Beck and Wolfgang Riezler, in which they parodied Eddington's calculation of the fine structure constant [2]. In the paper, they calculated the fine structure constant using the value of absolute zero in Celsius units (-273). I wasn't able to find the paper online, but I suspect it makes use of the fact that 273/2 = 136.5.

Although much scorn was directed towards Eddington, it's interesting that another Physics Nobelist, Paul Dirac, espoused similar numerology in his Large Numbers Hypothesis [3]. Dirac was able to construct the very large dimensionless ratio, 10^80, which is close to Eddington's number of protons in the universe. Dirac was no crackpot. Aside from receiving the Nobel Prize in Physics, he once held the Lucasian Chair of Mathematics at Cambridge University, a chair that was held by Isaac Newton, and is presently held by Stephen Hawking.

References:
1. Stephen Wolfram, "A New Kind of Science"
2. Ian T Durham, "Hans Albrecht Bethe."
3. P. A. M. Dirac, "The Cosmological Constants," Nature, vol. 139 (1937), p. 323.

January 09, 2007

Which Professor Smith?

Smith is a common English name that follows the Germanic tradition of surnames that describe a person's occupation. Smith, the short form of blacksmith, has been used as an identifying surname since Anglo-Saxon times. It is the most common family name in the United States, representing more than one out of every hundred surnames [1]. Now that inherited surnames are the rule, a blacksmith named Smith would be unusual, and it would be an example of nominative determinism.

The common search engines are not that useful if you are searching for a particular physicist named Smith. There are too many Smiths. Scientific papers typically list authors by surname and initials only, so a more specific search for "J. Smith" will still give you every Smith named John, Jane, Joanna, Julian, Justin, James... You get the idea.

A group of computer scientists from the Pennsylvania State University, none of them named Smith, have developed a search algorithm to handle this Which Professor Smith problem. The algorithm is described in the paper, "Efficient Name Disambiguation for Large-Scale Databases [2]," authored by C. Lee Giles, Jian Huang, and Seyda Ertekin. When presented with a large database of academic papers, the algorithm correctly identified authors nine out of ten times. Says Giles, "It works very similarly to how humans would figure out authors' "identity" by looking at affiliations, topics, publications... The system works by using machine-learning methods to cluster together names that the system believes to be similar."

The Penn State scientists cite their own affiliation as an example of database confusion. They are known to work at the Pennsylvania State University, Penn State, or PSU, and their algorithm accounts for this variation as well. This algorithm will become part of the CiteSeer search engine, which is hosted at Penn State. CiteSeer was developed by Giles, Steve Lawrence, and Kurt Bollacker while they were at the NEC Research Institute in Princeton, New Jersey. CiteSeer is primarily useful when searching publications relating to computer science and computer engineering. It doesn't index much more.

References:
1. Smith Surname (Wikipedia).
2. New system solves the "who is J. Smith" puzzle.

January 08, 2007

The Chemistry of Chocolate

Cocoa is the primary ingredient of chocolate, a much beloved confection. About three and a half million metric tons of cocoa are produced annually (a metric ton is a thousand kilograms, or 2205 pounds, which is just a little larger than a US ton). Cocoa is a natural product that has been consumed by humans for thousands of years, and ill effects (aside from an increased waistline!) seem to be minor. The Swiss eat an average of about twenty pounds of chocolate per person per year, and Americans eat about twelve pounds per year [1]. Some people believe that chocolate causes acne, but it may be the fat content of chocolate that's the culprit in this case.

Cocoa has been found to contain about 550 compounds identifiable by gas chromatography, and 35 of these have an odor. Such a huge chemical palette arises from the many processes used in cocoa production. These include fermenting, roasting, grinding, dissolving, heating, pressing, and drying [2]. Felix Frauendorfer and Peter Schieberle of the Deutsche Forschungsanstalt für Lebensmittelchemie, Garching, Germany, published this roster of cocoa compounds in the July, 2006, issue of the Journal of Agricultural and Food Chemistry [3]. In summary, cocoa smells somewhat like cabbage, sweat, honey, and potato chips, with some additional caramel, malt, popcorn, and sulfur [2, 3]. The German scientists validated their study by combining twenty-four of the most potent aromatic ingredients to reproduce the scent of cocoa.

What else does cocoa contain? Well, there's indole (C6H4-NH-CH=CH, CAS Number 120-72-9), which is also present in coffee, tobacco, olive oil, and wine. Other chemicals include (with appropriate disclaimers as to their actual effects on humans) the following [4, 5]:

• Anandamide, an endogenous cannabinoid, present in small quantities [6].

• Caffeine, present in modest quantities.

• Theobromine, a mild stimulant that may work in concert with caffeine to give the "chocolate buzz."

• Magnesium. Since magnesium deficiency exacerbates pre-menstrual symptoms, this may explain the chocolate link here.

• Phenylethylamine, an amphetamine-like chemical, present in very small quantities. This may explain chocolate's effect in soothing depression.

• Tetrahydro-beta-carbolines, which are neuroactive alkaloids also found in beer, wine and liquor. There is a theory that this may explain "addiction" to chocolate.

• Phenylethylalanine, a supposed aphrodisiac.

Although chocolate is considered to be safe for humans, its toxicity in animals is well established. Most people know not to feed chocolate to the family dog. The theobromine in chocolate is the culprit, and it's also toxic to horses, parrots, and cats (especially kittens). Consumption of chocolate by animals can lead to epileptic seizures, heart attacks, internal bleeding, and death [7]. Less than one ounce of baker's cocoa can bring about symptoms in a medium-sized dog.

References:
1. Prescription-strength chocolate (Science News).
2. Megha Satyanarayana, "A Whiff of What?"
3. Felix Frauendorfer and Peter Schieberle, "Identification of the Key Aroma Compounds in Cocoa Powder Based on Molecular Sensory Correlations," J. Agric. Food Chem., vol. 54, no. 15, pp. 5521 -5529 (2006).
4. Chocolate (http://www.chocolate.org).
5. Dhara Thakerar, "Chocolate's chemical charm."
6. Tomaso, E. D., M. Beltramo, and D. Piomelli, "Brain cannabinoids in chocolate," Nature, vol. 382 (Aug. 22, 1996), pp. 677ff.
7. Chocolate toxicity in animals (Wikipedia).

January 05, 2007

In the Blink of an Eye

A human eye blink lasts about a quarter second, and we blink about ten times per minute. So, why do people always have their eyes closed in your holiday photographs? Things are easier now, with digital cameras. You can examine each shot after it's taken, and re-shoot if required. If you have a conventional film camera, you know you need to take multiple photos to get at least one that's perfect, but how many do you really need? The 2006 Ig Nobel Prize in mathematics was awarded to an Australian scientist who did a statistical analysis of this problem.

For the incognoscenti (igcognoscenti?), the Ig Nobel Prizes are a parody of the Nobel Prizes. They are awarded each year in a ceremony organized by the Annals of Improbable Research, a science humor magazine. The prizes are presented by real Nobelists for unusual research that makes people laugh, and then think. Piers Barnes, a physicist with the Australian Commonwealth Scientific and Industrial Research Organization, was awarded the Mathematics prize for calculating how many photographs you need to take of a group of people to get at least one in which all eyes are open. In this case, "at least one" means at the 95% confidence level, somewhat of a gold standard among physicists and many other scientists.

Barnes' rule is as follows: "For groups of less than 20 people, divide the number of people in the group by three, if the light is good, or by two, if the light is bad." He calculates that a flawless photograph of a hundred people is nearly impossible. Such a calculation is easier than you think, and it makes you wonder why we haven't received our Ig Nobel Prize.

If a person's eyes are closed a quarter of a second every six seconds, the probability p of his eyes being closed at any random time is 0.25/6, or 0.042. The probability of his eyes being open is (1-p), or 0.958. For one person, it appears that we are already at the 95% confidence level with a single snap. For more than one person, we need to multiply probabilities. For two people, this would be (0.958)(0.958) = 0.92. It looks as if more than one snap may be in order, but how do we calculate exactly how many snaps we need to get to the 95% confidence level?

First, we calculate the probability P that all eyes are open in a group of N people, and also the probability (1-P) that at least one person's eyes will be shut. Here are some examples (a group of sixteen is here, since that's the number of people in my department):

N......P.................(1-P)
1......0.958333.....0.041667
16.....0.506134.....0.493866
30.....0.278931.....0.721069
65.....0.062890.....0.937110
90.....0.021703.....0.978297

The object at this point is to see how many times we need to multiply (1-P), the probability of some eyes being closed, to get below a 5% level for each group; that is, how many snaps S we need to reach the 95% confidence level of having all eyes open in at least one photo.

N.....(1-P)..........S.......(1-P)^S
1......0.041667.....1.......0.041667
16.....0.493866.....5.......0.029380
30.....0.721069.....10.....0.037998
65.....0.937110.....47.....0.047232
90.....0.978297.....137....0.049494

So, for my group of sixteen, five snaps would be required, which matches the Barnes rule of N/3 for small groups. As you can see from the table, the number of snaps grows rapidly with group size; a group of ninety would require 137 snaps. This is a large group, so the Barnes N/3 rule does not apply, and you can see why Barnes calls a flawless photograph of a hundred people nearly impossible.
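
The whole calculation fits in a few lines. A minimal sketch in Python, assuming independent blinkers who each have the same 0.25/6 probability of closed eyes at the instant of the snap:

```python
import math

# How many snaps S are needed so that, with the given confidence,
# at least one photo catches everyone with open eyes?
def snaps_needed(n_people, p_closed=0.25/6, confidence=0.95):
    p_all_open = (1.0 - p_closed) ** n_people  # a single snap succeeds
    p_spoiled = 1.0 - p_all_open               # a single snap fails
    # Smallest S such that 1 - p_spoiled**S >= confidence
    return math.ceil(math.log(1.0 - confidence) / math.log(p_spoiled))

for n in (1, 16, 30, 65, 90):
    print(n, snaps_needed(n))  # 1, 5, 10, 47, 137
```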

References:
1. Picture Perfect: How to Make Blink-Free Holiday Photos
2. CSIRO boasts Ig Nobel Laureates
3. The chance of taking a blink-free photo

January 04, 2007

A Babel of Computer Languages

In the beginning there were just wires. Computers were programmed by plugging wires into boards to route electrical signals to appropriate places in the circuitry [1]. Then came assembly language, rudimentary written instructions that took the place of the wires and switches. Now, sixty years later, programmers have a choice of hundreds of programming languages, and all programmers have a favorite language. Fortunately, the general features of computer languages are the same, and it's easy to migrate from one language to another as conditions require.

Of course, some languages are more popular than others, and their popularity changes over the course of time as new languages become fashionable. So, it's not unexpected that language preference rankings will be published. Tiobe Software publishes one of the most complete ranking lists. To quote from the web site,

"The popular search engines Google, MSN, and Yahoo! are used to calculate the ratings. Observe that the TIOBE index is not about the best programming language or the language in which most lines of code have been written."

Here's the top ten of the TIOBE Programming Community index [2] at the end of 2006:

• 1. Java (19.907%)
• 2. C (16.616%)
• 3. C++ (10.409%)
• 4. Visual Basic (8.912%)
• 5. PHP (8.537%)
• 6. Perl (6.396%)
• 7. Python (3.762%)
• 8. C# (3.171%)
• 9. Delphi (2.569%)
• 10. JavaScript (2.562%)

Four of my languages (C, Visual Basic, PHP, and Perl) made the top ten, and an additional four (Pascal (19, 0.566%), Fortran (21, 0.448%), Forth (37, 0.145%), and APL (48, 0.092%)) made the top fifty. After the top fifty, the percentages are so low that it's hard to rank languages, and the rankings may not be statistically significant. It is interesting to note that the venerable COBOL ranks at 18 with 0.601%. The list from 51-100 includes such notables as Algol, Boo, Clean, Euphoria, Groovy, Limbo, MAD, Magic, MOO, MUMPS, Occam, Oz, and Yorick.

Alas, poor Yorick! I knew him...

References:
1. Andrew S. Tanenbaum, "A History of Operating Systems".
2. TIOBE Programming Community Index for December 2006.

January 03, 2007

Billions and Billions

Carl Sagan, an American astronomer, used the phrase, "billions and billions," to describe the number of stars in our Milky Way galaxy in his television series, "Cosmos: A Personal Voyage (1980)." There are about 200-400 billion stars in our galaxy. Our galaxy is just one of 125 billion galaxies in the universe, as estimated from Hubble telescope deep space images. Many astronomers are remembering Carl Sagan on the tenth anniversary of his death, December 20, 1996.

I first became aware of Sagan through his book, "Intelligent Life in the Universe" (Random House, 1966), co-authored with the Russian astronomer, I.S. Shklovskii. The main premise of the book, which presaged the "billions and billions" phrase, is that there are so many stars in our galaxy that, although the probability of intelligent life arising on some planet around a star is extremely small, there should still be a few such civilizations. One current estimate puts the number of intelligent civilizations in our galaxy at about 1000. As can be imagined, such estimates are controversial.

Sagan was an indefatigable popularizer of science. Aside from the Cosmos television series, he published numerous popular science books, including "Broca's Brain: Reflections on the Romance of Science" (Ballantine Books, 1974), and "The Dragons of Eden: Speculations on the Evolution of Human Intelligence" (Ballantine Books, 1978). He also authored the science fiction novel, "Contact," an adaptation of which became a film starring Jodie Foster.

The same activities that made him popular to the audience of non-scientists made him an object of ridicule by many scientists who believed that he was staining the ivory tower. As a consequence, although his science was first rate, Sagan was denied tenure at Harvard University, and he was never elected to the National Academy of Sciences. Sagan was an author or co-author of more than 600 scientific papers, popular articles, and books. His research led to the realization that Venus is the dry, hot planet it is, rather than the balmy tropical rain forest featured in many science fiction stories and books. Sagan was also an early prophet of global warming, warning that the planetary warming effect of greenhouse gases had been demonstrated on Venus.

Carl Sagan possessed a stained glass representation of a hypercube. It was presented to him by my father, who was a stained glass artisan. You can view a photograph here.

References:
1. Carl Sagan (Wikipedia).
2. Official Carl Sagan Web Site.
3. Jennifer Ouellette, "Casting out the Demons."
4. Review: "Carl Sagan: A Life in the Cosmos" by William Poundstone

January 02, 2007

A Hundred Years of Broadcast Radio

Radio waves were discovered in 1888 by the German physicist Heinrich Hertz (1857 - 1894). The art of radiotelegraphy was perfected by Guglielmo Marconi less than a decade later. Marconi demonstrated trans-Atlantic reception of radio signals on December 12, 1901. Radio telegraphy, as its name implies, is the transmission of Morse code telegraph signals. It is a point-to-point transmission, and the only sound involved is the beep-beep (in those days, more like a hiss-hiss) of the telegraph signal.

The era of broadcast radio began on December 24, 1906, when Reginald Fessenden (1866 - 1932), a Canadian physicist living in the US, broadcast his voice and Christmas music to ships at sea. Fessenden had invented radio telephony. The first word heard on radio was Fessenden's "Hello." By 1920, KDKA was established as the first US commercial broadcast radio station in a suburb of Pittsburgh, Pennsylvania. Several other stations in the US, Canada, and elsewhere went on the air in that period, and many of these claim to be the first commercial broadcast station.

Whence the term "radio"? Such transmissions had always been called "wireless." An advertising expert, Waldo Warren, coined the term early in commercial radio history, apparently from the phrase "electromagnetic radiation," and the name stuck.

Fessenden was a large man, and he once got stuck between sections of an antenna he was climbing to make an adjustment. His staff had to grease him to set him free. Among Fessenden's many patents was U.S. No. 1,576,735, "Infusor," Mar 16, 1926, a device for making tea.

References:
1. Broadcast Radio Turns 100.
2. History of Radio.
3. When I lived in Pittsburgh, I viewed the plaque marking the site in Wilkinsburg, PA, of KDKA's first transmission.