Title : NSF 94-74 Connections
Type : General Publication
NSF Org: MPS
Date : November 29, 1994
File : nsf9474

CONNECTIONS
Investments for the 21st Century

THE ULTIMATE KNOWLEDGE INDUSTRY

A PUBLICATION OF THE DIRECTORATE FOR MATHEMATICAL AND PHYSICAL SCIENCES OF THE NATIONAL SCIENCE FOUNDATION

Introducing Connections

The process of creating fundamental new knowledge is exciting and rewarding in itself. For many scientists, that is reason enough for what they do. From society's viewpoint, however, there is another, very practical reason to support basic research. New knowledge is economically and socially valuable. It leads to new technologies, new products and services, and an enhanced quality of life for Americans. Indeed, our industrial competitiveness as a nation depends upon continuing to solve fundamental problems and to translate that new knowledge into cutting-edge technologies. In this sense, fundamental research is the ultimate knowledge industry.

At NSF's Directorate for Mathematical and Physical Sciences, we are very aware of the close connection between fundamental discoveries and economically or socially significant technologies, because we see such technologies arise regularly from the research we support--sometimes immediately, sometimes after a period of years, and sometimes in ways that no one could predict. This publication is our attempt to share with a wider audience a few of the many exciting discoveries being made in the mathematical and physical sciences--and their connections to emerging, potentially useful new technologies.

Connections is, in a sense, a progress report to our investors--the American people--on the returns from their investment in fundamental research. We are introducing Connections, a publication from the Directorate for Mathematical and Physical Sciences at NSF, to provide a context for investments that will make a difference. As corporate R&D programs are scaled back, NSF's investments in academia assume increased importance--to ensure the knowledge base that underpins our society and the highly skilled people on whom our industrial competitiveness depends.

We hope you will find Connections useful and interesting.

Sincerely,

William C. Harris
Assistant Director
Directorate for Mathematical and Physical Sciences

CONTENTS

Ready for Prime Time
Innovation Highlights
A New Chemistry for Carbon
Innovation Highlights
Origins of the Information Superhighway
The Directorate for Mathematical and Physical Sciences

Cover: An early example of an optical fiber; modern fibers are much thinner yet capable of transmitting far more information. Photo: Dan McCoy/Rainbow

READY FOR PRIME TIME

There is a sense in some quarters that the mathematical and physical sciences have seen their day--that "physics is over," that the big discoveries have all been made, that the real action in research has shifted to the life sciences. Nothing could be further from the truth. The mathematical and physical sciences are intellectually thriving, vigorous, and as full of surprise as ever. Witness the recent discovery, discussed in these pages, of a new form of carbon--a discovery that has transformed many areas of chemistry, may help physicists to understand high-temperature superconductors, and offers many intriguing new possibilities in the materials sciences. Or consider solitons, a mathematical curiosity that soon will help extend the reach of optical fiber communications systems around the world.
The mathematical and physical sciences are not only thriving. They remain an area of science with a very wide potential for technological payoff. To make this claim is not to deny the rapid pace of discovery in the life sciences or to diminish in any way the enormous potential they hold for socially useful applications. It is simply to point out that advanced instrumentation, new materials, novel production methods, and efficient computational algorithms are critical to virtually all industrial activity, including biotechnology. The knowledge base and the source of technical talent for such areas--indeed, the training ground for at least half of all scientists now employed in U.S. industry--are found in the mathematical and physical sciences.

In addition, the interplay among physics, chemistry, mathematics, biology, and the materials sciences is rapidly creating the knowledge necessary to design and engineer at a molecular level, laying the foundation for a new wave of high-tech material revolutions in the 21st century. Chips, coatings, catalysts, composites, and supercomputers--such is the emerging technological language of that future. Unprecedented control over materials and processes will allow industry to re-invent the products we use--from nonpolluting automobiles to cutting tools that never lose their edge. Equally important, these advances will enable the development of industrial technologies that are environmentally more benign and economically more efficient than those of today.

The argument put forward here rests on a basic premise that should be made explicit: Investment in the creation of fundamental new knowledge is one of the best ways to create new technological payoffs. The truth of this assumption is evident to any close observer of technological innovation in the United States over the past 50 years. The nation is now reaping the benefits of just such investments in the mathematical and physical sciences from years past, and nowhere more so than in computer and information technologies. As discussed in these pages, the "information superhighway" itself depends critically on a host of basic discoveries whose future use for this purpose could not have been foreseen.

In the past, the creation of fundamental new knowledge was a task shared by government, academia, and industry alike. Corporate research laboratories made major contributions--such as the invention of the transistor at Bell Labs. So did the federal government's national laboratories. This era, however, is rapidly coming to a close. Central corporate research labs in the United States are being downsized or dispersed to operating divisions in the form of development labs. Under the pressure of competitive restructuring, most corporations no longer invest heavily in the creation of fundamental new knowledge. Instead, they increasingly rely on government to create such knowledge through investments in university research--particularly in the mathematical and physical sciences. Praveen Chaudhari, for 10 years Vice President for Science at IBM, commented recently on this trend: "I expect basic research and exploratory technology in the U.S. to become more and more the domain of universities."[1] The U.S. national laboratories are also under increasing pressure to focus their R&D either narrowly on their mission or on applied topics of interest to industry. Thus federal investments in the creation of new knowledge at universities play an increasingly critical strategic role in the nation's technological future.
----------
[1.] Praveen Chaudhari, "Corporate R&D in the United States," Physics Today (December, 1993), pp. 39-40.
----------

The words used to describe different kinds of research can be a source of confusion. The most visible parts of the federal R&D budget, for example, are targeted, "strategic" research programs such as global change or high-performance computing. Yet even in such targeted programs, much of the research is of a fundamental or basic character. The reason is straightforward: We simply do not have the basic knowledge necessary to solve many problems that lie on the critical path to socially or economically important applications. To make progress, we must continually invest to create that knowledge.

The opportunities are many. The flowering of physical science research at the atomic level is propelling the impending "molecular revolution." The study of complex or so-called nonlinear systems offers the potential to understand climatic change and other complex phenomena in nature, to prevent chaotic behavior in complex man-made systems, and to exploit and harness nonlinear effects in new materials and new communications technologies. Biological systems are ripe for the application of mathematical and physical techniques--such as in protein-folding algorithms critical to rational drug design, improved x-ray or magnetic imaging in the analysis of protein structures, and the development of biopolymers, gels, and other biomolecular materials. New tools critical to industrial competitiveness and to progress in science itself--from advanced magnets to novel sensors to more powerful telescopes--are being developed and refined.

Far from being dead, the mathematical and physical sciences underpin the future. They are just now ready for prime time; their era of maximum social and economic payoff is still to come.

INNOVATION HIGHLIGHTS

Microscopic Monolayers

What could keep blue- and green-dyed drops of water that wet a surface from merging with one another? The answer is found in extremely thin lines of a monolayer--a surface coating only a single molecule thick--of a substance not soluble in water. The colored water drops in the photograph (left) measure about 4 millimeters across; the lines of monolayer that separate the drops are extremely thin--less than a micrometer in width, about the same size as the circuit features in a microelectronic chip.

Monolayers are formed on a surface by a process known as molecular self-assembly. Here, an exposed gold surface was simply dipped in a solution of the desired chemical. The molecules in the liquid are strongly attracted to the gold and weakly repelled by each other. As a consequence, they form a layer on the surface precisely one molecule thick and with uniform spacing between the molecules--in essence forming a very thin crystal on the surface of the gold. These remarkable structures are similar to those that form the membranes of living cells and are related to the colorful iridescent films that form when gasoline is spilled on a wet floor. Only recently have the analytical tools become available to study the processes that underlie self-assembly.

Patterns can also be formed by alternating bands of microscopic monolayers of hydrophilic (water-loving) and hydrophobic (water-hating) molecules. Water condenses preferentially on the hydrophilic parts of the patterns and forms drops that are really lines about a micrometer wide and a centimeter long.
A beam of laser light directed to such a surface sees a regular grid of lines: bare gold (which reflects the light) and gold covered with a drop of water (which does not reflect). This arrangement forms a diffraction grating and creates optical patterns that can be used to monitor humidity, temperature, and other environmental variables. The ability to form monolayers in microscopic patterns may lead to new types of sensors that can measure environmental properties from a distance, with only a beam of light. The technique also provides a sensitive method for studying the properties of surfaces and important industrial processes such as the condensation of water on the cooling tubes of a power plant or the operation of a metal-plating bath.

A wide variety of self-assembled structures are now being examined for applications that include liposomes for drug delivery, microelectronic fabrication, and thin film coatings for corrosion protection.
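For readers who want the sensing principle described above in symbols, the grid of reflective and nonreflective lines obeys the textbook grating relation (our notation, not the article's; normal incidence assumed):

    d \sin\theta_m = m\,\lambda , \qquad m = 0, \pm 1, \pm 2, \ldots

Here d is the spacing between the lines, lambda is the wavelength of the laser light, and theta_m is the direction of the m-th diffracted beam. With d on the order of a micrometer--comparable to visible wavelengths--the diffracted beams are widely separated, and any environmental change that alters the condensed water lines measurably changes the pattern a distant detector sees.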
Green By Design

It's an adage as old as the Republic: An ounce of prevention is worth a pound of cure. Now chemists are taking it to heart in an effort to discover more environmentally benign ways of making modern chemicals and chemical products, updating chemical synthesis processes that in some instances are 40 to 100 years old. Among the targets:

o Nylon. Recent reports indicate that nylon production may be a major source of nitrous oxide, an ozone-depleting gas. Chemists at the University of North Carolina have discovered a new catalyst that permits the synthesis of adipic acid, a key nylon intermediate, without producing nitrous oxide. In addition, Purdue University chemists have found a biotechnological approach to synthesizing adipic acid that eliminates the need to use benzene, a toxic chemical, as the starting material.

o Pharmaceuticals and pesticides. Toxic compounds of metals such as cadmium, lead, mercury, and chromium are widely used in oxidation reactions, a frequent step in the synthesis of pharmaceuticals and pesticides. Chemists at the University of Connecticut have demonstrated substitute oxidation methods using ordinary light and nontoxic food dyes as catalysts. Chemists at Iowa State University have used photochemical reactions--also involving light--to substitute for Friedel-Crafts reactions, which require toxic compounds of metals.

o Replacing hazardous solvents. Chemists at Virginia Polytechnic Institute and State University are experimenting with supercritical fluids as replacements for toxic solvents such as benzene in free-radical reactions.

o Just-in-time synthesis. Storage and shipping of toxic chemicals are a potential source of accidents; just-in-time synthesis can eliminate both. DuPont chemists, for example, have developed methods for making methyl isocyanate--the cause of the Bhopal, India tragedy--just when and where it is needed, and then immediately converting it to the final agrochemical product.

As environmental costs are increasingly factored into industry's economic calculations, chemical processes that are green by design may turn out to be an essential ingredient in competitive success.

Training Tomorrow's Talent

A recent patent describes a light-sensitive protein that could serve as the basis for an optical memory. Research in this area advances the rapidly growing field of molecular electronics, which promises to push computing past the limitations of current silicon chips. The protein in question, known as bacteriorhodopsin, changes shape when struck by light. In principle, a one-centimeter cube of the material could store all the information housed in the New York Public Library, because it stores information, molecule by molecule, in three dimensions.

What is perhaps most remarkable about this patent, however, is that it stems in part from research done by an undergraduate student during an NSF-sponsored summer chemistry program at Syracuse University. Deshan Govender, working with chemistry professor Robert Birge, designed a method to orient and immobilize the protein in a polymer--a critical step in creating the three-dimensional memory system.

Nearly 1,500 students in the mathematical and physical sciences take part in these Research Experiences for Undergraduates programs, held annually at about 150 colleges and universities. Not all succeed in contributing to a potentially valuable patent, of course. But a unique feature of the U.S. system of fundamental research, conducted largely in academic institutions, is that it also trains the next generation of scientists and engineers. Participation in research at undergraduate, graduate, and postdoctoral levels is a crucial ingredient of industrial competitiveness--particularly in the mathematical and physical sciences, which account for about half of the scientists now employed in U.S. industry.

New-Technology Telescopes

For the past several decades, progress in astronomy has come largely from the electronics revolution, which allowed astronomers to put in place new types of solid-state sensors that make better use of the light that telescopes collect. Such sensors are nearly 50 times more efficient than photographic plates and are reaching their physical limit. Now, however, a second revolution is underway: by the year 2000, a new generation of very large telescopes will quadruple the light-gathering area available to astronomers. These instruments include the twin 10-meter Keck telescopes on Hawaii's Mauna Kea, two 8-meter Gemini telescopes (the design for which is shown here), two 8.4-meter Columbus telescopes, two 8-meter Magellan telescopes, and additional 8-meter telescopes to be built by European and Japanese scientists.

The new telescopes will be lighter and stiffer in their construction, and their aerodynamic, open design will minimize turbulence and temperature differences that could distort stellar images. Their huge but lightweight primary mirrors will increase both the resolution of astronomical images and the sensitivity with which the image can be studied. Key technologies that make possible these new telescopes include:

o New mirror materials that do not expand or contract as outdoor temperatures change.

o New mirror designs, including large mirrors that are divided into many smaller segments; extremely thin mirrors that lack stiffness but keep their shape through an actively controlled support system; and hollow mirrors cast with a ribbed, honeycomb structure.

o New, more compact "high-aperture" shapes that are less expensive to build and easier to operate.

o A host of new computer-controlled tools to finish the final shape of the mirror surface to within tolerances of a few micrometers.

Still to come is the wizardry of adaptive optics, which will correct continuously for the distortion of the atmosphere, giving these telescopes nearly as clear a window on the universe as if they were located in space. The combination of these technological advances will transform optical and infrared astronomy and accelerate scientists' knowledge of the cosmos.
A NEW CHEMISTRY FOR CARBON

The discovery of buckyballs was serendipitous. But there are already hints that a full understanding of these curious molecules may lead to a technological bonanza.

Until a few years ago, there were two known forms of pure carbon, graphite and diamond. Then an improbable-seeming third form of carbon was discovered: a hollow cluster of 60 carbon atoms shaped like a soccer ball. Buckminsterfullerene or "buckyballs"--named for the American architect R. Buckminster Fuller, whose geodesic domes had a similar structure--is the roundest, most symmetrical large molecule known. It is exceedingly rugged and very stable, capable of surviving the temperature extremes of outer space.

The discovery was serendipitous. It came inadvertently in 1985 during research at Rice University by Richard E. Smalley, Harry Kroto, and their associates--fundamental experiments aimed at understanding how long-chain molecules are formed in outer space.

Carbon, of course, is one of the most versatile of all atoms. It readily forms complex molecules that are the basis of the chemistry of living organisms and of the nearly infinite variety of industrial polymers that give us synthetic fibers and plastics. Even the pure forms of carbon, graphite and diamond, have important commercial uses, from industrial lubricants to cutting tools. So it might seem obvious that the unique properties and intricate structure of buckminsterfullerene would also lead to important technological applications. At first, however, the molecule was a mystery wrapped in an enigma. Not until 1990 was a convenient way of making this molecule, also known as C60, discovered. That set off an explosion of research among chemists, physicists, and materials scientists to uncover the molecule's secrets. Even mathematicians, intrigued by the symmetry inherent in the molecule's shape, joined in.

The research boom led to a kind of population explosion. Investigators soon discovered a whole family of related molecules, including C70, C84, and other "fullerenes"--clusters as small as C28 and as large as a postulated C240. Researchers also discovered that the process by which fullerenes are created in the laboratory also yields tiny cylindrical shapes, dubbed "buckytubes." Variations in the synthesis process can trap other atoms inside the carbon cage of a single fullerene molecule or within the lattice of a fullerene crystal to form exotic new classes of compounds.

These unusual molecules were originally thought to be very stable and relatively unreactive, but they turn out to have an extraordinarily versatile chemistry. They react with elements from across the periodic table, including metals, fluorine and other halogens, and oxygen. They undergo most of the reactions of the carbon chemical family known as alkenes. And fullerenes readily react with the chemical species known as free radicals--key to the polymerization processes widely used in industry--thus opening up the fullerenes to the manipulative magic of organic chemists. Potential applications, then, can build upon the properties not only of a single molecule but of a rapidly growing tribe of molecules and their chemical derivatives.

Studies of the electronic structure of fullerenes reveal further examples of their surprising versatility. Pure C60 is an insulator: It does not conduct electricity. But when a crystal is "doped" by inserting metallic atoms such as potassium or cesium into empty spaces within the crystal, its properties change dramatically.
With a small amount of doping, the material becomes a semiconductor; with higher levels, it becomes a superconductor; with still more doping, it becomes an insulator again. As a superconductor, doped C60 is not in the big leagues. The critical temperature, above which it is no longer a superconductor, is significantly lower than that of the best inorganic compounds. But doped C60 is the best organic superconductor known. More important, because C60 is a relatively simple system, it may help scientists master the still mysterious theory of high-temperature superconductivity.

Despite a host of new ways of synthesizing C60 and its cousins--from heating graphite electrically to vaporizing it with an electron beam--making and purifying large quantities of the fullerenes (and especially of some of their derivatives) remains difficult. Nonetheless, some fascinating physical properties of the individual molecules and of fullerene crystals have come to light. Fullerenes, for example, are extremely stable when irradiated with lasers; they can absorb more light energy without fragmenting than any other carbon-based species of similar size. Fullerenes also appear to display nonlinear optical effects--giving off light of a different frequency than that which they absorb, for example--and seem to enhance the photoconducting properties of some polymers.

The flow of discoveries has almost been a torrent. Science magazine chose C60 as its "molecule of the year" for 1991. Papers on fullerenes dominated lists of the most-cited research articles in chemistry during 1992 and much of 1993 and showed up among the most-cited articles in physics for the same period.

Speculation and some hard work on potential applications began almost immediately after the discovery of the fullerenes (see "Future Directions?"). Possible applications of interest to industry include optical devices; chemical sensors and chemical separation devices; production of diamonds and carbides as cutting tools or hardening agents; batteries and other electrochemical applications, including hydrogen storage media; drug delivery systems and other medical applications; polymers, such as new plastics; and catalysts.[1]

----------
[1.] Chemical and Engineering News (November 22, 1993), pp. 8-18.
----------

By trapping other atoms in carbon cages, fullerenes may provide ways of increasing the volatility or solubility of refractory materials such as uranium. By incorporating metallic atoms into crystal voids, doped fullerenes may make possible a new type of laser analogous to the semiconductor lasers that drive optical fiber communications. Fullerenes' hollow structure may permit their use as molecular containers, sensors, or drug-delivery agents, as well as provide a catalytic environment in which other reactions can take place. Catalysts, in fact, appear to be a natural application for fullerenes, given their combination of rugged structure and high reactivity. The molecules have already been found to exhibit catalytic activity in hydrogen transfer reactions such as those of coal liquefaction and other refining processes. Additional experiments suggest that fullerenes which incorporate alkali metals possess catalytic properties resembling those of platinum.

Storing hydrogen safely may be the key to new kinds of energy systems and perhaps even to the development of nonpolluting automobiles based on fuel cells.
The C60 molecule can absorb large numbers of hydrogen atoms--almost one hydrogen for each carbon--without disrupting the buckyball structure. This property suggests that fullerenes may be a better storage medium for hydrogen than metal hydrides, the best current material. The affinity of the C60 molecule for hydrogen also suggests a nickel- fullerene battery as a candidate to replace the nickel- cadmium batteries now used for personal electronic devices; fullerenes are much lighter (as well as safer) than the cadmium they would replace, thus resulting in a more portable power supply. The publicity and excitement surrounding the fullerenes has been so intense as to overshadow how much remains unknown about C60 and its cousins. Methods of synthesizing, separating, and purifying fullerenes and their derivatives remain primitive and, as a result, the materials remain too expensive for many commercial applications. Fundamental questions about the structural and electronic properties of carbon clusters also remain unanswered: how they form, how stable they are, what gives rise to their superconductivity, and the relationship between nonlinear optical properties and their size and shape. Study of many of the higher fullerenes and the wide variety of their derivative materials has only begun. The many textbooks that will be written on fullerenes are barely past chapter one. It is too early to make reliable forecasts of commercial potential. Full development of aromatic chemistry based on the benzene ring--which extended organic chemistry from linear molecules into the realm of planar, two-dimensional molecules--took a hundred years. The spherical fullerenes will help take organic chemistry into fully three-dimensional shapes and interactions--well before a hundred years, but not overnight. Improvements in the purity and characterization of bulk fullerene materials may well reveal new possibilities for exploiting their electronic properties, as did similar developments in silicon and other semiconductors. Investing in basic knowledge about the fullerenes may well be the fastest--and almost certainly the surest--route to such payoffs. Do the fullerenes represent another potential "mother lode" of technology? Until the field is mapped and the test shafts drilled, no one knows. But if the early "finds" are any indication of what lies beneath the surface of our ignorance, this could be a big one. ------------------------- Future Directions? Many discoveries have generated interest and preliminary speculation about possible future applications of fullerenes or buckyballs. They include: o Buckytubes as microfibers. These tiny hollow needles of carbon are related to fullerenes. Studies of buckytubes show that they are stronger as fibers than any other known material because they appear to form virtually without defects. Their strength may be used in composite materials, some of which already contain ordinary carbon fibers. Buckytubes can also be filled with metal, potentially turning them into molds for molecular wires 1,000 times thinner than those etched in computer chips. Might we one day fly in superlight airplanes built from buckytube composites, or use computers with atomic-scale electronic components wired together with buckytubes? Too little is known about buckytubes or how to grow them in a controlled way for such questions to be taken seriously, yet. But scientists can--t help dreaming. o Buckyballs and synthetic diamonds. 
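A back-of-envelope calculation--ours, not the article's--shows what "almost one hydrogen for each carbon" implies for storage capacity, assuming a fully loaded C60H60 molecule and standard atomic masses:

    # Hypothetical illustration: hydrogen capacity of a fully loaded
    # buckyball, assuming one hydrogen atom per carbon (C60H60).
    M_C = 12.011   # atomic mass of carbon, g/mol
    M_H = 1.008    # atomic mass of hydrogen, g/mol

    cage = 60 * M_C        # mass of the C60 cage
    stored = 60 * M_H      # mass of the stored hydrogen

    wt_percent = 100 * stored / (cage + stored)
    print(f"Hydrogen content of C60H60: {wt_percent:.1f} percent by weight")

The answer is roughly 7.7 percent hydrogen by weight--a figure that, if achievable in practice, would compare favorably with many metal hydrides.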
The publicity and excitement surrounding the fullerenes has been so intense as to overshadow how much remains unknown about C60 and its cousins. Methods of synthesizing, separating, and purifying fullerenes and their derivatives remain primitive and, as a result, the materials remain too expensive for many commercial applications. Fundamental questions about the structural and electronic properties of carbon clusters also remain unanswered: how they form, how stable they are, what gives rise to their superconductivity, and the relationship between nonlinear optical properties and their size and shape. Study of many of the higher fullerenes and the wide variety of their derivative materials has only begun. The many textbooks that will be written on fullerenes are barely past chapter one.

It is too early to make reliable forecasts of commercial potential. Full development of aromatic chemistry based on the benzene ring--which extended organic chemistry from linear molecules into the realm of planar, two-dimensional molecules--took a hundred years. The spherical fullerenes will help take organic chemistry into fully three-dimensional shapes and interactions--well before a hundred years, but not overnight. Improvements in the purity and characterization of bulk fullerene materials may well reveal new possibilities for exploiting their electronic properties, as did similar developments in silicon and other semiconductors. Investing in basic knowledge about the fullerenes may well be the fastest--and almost certainly the surest--route to such payoffs.

Do the fullerenes represent another potential "mother lode" of technology? Until the field is mapped and the test shafts drilled, no one knows. But if the early "finds" are any indication of what lies beneath the surface of our ignorance, this could be a big one.

-------------------------

Future Directions?

Many discoveries have generated interest and preliminary speculation about possible future applications of fullerenes or buckyballs. They include:

o Buckytubes as microfibers. These tiny hollow needles of carbon are related to fullerenes. Studies of buckytubes show that they are stronger as fibers than any other known material because they appear to form virtually without defects. Their strength may be used in composite materials, some of which already contain ordinary carbon fibers. Buckytubes can also be filled with metal, potentially turning them into molds for molecular wires 1,000 times thinner than those etched in computer chips. Might we one day fly in superlight airplanes built from buckytube composites, or use computers with atomic-scale electronic components wired together with buckytubes? Too little is known about buckytubes or how to grow them in a controlled way for such questions to be taken seriously, yet. But scientists can't help dreaming.

o Buckyballs and synthetic diamonds. If growing synthetic diamonds were easier, lots of applications would be waiting, from diamond-coated kitchen knives that never lose their edge, to diamond semiconductor chips that would be of great interest to the electronics industry. The problem is that diamond does not grow easily on most surfaces. A layer of the C70 fullerene deposited on a silicon chip, however, seems to provide a vastly improved template for growing thin films of diamond. Diamond is reported to grow more than a billionfold faster on the C70 layer. Other experiments suggest fullerenes can also provide a source of bulk synthetic diamonds for diamond-studded cutting tools.

o Buckyballs and AIDS. Many new materials are suspect as possible carcinogens because they can be incorporated into DNA and hence could cause genetic defects. Buckyballs appear to be too large and too round for this to be a concern. As it happens, however, they seem to be just the right size to interfere with the reproduction of the human immunodeficiency virus that causes AIDS. A buckyball derivative fits neatly into the hollow pocket--the active site--at one end of an enzyme that is crucial to the virus's reproductive process. Tried first on a computer screen, this approach also showed promise in a test-tube experiment, leading to efforts to design related molecules that might work even more effectively.

--------------------

INNOVATION HIGHLIGHTS

Putting Wetware To Work

They are soft and squishy, hardly an engineer's dream material! Yet gels--materials that inhabit a curious no man's land between liquids and solids--may become the basis for new kinds of machines, such as an implantable insulin pump for diabetics. And unlike many "hard" materials, gels can respond in an almost intelligent way to their surroundings. A gel-based insulin pump, for example, would have no sensors or moving parts, yet it could potentially detect the glucose level of the blood and, when it rises too high, release additional insulin.

Like the familiar gelatin dessert, gels consist of cross-linked chains or networks of polymers and a solvent such as water. What makes gels potential machines is their ability to shrink or swell--in theory changing their volume by as much as 25-fold within one thousandth of a second--in response to tiny changes in their environment. Depending on the exact chemical composition of the gel, a volume change can be triggered by shifts in temperature, solvent composition, acidity, electric field, or even light, as the fibers in the polymer network either fold on themselves or stretch and expand.

Researchers have learned to understand and then to predict a gel's behavior based on its chemistry, in particular the type of chemical bonding responsible for cross-linking its polymers. This in turn has led to the ability to design gels to meet specific needs. Among the applications being pursued in universities and in industry are artificial muscles for robots or human prostheses; controlled drug delivery systems such as an implantable insulin pump; chemical valves that control the flow of fluids; and even gel-based methods of separating large molecules from solutions (which could be useful in some manufacturing processes). A gel encapsulating a drug, for example, can be designed to shrink tightly in the acidic environment of the stomach--protecting the medicine from attack--but to expand in the intestine, releasing the drug into the body. For such applications, the "wetware" of gels may offer special advantages.
Visualizing Mathematics

Recently, two physicists used pictures to make a point. Using specialized software to compute and visualize the geometric shapes involved, they disproved a conjecture put forth by Lord Kelvin, a British physicist and mathematician, more than 100 years ago. The research illustrates the power of a new trend that is rapidly changing how mathematicians do their work and communicate with each other.

Kelvin was trying to find the geometric pattern that divides space into cells of equal volume and has minimum surface area--a problem related to the properties of foams and "honeycomb" shapes. Over the years, many mathematicians and physicists had studied Kelvin's proposed solution without being able to prove it either right or wrong. Recently, however, Dennis Weaire and Robert Phelan of Trinity College in Dublin found a pattern (shown here) with less surface area than that proposed by Kelvin--in large part because they could use a computer to see alternative geometric patterns.

Although geometric problems can be expressed mathematically in nonvisual ways, "a logical equivalent is not a psychological equivalent," says Berkeley's William Thurston, who believes that people--including mathematicians--can understand geometric information far better than they can communicate it. Hence the need for a better means of visualizing geometric concepts. Computers and increasingly sophisticated software are beginning to supply that means. Written by mathematician Ken Brakke of Susquehanna University, the surface modeling software used to disprove Lord Kelvin's conjecture is a prime example of the new computational and visualization techniques that are transforming mathematical research and teaching. Without it, according to Princeton's Fred Almgren, many problems cannot even be tackled.

Visualization is also finding applications in the industrial world. Brakke's software has been used to study biological membranes, to model how fuel behaves in tanks under weightless conditions for space missions being planned by Martin Marietta and Lockheed, and--since soldering problems are a major reason for electronic microchip failures--to study in detail the behavior of solder droplets.

To Catch A Black Hole

Einstein predicted that the collision of two of the massive collapsed stars known as black holes would alter the shape of space itself. At present, however, human observers have no way of knowing when such an event--among the most momentous of all astronomical phenomena--occurs. Since black holes release no light or other electromagnetic radiation, even the most sophisticated of current instruments cannot detect them.

That will soon change. Scientists at the California Institute of Technology and the Massachusetts Institute of Technology are building a fundamentally new kind of instrument known as the Laser Interferometer Gravitational-Wave Observatory (LIGO). The observatory consists of two identical detectors for gravitational waves--ripples in the fabric of space itself, such as would be given off by the collision of two black holes. The detectors will be separated by several thousand miles in order to rule out false signals. Eventually, two additional stations will be needed to pinpoint the astronomical source of the gravitational waves.

At the heart of the detectors is a laser beam, which is split and then directed down two long pipes containing nothing but vacuum. The beams are reflected back and forth from mirrors attached to heavy weights at either end of the pipes, and then eventually recombined.
Because the two pipes lie at right angles to each other, a passing gravity wave would stretch the weights in one pipe ever-so-slightly further apart and squeeze those in the other very slightly closer together. Such changes would be detected because the recombined laser beams--which normally cancel each other--would instead create an interference pattern. LIGO will be able to measure changes as small as 10^-16 centimeters, about one-hundred-millionth the diameter of a hydrogen atom.

This powerful technology will open new frontiers in both physics and astronomy. It will also provide important advances in the technologies of seismic isolation, optical coatings, and high-power solid state lasers. LIGO will enable physicists to conduct a crucial test of Einstein's theory of general relativity. And it will allow scientists to go hunting for the otherwise undetectable collisions between black holes, as well as other cataclysmic--but invisible--cosmic events.
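The measurement principle can be put in symbols. In a Michelson-type interferometer such as LIGO's detectors, a difference Delta L between the lengths of the two arms shifts the relative phase of the recombined beams. A minimal sketch, in our notation rather than the article's:

    \Delta\phi = \frac{4\pi N \, \Delta L}{\lambda}

where lambda is the laser wavelength and N is the number of round trips the light makes along each pipe. Reflecting the beams back and forth multiplies the effective path length by N, which is one reason displacements as small as 10^-16 centimeters can produce a detectable change in the interference pattern.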
Magnetic Material Probes

As the molecular revolution continues, the key to the development of new technology is often a better understanding of the properties of materials on an atomic level. Superstrong magnetic fields are a powerful tool for probing material properties--in semiconductors, in liquid crystals and polymers, and in biological materials. Magnetic fields themselves are important in technology, and could play an even greater role if scientists could understand and harness the phenomenon of high-temperature superconductivity.

The National High Magnetic Field Laboratory (NHMFL) in Florida, a new national facility, is providing the tools for such research. Open to academic and industrial scientists and supported with both federal (NSF) and state funds, the laboratory will soon have the world's most powerful magnet--an instrument capable of generating magnetic fields 500,000 times stronger than that of the earth--as well as other types of magnets. This specialized equipment will support studies of electron transport in semiconductors, a process basic to faster electronic chips, and will help to characterize magneto-optic phenomena that may one day lead to new technology for the information superhighway. The magnets will also facilitate fundamental studies critical to nanofabrication--the manufacture of motors, sensors, and other devices on a microscopic scale.

An entire wing of the NHMFL is devoted to the application of magnetic sensing techniques to medicine and life science research. The goal is improved magnetic resonance imaging (MRI), already a widely used method of examining the inner workings of living tissue without harming it, and related techniques that can help medical researchers design new drugs, understand how proteins work within the body, and gain insights into the structure of biologically important molecules.

Research on high-temperature superconductivity could lead to ultrafast computers, revolutionary new technologies for storing and transmitting energy, and more efficient means of transportation on land and sea. High-field magnets at the NHMFL can play a crucial role in helping scientists understand the basic phenomena of high-temperature superconductivity as well as the materials from which practical magnets can be constructed. The facility will also be used to experiment with and test high-temperature superconducting magnets as they evolve.

ORIGINS OF THE INFORMATION SUPERHIGHWAY

A retrospective look at the discoveries that made possible the emerging technologies of today and the services of tomorrow.

Ten years ago, the information superhighway could not have been built. Many of the core technologies essential to the convergence of computing and communications--a conjunction at the heart of the information superhighway--were simply not ready. The discoveries that initiated or made these technologies possible go back even further--before anyone dared to dream of a world in which scientists could collaborate across continents, in which every school could be connected to the great libraries and museums, and in which ordinary citizens could tap a wealth of digital services and entertainment from their homes.

The true origins of the information superhighway, in fact, include fundamental research on the physics of surfaces in the late 1940s, obscure university work on microwave oscillators in the early 1950s, and a speculative suggestion in an academic journal in the mid-1960s. The origins also include some exploratory technology development in a corporate R&D laboratory in the early 1980s--undertaken despite the initial skepticism of the laboratory director that it would ever prove to be of commercial value--and dozens of other, similar discoveries whose practical significance at the time was no more than a dream. Such research, if proposed today, would be hard to distinguish from hundreds of similar basic research proposals; it would be difficult to conduct at all in many corporate R&D laboratories. Yet it produced the seeds of a revolutionary technology that is likely to transform homes and workplaces alike.

This is the story--or at least part of the story--of how and why those discoveries were made, of how the core technologies for the information superhighway came to be.[1] These technologies include the semiconductor lasers that generate precisely defined yet infinitesimal light pulses; the optical fibers that carry those pulses; the pulses themselves, which interact with optical fibers to propagate virtually unchanged over distances equal to half the circumference of the earth; and the ever more high-speed electronic chips that generate, manipulate, and help us use the array of digital information that increasingly defines our world.

----------
[1.] This discussion draws upon articles by N. Bloembergen, A. Glass, and A. Fowler in a special edition of Physics Today, Vol. 46, No. 10 (October, 1993), pp. 28-31, 34-38, and 59-62.
----------

One thread of this story begins in a physics laboratory at Columbia University in the early 1950s. In the course of basic research on microwaves, physicist Charles Townes found a way to exploit stimulated emission--the phenomenon, predicted decades earlier by Einstein, in which a small signal triggers the release of much larger pulses of energy from excited molecules. Townes's research led to the invention of the microwave oscillator, or maser, patented in 1954. It did not become commercially important. In 1958, however, Townes and Arthur Schawlow extended the principle of stimulated emission to visible wavelengths, creating the conceptual basis for the laser--for which they later received a Nobel Prize. The first operating laser was built in 1960 and the first semiconductor laser a few years later. It was not until the 1970s, however, that semiconductor lasers could be built directly into microelectronic chips and hence mass produced, and not until the 1980s that they became commercially important.

Lasers, of course, have become a $20 billion-a-year industry encompassing dozens of different configurations and forms.
In addition to making optical communications possible, they are widely used in medicine and surgery, serve as metal cutting tools, and are found in compact disc players and bar code readers at retail checkout counters. In the 1960s, however, semiconductor lasers seemed one of the more unlikely types to find commercial success. The first semiconductor lasers were large, awkward devices that had to be cooled to the temperature of liquid nitrogen to work at all. They had no obvious applications and were much less powerful and practical than other laser types. Nonetheless, the idea was intriguing: Could a laser be made small enough to fit on a chip, much like the fledgling integrated circuits that were just beginning to emerge from laboratories?

One research group working on this problem in the late 1960s was at Bell Laboratories. The scientists knew that silicon, the material of choice for electronic circuits, did not offer much hope for lasers. But other families of semiconductor materials--such as gallium-arsenide compounds--did show light-emitting behavior under certain circumstances. The problem was one of purity. If gallium arsenide could be made pure enough, it might become a laser.

The process of trying to make purer gallium arsenide required some fundamental advances in materials science. The Bell Labs group, for example, invented a new technique that allowed them to deposit material a single layer of molecules at a time. Known as molecular beam epitaxy, the technique is now widely used in materials processing and research. Gradually, month after month, the team made progress. Finally, in 1969, the scientists succeeded in creating a gallium-arsenide laser that could operate at room temperature. Still later, researchers at the University of Illinois and elsewhere were able to create such lasers in miniature and integrate them into semiconductor chips, together with their control circuitry. This meant that semiconductor lasers could be made by the millions, using standard chipmaking processes.

Now, more than 200 million semiconductor lasers are manufactured each year, and production is still increasing. Semiconductor lasers vastly outnumber all other kinds of lasers, and they continue to improve, recently demonstrating the ability to send precisely shaped and precisely timed pulses of light as short as a few femtoseconds (a thousandth of a trillionth of a second). At such speeds, the information housed in the entire Library of Congress could in principle be transmitted--over some future version of the information superhighway--in a single second.

Lasers by themselves do not make an optical communications system. One of the first suggestions that laser light could be transmitted over long distances in a glass fiber--and hence used for communications--was made in a 1966 article by Charles Kao in a scientific journal. His idea was to make a glass waveguide--a special glass fiber in which laser light traveling down the fiber would be refracted from the edges back toward the center and hence guided along the fiber. The first fibers were relatively crude; they broke easily, and defects or impurities in the glass scattered or absorbed enough of the light signal that it couldn't travel very far. But basic research on the chemistry and thermodynamics of glass led to steady improvements--purer glasses that reduced losses, for example, and epoxy coatings that made the fibers more flexible and resistant to corrosion.
In 1970, Corning Glass Works demonstrated a fiber that could carry a light signal a full kilometer while retaining about 1 percent of its power--a big advance at the time, but still far from a commercial system. Today's fibers have losses roughly 100-fold lower, reduced almost to the theoretical limit. The result has been an explosion of optical communications. Optical fibers now carry most U.S. long-distance telecommunications. Total communications traffic over fibers--as measured by the digital equivalent of passenger miles--is 1,000 times greater than a decade ago. In addition, optical fibers are finding applications in medicine, environmental sensing, and the aerospace industry. The manufacture of optical fibers has become a big business, with millions of miles of fiber produced each year.

By the time the information superhighway is in place, however, optical fibers will have been transformed from passive to active devices with even greater carrying capacity. These developments build on fundamental research into the properties of rare earth elements, such as erbium. Glass fibers doped with erbium and powered by a semiconductor laser have been shown to amplify an optical signal. These fiber amplifiers can be spliced directly into a fiber, replacing the regenerating stations that now detect, amplify, and retransmit optical communications signals every 30 to 100 kilometers. Since the comparatively slow electronic components of regenerating stations are the principal bottlenecks in today's long-distance networks, their replacement by fiber amplifiers beginning in 1995 will increase the capacity of long-distance optical communications systems by as much as 100-fold. In addition, optical fiber lasers are under development and optical switches are being explored. Such all-optical communications systems will have a number of advantages, including those arising from a novel type of optical pulse that can be transmitted over such networks.

Not all light pulses, it turns out, are created equal. Some--known as soliton pulses--can interact with glass fibers in an almost magical fashion that allows the pulses to travel further and to carry more information than ordinary light waves. In contrast, even the most perfect low-loss optical fiber gradually disperses an ordinary laser light pulse traveling along it. Quite apart from such dispersive effects, light pulses gradually spread their frequencies and lose their definition, an inherent property known as the nonlinear intensity effect. Soliton pulses overcome both of these limitations simultaneously: the nonlinear properties of the soliton pulses exactly balance and cancel the dispersive effects of the fiber. The result is that soliton pulses retain their shape almost indefinitely. A recent experiment demonstrated error-free propagation of solitons over a distance of more than halfway around the world. Commercial communications systems are expected to make use of solitons before the end of the century.

The origins of soliton transmission lie in fundamental research into mathematical physics and exploratory technology development. Solitons were discovered more than a century ago, but remained an abstract mathematical concept. In the early 1970s, a Bell Labs plasma physicist named Akira Hasegawa realized that the equations describing the transmission of light down an optical fiber permitted a soliton solution--one in which nonlinear and dispersive terms exactly cancelled--and thus that solitons might be used for communications.
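Hasegawa's observation can be stated compactly. In the standard notation of nonlinear fiber optics (ours, not the article's), the pulse envelope A(z, t) obeys the nonlinear Schrodinger equation, in which the two "damaging" effects--dispersion and nonlinearity--appear as separate terms:

    i \frac{\partial A}{\partial z} = \frac{\beta_2}{2} \frac{\partial^2 A}{\partial t^2} - \gamma \, |A|^2 A

Here beta_2 is the group-velocity dispersion of the fiber and gamma its nonlinear coefficient. For beta_2 < 0, the equation admits the fundamental soliton solution

    A(z, t) = \sqrt{P_0} \, \mathrm{sech}(t / T_0) \, e^{i \gamma P_0 z / 2},
    \qquad \gamma P_0 T_0^2 = |\beta_2|,

a pulse whose width T_0 and peak power P_0 are locked together so that the nonlinear and dispersive terms exactly offset each other, and the shape propagates unchanged.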
His theoretical paper on the subject, published in 1972, lay largely ignored for almost a decade. Then in 1980, Linn Mollenauer and his colleagues at Bell Labs demonstrated that soliton pulses could be transmitted through optical fibers. For several years, the team explored the basic physics of solitons, despite widespread skepticism that such pulses were anything more than a curiosity. By the mid-1980s, they had demonstrated that soliton pulses could be transmitted thousands of kilometers. In more recent experiments, soliton pulses have demonstrated error-free transmission rates of 15 billion bits per second--above that reached by conventional light pulses and representing an enormous capacity for carrying information.

Yet solitons have still another advantage. Conventional optical fiber systems carry only a single optical signal because multiple signals might interfere with each other. Soliton pulses of different wavelength, however, can pass through each other with no interaction. This property became important once optical fiber amplifiers were developed. In an all-optical communications system, solitons permit the simultaneous transmission of signals at several different wavelengths, increasing the capacity of the information superhighway still further.

Undergirding the information superhighway, of course, is an enhanced human ability to generate, manipulate, display, and use information. The information processing revolution has been compared to the industrial revolution; its full impact cannot yet be measured because the revolution is still far from over. The critical technology at the core of this new-found human ability is the microelectronic chips that provide increasingly high-speed digital information processing. Such chips are found not only in computers, but in automobiles, household appliances, office equipment, cellular phones, and video and music devices--in virtually every aspect of American life. So commonplace has this technology become--and so routine its remarkable rate of continued improvement--that it is easy to forget that its origins also lie in fundamental research.

The movement of electrons in semiconductors is what performs the digital magic we take for granted. The underlying theory as it applies to semiconductors was developed as part of the quantum mechanics revolution in physics in the 1930s. Research on semiconductors began in earnest after World War II. In 1947, during research on the physics of semiconductor surfaces, John Bardeen and Walter Brattain at Bell Labs observed transistor-like behavior and built the first transistor; their colleague William Shockley followed up on the discovery, devising the more practical junction transistor. All three shared a Nobel Prize. The invention of the integrated circuit--the chip--at the end of the 1950s opened the way to the vast commercial expansion of the semiconductor industry.

Between those beginnings and today's chips, with their thousands of circuits and millions of transistors, lie more than 45 years of effort to grow ever more perfect silicon crystals, to insert precisely controlled amounts of impurities, and to subject the crystals to dozens of processing steps. This has required much new fundamental knowledge--so much so that the surface of a silicon crystal is perhaps the best-studied surface in the history of science. New discoveries and new physics-based tools continue to drive down the size of microelectronic components and to reduce the unit cost of computing power at a rate that is still accelerating.
Ever cheaper computers are transforming established industrial and service businesses, raising productivity, and creating wholly new products and services. There will be no shortage of information to share over the information superhighway.

It is worth noting that none of the critical technologies of the information superhighway discussed here arose directly out of targeted research. In each case, the key observation or inspiration came from more fundamental inquiries. Moreover, behind each mature technology lies a decade--often several decades--of effort to build a base of fundamental knowledge, sometimes supplemented by exploratory technology, on which later, more commercial development could build. Even commercial technology depends on the skills and creativity of individuals trained by exposure to academic research. There are no short cuts: the information superhighway could not be built without the physical and mathematical knowledge on which its central technologies depend.

The process is a continuing one. Just as commercial deployment of the information superhighway is harvesting earlier investments in the creation of basic knowledge, so continued technological progress in the next century--and the commercial competitiveness that goes with it--will rely on current fundamental research. Without the science of today, the technologies of tomorrow will not happen.

Squeezing Still More Into The Information Superhighway

Even the enormous capacity of the emerging optical fiber communications system is not infinite. A two-hour feature film, for example, contains about 1.5 trillion bits of digital information, and purveyors of movies-on-demand will want to serve as many customers as possible through a single optical fiber. New methods that compress the information before transmission, and then decompress it afterwards, will help. Compressing text is relatively easy; images, however, present more of a challenge.

One powerful technique is based on methods for transforming a difficult mathematical problem into one that is easier to solve; an example is the Fourier transform, in use for more than a century. In the 1960s, mathematicians developed powerful new transformation methods for difficult mathematical objects on which Fourier transforms could not be used. The methods, known as Calderon-Zygmund theory, enriched the mathematical literature but were not put to any practical use. Electrical engineers independently developed computer algorithms that accomplished a similar transformation; in effect, these algorithms provided one way of implementing a fast Calderon transform. Theory and practice came together in a major new synthesis in the early 1980s, now known as wavelet theory.

The new theory had important practical implications, including more general and flexible methods of image compression. Wavelet theory allows the digital description of an image--a set of numbers--to be transformed to other sets of numbers in ways that single out the main features of the image from details of lesser importance. The technique also provides rules for deciding when the less important details can safely be ignored. Compression results from transmitting the description of the main features first, then less important details, while omitting unimportant information. Wavelet theory thus allows a choice of how much compression to use, based on the quality of the desired image.
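The idea of separating main features from detail can be shown in a few lines of code. The sketch below is purely illustrative--one level of the simple Haar wavelet applied to a one-dimensional signal, with small detail coefficients discarded--whereas practical image coders of the kind described here use two-dimensional transforms, smoother wavelets, many levels, and careful encoding of the retained coefficients:

    # Toy illustration of wavelet-style compression: one level of the
    # Haar transform splits a signal into averages (main features) and
    # differences (details); small details are then dropped.
    import numpy as np

    def haar_forward(x):
        pairs = x.reshape(-1, 2)
        averages = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
        details = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
        return averages, details

    def haar_inverse(averages, details):
        x = np.empty(2 * len(averages))
        x[0::2] = (averages + details) / np.sqrt(2)
        x[1::2] = (averages - details) / np.sqrt(2)
        return x

    rng = np.random.default_rng(0)
    signal = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.05 * rng.standard_normal(64)

    avg, det = haar_forward(signal)
    det_kept = np.where(np.abs(det) > 0.05, det, 0.0)   # drop small details

    kept = len(avg) + np.count_nonzero(det_kept)
    error = np.max(np.abs(signal - haar_inverse(avg, det_kept)))
    print(f"coefficients kept: {kept} of {len(signal)}")
    print(f"worst-case reconstruction error: {error:.3f}")

Dropping the small differences is what "ignoring unimportant details" means in practice: the reconstructed signal differs from the original only where the discarded coefficients lived, and the fraction of coefficients kept sets the compression achieved.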
For high quality images, wavelet methods can reduce the information transmitted 12-fold; for poorer quality (but still readily recognizable) images, 30-fold compression is possible. Wavelet methods could compress a full-length feature film enough to be transmitted over the optical fiber system of the future in a second or two and could perhaps even enable a compressed film to be transmitted (much more slowly) over a copper telephone wire. Wavelet methods are already being used to compress the FBI's fingerprint files, making them far more quickly and readily accessible. Image compression software incorporating the wavelet technique is beginning to appear in the commercial market.
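A quick calculation using only figures quoted in this article (1.5 trillion bits per film; 12- and 30-fold compression; the 15-billion-bit-per-second soliton demonstration mentioned earlier) shows why the fiber system "of the future" is needed for second-scale delivery:

    # Order-of-magnitude check using numbers quoted in this article.
    FILM_BITS = 1.5e12        # two-hour feature film, in bits
    SOLITON_RATE = 15e9       # demonstrated soliton rate, bits per second

    for compression in (1, 12, 30):
        bits = FILM_BITS / compression
        seconds = bits / SOLITON_RATE
        print(f"{compression:>2}-fold compression: {bits:9.2e} bits, "
              f"about {seconds:5.1f} s at 15 Gbit/s")

Even at 30-fold compression, the demonstrated soliton rate delivers a film in a few seconds; the one-to-two-second figure presumes the higher-capacity, amplifier-based systems described above.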
THE NATIONAL SCIENCE FOUNDATION'S DIRECTORATE FOR MATHEMATICAL AND PHYSICAL SCIENCES

The fascinating results discussed in the preceding pages grew out of investments made in people, their ideas, and their education over a period of many years. Society now benefits from this work and sees dramatic progress in areas related to modern information--progress that consists of myriad incremental pieces. Benefits in other areas are still to come, but promise to be abundant as well.

Major breakthroughs and events capture the imagination of us all. However, it is useful to pause and recognize that the nation enjoys a unique competitive advantage in innovation--an advantage aided by investments in the core knowledge base of the mathematical and physical sciences. This knowledge base has been generated by universities, industry, and the national laboratories, with federal and private support from many sources. The changes now underway in the post-cold war era are challenging all sectors of our society. Clearly, this is a time of great opportunity--and equally great responsibility--for ensuring the advantage that knowledge contributes to enhancing our society. The National Science Foundation (NSF) accepts this stewardship role and is engaging the technical community in a dialogue to help define productive new interactions.

The mathematical and physical sciences and the technology they uniquely produce are critical to America's economic future over a wide range of industrial activity, from advanced manufacturing and materials development to biotechnology to high-performance computing. NSF support for these objectives is thus an investment in the future. A significant part of this support is directed to the education and training of undergraduate, graduate, and postdoctoral students. Funding mechanisms for research include grants to individual scientists, to groups, and to facilities that support advanced science and engineering.

The Mathematical and Physical Sciences Directorate supports a wide array of research and education activities in astronomical sciences, chemistry, materials research, mathematical sciences, and physics. In most of these areas, NSF provides approximately 50 percent of federal support for basic research in academic institutions. In addition, the Directorate supports major facilities and shared tools such as the Gemini advanced telescopes, the National High Magnetic Field Laboratory, and the Laser Interferometer Gravitational-Wave Observatory. The Directorate maintains close connections with other agencies, other disciplines, and industry to enhance the impact, relevance, and leverage of its investment. In addition to NSF's long-standing commitment to investment in creative individuals, the complexity of science has encouraged support of new efforts coupling fields of science and sectors of performers.

The Mathematical and Physical Sciences Directorate provides support for 10 Science and Technology Centers, 10 Materials Research Science and Engineering Centers, and 2 Mathematical Sciences Institutes. These interdisciplinary centers provide a unique focus on areas of strategic importance, offer new models for education, and generate new interactions between industry and universities. The Directorate also participates in a number of NSF-wide programs, including special projects for women and minorities, research experiences for undergraduates, postdoctoral fellowships, young investigator awards, and Presidential Faculty Fellowships.

Support for the mathematical and physical sciences has remained relatively constant in real terms over the past decade (see bar chart). That support, focused on fundamental research, directly contributes to national research priorities to a significant extent.

NOTE: CHARTS ARE NOT INCLUDED IN THIS ELECTRONIC VERSION!

The mathematical and physical science community looks forward to contributing to the future of the nation and to working in partnership with others in this process.

The National Science Foundation provides awards for research in the sciences and engineering. The awardee is wholly responsible for the conduct of such research and preparation of the results for publication. The Foundation, therefore, does not assume responsibility for the research findings or their interpretation.

The Foundation welcomes proposals from all qualified scientists and engineers, and strongly encourages women, minorities, and persons with disabilities to compete fully in any of the research and related programs described here. In accordance with federal statutes, regulations, and NSF policies, no person on grounds of race, color, sex, national origin, or disability shall be excluded from participation in, denied the benefits of, or be subject to discrimination under any program or activity receiving financial assistance from the National Science Foundation.

The National Science Foundation has TDD (Telephone Device for the Deaf) capability, which enables individuals with hearing impairment to communicate with the Foundation about NSF programs, employment, or general information. This number is 306-0090.

Electronic Dissemination. This and other NSF publications are available electronically through STIS (Science and Technology Information System), NSF's on-line publishing system. To get a paper copy of the "STIS Flyer," NSF 94-4, call the Publications Section at (301) 306-1130. For an electronic copy, send an E-mail message to stisfly@nsf.gov (Internet).

Ordering by Electronic Mail or FAX. If you have access to the Internet, you may order publications electronically. Internet users should send requests to pubs@nsf.gov. In your request, include the NSF publication number and title, number of copies, and your name and complete mailing address. Printed publications may also be ordered by FAX (703-644-4278). Publications should be received within 3 weeks after receipt of the request.