It’s hard enough to thread a needle. Imagine trying to manipulate threads and needles miniaturized to one-millionth their normal size. Now, you’re thinking like the emerging group of nanotechnologists whose growing dexterity at fashioning new materials and devices may eventually improve every arena of technology, from aerospace to drug development. While many researchers focus on developing tools for working on nanoscale materials, others are pursuing a virtual pathway toward nanotechnology applications. As ever more powerful computers have become ever more affordable, computational nanoscientists can readily simulate materials atom by atom.
“Now, we can do many marvelous things,” says Sidney Yip, a computational materials scientist at the Massachusetts Institute of Technology.
By running such precise computer models of the chemical and physical properties of materials, researchers can examine tiny constructions more thoroughly than a bench scientist ever could.
Models can also simulate materials that have been envisioned but not yet created. For instance, in the late 1990s, Deepak Srivastava of the computational nanotechnology group at NASA Ames Research Center in Moffett Field, Calif., used computer models to study carbon nanotubes—sheets of graphite rolled into tubes. The models indicated that rolling a graphite sheet in one particular orientation yields a nanotube that behaves like a metal.
Rolling the sheet in a slightly different way, however, produces a tube that behaves like a semiconductor. The model also suggested that connecting tubes with these subtly different structures would generate a nanodevice capable of functioning like a transistor.
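The rule the models captured is now standard in nanotube physics: a tube’s electronic character follows from its roll-up, or chiral, indices (n, m), and the tube is metallic when n − m is divisible by 3, semiconducting otherwise. A minimal sketch of that rule:

```python
def is_metallic(n: int, m: int) -> bool:
    """Standard chirality rule for single-walled carbon nanotubes:
    a tube with roll-up indices (n, m) is metallic when n - m is
    divisible by 3, and semiconducting otherwise."""
    return (n - m) % 3 == 0

# Armchair tubes (n, n) are always metallic; most zigzag tubes (n, 0) are not.
print(is_metallic(5, 5))   # armchair -> True (metallic)
print(is_metallic(10, 0))  # zigzag -> False (semiconducting)
```

Connecting a metallic segment to a semiconducting one, as the NASA Ames models proposed, is what yields transistor-like behavior.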
“At the time, people thought it was a little far out because no one knew how to build these devices,” says Srivastava. Several years later, researchers motivated by the computer models proved the NASA Ames team right by fabricating such a nanotube-based switch in the lab.
One arena in which simulations are moving nanotechnology forward is energy. Academia and industry have been investing millions of dollars in research preparing for a hydrogen economy, where fossil fuels would be replaced by cleaner, more-abundant, and more-efficient hydrogen fuel. That goal, however, requires new materials for storing and extracting usable energy from hydrogen sources.
Along those lines, computational physicist Kyeongjae Cho of Stanford University is using models to design fuel cell materials. In a fuel cell, a catalyst strips electrons from hydrogen atoms to generate electricity. Typically, the catalyst consists of nanoparticles of platinum. “However, platinum is not a cheap metal, and it’s a finite resource,” says Cho. “There will be lots of problems if we try to launch a hydrogen economy based on fuel cells, because we don’t have enough platinum to do the job.”
To tackle this problem, the Stanford researchers have been using computer models of atomic-scale structures to learn what makes platinum a good catalyst. At the nanometer scale, a material’s properties are dictated by the arrangement of its atoms. For instance, if you assemble carbon atoms so that each one bonds to four others in a tetrahedral pattern, you end up with diamond.
But if you change that arrangement to a planar structure with only three atoms neighboring each carbon atom, it becomes graphite.
With metals such as platinum, the number of possible configurations is huge. To find the arrangements most likely to occur in real catalytic nanoparticles, the Stanford team used computer models to simulate many different atomic configurations and calculate their stabilities. The researchers identified the most stable candidate—a configuration of 611 atoms measuring 3.1 nanometers in diameter, as it turned out—and calculated that it would be efficient at stripping electrons from hydrogen atoms in a fuel cell.
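In outline, the screening loop is simple: generate candidate configurations, compute each one’s energy with an atomistic model, and keep the lowest-energy (most stable) candidate. A toy sketch, where the made-up energies stand in for the expensive quantum-mechanical calculations the Stanford team actually ran:

```python
# Illustrative only: each energy below would come from an expensive
# atomistic calculation; the names and numbers are hypothetical.
candidate_energies = {        # eV per atom, hypothetical
    "configuration A": -5.12,
    "configuration B": -5.47,
    "configuration C": -5.31,
}

# Lower energy means more stable, so the screening step is an argmin.
most_stable = min(candidate_energies, key=candidate_energies.get)
print(most_stable)  # prints "configuration B", the lowest-energy candidate
```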
The next step, says Cho, is to use detailed knowledge of what makes a platinum catalyst tick to scour through databases of nonplatinum materials and find ones with similar properties but that are more abundant and affordable.
In the same vein, Cho and his colleagues are using computer models to compare the hydrogen-storage capacities of many different carbon nanotube structures. Several years ago, experiments indicated that nanotubes could be ideal for this application. Subsequent studies led to the opposite conclusion, says Cho. “So, there was big disagreement about this,” he adds.
The Stanford researchers performed computational analyses of how hydrogen interacts with carbon nanotubes, and they determined that hydrogen-storage capacity is a function of the tube’s diameter. A narrow nanotube binds hydrogen too strongly, making it difficult to recover the hydrogen to, say, fill up your fuel-cell car at a future hydrogen service station. A wide nanotube doesn’t bind hydrogen strongly enough, but a tube 1 nanometer in diameter is just right.
In a typical batch of real carbon nanotubes as they’re now produced, there are tubes of many diameters. Only a few percent have the right dimensions for hydrogen storage. “The question becomes whether we can control the size of nanotubes to produce only those with the optimum size,” says Cho.
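Screening a batch for usable tubes amounts to keeping only those whose hydrogen binding energy falls in a workable window: strong enough to hold the gas, weak enough to release it on demand. A sketch with hypothetical numbers (the real diameter-vs.-binding trend came from Cho’s atomistic calculations):

```python
# Hypothetical binding energies (eV) keyed by tube diameter (nm).
# Narrow tubes bind too strongly, wide ones too weakly, per Cho's trend.
binding_energy = {0.7: 0.65, 1.0: 0.35, 1.4: 0.12, 2.0: 0.05}

# A usable storage material binds in a moderate window (bounds illustrative).
LOW, HIGH = 0.2, 0.5

usable = [d for d, e in binding_energy.items() if LOW <= e <= HIGH]
print(usable)  # only the ~1-nm tube falls inside the window
```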
Data in motion
Besides pointing the way toward better hydrogen-storage materials, computer models at the atomic scale can help researchers design more creative and intricate nanomachines and explore many design variations before undertaking the painstaking work of fabricating the devices in the lab.
Consider Mario Blanco and his graduate student Santiago Solares, at the Materials and Process Simulation Center at the California Institute of Technology in Pasadena. They designed a nanovalve for controlling the flow of fluids. In their design, a carbon nanotube lies between two silicon cantilevers that act like tweezers. To close the valve, the top cantilever presses down on the hollow nanotube, squeezing it shut. The entire system contains only about 75,000 atoms.
To identify the best way to control the cantilever, the researchers used computer models to simulate both chemically and electrically based switching mechanisms. In their simulations, adding an acid to the system flooded the cantilever’s surface with mutually repulsive negative charges. This caused the top surface of the upper cantilever to expand relative to its bottom surface, and the cantilever curled downward. A neutralizing agent could return the simulated cantilever to its original configuration.
“But changing the charges on the surface takes time, on the order of milliseconds,” says Blanco. “Sometimes, you want things to happen much faster than that.” To map out potential laboratory routes toward faster valve action, the researchers added an electrode and coated the cantilever with gold. In their simulated system, when the electrode pumped electrons onto the cantilever surface, the switching speed was 10 to 100 nanoseconds, as fast as a computer chip, says Blanco. This work offers scientists insight into how to fabricate such a device in the laboratory.
Blanco’s next project is to simulate a light-driven molecular machine. His design consists of a buckyball—a soccer-ball-shaped molecule made of 60 carbon atoms—linked to a light-absorbing unit called a porphyrin, which is attached to a molecular assembly that acts as a tiny muscle. When the porphyrin absorbs light, one of its electrons hops over to the buckyball. That stimulates the nanomuscle component to donate one of its electrons to the porphyrin, a transfer that causes the muscle to change shape. Such changes could be useful, for example, for converting solar energy directly into mechanical energy.
Each component of this molecular machine has already been made and tested in the lab, says Blanco. Before laboratory scientists assemble the components into working machines, further simulations can alter the length of the muscle and tweak the system in other ways. “We can create these modifications much faster on the computer to figure out which ones to test in the lab,” says Blanco.
Simulations are also proving central to revealing how lab-made nanostructures work. For instance, Fraser Stoddart, a colleague of Blanco’s in the chemistry department at Caltech, has developed complex molecules called rotaxanes. They act as electronic switches, changing from a conducting to a nonconducting state when a voltage is applied. Such molecular switches, designed to function like transistors on a chip, could lead to computers that are faster and more powerful than those available today.
“The level of complexity of what Stoddart’s been able to make in the laboratory is an order of magnitude higher than what anyone else is working on,” says Blanco. “So, there’s a great deal of skepticism as to whether or not these things do what the researchers claim they do.” To weigh in on this controversy, Blanco and his team modeled a rotaxane atom by atom.
Each rotaxane molecule consists of a ring wrapped around a long rod. When the ring is at one end of the rod, the molecule conducts electricity. When the ring slides to the other end, the rotaxane becomes nonconducting.
By modeling the electron orbitals surrounding the rotaxane’s several hundred atoms, the researchers discovered that when the rotaxane is in the nonconducting form, the orbitals are discontinuous; they don’t span the entire length of the molecule. When the ring moves to the other position, the molecular orbitals span the entire length, providing an unbroken pathway for electrons. Says Blanco: “We were able to show that indeed this was acting like a switch.”
Nuts and bolts
Despite amazing advances in computing power, atomistic simulations are still computationally intensive, and there are limits to the detail and scope that researchers can include. For Blanco’s nanovalve simulations, the computations associated with pressing down on a nanotube took four computers working in parallel, nonstop, for 3 to 5 days. “And that’s just for the downward motion,” says Blanco.
To run these simulations, a computer has to calculate the position and velocity of each atom in a system. As the atom count increases, the computational burden grows steeply: with pairwise forces alone, the number of interactions grows roughly as the square of the number of atoms. Even for modest simulations of thousands of atoms, the computer has to solve many thousands of equations in a massive calculation that integrates all the different forces acting on the system.
The calculation is especially complex when it considers not just the atoms but also each of their electrons. Even the fastest computers today can simulate only a few hundred atoms and their electrons in a reasonable amount of time. “For a system with 100,000 atoms, this is nearly impossible,” says Blanco.
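The core of the cost is easy to see even without the electrons: if every atom can interact with every other atom, the number of distinct pairs grows quadratically, and electronic-structure methods scale worse still. A quick count:

```python
def pair_count(n_atoms: int) -> int:
    """Number of distinct atom pairs in an all-pairs force calculation:
    n * (n - 1) / 2."""
    return n_atoms * (n_atoms - 1) // 2

for n in (100, 1_000, 100_000):
    print(n, pair_count(n))
# Going from 100 atoms to 100,000 multiplies the pair count
# by roughly a million, not a thousand.
```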
Blanco and others, therefore, rely on more-approximate models of atomic behavior. First, they calculate the electron behavior of 100 atoms. Then, they extrapolate the data to a larger simulation comprising hundreds of thousands of atoms and then to millions of atoms or more.
Known as multiscale modeling, this procedure builds bridges between the scales of electrons, atoms, and molecules, combining them into accurate models of the nanoworld. “I can make the different scales talk to each other,” says Srivastava.
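One common way to bridge the scales is to fit a cheap classical potential to a few expensive quantum-mechanical reference calculations, then apply the fitted potential to a system far too large for the quantum method. A minimal sketch with invented reference data and a simple spring model standing in for a real interatomic potential:

```python
# Hypothetical "quantum" reference data: (bond length in nm, energy in eV).
reference = [(0.9, 0.050), (1.0, 0.000), (1.1, 0.048)]

r0 = 1.0  # equilibrium bond length seen in the reference data

# Least-squares fit of E = 0.5 * k * (r - r0)^2 for the stiffness k.
num = sum(0.5 * (r - r0) ** 2 * e for r, e in reference)
den = sum((0.5 * (r - r0) ** 2) ** 2 for r, e in reference)
k = num / den

def spring_energy(r):
    """Cheap surrogate potential, usable on millions of atoms."""
    return 0.5 * k * (r - r0) ** 2

print(round(k, 1))  # fitted stiffness, eV/nm^2
```

Real multiscale schemes fit far richer potentials, but the workflow is the same: expensive calculation on a small system, cheap surrogate on the large one.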
Computational nanoscientists model materials at different time scales as well as at different size scales. For example, in the mid-1990s, computer models predicted that a carbon nanotube could stretch 30 percent of its length before breaking, says Srivastava. “Everyone got so excited, and this number remained in the literature for 4 or 5 years,” he recalls.
But later, when researchers actually tried to stretch a nanotube in the lab, they found the breaking strain was between 6 percent and 10 percent.
As it turned out, the early simulations had only enough computational power to calculate the stretching of a carbon nanotube over a split second. When researchers went through the painstaking process of stretching a carbon nanotube in the lab, however, the process took about an hour. Running a computer simulation that long could take months.
Srivastava and his colleagues recently took a second look at the simulation data from the 1990s and identified a unique pattern in the rate at which the nanotube elongated. They then extrapolated that pattern from less than a second to an hour. The result was a breaking strain of 9 percent, confirming the results from the lab.
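The time-scale extrapolation Srivastava describes can be sketched as fitting breaking strain against the logarithm of loading time and extending the fit from simulation times of tiny fractions of a second out to an hour. The data points below are invented for illustration; only the log-linear trend, a simple stand-in for the pattern his team identified, follows the article:

```python
import math

# Hypothetical (loading time in seconds, breaking strain in %) from
# fast simulations; real values would come from the 1990s simulation data.
sim_points = [(1e-12, 30.0), (1e-10, 27.0), (1e-8, 24.0)]

# Least-squares fit of strain = a + b * log10(time).
xs = [math.log10(t) for t, s in sim_points]
ys = [s for t, s in sim_points]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs
)
a = ybar - b * xbar

# Extrapolate from picosecond-scale simulations to a one-hour lab pull.
strain_at_one_hour = a + b * math.log10(3600.0)
print(round(strain_at_one_hour, 1))  # a few percent, near the lab range
```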
Knowing the breaking strain of a carbon nanotube could help researchers at NASA Ames design a next generation of strong, lightweight materials for future space missions. Srivastava and his colleagues are currently working on mixing carbon nanotubes with polymers to make new thermal-tile materials for the space shuttle. In theory, the high strength and superior heat conductivity of carbon nanotubes could yield more-capable heat shields than those available today.
Computer power will continue to increase in the future, and with that boost will come ever-more-sophisticated simulations. These superior models will become especially important as researchers begin assembling millions and billions of nanowidgets into macroscopic machines. Atomistic models are currently limited to individual devices—a single transistor, a valve, or a carbon nanotube. Eventually, researchers will need to run simulations of multiple nanotube transistors, for example, all working together on a single microchip.
“The best science is going on at the atomic and electronic scale,” says MIT’s Yip. “Now, we need to link this science with the scale on which we live.”