What made the last century’s great innovations possible?

Transforming how people live requires more than scientific discovery


In the early decades of the 20th century, automobiles, telephone service and radio (broadcast of the 1920 U.S. presidential election results from KDKA station in Pittsburgh is shown) were transforming life.

Hulton Archive/Getty Images

In the early decades of the 20th century, a slew of technologies began altering daily life with seemingly unprecedented speed and breadth. Suddenly, consumers could enjoy affordable automobiles. Long-distance telephone service connected New York with San Francisco. Electric power and radio broadcasts came into homes. New methods for making synthetic fertilizer portended a revolution in agriculture. And on the horizon, airplanes promised a radical transformation in travel and commerce.

As the technology historian Thomas P. Hughes noted: “The remarkably prolific inventors of the late nineteenth century, such as [Thomas] Edison, persuaded us that we were involved in a second creation of the world.” By the 1920s, this world — more functional, more sophisticated and increasingly more comfortable — had come into being.

Public figures like Edison or, say, Henry Ford were often described as inventors. But a different word, one that caught on around the 1950s, seemed more apt for describing the technological ideas making way for modern life: innovation. While its origins go back some 500 years (the word first described a novel legal idea and, later, a religious one), its popularization was a post–World War II phenomenon.

The elevation of the term likely owes a debt to the Austrian-American economist Joseph Schumpeter, according to the late science historian Benoît Godin. In his academic writings, Schumpeter argued that vibrant economies were driven by innovators whose work replaced existing products or processes. “Innovation is the market introduction of a technical or organizational novelty, not just its invention,” Schumpeter wrote in 1911.

An invention like Fritz Haber’s process for making synthetic fertilizer, developed in 1909, was a dramatic step forward, for example. Yet what changed global agriculture was a broad industrial effort to transform that invention into an innovation — that is, to replace a popular technology with something better and cheaper on a national or global scale.

In the mid-century era, one of the leading champions of America’s innovation capabilities was Vannevar Bush, an MIT academic. In 1945, Bush worked on a landmark report — famously titled “Science, The Endless Frontier” — for President Harry Truman. The report advocated for a large federal role in funding scientific research. Though Bush didn’t actually use the word innovation in the report, his manifesto presented an objective for the U.S. scientific and industrial establishment: Grand innovative vistas lay ahead, especially in electronics, aeronautics and chemistry. And creating this future would depend on developing a feedstock of new scientific insights.

Vannevar Bush was one of the 20th century’s leading champions of American innovation. His landmark report, “Science, The Endless Frontier,” advocated for federal funding for scientific research.

MPI/Getty Images

Though innovation depended on a rich trove of discoveries and inventions, the innovative process often differed, both in its nature and complexity, from what occurred within scientific laboratories. An innovation often required larger teams and more interdisciplinary expertise than an invention. Because it was an effort that connected scientific research to market opportunities, it likewise aimed to have both society-wide scale and impact. As the radio, telephone and airplane had proved, the broad adoption of an innovative product ushered in an era of technological and social change.

Bringing inventions “to scale” in large markets was precisely the aim of big companies such as General Electric or American Telephone & Telegraph, which was then the national telephone monopoly. Indeed, at Bell Laboratories, which served as the research and development arm of AT&T, a talented engineer named Jack Morton began to think of innovation as “not just the discovery of new phenomena, nor the development of a new product or manufacturing technique, nor the creation of a new market. Rather, the process is all these things acting together in an integrated way toward a common industrial goal.”

Morton had a difficult job. The historical record suggests he was the first person in the world asked to figure out how to turn the transistor, created in December 1947, from an invention into a mass-produced innovation. He put tremendous energy into defining his task — a job that in essence focused on moving beyond science’s eureka moments and pushing the century’s technologies into new and unexplored regions.

From invention to innovation

In the 1940s, Vannevar Bush’s model for innovation was what’s now known as “linear.” He saw the wellspring of new scientific ideas, or what he termed “basic science,” as eventually moving in a more practical direction toward what he deemed “applied research.” In time, these applied scientific ideas — inventions, essentially — could move toward engineered products or processes. Ultimately, in finding large markets, they could become innovations.

In recent decades, Bush’s model has come to be seen as simplistic. The educator Donald Stokes, for instance, has pointed out that the line between basic and applied science can be indistinct. Bush’s paradigm can also work in reverse: New knowledge in the sciences can derive from technological tools and innovations, rather than the other way around. This is often the case with powerful new microscopes, for instance, which allow researchers to make observations and discoveries at tinier and tinier scales. More recently, other scholars of innovation have pointed to the powerful effect that end users and crowdsourcing can have on new products, sometimes improving them dramatically — as with software — by adding new ideas for their own use.

Above all, innovations have increasingly proved to be the sum of unrelated scientific discoveries and inventions; combining these elements at a propitious moment in time can result in technological alchemy. Economist Mariana Mazzucato, for instance, has pointed to the iPhone as an integrated wonder of myriad breakthroughs, including touch screens, GPS, cellular systems and the Internet, all developed at different times and with different purposes.

At least in the Cold War era, when military requests and large industrial labs drove much of the new technology, the linear model nevertheless worked well. Beyond AT&T and General Electric, corporate titans like General Motors, DuPont, Dow and IBM viewed their R&D labs, stocked with some of the country’s best scientists, as foundries where world-changing products of the future would be forged.

These corporate labs were immensely productive research enterprises and were especially good at producing new patents. But not all their scientific work was suitable for driving innovations. Bell Labs, for instance, funded a small laboratory in Holmdel, N.J., set amid several hundred acres of open fields, where a team of researchers studied radio wave transmissions.

Karl Jansky, a young physicist, installed on the grounds a movable antenna that revealed radio waves emanating from the center of the Milky Way. In doing so, he effectively founded the field of radio astronomy. And yet, he did not create anything useful for his employer, the phone company, which was more focused on improving and expanding telephone service. To Jansky’s disappointment, he was asked to direct his energies elsewhere; there seemed no market for what he was doing.

Above all, corporate managers needed to perceive an overlap between big ideas and big markets before they would dedicate funding and staff toward developing an innovation. Even then, the iterative work of creating a new product or process could be slow and plodding — more so than it may seem in retrospect. Bell Labs’ invention of the point-contact transistor, in December 1947, is a case in point. The first transistor was a startling moment of insight that led to a Nobel Prize. Yet in truth, what was produced that year changed the world very little.

The three credited inventors — William Shockley, John Bardeen and Walter Brattain — had found a way to create a very fast switch or amplifier by running a current through a slightly impure slice of germanium. Their device promised to transform modern appliances, including those used by the phone company, into tiny, power-sipping electronics. And yet the earliest transistors were difficult to manufacture and impractical for many applications. (They were tried in bulky hearing aids, however.) What was required was a subsequent set of transistor-related inventions to transform the breakthrough into an innovation.

John Bardeen, William Shockley and Walter Brattain (shown from left to right) are credited with the invention of the transistor in 1947. But there were several hurdles to overcome before the transistor could transform electronics.

Hulton Archive/Getty Images

The first crucial step was the junction transistor, a tiny “sandwich” of various types of germanium, theorized by Shockley in 1948 and created by engineering colleagues soon after. The design proved manufacturable by the mid-1950s, thanks to efforts at Texas Instruments and other companies to transform it into a dependable product.

A second leap overcame the problems of germanium, which performed poorly under certain temperature and moisture conditions and was relatively rare. In March 1955, Morris Tanenbaum, a young chemist at Bell Labs, hit on a method for making a transistor from a slice of silicon. It was, crucially, not the world’s first silicon transistor — that distinction goes to a device created a year before. But Tanenbaum reflected that his design, unlike the others, was easily “manufacturable,” which defined its innovative potential. Indeed, he realized its value right away. In his lab notebook on the evening of his insight, he wrote: “This looks like the transistor we’ve been waiting for. It should be a cinch to make.”

Finally, several other giant steps were needed. One came in 1959, also at Bell Labs, when Mohamed Atalla and Dawon Kahng created the first silicon metal-oxide-semiconductor field-effect transistor — known as a MOSFET — which used a different architecture than either junction or point-contact transistors. Today, almost every transistor manufactured in the world, trillions each second, results from the MOSFET breakthrough. This advance allowed for the design of integrated circuits and chips implanted with billions of tiny devices. It allowed for powerful computers and moonshots. And it allowed for an entire world to be connected.

Getting there

The technological leaps of the 20th century — microelectronics, antibiotics, chemotherapy, liquid-fueled rockets, Earth-observing satellites, lasers, LED lights, disease-resistant seeds and so forth — derived from science. But these technologies also spent years being improved, tweaked, recombined and modified before they achieved the scale and impact necessary to become innovations.

Some scholars — the late Harvard professor Clayton Christensen, for instance, who in the 1990s studied the way new ideas “disrupt” entrenched industries — have pointed to how waves of technological change can follow predictable patterns. First, a potential innovation with a functional advantage finds a market niche; eventually, it expands its appeal to users, drops in cost and step by step pushes aside a well-established product or process. (Over time the transistor, for example, has mostly eliminated the need for vacuum tubes.)

But there has never been a comprehensive theory of innovation that cuts across all disciplines, or that can reliably predict the specific path by which we end up transforming new knowledge into social gains. Surprises happen. Within any field, structural obstacles, technical challenges or a scarcity of funding can stand in the way of development, so that some ideas (a treatment for melanoma, say) move to fruition and broad application faster than others (a treatment for pancreatic cancer).

There can likewise be vast differences in how innovation occurs across fields. In energy, for example, which involves vast integrated systems and requires durable infrastructure, innovations can take far longer to achieve scale than in other fields, the environmental scientist and policy historian Vaclav Smil has noted. In software development, by contrast, new products can be rolled out cheaply and can reach a huge audience almost instantly.

At the very least, we can say with some certainty that almost all innovations, like most discoveries and inventions, result from hard work and good timing — a moment when the right people get together with the right knowledge to solve the right problem. In one of his essays on the subject, business theorist Peter Drucker pointed to the process by which business managers “convert society’s needs into opportunities” as the definition of innovation. And that may be as good an explanation as any.  

Even innovations that seem fast — for instance, mRNA vaccines for COVID-19 — are often a capstone to many years of research and discovery. Indeed, the scientific groundwork that preceded the vaccines’ rollout had already developed the methods that could be used to solve a problem once the need became most acute. What’s more, the urgency of the situation presented an opportunity for three companies — Moderna and, in collaboration, Pfizer and BioNTech — to take a vaccine invention and bring it to scale within a year.

Innovations that seem fast, like vaccines for COVID-19, often rely on many years of scientific discovery, plus a societal need.

Mario Tama/Getty Images

“The history of cultural progress is, almost without exception, a story of one door leading to another door,” the tech journalist Steven Johnson has written. We usually explore just one room at a time, and only after wandering around do we proceed to the next, he writes. Surely this is an apt way to think of our journey up to now. It might also lead us to ask: What doors will we open in future decades? What rooms will we explore?

On the one hand, we can be assured that the advent of mRNA vaccines portends applications for a range of other diseases in coming years. It seems more challenging to predict — and, perhaps, hazardous to underestimate — the human impact of biotechnology, such as CRISPR gene editing or synthetic DNA. And it seems equally hard to imagine with precision how a variety of novel digital products (robotics, for example, and artificial intelligence) will be integrated into societies of the future. Yet without question they will.

Erik Brynjolfsson of Stanford and Andrew McAfee of MIT have posited that new digital technologies mark the start of a “second machine age” that in turn represents “an inflection point in the history of our economies and societies.” What could result is an era of greater abundance and problem-solving, but also one of enormous challenges — for instance, as computers increasingly take on tasks once performed by human workers, displacing them.

If this is our future, it won’t be the first time we’ve struggled with the blowback from innovations, which often create new problems even as they solve old ones. New pesticides and herbicides, to take one example, allowed farmers to raise yields and ensure good harvests; they also devastated fragile ecosystems. Social media connected people all over the world; it also unleashed a tidal wave of propaganda and misinformation. Most crucially, the exploitation of fossil fuels, along with the development of steam turbines and internal combustion engines, led us into an era of global wealth and commerce. But these innovations have bequeathed a legacy of CO2 emissions, a warming planet, diminished biodiversity and the possibility of impending environmental catastrophe.

The climate dilemma almost certainly presents the greatest challenge of the next 50 years. Some of the innovations needed for an energy transition — in solar and wind power, and in batteries and home heat pumps — already exist; what’s required are policies that allow for deployment on a rapid and more massive scale. But other ideas and inventions — in the fields of geothermal and tidal power, for instance, or next-generation nuclear plants, novel battery chemistries and carbon capture and utilization — will require years of development to drive costs down and performance up. The climate challenge is so large and varied, it seems safe to assume we will need every innovation we can possibly muster.

Tackling the problem of climate change will draw on existing innovations, such as solar power (a solar thermal power plant in Morocco is shown), and new ones.

Jerónimo Alba/Alamy Stock Photo

Perhaps the largest unknown is whether success is assured. Even so, we can predict what a person looking back a century from now might think. They will note that we had a multitude of astonishing scientific breakthroughs in our favor at this moment in time — breakthroughs that pointed the way toward innovations and a cooler, safer, healthier planet. They will reflect that we had a range of extraordinary tools at our beck and call. They will see that we had great engineering prowess, and great wealth. And they will likely conclude that with all the problems at hand, even some that seemed fearsome and intractable, none should have proved unsolvable.

About Jon Gertner

Jon Gertner is a journalist based in New Jersey. He is the author of The Idea Factory, about innovation at Bell Labs, and The Ice at the End of the World, about Greenland’s melting ice sheet.
