In the early decades of the 20th century, a slew of technologies began altering daily life with seemingly unprecedented speed and breadth. Suddenly, consumers could enjoy affordable automobiles. Long-distance telephone service connected New York with San Francisco. Electric power and radio broadcasts came into homes. New methods for making synthetic fertilizer portended a revolution in agriculture. And on the horizon, airplanes promised a radical transformation in travel and commerce.
As the technology historian Thomas P. Hughes noted: “The remarkably prolific inventors of the late nineteenth century, such as [Thomas] Edison, persuaded us that we were involved in a second creation of the world.” By the 1920s, this world — more functional, more sophisticated and increasingly more comfortable — had come into being.
Public figures like Edison or, say, Henry Ford were often described as inventors. But a different word, one that caught on around the 1950s, seemed more apt for describing the technological ideas ushering in modern life: innovation. Though the word's origins go back some 500 years (it first described novel legal, and later religious, ideas), its popularization was a post–World War II phenomenon.
The elevation of the term likely owes a debt to the Austrian-American economist Joseph Schumpeter, according to the late science historian Benoît Godin. In his academic writings, Schumpeter argued that vibrant economies were driven by innovators whose work replaced existing products or processes. “Innovation is the market introduction of a technical or organizational novelty, not just its invention,” Schumpeter wrote in 1911.
An invention like Fritz Haber’s process for making synthetic fertilizer, developed in 1909, was a dramatic step forward, for example. Yet what changed global agriculture was a broad industrial effort to transform that invention into an innovation — that is, to replace a popular technology with something better and cheaper on a national or global scale.
In the mid-century era, one of the leading champions of America’s innovation capabilities was Vannevar Bush, an MIT academic. In 1945, Bush worked on a landmark report — famously titled “Science, The Endless Frontier” — for President Harry Truman. The report advocated for a large federal role in funding scientific research. Though Bush didn’t actually use the word innovation in the report, his manifesto presented an objective for the U.S. scientific and industrial establishment: Grand innovative vistas lay ahead, especially in electronics, aeronautics and chemistry. And creating this future would depend on developing a feedstock of new scientific insights.
Bringing inventions “to scale” in large markets was precisely the aim of big companies such as General Electric or American Telephone & Telegraph, which was then the national telephone monopoly. Indeed, at Bell Laboratories, which served as the research and development arm of AT&T, a talented engineer named Jack Morton began to think of innovation as “not just the discovery of new phenomena, nor the development of a new product or manufacturing technique, nor the creation of a new market. Rather, the process is all these things acting together in an integrated way toward a common industrial goal.”
Morton had a difficult job. The historical record suggests he was the first person in the world asked to figure out how to turn the transistor, invented in December 1947, from an invention into a mass-produced innovation. He put tremendous energy into defining his task — a job that in essence focused on moving beyond science's eureka moments and pushing the century's technologies into new and unexplored territory.
From invention to innovation
In the 1940s, Vannevar Bush’s model for innovation was what’s now known as “linear.” He saw the wellspring of new scientific ideas, or what he termed “basic science,” as eventually moving in a more practical direction toward what he deemed “applied research.” In time, these applied scientific ideas — inventions, essentially — could move toward engineered products or processes. Ultimately, in finding large markets, they could become innovations.
In recent decades, Bush’s model has come to be seen as simplistic. The educator Donald Stokes, for instance, has pointed out that the line between basic and applied science can be indistinct. Bush’s paradigm can also work in reverse: New knowledge in the sciences can derive from technological tools and innovations, rather than the other way around. This is often the case with powerful new microscopes, for instance, which allow researchers to make observations and discoveries at tinier and tinier scales. More recently, other scholars of innovation have pointed to the powerful effect that end users and crowdsourcing can have on new products, sometimes improving them dramatically — as with software — by adding new ideas for their own use.
Above all, innovations have increasingly proved to be composites of unrelated scientific discoveries and inventions; combining these elements at a propitious moment can result in technological alchemy. Economist Mariana Mazzucato, for instance, has pointed to the iPhone as an integrated wonder of myriad breakthroughs, including touch screens, GPS, cellular systems and the Internet, all developed at different times and for different purposes.
Still, in the Cold War era, when military demands and large industrial labs drove much of the new technology, the linear model worked well. Beyond AT&T and General Electric, corporate titans like General Motors, DuPont, Dow and IBM viewed their R&D labs, stocked with some of the country's best scientists, as foundries where world-changing products of the future would be forged.
These corporate labs were immensely productive, and especially good at generating new patents. But not all their scientific work was suitable for driving innovations. Bell Labs, for instance, funded a small laboratory in Holmdel, N.J., set amid several hundred acres of open fields, where a team of researchers studied radio wave transmissions.
Karl Jansky, a young physicist, installed a movable antenna on the grounds and with it detected radio waves emanating from the center of the Milky Way. In doing so, he effectively founded the field of radio astronomy. And yet he created nothing useful for his employer, the phone company, which was focused on improving and expanding telephone service. To Jansky's disappointment, he was asked to direct his energies elsewhere; there seemed to be no market for what he was doing.
Above all, corporate managers needed to perceive an overlap between big ideas and big markets before they would dedicate funding and staff toward developing an innovation. Even then, the iterative work of creating a new product or process could be slow and plodding — more so than it may seem in retrospect. Bell Labs’ invention of the point-contact transistor, in December 1947, is a case in point. The first transistor was a startling moment of insight that led to a Nobel Prize. Yet in truth the world changed little from what was produced that year.
The three credited inventors — William Shockley, John Bardeen and William Brattain — had found a way to create a very fast switch or amplifier by running a current through a slightly impure slice of germanium. Their device promised to transform modern appliances, including those used by the phone company, into tiny, power-sipping electronics. And yet the earliest transistors were difficult to manufacture and impractical for many applications. (They were tried in bulky hearing aids, however.) What was required was a subsequent set of transistor-related inventions to transform the breakthrough into an innovation.