
Artificial Intelligence Could Revolutionize R&D and Innovation

Technology Briefing


Each of the five Techno-Economic Revolutions has been driven by a bundle of complementary General-Purpose Technologies, or GPTs. Beyond innovations in existing sectors, the rapidly improving price/performance of GPTs has led over time to the creation of entirely new applications and industries. For example, in the Mass Production era, the steady declines in the price of electricity and improvement in the efficiency of electric motors led to the radical transformation of manufacturing in the early part of the 20th century with the advent of the assembly line.

It also led to the creation of the consumer appliance industry. Similarly, the Digital Revolution saw the semiconductor industry take off, which led to the historic transition from the industrial economy of the past two centuries to our ongoing information economy. The opportunities created by the 12 GPTs of the digital revolution were examined in 2013’s Ride the Wave by Fred Rogers and Richard Lalich.

Among those GPTs, perhaps the most difficult to assess is Artificial Intelligence, or AI. Beyond its use by leading-edge technology companies, we’re still in the early stages of AI deployment. It’s only been in the last few years that major advances in machine learning have taken AI from the lab to early adopters in the marketplace. While considerable innovation and investment are required for its wider deployment, AI is likely to become one of the most important GPTs of the 21st century.

At a seminar last year, University of Toronto Professor Avi Goldfarb offered a compelling explanation for AI as a GPT. Goldfarb, along with his colleagues, has been conducting research on the economics of machine intelligence. Goldfarb says, “The computer revolution can be viewed as being all about the dramatic reductions in the cost of arithmetic calculations. Over the years, we’ve learned to define all kinds of tasks—from inventory management to photography—in terms of such digital operations. Similarly, the economic value of the internet revolution can be described as reducing the cost of communications and of search, thus enabling us to easily find and access all kinds of information.”

In a 2017 article, Goldfarb and his colleagues wrote that the best way to assess the economic impact of a new radical technology is to look at how the technology reduces the cost of a widely used function. “Viewed through this lens, our emerging AI revolution can be viewed as reducing the cost of predictions, based on the explosive growth of big data, powerful and inexpensive computer technologies, and advanced machine-learning algorithms. Given the widespread role of predictions in business, government, and our everyday lives, AI is most definitely a GPT that’s already having a big impact on a wide range of applications.”

But AI “may have an even larger impact on the economy by serving as a new general-purpose method of invention that can reshape the nature of the innovation process and the organization of R&D.” This was the thesis behind The Impact of Artificial Intelligence on Innovation, a paper by professors Iain Cockburn, Rebecca Henderson, and Scott Stern that was prepared for a conference on The Economics of AI in September 2017.

The authors argue that AI, and deep learning in particular, is actually a new kind of research tool that will open up new avenues of inquiry across a broad set of domains: an invention of a method of inventing. Such inventions not only reduce the costs of specific innovation activities, but actually enable a new approach to innovation itself, “altering the playbook in the domains where the new tools are applied.”

Throughout history, scientific revolutions have been launched when new research tools make possible new measurements and observations, e.g., the telescope, the microscope, spectrometers and DNA sequencers. They’ve enabled us to significantly increase our understanding of the natural world around us by collecting and analyzing large amounts of data. Big data and AI learning algorithms are now ushering in such a scientific revolution.

Moreover, these new research tools can be applied to just about any domain of knowledge, given that we can now gather data in almost any area of interest and analyze the data with increasingly sophisticated AI algorithms. In particular, machine learning methods have great potential in research problems requiring classification and prediction, given their ability to dramatically lower costs and improve performance in R&D projects where these represent significant challenges.

“On the one hand, AI-based learning may be able to substantially automate discovery across many domains where classification and prediction tasks play an important role. On the other, they may also expand the playbook in the sense of opening up the set of problems that can be feasibly addressed, and radically altering scientific and technical communities’ conceptual approaches and framing of problems.”

If advances in AI-based learning represent the arrival of a powerful, general purpose research tool, there will likely be significant economic, social, and technological consequences. On the positive side, “the resulting explosion in technological opportunities and increased productivity of R&D seem likely to generate economic growth that can eclipse any near-term impact of AI on jobs, organizations, and productivity.”

However, it’s important to develop policies that enhance innovation in a way that promotes competition and social welfare. “The proactive development of institutions and policies that encourage competition, data sharing, and openness is likely to be an important determinant of economic gains from the development and application of deep learning.”

Consider drug discovery. It’s estimated that there are as many as 10^60 potential drug-like molecules; that’s more than the number of atoms in the solar system. Therefore, human researchers can never hope to explore more than the tiniest slice of what is possible. But investigating such seemingly unlimited possibilities is what machine learning is good at. Trained on large databases of existing molecules and their properties, the programs can explore all possible related molecules.

This is a big deal, because drug discovery is a hugely expensive and often frustrating process. Medicinal chemists must guess which compounds might make good medicines, using their knowledge of how a molecule’s structure affects its properties. Then, they synthesize and test countless variants. And most are failures.

By speeding up this critical step, deep learning could offer far more opportunities for chemists to pursue, thereby making drug discovery much quicker.
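As a toy illustration of the screening step described above, the sketch below ranks candidate molecules by activity predicted from known examples with a 1-nearest-neighbour model. Every descriptor, value, and name here is invented for illustration; a real workflow would use actual molecular fingerprints and a trained deep-learning model, not this stand-in.

```python
# Toy sketch of ML-guided virtual screening: score candidate molecules by
# predicted activity, then rank them so only the most promising go on to
# synthesis. All data and names below are hypothetical.

def predict_activity(descriptor, training_set):
    """1-nearest-neighbour prediction: return the measured activity of the
    known molecule whose descriptor vector is closest to the candidate's."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(training_set, key=lambda m: dist(m["descriptor"], descriptor))
    return nearest["activity"]

# Known molecules: (descriptor vector, measured activity) - toy values.
known = [
    {"descriptor": (0.1, 0.9), "activity": 0.2},
    {"descriptor": (0.8, 0.2), "activity": 0.9},
    {"descriptor": (0.5, 0.5), "activity": 0.5},
]

# Candidate library: score every candidate in silico - the cheap step that
# replaces synthesizing and testing each variant by hand.
candidates = {"cand_A": (0.75, 0.25), "cand_B": (0.15, 0.85), "cand_C": (0.55, 0.45)}
ranked = sorted(candidates, key=lambda c: predict_activity(candidates[c], known),
                reverse=True)
print(ranked)  # most promising candidate first: ['cand_A', 'cand_C', 'cand_B']
```

The point of the sketch is the shape of the pipeline, not the model: once prediction is cheap, the chemist’s guess-synthesize-test loop becomes a ranking problem over a vastly larger candidate set.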

And, drug discovery is just one area of opportunity. Another is using machine learning to invent new materials. Among the items on the wish-list are improved batteries and organic solar cells.

Such breakthroughs have become harder and more expensive to attain as chemistry, materials science, and drug discovery have grown mind-bogglingly complex and saturated with data. Even as the pharmaceutical and biotech industries pour money into research, the number of new drugs based on novel molecules has been flat over the last few decades. And we’re still stuck with lithium-ion batteries that date to the early 1990s and designs for silicon solar cells that are also decades old.

The complexity that has slowed progress in these fields is where deep learning excels. Searching through multi-dimensional spaces to come up with valuable predictions is “AI’s sweet spot,” says Ajay Agrawal, author of Prediction Machines: The Simple Economics of Artificial Intelligence.

In a recent paper from the National Bureau of Economic Research, economists at MIT, Harvard, and Boston University argued that AI’s greatest economic impact could come from its potential as a new “method of invention” that ultimately reshapes “the nature of the innovation process and the organization of R&D.”

Iain Cockburn, a BU economist and coauthor of the paper, says: “New methods of invention with wide applications don’t come by very often, and if our guess is right, AI could dramatically change the cost of doing R&D in many different fields.” Much of innovation involves making predictions based on data. In such tasks, Cockburn adds, “machine learning could be much faster and cheaper by orders of magnitude.”

In other words, AI’s chief impact on civilization might not be driverless automobiles, image search, speech recognition, or flying cars, but its ability to come up with new inventions to fuel innovation itself.

Paul Romer won the 2018 Nobel Prize in economics for work that showed how investments in new ideas and innovation drive robust economic growth. But what if our pipeline of new ideas is drying up? Economists at Stanford and MIT looked at the problem in a recent paper called “Are ideas getting harder to find?” Looking at drug discovery, semiconductor research, medical innovation, and efforts to improve crop yields, they found a common story: investments in research are climbing sharply, but the payoffs are staying constant.

From an economist’s perspective, that’s a productivity problem: we’re paying more for a similar amount of output. And the numbers look bad. Research yield, that is, the output produced per researcher, is declining by around 6.8% annually for the task of extending Moore’s Law, which involves finding ways to pack ever more and ever-smaller components onto a semiconductor chip in order to keep making computers faster and more powerful. They found that it takes more than 18 times as many researchers to double chip density today as it did in the early 1970s. Similarly, the research yield for agricultural seeds, as measured by crop yields, is dropping by around 5% each year. And for the broader U.S. economy, research yield is declining by 5.3% a year.
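The 18x figure follows directly from compounding the 6.8% annual decline. A quick check (the 41-year span is an assumed round number standing in for “early 1970s to the study period”, not a figure from the paper):

```python
# If research yield (output per researcher) falls 6.8% per year, then after
# t years it takes 1 / (1 - 0.068)**t times as many researchers to produce
# the same result.
annual_decline = 0.068
years = 41  # assumed span: early 1970s to roughly the study period

researchers_needed = 1 / (1 - annual_decline) ** years
print(round(researchers_needed))  # 18, matching the "more than 18 times" claim
```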

According to the economists at Stanford and MIT, that means it is taking more researchers and more money to find productive new ideas. And that seems to be a big factor in the overall sluggish growth of the U.S. and European economies in recent decades. The graph below shows the pattern for the overall economy, highlighting US total factor productivity (by decade average and for 2000–2014)—a measure of the contribution of innovation—versus the number of researchers. Similar patterns hold for specific research areas.

Any negative effect of this decline has been offset, so far, by the fact that we’re putting more money and people into research. So, we’re still doubling the number of transistors on a chip every two years, but only because we’re dedicating far more people to the problem. And unless something changes, we’ll have to double our investments in research and development over the next 13 years just to keep treading water.
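The 13-year figure above is simply the doubling time implied by a 5.3% annual decline in aggregate research yield, as a one-line check shows:

```python
import math

# With yield falling 5.3% a year, total research effort must grow by a
# factor of 1 / (1 - 0.053) annually just to hold output constant, so it
# doubles every ln(2) / -ln(1 - 0.053) years.
annual_decline = 0.053
doubling_years = math.log(2) / -math.log(1 - annual_decline)
print(round(doubling_years))  # 13
```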

It could be, of course, that fields like crop science and semiconductor research are getting old and the opportunities for innovation are shriveling up. However, the researchers also found that overall growth tied to innovation in the economy was slow. Any investments in new areas, and any inventions they have generated, have failed to change the overall story.

Dismal research productivity is undoubtedly a contributor to the slow growth we saw after the Great Recession. And if the decline continues, it could do serious damage to future prosperity and growth.

The “low-hanging fruit” of the digital era has been harvested, but plenty of big new discoveries are clearly still out there; it’s just getting more expensive to find them with conventional methods as the science becomes increasingly complex. The chances that the next penicillin will simply fall into our laps are slim. We’ll need more and more research effort, delivered by a combination of human and technological resources, to make sense of the advancing science, especially in chemistry and biology.

Given this trend, we offer the following forecasts for your consideration:

First, the enormous leap in economic growth, which the Trends editors have forecasted through the mid-2030s, will be driven largely by AI-based R&D and innovation.

When AI or another technology automates most jobs, it eliminates human labor and lowers costs, which is good for shareholders and consumers, but not necessarily for workers. On the other hand, when AI automates innovation and discovery, not only is the solution less expensive, but it potentially satisfies a whole new need, creates more jobs, and creates more value for consumers and shareholders. Curing diseases and reaching more people with affordable solutions to problems benefits everyone in potentially game-changing ways.

Second, applying AI to materials development will result in order-of-magnitude increases in research productivity.

Today, it takes an average of 15 to 20 years to come up with a new material. That’s far too long for most businesses, and impractical even for many academic groups. Who wants to spend years on a material that may or may not work? Venture capitalists generally need a return within seven years or sooner. Fortunately, with AI, a 10x acceleration in the speed of materials discovery is possible. For instance, the goal of an MIT team is to use AI and machine learning to get that 15-to-20-year time frame down to around two to five years by attacking the various bottlenecks in the lab and automating as much of the process as possible. A faster process gives scientists far more potential solutions to test, allows them to find dead ends in hours rather than months, and helps optimize the materials.

Third, by avoiding human biases, AI will open up paths to discoveries that might never occur to a human researcher.

Deep-learning programs trained on large amounts of experimental data and chemical literature could come up with novel compounds that scientists never imagined. That’s just one more reason why those who bet against rising affluence and technological progress are always wrong in the long run.

