Automated Science Triggers 21st Century Bonanza



Technology Briefing

Transcript


With productivity growth relatively flat since the early 2000s, it’s not surprising that many economists doubt it will again match or exceed the rates seen in the 1990s. That’s because they are thinking in terms of the technological plateau we’ve been on since the invention of the World Wide Web.

The great success stories of the past 30 years have mostly been linear extrapolations from that breakthrough. Search engines, social media, smartphones, and software-as-a-service have each created enormous value, but they are all by-products of a technological paradigm created in the early 1990s and refined since then.

The next “leg up” will involve building new business models and technological paradigms on the foundation which has already been created. The new wave of explosive wealth creation from now through the mid-2030s will be enabled by combining ubiquitous networked computing, AI, quantum computing and robotics to take the next major leap forward on a technological road which began 250 years ago. Optimizing existing systems will give way to true innovation.

As discussed in our book Ride the Wave, combining free markets with ceaseless technological innovation enabled Americans to raise their per capita GDP (in today's dollars) from roughly $1,300 a year to nearly $65,000. This unprecedented surge took place across 250 years and four-and-a-half techno-economic revolutions. The first four techno-economic revolutions focused on increasing productivity by harnessing energy and automating physical labor. During that time, researchers harvested “the low-hanging fruit of scientific discovery” using manual calculations combined with trial-and-error experimentation.

But as many scholars remind us, each meaningful breakthrough now requires far more time and effort to identify and commercialize, making it harder for human ingenuity to routinely produce new blockbusters. For that reason, many economists believe we’re stuck in a slow-growth “new normal.” Fortunately, the defining technologies of the Fifth Techno-Economic Revolution, which began in 1971, are proving capable of providing for science the sorts of game-changing solutions that earlier technologies provided for agriculture, manufacturing, transportation, communication and entertainment.

What are these new solutions? And why are they only now beginning to revolutionize our world? The unprecedented advance of Moore’s Law, as well as Metcalfe’s Law, means the price-performance of computing and networks continues to improve at exponential rates. Therefore, the ability to access vast computing resources, databases, and communications capabilities can be embedded in everything, everywhere, all the time.

A defining characteristic of the emerging Golden Age (or synergy stage) of the Fifth Techno-Economic Revolution lies in mankind’s growing ability to qualitatively and quantitatively enhance almost every aspect of life and business using digital technology. A key by-product and enabler of this new era is “machine learning,” also known as “narrow artificial intelligence.” Its day-to-day value has become ubiquitous, as embodied in Google’s search engine, Amazon’s Alexa, and Apple’s latest iPhone.

Now, machine learning is about to revolutionize scientific research in ways that were, until recently, unimaginable. That’s because these technologies will enable better, faster, and cheaper ways to conduct every aspect of research, creating a self-reinforcing virtuous cycle. The widespread implications will become earth-shattering as more productive research leads quickly and cheaply to important new discoveries, which will increase wealth and encourage society to devote even more resources to enhancing research.

This is especially true because digital research will enable us to do things that could never have been done before, regardless of the amount of traditional resources allocated. Furthermore, this accelerated research will enable us to invent solutions that benefit the wealthiest among us today and then extend those benefits to even the world’s poorest people within just a few years. Never before has the discovery process been moving so rapidly or been so accessible on a global basis. And it’s getting faster every year.

Scientific research is already leveraging smart machines to empower smart people more effectively than almost any other field of endeavor. Yet because of the revolution’s pace, scientists themselves are frequently aware of it only within their own research specialty. Meanwhile, very few managers, policymakers or investors appreciate the enormous scope of the digital revolution in science, which is still in its infancy. That creates big opportunities for those who know what’s happening and can scan the horizon for early signs of change.

Let’s consider some of the biggest opportunities and what they will mean for scientists, consumers and investors. To appreciate this revolution, it’s necessary to understand the five steps of the scientific discovery cycle and consider how digitization can enhance the process at each step.

Step One: Explore the scientific literature.

Here the never-ending task is to identify the relevant scientific papers in a sea of millions, while tracking new topics as they emerge.
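
To make this concrete, here is a minimal sketch in Python of how software can triage a corpus by relevance. The abstracts and the query below are invented for illustration; a real system would index millions of papers and use far richer language models than simple TF-IDF scoring.

```python
# Minimal sketch: ranking papers against a research question with TF-IDF.
# The corpus and query are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {
    "Paper A": "Deep learning models predict protein structure from sequence.",
    "Paper B": "A survey of trial-and-error methods in materials discovery.",
    "Paper C": "Machine vision detects subtle phenotypic changes in cell assays.",
}
query = ["machine learning for protein structure prediction"]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(abstracts.values())
query_vec = vectorizer.transform(query)

# Score each abstract by cosine similarity to the query and rank.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
for title, score in sorted(zip(abstracts, scores), key=lambda t: -t[1]):
    print(f"{title}: {score:.3f}")
```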

Step Two: Design experiments.

Here the challenge is to formulate hypotheses and determine how they can be tested. Like business strategy, experimental design determines the execution, investment, and metrics guiding the rest of the study. The key is to find the right trade-off between exploration of new ground and exploitation of well-understood phenomena.
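
To see that exploration-versus-exploitation trade-off in miniature, consider the classic “multi-armed bandit” framing sketched below. The experiment names and success rates are invented, and real experimental-design tools are far more sophisticated; this only illustrates the budgeting logic.

```python
import random

# Epsilon-greedy bandit as a stand-in for balancing exploration of new
# ground against exploitation of well-understood phenomena.
TRUE_SUCCESS = {"novel_assay": 0.4, "known_assay": 0.6, "long_shot": 0.1}
EPSILON = 0.2  # fraction of the budget spent exploring

counts = {k: 0 for k in TRUE_SUCCESS}
successes = {k: 0 for k in TRUE_SUCCESS}

def estimated_rate(k):
    return successes[k] / counts[k] if counts[k] else 0.0

random.seed(42)
for trial in range(500):
    if random.random() < EPSILON:            # explore: try any design
        choice = random.choice(list(TRUE_SUCCESS))
    else:                                    # exploit: best design so far
        choice = max(TRUE_SUCCESS, key=estimated_rate)
    counts[choice] += 1
    successes[choice] += random.random() < TRUE_SUCCESS[choice]

for k in TRUE_SUCCESS:
    print(k, counts[k], round(estimated_rate(k), 2))
```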

Step Three: Run experiments.

Here the task is to keep track of millions of data points and their relationships. In the case of the life sciences, for instance, thousands of tiny tubes containing experiments on various molecules and cells must be meticulously monitored over precisely determined time periods, while avoiding contamination. Errors at this stage can lead to career-ending consequences.
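
As a toy illustration of the bookkeeping involved, here is a minimal sketch of a sample-tracking structure. The tube IDs, contents, and check intervals are invented; real laboratory information management systems handle vastly more detail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Track each tube and when its next reading is due, so nothing is
# monitored late. All values below are illustrative placeholders.
@dataclass
class Sample:
    tube_id: str
    contents: str
    started: datetime
    check_interval: timedelta
    readings: list = field(default_factory=list)

    def next_check_due(self) -> datetime:
        return self.started + self.check_interval * (len(self.readings) + 1)

samples = [
    Sample("T-001", "cell line A + compound 17", datetime(2022, 1, 3, 9), timedelta(hours=6)),
    Sample("T-002", "cell line A + control", datetime(2022, 1, 3, 9), timedelta(hours=6)),
]

# Surface the tubes that must be checked next, earliest deadline first.
for s in sorted(samples, key=lambda s: s.next_check_due()):
    print(s.tube_id, "due at", s.next_check_due())
```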

Step Four: Interpret the data.

This involves making sense of the flood of raw data coming from the experiments. In the life sciences, for example, this could involve many terabytes of genetic and biochemical information. The goal is to transform the experimental results into scientific findings. Here the researcher determines whether the hypothesis is quantifiably confirmed or rejected; or perhaps another, equally interesting hypothesis is formulated and confirmed.
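
Here is a minimal sketch of that confirm-or-reject decision, using a standard two-sample t-test. The measurements below are simulated stand-ins, not real experimental results.

```python
import numpy as np
from scipy import stats

# Synthetic readouts: did the treatment shift the measured value?
rng = np.random.default_rng(0)
treated = rng.normal(loc=1.3, scale=0.4, size=30)   # treated samples
control = rng.normal(loc=1.0, scale=0.4, size=30)   # untreated controls

# Two-sample t-test: is the difference in means statistically significant?
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("hypothesis confirmed" if p_value < 0.05 else "hypothesis rejected")
```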

And, Step Five: Write a new scientific paper and/or apply for a patent.

This is where the cycle ends and a new one begins. The researchers make sure they cite every relevant precedent, regardless of whether it was identified in step one. Then, once peer-reviewed, the results are added to the body of scientific literature to be cited by other researchers. In the ideal case, the findings translate, not only into a frequently cited research paper, but become the basis for a valuable patent and perhaps even a whole new enterprise.

From the dawn of civilization until the 1980s, every step in this cycle was painstakingly manual. That’s when scientific literature began to be stored on computers, statistical analysis of large data sets became widely available on mainframes and minicomputers, and experimenters increasingly used digital instrumentation to build data sets. Then, over the next 35 years or so, those conventional digital solutions became better, cheaper and faster. However, it’s only since 2015 or so that artificial intelligence, big data methods, and robotics have reached the point where they enable a quantum leap when applied to research.

Going forward, the primary goal is harnessing these technologies to augment, or even replace, humans in the scientific process. The second and bigger objective is to make formerly impossible research routine. To do this, researchers are already unleashing artificial intelligence, often in the form of artificial neural networks, on the data torrents. Unlike earlier rule-based systems, these don’t need to be programmed with a human expert’s knowledge. Instead, they learn on their own, often from large sets of “training data,” until they can “see patterns” and “spot anomalies” in data sets that are far larger and messier than human beings can cope with.
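
As a small illustration of that rule-free learning, the sketch below trains a model on synthetic “normal” readings and then flags aberrant ones, with no expert rules coded in. The data are invented for illustration; real pipelines operate on far larger and messier inputs.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Learn what "normal" looks like from examples alone, then spot anomalies.
rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(1000, 5))   # training data: routine readings
odd = rng.normal(6, 1, size=(5, 5))         # a handful of aberrant readings

model = IsolationForest(random_state=1).fit(normal)

# predict() returns -1 for points the model considers anomalous, +1 otherwise.
print(model.predict(odd))          # expected: mostly -1 (anomalies)
print(model.predict(normal[:5]))   # expected: mostly +1 (inliers)
```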

To appreciate how this new paradigm will pay off, consider some of the ways artificial intelligence is transforming the science of drug discovery. First, artificial intelligence can analyze vast quantities of data, allowing it to identify patterns in datasets that are too complex for humans to discern. Second, artificial intelligence can generate predictions based on these data, potentially leading to the rapid and accurate identification of novel drug targets and lead molecules. Third, artificial intelligence can use natural language processing to bring together disparate information and datasets, including all of the relevant scientific journals, providing researchers with insights that no single experiment could provide. Fourth, one of the key challenges in drug discovery is understanding the structure of the protein that a drug could target. Although structures can be discerned experimentally, the process is time consuming and expensive.

Google’s DeepMind has recently launched AlphaFold, an AI platform that can predict protein structures with high accuracy. AlphaFold has provided a solution to one of the key bottlenecks in drug discovery, enabling a vast new set of potential drug targets to be explored. Fifth, biological systems consist of highly complex networks of interactions. This complexity makes it difficult to predict whether a drug will have adverse effects.

For example, E-therapeutics uses artificial intelligence to model and analyze these complex networks, hypothesizing that a representative simulation of a whole biological system will help translate therapies from laboratory to patient, reducing expensive clinical-stage failure. So, despite its apparent infancy, artificial intelligence is already widely recognized as having a revolutionary impact on the drug discovery process, with the potential to mitigate attrition, accelerate development timelines and reduce costs.

In response, numerous partnerships have been struck between AI vendors and drug developers. And this has triggered venture capital investment in a range of AI-based drug discovery platforms. The potential for a faster and cheaper method of drug discovery has led to the founding of numerous start-ups over the past decade. Many have received large amounts of investment and established partnerships with large biopharma companies.

Which companies are using artificial intelligence in drug discovery and who is backing them? Consider a few examples. BenevolentAI creates so-called “knowledge graphs” using machine learning to connect related biomedical data from its large repository. These knowledge graphs contain insights that humans would not be able to synthesize on their own due to the complexity and volume of the data.
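
To illustrate the general idea (this is not BenevolentAI’s actual platform or data), here is a minimal sketch of a biomedical knowledge graph in which a drug-repurposing hypothesis emerges as a path from a drug to a disease. Every entity and relation below is a placeholder.

```python
import networkx as nx

# Toy "knowledge graph": entities as nodes, literature-derived relations
# as edges. All facts here are invented placeholders.
g = nx.DiGraph()
g.add_edge("Drug X", "Kinase Y", relation="inhibits")
g.add_edge("Kinase Y", "Pathway Z", relation="drives")
g.add_edge("Pathway Z", "Disease Q", relation="implicated_in")

# A repurposing hypothesis falls out as a path from a drug to a disease.
for path in nx.all_simple_paths(g, "Drug X", "Disease Q"):
    print(" -> ".join(path))
```

Paths like the one this prints hint at how such graphs surface targets and repurposing candidates buried in the literature.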

This information can be used to identify drug targets, develop lead molecules and repurpose known drugs. For example, BenevolentAI identified that baricitinib (an approved rheumatoid arthritis drug) had potential to be used in the treatment of COVID-19. The FDA subsequently authorized the use of baricitinib to treat hospitalized COVID-19 patients. BenevolentAI also collaborates with AstraZeneca; the partnership combines BenevolentAI’s platform with AstraZeneca’s expertise and large datasets.

In January 2021, the partnership announced the discovery of a novel target for chronic kidney disease. Meanwhile, a firm called Recursion aims to make drug discovery faster and cheaper using machine vision to identify subtle changes in cell biology caused by treatment with candidate molecules. This approach allows the company to rapidly analyze vast quantities of experimental data. The data are generated in-house using its automated robotic laboratory, which performs 1.5 million experiments each week.

The company has four drug candidates in Phase I clinical trials and has an ongoing partnership with Bayer that aims to develop new therapies in fibrotic disease. In April, Recursion completed a $436 million IPO on Nasdaq. Today, the discovery and approval of a new drug is estimated to cost over $2.6 billion and take at least 10 years. Although AI-based drug discovery technology is still nascent and many applications are just being explored, there is broad recognition of the potential of AI to improve the drug discovery process.

Since 2015 there have been around 100 new partnerships between AI services and the pharmaceutical industry. Additionally, in November, Alphabet announced the launch of Isomorphic Labs, a spin-off of DeepMind, which aims to deliver a new “AI-first approach” to drug discovery. It’s important to recognize that this early progress in the pharmaceutical industry represents just one particularly high-profile area where automated scientific research will revolutionize the way value is created.

Research laboratories around the world are busy applying these core capabilities to materials science, chemical synthesis, and a host of other fields. The idea is to make cheaper, faster, and more effective materials and processes available in almost every niche. Doing so will transform and enhance industries as diverse as agriculture, transportation, aerospace, construction, energy and consumer packaged goods.

Given this trend, we offer the following forecasts for your consideration. First, as soon as 2030, a wide range of industries will be transformed by a flood of revolutionary new materials made possible by automated scientific research. Industries as diverse as automobiles, packaging, construction and farming need better materials to become more productive. Until now, materials discovery has been a trial-and-error process, whereby scientists produce new molecules and then test each one sequentially for the desired properties.

This takes an average of two decades and it’s too expensive and risky for most companies to pursue. However, imagine computer programs that use precise knowledge of a molecule’s electronic structure to create new designs; imagine robots that make and test these molecules; and imagine the software and robots working together—testing molecules, tweaking designs, and testing again—until they produce a material with the properties we’re looking for.

Already, researchers at the University of Toronto are using advances in AI, robotics, and computing to bring this vision to life. At the center of their lab is a nitrogen-filled glass-and-metal enclosure housing a robot that moves back and forth along a track. The robot can select powders and liquids from an array of canisters near the sides of the enclosure and deposit the contents, with exacting accuracy, into one of a number of reactors. The robot is like a tireless lab assistant who mixes chemicals 24/7. It can make 40 compounds every 12 hours.

In addition to the robot, the system features software called ChemOS, which identifies candidate molecules. Another program interfaces ChemOS to the robot, directing it to synthesize these candidates on demand. The third distinctive component of this system is the fully-automated “closed-loop” nature of the production process. Once a reaction is finished, the resulting liquid runs through plastic hoses to an analytical machine the size and shape of a small refrigerator, which separates out unwanted by-products.

The refined output then flows into another robot, which tests it to learn about its properties. Once those results are complete, the robot feeds the results of the experiment back into the ChemOS program, enabling the artificial intelligence to learn from that trial and instantly generate a new and better slate of candidate molecules. Then, after many rounds of prediction, synthesis, and testing, a winner emerges. Not surprisingly, the idea of using such an automated, closed-loop discovery system has become increasingly attractive to chemistry researchers.
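
In spirit, the loop looks something like the following minimal sketch, where a toy scoring function stands in for the robot, the analytics, and the chemistry. None of this is the actual ChemOS code; the numbers and names are invented for illustration.

```python
import random

# Closed-loop discovery in miniature: propose candidates, "synthesize and
# test" them, then bias the next round toward what worked.
random.seed(7)

def measured_property(x):
    # Toy stand-in for a real assay: performance peaks near x = 0.7.
    return -(x - 0.7) ** 2 + random.gauss(0, 0.01)

best_x, best_score = None, float("-inf")
center, spread = 0.5, 0.5                  # initial search region

for round_num in range(10):
    # Propose a slate of candidates around the current best guess.
    slate = [min(max(random.gauss(center, spread), 0), 1) for _ in range(8)]
    results = [(x, measured_property(x)) for x in slate]   # run "experiments"
    x, score = max(results, key=lambda r: r[1])            # pick the winner
    if score > best_score:
        best_x, best_score = x, score
    center, spread = best_x, spread * 0.7  # learn: narrow the search
    print(f"round {round_num}: best x = {best_x:.3f}, score = {best_score:.4f}")
```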

Peers in Vancouver, New York City, Champaign-Urbana, and Glasgow are now building similar facilities. As these all-purpose, automated molecular creation facilities appear on university campuses and at corporate R&D centers worldwide, a new era of cost-effective, high-performance materials will dawn. Second, just as firms rely on software-as-a-service, many will rely on fully-automated remote research labs to dramatically cut the time, cost and operational problems associated with running their experiments.

In the life sciences, cloud-based remote laboratories can already deliver enormous benefits that dramatically improve the speed, cost, quality and accessibility of state-of-the-art experimentation. Companies like Emerald Cloud Labs and Transcriptic sell time in their state-of-the-art robotic laboratories. Rather than invest a million dollars or more to build and operate a sterile, fully automated laboratory, any start-up or tech company can buy access to these facilities on an “as-needed” basis.

There, robots flawlessly execute the researchers’ experimental plan and deliver data files along with the frozen end-products of the experiments. Just as the cloud gives nearly every business access to supercomputing power, these labs give nearly every biotech researcher access to a laboratory. Suddenly a startup with angel or VC funding can compete with major company research centers.
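
The “lab as a service” pattern itself is simple: encode the experimental plan as data, submit it, and retrieve the results. The sketch below is purely hypothetical and shows only the general shape of such a workflow, not the real API of Emerald Cloud Labs or Transcriptic.

```python
import json

# Hypothetical sketch only: an experimental plan expressed as data that a
# remote robotic lab could execute. Field names are invented placeholders.
plan = {
    "protocol": "serial_dilution",
    "sample": "compound_17",
    "replicates": 3,
    "readout": "absorbance_450nm",
}

payload = json.dumps(plan)
print("Submitting plan to remote lab:", payload)
# A real client would POST the payload to the vendor's endpoint and later
# download the resulting data files, e.g.:
#   requests.post("https://cloudlab.example/api/runs", data=payload)
#   requests.get("https://cloudlab.example/api/runs/<id>/results")
```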

And third, the biggest impact of artificial intelligence will be in terms of scientific discovery and product development, where whole industries will be built around discoveries that could not even have been made without the use of AI. The protein-folding breakthrough mentioned earlier is only the latest of many blockbuster examples. MIT researchers recently reported that, “A computer model, which can screen more than a hundred million chemical compounds in a matter of days, is designed to pick out potential antibiotics that kill bacteria using different mechanisms than those of existing drugs.”
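
In outline, such virtual screening works by training a model on compounds with known activity and then scoring a huge unscreened library, as in this minimal sketch. The fingerprints and labels below are synthetic; the MIT model itself uses far richer molecular representations and a library of over a hundred million compounds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy virtual screen: learn from labeled compounds, rank an unscreened library.
rng = np.random.default_rng(2)
train_X = rng.integers(0, 2, size=(500, 64))             # known compounds (bit fingerprints)
train_y = (train_X[:, :8].sum(axis=1) > 4).astype(int)   # toy "active" label

model = LogisticRegression(max_iter=1000).fit(train_X, train_y)

library = rng.integers(0, 2, size=(100_000, 64))         # unscreened library
scores = model.predict_proba(library)[:, 1]              # predicted activity
top_hits = np.argsort(scores)[::-1][:10]                 # best-scoring candidates
print(top_hits, scores[top_hits].round(3))
```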

Similarly, Wired magazine reported on InoBat, a Slovakia-based company which is using a U.S.-developed AI platform to analyze different lithium battery chemistries 10 times faster than what was previously possible. And this is just “the tip of the iceberg” when it comes to harnessing the unique abilities of AI to do game-changing scientific research. Since this technology is still in its infancy, the Trends Editors expect to see a whole wave of previously unimagined solutions, which will form the basis for new companies and even new industries.
