Safely Unlocking the Potential of Exponential Growth Innovation



Some are worried that a shrinking labor force, limited natural resources and a rising “dependency ratio” will undermine economic growth.
Technology Briefing

Transcript


Together the “golden age” and “maturity” stages of the Mass Production Techno-Economic Revolution ran from roughly 1943 to 1974, bringing a wave of rapid and sustained productivity growth. Especially in the United States, this led to an unprecedented rise in per capita wealth and income. But since 1973, the world has experienced a productivity slowdown referred to as “the great stagnation.” That slump was only interrupted by the Internet-driven productivity surge from 1997 to 2005.

Measurable economic growth during this 50-year period was fueled primarily by one-time demographic and geopolitical events, including the influx of workers and consumers from the Baby Boom generation in the 1970s and 1980s, and the globalization wave that followed the end of the Cold War in 1991. Today, the Boomers are retiring and globalization is reversing. Yet the theory of Techno-Economic Revolutions indicates the Digital Techno-Economic Revolution should be surging into its “golden age.” So, what lies ahead?

Most economists, managers and policymakers are worried that some combination of a shrinking labor force, limited natural resources and a rising “dependency ratio” will undermine economic growth, leading to a decline in human health and happiness. These issues will not come as a surprise to our subscribers. Since the late 1980s, the Trends editors have been warning about the inevitable implications of a global “Age Wave,” better known as “Demographic Winter.”

The trends toward urbanization and industrialization have led to collapsing birth rates across the OECD countries as well as most of the “newly industrialized” world. This has resulted in fewer new workers and dramatically different consumer profiles than we saw in the 20th century. Therefore, continued prosperity will require higher productivity, while more efficiently delivering a host of new products and services optimized for the evolving consumer realities. Fortunately, a confluence of rapidly maturing technologies is poised to deliver levels of productivity growth like those the United States experienced from 1948 to 1973.

The multi-faceted transformation required to support this technological change will benefit almost everyone over the long-term. However, there will be some big losers, as well as big winners during the coming decade. The resulting level of uncertainty is creating widespread and often-unwarranted fear. Consider the facts. The next wave of techno-economic innovation will be felt far and wide.

Just as the 19th century saw a transition away from agriculture to manufacturing, and the 20th century saw a transition away from manufacturing to services, the 21st century will experience a transition away from the kind of service economy we’ve known to something new and different. Don’t assume this means that agriculture, manufacturing, or services will atrophy. In fact, we’ll produce far more and better foods, manufactured goods, and services in 2100 than we do today.

However, the value creation process will be dramatically more efficient and effective, while consumer priorities will be substantially different. That means the products, jobs, supply chains and markets will be as different from what people experience today, as today’s commerce is from what people knew at the time of the Civil War. However, some things don’t change. People still want to maximize their health, safety, and happiness. But how they accomplish this depends on the available technologies, acting within our demographic and institutional constraints.

Inevitably, each revolution reaches a tipping point at which a game-changing product or service enters the marketplace, triggering a wave of consumer demand and related technological innovation. Key examples include Singer’s sewing machine, Edison’s electric light bulb, Bell’s telephone, Ford’s Model T automobile, IBM’s original PC, Tim Berners-Lee’s World Wide Web framework, and Apple’s iPhone. Each of these set off a tsunami of consumer and business innovation which revolutionized the economy of its era.

As of April 2023, OpenAI’s ChatGPT (or an alternative like Elon Musk’s rumored “TruthChat”) appears poised to join this line of truly disruptive products. Within 60 days of its launch in November 2022, it reached 100 million monthly users, according to a UBS report. That makes ChatGPT the fastest-growing consumer app in history. And, like the Model T and the iPhone, ChatGPT is not alone. As of this writing, its primary competitors are Google’s Bard, Baidu’s Ernie, DeepMind’s Sparrow and Meta’s BlenderBot.

Others which could potentially outperform ChatGPT include an EU-based alternative called BLOOM and Musk’s yet-to-emerge TruthChat. The most basic version of ChatGPT is a free-to-use AI chatbot built on OpenAI’s GPT models (GPT-3.5 at launch, with the newer GPT-4 reserved for paying subscribers). These are so-called “large language models,” deep learning systems that estimate the probability of which words might come next in a sequence. The abbreviation GPT stands for generative pre-trained transformer, a type of model that uses a neural network to learn the context of any “language pattern.”

That language pattern might be a spoken language or a computer programming language. The large language models underlying ChatGPT and its competitors don’t “know” what they are saying, but they do know what symbols or words are likely to come after one another based on the “data set” used to train them. In ChatGPT’s case, it was trained on a data set consisting of a large portion of the World Wide Web.

From there, humans gave feedback on the output generated by the artificial intelligence to confirm whether the words it used made sense. That means that artificial intelligence chatbots, such as ChatGPT, don’t really make intelligently informed decisions; instead, they’re the internet’s version of “parrots”; they simply repeat words that are likely to be found next to one another in the course of natural speech. The underlying math is all about probability. The companies that make and use these AI chatbots promote them as potential “symbolic productivity genies.” They can already generate text in a matter of seconds that would take a human hours or days to produce.
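The “probability of the next word” idea above can be illustrated with a toy bigram model. This is a deliberately minimal sketch: the ten-word corpus is invented, and real large language models use neural networks trained on web-scale text rather than raw counts.

```python
from collections import Counter, defaultdict

# Invented toy corpus standing in for web-scale training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Return the probability of each word that can follow `word`."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # "cat" is the most likely follower here
```

A chatbot is, at heart, this same computation scaled up enormously: given the words so far, pick a likely next word, append it, and repeat.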

Several organizations have already incorporated ChatGPT’s ability to answer questions into software features. Microsoft, which provides funding for OpenAI, rolled out ChatGPT in Bing search as a preview. And Salesforce.com has added ChatGPT to some of its CRM platforms in the form of its Einstein digital assistant. As we saw with the PC, iPhone and World Wide Web, we can expect new and unexpected uses for large language models to rapidly emerge as costs plummet and capabilities soar.
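Integrations like those above typically reach the model through an HTTP API. The sketch below only assembles the kind of JSON payload such a request carries; the endpoint and model name reflect OpenAI’s public chat-completions API as it stood in early 2023, but check the provider’s current documentation before relying on them. No network call is made.

```python
import json

# Endpoint for OpenAI's chat-completions API (as of early 2023).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(question, model="gpt-3.5-turbo"):
    """Assemble the JSON payload for a single-question chat request."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You answer customer questions."},
            {"role": "user", "content": question},
        ],
    }

payload = json.dumps(build_request("What are your store hours?"))
# In a real integration, `payload` would be POSTed to API_URL with an
# Authorization: Bearer <api-key> header; that step is omitted here.
```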

ChatGPT was built by OpenAI, a research laboratory with both nonprofit and for-profit branches. At the time of its founding in 2015, OpenAI received funding from Amazon Web Services, InfoSys and YC Research as well as tech investors including Elon Musk and Peter Thiel. Since then, Musk has cut his ties with the company, while Microsoft has provided $10 billion in funding for OpenAI. As Harvard’s Clayton Christensen might have said, ChatGPT is finally “good enough” to disrupt mainstream functions like search, summarization, and even simple software production.

In a business context, ChatGPT can realistically enhance productivity by writing and debugging computer code, as well as creating basic reports, presentations, emails and websites. Microsoft showed off key features when it announced that OpenAI is coming to Word and some other parts of the Microsoft 365 business suite. And this first wave of capability is likely just the beginning of a whole wave of AI-based consumer functionality. Meanwhile, AI’s trajectory of adoption is rapidly evolving.

Just like factories, personal computers, and electricity in earlier eras, AI has spawned aggressive push-back. This resistance features workers afraid of losing their jobs, AI competitors jockeying for position, and ordinary people afraid of losing control of their world. Perhaps inspired by science fiction stories about AI taking over the earth, some high-profile players in tech are already urging caution about giving AI “free rein.”

In April, a petition signed by Elon Musk and many others urged companies to pause development of large AI models until more safeguards can be built in. In response, OpenAI insisted that its products are not intended for use in decisions related to law enforcement or governance. However, this does not address concerns about its biases distorting public opinion. Privacy, a more immediate concern than global domination, has already led Italy to ban ChatGPT.

In response, OpenAI is working to find a way to let ChatGPT work within the European Union’s strict privacy rules. Such tools have become increasingly controversial because they’ll potentially transform clerical tasks, software programming, customer service, and other areas of the economy where large numbers of humans are now employed. These jobs lie between the traditional “blue collar” automation associated with industrial machines and robots, and the rarified scientific specialties in which AI can perform functions that no human could ever hope to do alone.

As a result, the expanding debate over the proper use of these tools is rapidly becoming emotional, multifaceted, and divisive. Over the longer term, what we like to call “blue collar” and “gold collar” AI systems are likely to make a far bigger impact on our economy than these mid-tier applications. That’s especially true as North America reindustrializes in the coming decade. To see why, consider how AI can and will lead to a quantum leap in human potential.

As we’ve explained in prior issues, human capabilities have been overwhelmed in many scientific and engineering disciplines. Conducting and interpreting thousands of rigorous experiments each involving hundreds of precise parameters and millions of data-points is a task the human mind is not equipped to handle. However, it’s ideally suited to the strengths of current and future artificial intelligence systems. In many fields like drug discovery, medical diagnostics, synthetic biology, and materials science, the low-hanging fruit accessible to traditional human researchers has been picked, but AI promises to unlock a whole new level of discoveries otherwise inaccessible to mankind.

Over the next few years, business, government, and the public will determine the precise roles AI will play in our lives and in our economy. That process will involve both political and market actors with diverse agendas. Today, it’s unclear who the winners and losers might be. However, it’s safe to say that artificial intelligence will become pervasive in many forms, and it will substantially increase productivity at many levels.

Given this trend, we offer the following forecasts for your consideration. First, Artificial General Intelligence (or AGI) with something resembling “human consciousness” will remain a fantasy embraced by enthusiasts like Google’s Ray Kurzweil. In his famous 2005 book The Singularity is Near, Kurzweil forecast that AGI would be available by 2040. We’re halfway there, and machine learning has certainly gotten more “effective,” primarily because of better underlying hardware and more training data. However, as Trends forecast at the time, computers are no more “conscious” now than they were then.

They simply do the same things faster using more input. So-called “transhumanists” (like Kurzweil) dream of achieving literal “immortality” by uploading their “consciousness” to an AI system. But, as explained in our December 2022 issue, current science gives us little reason to believe that any uploadable representation of your brain function would actually be “you.” So, despite the hopes and investments of materialists like Larry Page, those who seek immortality via AI will be sadly disappointed.

Second, AI’s primary threat to our lives, culture and civilization will come from the biases and priorities of its creators, not the “mind” of the system itself. When it comes to making genuine scientific discoveries, manufacturing products and serving us in our daily lives, AI is no more threatening than disruptive general-purpose technologies of prior eras. However, large language models like ChatGPT have an unprecedented potential to adversely influence our perceptions.

As Elon Musk and other experts have been insisting since the infancy of AI, such technology can be used to distort the objective truth by inserting its developers’ biases. For example, a recent study by Harvard researchers showed that Google succeeded in shifting voter preferences in recent elections simply using “search biases.” AI-based content tools like ChatGPT will enhance that capability and intensify the debate over what can be done to prevent technology from distorting the marketplace of ideas.

Expect large language models to become a significant issue in the ongoing judicial and legislative reassessment of “big tech’s” role in our lives. Third, unlike the pivotal general-purpose technologies of the first four techno-economic revolutions, AI will have its greatest and most immediate impact within the highest realms of the workforce. That means lower-paying and more mature industries are not the ones most likely to see disruption, at least in the short-term.

AI will eventually impact taxi drivers, warehouse workers and truck drivers as well as rank-and-file workers in manufacturing, retail, personal care and food service. However, the game-changing applications of the 2020s will be in the rarified fields of scientific research, medicine, software development and engineering as well as in the relatively new industry of e-commerce. And that’s good, because it’s those areas which have been most constrained by human limitations and the skills shortages of recent years. And just as spreadsheets created more demand for business analysts, AI-based R&D tools will create more demand for scientists and engineers doing what they do best.

More complete details appeared in the January 2023 issue. Fourth, contrary to warnings of “job destroying AI,” the remainder of the 2020s will see demand soar for workers with “hard skills,” particularly in the United States. Plumbers, electricians, heavy equipment operators, truck drivers and automation technicians will all be in increasingly short supply as Baby Boomers retire and America dramatically upgrades its manufacturing base.

For instance, in order to address the housing shortage driven by maturing Millennials and continuing immigration, the construction workforce, decimated after the 2008 housing crash, will be rebuilt and augmented with new technology. Fifth, self-driving cars and trucks are on the way, but they will eliminate few driving jobs prior to at least the mid-2030s. As explained previously in Trends, institutional hurdles are proving far higher than expected, especially as related to issues of liability.

Expect countries like Singapore and the UAE to roll out these solutions first, with less controlled economies watching to see when they are really safe. Sixth, small autonomous aircraft piloted by AI will become commonplace by 2040. Piloted air taxis will emerge as soon as 2025, creating employment opportunities for a new kind of pilot who relies heavily on AI-based assistance. However, truly autonomous air taxis will enter commercial service much later.

In fact, that’s likely to happen in the same time frame that regulators and consumers become comfortable with Level 5 self-driving cars and autonomous 18-wheelers taking over the highways. On the other hand, unmanned cargo drones will become commonplace in the late 2020s, starting with remote-controlled operations. As explained in prior issues, this will make a huge impact on package delivery economics. And it will create demand for a growing cadre of ground-based drone pilots and maintenance personnel.

By 2030, fully autonomous AI-based cargo drones with remote human intervention (in case of emergency) will be widely deployed in some locations. Seventh, even if humanoid robots can achieve extraordinary price and performance targets, non-economic factors are likely to prevent them from making a significant dent in the labor shortage by 2035.

A recent Trends issue dissected a market analysis by Goldman Sachs involving the use of Elon Musk’s general-purpose humanoid robot concept in manufacturing and elder-care applications. In that analysis, AEI economist James Pethokoukis identified issues related to trust, safety and privacy as among the hard-to-quantify impediments to rapid adoption. Fortunately, some settings and cultures are more psychologically amenable to early adoption of humanoid robots.

For instance, American factories, which extol “innovation,” are likely to become early adopters of humanoid robots in the 2020s. Meanwhile Japan has the best history of accepting such leading-edge innovations in the consumer domain. Once humanoid robots prove themselves in such environments, they will diffuse into the global economy when and if the economics make sense.

Eighth, artificial intelligence will enable health care to enter a new era of enormous breakthroughs by the late 2020s. AI is already making rapid progress in supporting diagnosis and treatment of disease, especially in the areas of radiology and genomics. But it is AI’s accelerating contributions to drug discovery which will have the biggest impact, both commercially and therapeutically.

Suddenly, advances in price-performance and functionality have opened the door to formerly impossible breakthroughs. For example, an AI application called RoseTTAFold recently became the first full-fledged solution that can produce precise designs for a wide variety of proteins. It is able to generate drug-related proteins with multiple degrees of symmetry, including proteins that are circular, triangular, or hexagonal. The RoseTTAFold team synthesized several of its protein designs in their lab.

One was a new protein that attaches to the parathyroid hormone, which controls calcium levels in the blood. According to the team’s leader, “We basically gave RoseTTAFold the hormone and nothing else. Then we told it to make a protein that binds to the hormone.” When they tested the novel protein in the lab, they found that it attached to the hormone more tightly than anything that could have been generated using any other computational methods, and more tightly than any existing drugs.

This early success shows why it’s likely that a wave of new game-changing drug discoveries created by AI will enter clinical trials over the next five years. The result will be healthier and happier consumers as well as enormous revenues and profits for the industry. Ninth, AI will trigger explosive game-changing advances in materials science during the 2020s. Materials, such as stone, bronze, iron, steel, plastics, silicon and graphene have always defined the technological and economic possibilities which permit people to survive and thrive.

Society’s capacity to solve global challenges is still constrained by our ability to design and make materials with the targeted functionality needed for computer chips, sensors, robots, electric vehicles and myriad other applications. Since it is not known where economically important materials might exist, the search amounts to a high-risk, complex and often long journey across the infinite space of materials created by combining all of the elements in the periodic table.

Fortunately, new AI-based tools, such as Material.ai, are changing all that. These tools examine the characteristics and relationships of known materials at a scale inconceivable for humans. These characteristics and relationships are used to identify and numerically rank combinations of elements that are likely to form new materials with desired characteristics. Those rankings are used to guide exploration of unknown chemical spaces in a targeted way, making experimental investigation far more efficient. And it’s not just about being faster and cheaper.
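One way to picture the ranking step described above is a similarity search over property vectors. The sketch below is hypothetical: the candidate formulas and their three-number “property profiles” are invented for illustration, and real tools learn far richer representations from known materials.

```python
import math

# Invented target profile: the properties we want a new material to have.
target = [0.8, 0.3, 0.9]

# Invented candidate compositions with predicted property vectors.
candidates = {
    "A2B": [0.7, 0.4, 0.8],
    "AC3": [0.1, 0.9, 0.2],
    "B2C": [0.9, 0.2, 0.95],
}

def score(vec):
    # Negative Euclidean distance: closer to the target scores higher.
    return -math.dist(vec, target)

# Rank candidates from most to least promising for lab follow-up.
ranked = sorted(candidates, key=lambda name: score(candidates[name]),
                reverse=True)
print(ranked)  # → ['B2C', 'A2B', 'AC3']
```

Only the top-ranked candidates go to the bench, which is what makes experimental investigation far more efficient than blind search.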

Until now, the default approach has been to design new materials by close analogy with existing ones, which usually leads to materials which are similar to ones we already have. On the other hand, the new AI-based tools discover truly new materials. And these new materials not only create societal benefit by enabling new technologies to tackle global challenges, but they also reveal new scientific phenomena and understandings. Those understandings then help train the next generation of AI.

As with health care, AI’s contribution to materials science will create enormous value, leading to more jobs across the advanced economies. And tenth, the enormous cost barrier which has impeded AI implementation is now collapsing, and this collapse will only accelerate. In 2014, the total cost of ownership for a state-of-the-art AI accelerator, including electricity and data center overhead, was $11,400 over its 3-year useful life. By 2030 that same processing capability is expected to cost less than $0.05; that’s an astounding price-performance improvement of 228,000-times!
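The cited figures are easy to sanity-check. The sketch below simply redoes the arithmetic from the paragraph above; the 16-year span from 2014 to 2030 comes from the text, while the implied yearly rate is a derived figure, not one the source states.

```python
# Cost of equivalent AI processing capability, per the figures above.
cost_2014 = 11_400.00   # 3-year total cost of ownership in 2014 (USD)
cost_2030 = 0.05        # projected cost for the same capability by 2030

improvement = cost_2014 / cost_2030
print(f"{improvement:,.0f}x")      # 228,000x, matching the text

# Implied average improvement per year over the 16-year span.
annual = improvement ** (1 / 16)
print(f"~{annual:.2f}x per year")  # a bit over 2x every single year
```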

As a result, AI solutions once only affordable to companies like Amazon, Google and Facebook will be affordable to almost any entrepreneur. And the productivity surge enabled by AI is expected to create a $1.7 trillion global market for AI accelerator hardware by 2030. Although Nvidia created the market for AI accelerators and still leads today, many challengers are likely to enter the market during the next eight years. Venture funding for chip startups has doubled in the last five years, and Tesla’s Dojo supercomputer is a contender that is vertically integrating into the AI hardware layer to train its neural nets and maximize performance.
