Economic Realities Driving American AI-Based Reindustrialization



We’re seeing a convergence of factors paving the way for the transformational deployment of Artificial Intelligence at a time when economic progress seems difficult to achieve.
Technology Briefing

Transcript


Steam engines, steel, electricity, railroads, radio, assembly lines and integrated circuits are just a few of the transformational general-purpose technologies that changed our lives and made other technologies dramatically more productive. In each case, their explosive rollouts occurred when a series of factors converged to create optimal circumstances for widespread deployment. In that vein, we’re beginning to see just such a convergence paving the way for the transformational deployment of Artificial Intelligence.

As with many of those prior economic transformations, this one is accelerating at a time when economic progress seems difficult to achieve. At such moments, we tend to see short-term disruptions only in negative terms. Consider today’s inflation shock, the rising cost of capital, accelerating tech-related layoffs alongside a broader labor shortage, and the supply chain problems and geopolitical crises driving reshoring.

During such moments, it’s natural to forget that such disruptions often trigger the kind of “creative destruction” that every techno-economic revolution requires. For instance, we saw that happen when World War II forced Americans to rebuild & update our extraordinary mass production capabilities. We saw it again when the Korean War forced South Korea to start with a “clean slate” and build its world-class economy.

Today, the disruptions caused by geopolitical and economic crises around the world are setting the stage for an AI-driven productivity surge. This surge will transform American life and business to a degree last seen 70-to-80 years ago. That’s important because the demographic realities alluded to in trend #2 this month aren’t going away. Using legacy methods, the world simply won’t have the right people, in the right places, with the right skills to enable healthy growth.

Admittedly, North America is better prepared than any other region of the world to deal with this trend. However, even North Americans will need to harness the unprecedented capabilities of AI and robotics if we’re going to provide happy, healthy and secure lives for as many Americans as possible. For the EU and much of East Asia, the need is even more critical. Nevertheless, recent news about inflation, crime, life expectancy and happiness is not reassuring.

That’s because it’s hard to remember that solutions to systemic problems are so often found when we’re forced to address the acute pain of unexpected crises. That’s because the tools for fixing both systemic and acute problems are imagination and the freedom to innovate. Just as in the 1930s & 1940s, evolving technologies and institutions are laying the groundwork for much better times ahead. But those good times will only come if we seize the opportunities created.

Consider the facts. Unlike prior general-purpose technologies, AI is uniquely appropriate for solving today’s problems. That’s because it enables many processes requiring human thought to be automated in such a way that economies of scale, economies of scope, and the learning curve can be fully exploited. Suddenly, highly labor-intensive “mind work” such as laboratory analyses, molecular compound identification, language translation, prototype building, logistics optimization and retail store management can be performed by algorithms trained on mountains of data.

These algorithms, and the hardware on which they run, never get tired, never retire, and never ask for more money. In fact, they get cheaper, faster and more reliable over time. This is not entirely new. It’s been a fantasy entertained by futurists for at least a century. But now, over the next 10-to-15 years, those dreams of surging productivity can finally become real because multiple economic realities are aligning to pave the way. That’s the way it was with flight and modern medicine.

This alignment is happening at multiple stages of the AI value chain, including data collection and storage, AI application training, and application interfaces such as robotics, sensors and networks. Meanwhile, free markets are evolving in ways that better align labor force capabilities and consumer preferences with AI-based solutions. As the technology becomes exponentially more cost-effective, traditional alternatives become more expensive, and people see early adopters reaping big rewards, businesses & consumers will enthusiastically adopt AI the way people adopted the Internet in the 1990s and the automobile in the 1920s.

To illustrate the impact of this new paradigm, think about the possibilities for an ordinary retailer, like Walmart, Target or Kroger. Any shopper who has retrieved milk from the farthest corner of a store knows that a store layout presents merchandise in ways intended to direct customer attention to items they had not intended to buy, increase browsing time, and make it easier to find related products or viable alternatives. A well-thought-out layout has been shown to correlate positively with increased sales and customer satisfaction.

Layout is one of the most effective in-store marketing tactics, used to directly influence customer decisions and boost profitability. To optimize store layout and reap further benefits, retailers can now apply AI techniques to existing data from closed-circuit TV cameras, letting them interpret and better understand customers and their in-store behavior. Video already offers insights into how shoppers travel through the store, the routes they take, and the sections where they spend more time. But new research shows that it lets marketers drill down further by analyzing emotion shown through observable facial expressions such as a raised eyebrow, widened eyes or a smile.

Understanding customers’ emotions as they browse gives marketers and managers a valuable window into how customers react to the products they sell. Understanding customer behavior is the ultimate goal of business intelligence. Obvious actions like picking up products, putting products into the cart, and returning products to the shelf have attracted great interest from smart retailers. Other behaviors, like staring at a product or reading the product packaging, represent a gold mine for marketers seeking to understand customers’ interest in a product.

Emotion recognition algorithms employ computer vision techniques to locate the face and identify key landmarks on it, such as the corners of the eyebrows, the tip of the nose, and the corners of the mouth. Along with understanding emotions through facial cues and customer characterization, retail managers could employ heatmap analytics, human trajectory tracking and customer action recognition techniques to inform their decisions. This type of knowledge can be extracted directly from the video and helps managers understand customer behavior at the store level without any need to know individual identities.
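To make the landmark-location step concrete, here is a minimal sketch in Python, assuming the open-source MediaPipe Face Mesh model and OpenCV are available; the camera index, the landmark index and the downstream emotion classifier are our own illustrative assumptions, not part of the research described above.

```python
import cv2
import mediapipe as mp

# Assumed input: a single webcam stands in for a store's CCTV stream.
cap = cv2.VideoCapture(0)

# MediaPipe Face Mesh returns hundreds of normalized facial landmarks per face.
face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False, max_num_faces=5, refine_landmarks=True
)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB images; OpenCV delivers BGR.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        for face in results.multi_face_landmarks:
            # Each landmark is a normalized (x, y, z) point; index 1 sits
            # near the tip of the nose in the canonical mesh topology.
            nose_tip = face.landmark[1]
            # A separately trained emotion classifier (not shown) would
            # consume these landmarks or the cropped face region.

cap.release()
face_mesh.close()
```

In practice, a retailer would run this kind of analysis server-side against recorded CCTV footage rather than a live webcam.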

By adding other easily collected environmental data about lighting conditions, temperatures and fragrances, a more complete picture of the influences on customer behavior can be developed. In research recently published in Artificial Intelligence Review, a team of researchers proposed a framework for retailers called Sense-Think-Act-Learn (or STAL). How can this framework be applied? Firstly, 'Sense' means to collect raw data, such as video footage from a store's closed-circuit TV cameras, for processing and analysis.

Store managers routinely do this with their own eyes. However, new AI-based approaches allow marketers to automate this aspect of sensing and to perform it across the entire store, following a customer or customer population as they shop. Secondly, 'Think' means to process the collected data through advanced AI, data analytics, and deep learning techniques, in much the same way humans use their brains to process incoming data. The objective is to learn from patterns observed in many stores, for hundreds of products, based on thousands or millions of customers.

Thirdly, 'Act' means to use the knowledge and insights from the “Think” phase to improve and optimize the supermarket layout and supporting parameters. Notably, the intelligent video analytic layer in the Think phase plays a key role in interpreting the content of images and videos. Implemented fully, this process constitutes a continuous “Learning” cycle, which is where the fourth phase of the STAL framework applies. This framework demonstrates how store management learns to optimize profitability.
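As a rough sketch of how the four phases might be wired together in software, consider the skeleton below; the class and method names are ours, and the analytics inside the 'Think' step are placeholders rather than the researchers' actual models.

```python
from dataclasses import dataclass, field

@dataclass
class STALPipeline:
    """Illustrative skeleton of a Sense-Think-Act-Learn loop for one store."""
    history: list = field(default_factory=list)

    def sense(self, camera_feeds):
        # Collect raw frames (and, optionally, environmental readings).
        return [feed.read() for feed in camera_feeds]

    def think(self, raw_observations):
        # Placeholder for the intelligent video analytic layer:
        # detection, tracking, dwell-time and expression analysis.
        return {"traffic_by_zone": {}, "dwell_time": {}, "expressions": {}}

    def act(self, insights):
        # Turn insights into concrete layout and merchandising changes.
        return {"layout_changes": [], "display_changes": []}

    def learn(self, insights, actions, outcomes):
        # Retain each cycle so later decisions improve on earlier ones.
        self.history.append((insights, actions, outcomes))

    def run_cycle(self, camera_feeds, outcomes=None):
        raw = self.sense(camera_feeds)
        insights = self.think(raw)
        actions = self.act(insights)
        self.learn(insights, actions, outcomes)
        return actions
```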

A key feature of the proposed framework is that it allows retailers to evaluate predictions about store design, such as traffic flow and customer behavior, from the moment customers enter a store. It also lets management assess the effectiveness of alternative store displays placed in different areas of the store. And since privacy is a key concern for customers, these applications are designed so that retained data can be anonymized while still permitting full examination at an aggregate level.
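One way to honor that privacy constraint, sketched here with pandas and an entirely hypothetical schema, is to retain only zone-level events with no identifiers and analyze them in aggregate.

```python
import pandas as pd

# Hypothetical per-visit records emitted by the video-analytics layer;
# note the retained schema contains no name, face image or customer ID.
visits = pd.DataFrame({
    "store_zone": ["dairy", "dairy", "produce", "checkout"],
    "dwell_seconds": [42, 18, 65, 120],
    "picked_up_item": [True, False, True, False],
})

# Aggregate-level view: useful for layout decisions, useless for re-identification.
zone_summary = visits.groupby("store_zone").agg(
    avg_dwell=("dwell_seconds", "mean"),
    pickup_rate=("picked_up_item", "mean"),
    visit_count=("dwell_seconds", "size"),
)
print(zone_summary)
```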

From an implementation standpoint, the intense data flow from all the closed-circuit TV cameras implies that a cloud-based system is a suitable approach for processing and storing video data for analysis. Importantly, this AI-based framework can help managers optimize critical operating parameters of “the retail mix” by managing at least three sets of variables:

First, design variables such as space design, point-of-purchase displays, product placement, and placement of check-outs. Second, employee variables such as the number, training and placement of personnel. Third, customer variables such as crowding, visit duration, impulse purchases, use of furniture, queue formation and receptivity to product displays.
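For illustration only, those three sets of variables could be grouped into a simple configuration object like the following; every field name here is our own invention, not part of the published framework.

```python
from dataclasses import dataclass, field

@dataclass
class DesignVariables:
    space_design: str = ""
    pop_displays: list = field(default_factory=list)   # point-of-purchase displays
    product_placement: dict = field(default_factory=dict)
    checkout_placement: str = ""

@dataclass
class EmployeeVariables:
    headcount: int = 0
    training_hours: float = 0.0
    floor_assignments: dict = field(default_factory=dict)

@dataclass
class CustomerVariables:
    crowding_index: float = 0.0
    avg_visit_minutes: float = 0.0
    impulse_purchase_rate: float = 0.0
    avg_queue_length: float = 0.0
    display_receptivity: float = 0.0

@dataclass
class RetailMix:
    design: DesignVariables = field(default_factory=DesignVariables)
    employees: EmployeeVariables = field(default_factory=EmployeeVariables)
    customers: CustomerVariables = field(default_factory=CustomerVariables)
```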

As game-changing as this might be for retailers, shoppers and product manufacturers, it’s just one of the more easily understood AI-based opportunities suddenly opening up as capabilities, costs and user needs converge. Others are now emerging in food service, health care, transportation, manufacturing, scientific research, and logistics. And even more will inevitably arise as our imaginations mature. What’s the bottom line?

As Harvard’s innovation guru Clayton Christensen observed, even the most disruptive technologies only serve narrow niches until costs and capabilities evolve to the point that they are “good enough” to replace established alternatives in mass markets. Often, this involves mass markets evolving in ways that make them more accepting of the disruptive solution. In the 2020s, after decades of incremental progress, the costs and capabilities of Artificial Intelligence are converging with maturing “user needs” in industry after industry.

The result will be a tsunami of creative destruction which will leave few industries unchanged. This is all good news for the economy. First, the AI-driven productivity surge will represent a strong deflationary force, ultimately returning us to a low inflation environment. Furthermore, it will enable unimagined new products and services on the demand side, while eliminating the skills bottleneck on the supply side. The net result will be healthy growth benefiting a wide swathe of investors, workers, and consumers.

That means it’s important to get on board or prepare to be trampled. Given this trend, we offer the following forecasts for your consideration. First, by 2025, ubiquitous, low-cost sensors will make data collection a nearly-free by-product of daily activity in most businesses. Training AI systems depends on cost-effective access to mountains of data. As already discussed, the sort of cameras needed for emotion recognition in the retail example are already inexpensive and reliable.

Advances in hardware and software will only make them more cost-effective and easier to install. Infrared and ultraviolet images inexpensively complement visible light video and can be correlated with RFID transactions and environmental sensor data. Time-stamped location references will make it easy to overlay this highly granular data with existing databases containing weather, product and customer data. Geo-tracking data for employees and vehicles is readily available and will only get better.
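As a small sketch of what overlaying those time-stamped streams might look like in practice, assuming pandas and a purely hypothetical schema:

```python
import pandas as pd

# Hypothetical streams: video-derived events and environmental sensor readings,
# each time-stamped so they can be overlaid after the fact.
video_events = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-05 10:00:01", "2024-01-05 10:00:07"]),
    "store_zone": ["dairy", "produce"],
    "event": ["dwell_start", "item_pickup"],
})
env_readings = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-05 10:00:00", "2024-01-05 10:00:05"]),
    "temperature_c": [4.1, 4.3],
    "lux": [520, 540],
})

# Align each video event with the most recent environmental reading before it.
combined = pd.merge_asof(
    video_events.sort_values("timestamp"),
    env_readings.sort_values("timestamp"),
    on="timestamp",
    direction="backward",
)
print(combined)
```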

In other applications, cheap and precise haptic sensors and vision systems are finally ready for the mainstream. As a result, the biggest challenge in data collection, other than ensuring customer privacy, will be summarizing raw data in the best form for storage, transmission and analysis. Second, over the next decade, the price-performance of data summarization, storage and transmission will improve by one thousand times, making it possible for more firms to handle the flood of sensor data needed to make AI solutions pay.

Cloud-based storage and network bandwidth costs continue to fall rapidly. And there is no limit in sight. Third, the enormous cost barrier which has impeded AI implementation is now collapsing, and this collapse will only accelerate. In 2014, the total cost of ownership for a state-of-the-art AI accelerator, including electricity & data center overhead, was $11,400 over its 3-year useful life.

By 2030 that same processing capability is expected to cost less than $0.05; that’s an astounding price-performance improvement of 228,000-times! As a result, AI solutions once only affordable to companies like Amazon, Google and Facebook will be affordable to almost any entrepreneur. And the productivity surge enabled by AI is expected to create a $1.7 trillion global market for AI accelerator hardware by 2030. Although Nvidia created the market for AI accelerators and still leads today, many challengers are likely to enter the market during the next eight years.
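A quick back-of-the-envelope check of those figures, taking the quoted numbers at face value and treating 2014 to 2030 as a 16-year span:

```python
cost_2014 = 11_400   # quoted 3-year total cost of ownership in 2014 (USD)
cost_2030 = 0.05     # projected cost of the same capability in 2030 (USD)

improvement = cost_2014 / cost_2030          # = 228,000x, as quoted
annual_factor = improvement ** (1 / 16)      # 2014 -> 2030 spans 16 years

print(f"total improvement: {improvement:,.0f}x")
print(f"implied annual gain: {annual_factor:.2f}x per year")
# Implies roughly a 2.2x price-performance improvement every year.
```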

Venture funding for chip startups has doubled in the last five years, and Tesla is a contender, vertically integrating into the AI hardware layer with its Dojo supercomputer to train its neural nets and maximize performance. Fourth, in 2023, the tech talent “log jam” inhibiting development of game-changing artificial intelligence applications will break up. In recent years, the cash-rich platform companies have been monopolizing available talent, preventing firms in a wide range of industries from being able to build applications. Recent layoffs at tech giants are now changing all that.

AI startups and consultancies are now more able to deliver cost-effective AI-based solutions to a wide range of industries. And armed with the increasingly cost-effective hardware and software solutions just mentioned, they will soon be delivering real capabilities to any business which has the data needed to train and drive these applications. Fifth, over the coming decade, North America will experience a wave of AI-based reindustrialization as productivity soars across the United States, Canada and Mexico. From 1990 to 2016, manufacturing and related capabilities left North America largely because contemporary technology did not enable Americans to compete with workers in China, Malaysia or Vietnam.

But as foreign wages have risen and natural resources have become more crucial, the equation has shifted. Meanwhile, rising concerns about supply chain vulnerabilities have forced countries around the world to reconsider their options. Now, the productivity-enhancing potential of AI and robotics, coupled with North America’s natural resources, consumer markets, business climate, national security and the trifecta of intellectual, human and financial capital, are making it the perfect place for producing most things over the next two decades.

The result will be an enormous wave of new capital spending in North America, with similar surges in South Korea, Japan, Taiwan and Germany. Sixth, by 2030, the convergence of AI and sensors will finally make robots ubiquitous. The shortage of labor and our experiences with web-based bots, are making us all more willing to give robots a try. These evolving consumer expectations and the improving cost-benefit equation will pave the way for widespread adoption of both service and manufacturing robots. This will accelerate from tentative experiments in 2023 to explosive growth in 2026.

Today, fast food chains are on the leading-edge of service robotics, but logistics services aren’t that far behind as they automate "pick-and-pack" functions and test autonomous delivery drones. Until recently, using AI to integrate haptic and vision systems was a barrier to maximizing robot use in services, as well as on the factory floor, but those constraints are quickly disappearing.

However, some barriers will remain for the foreseeable future, as we’ve seen with the slow advance of autonomous cars and trucks. But this is not all bad. Our goal should not be to automate everything; it should be to find ways to enhance the productivity and satisfaction of everyone who wants to work.
