The Future of AI Is Neuromorphic
Neuromorphic computing leverages the brain's strengths by using an architecture in which chips act like neurons. The result: a quantum leap in performance.
Technology Briefing

Transcript


As astonishing as computers are, they are far inferior to the human brain. Unlike devices powered by silicon chips, our brains are capable of learning, understanding images, and recognizing speech, all while using very little energy.

But a new approach called neuromorphic computing seeks to leverage the brain's strengths by using an architecture in which chips act like neurons. The result will be a quantum leap in performance that will revolutionize countless applications.

Companies such as Intel, IBM, and Qualcomm are now involved in a high-stakes race to develop the first neuromorphic computer.

Thanks to Moore's Law, formulated by Intel co-founder Gordon Moore in 1965, devices using the standard CMOS architecture have become lighter, faster, cheaper, and more powerful roughly every eighteen to twenty-four months for the past five decades.

Today, silicon chips have shrunk to fourteen nanometers, and by the end of the year Intel is expected to release the first ten-nanometer chip. The company is already spending $7 billion to revamp one of its factories in Arizona to make seven-nanometer chips.

But there's a limit to how small silicon chips can go. According to an article in Wired, the International Technology Roadmap for Semiconductors, which is sponsored by the chip industries in several countries, recently concluded that by 2021 "transistors could get to a point where they could shrink no further." While it will still be technically possible to make smaller chips, they will reach "the economic minimum" at which the costs will be too high to justify.

It's not just that neuromorphic computing provides a way to keep devices on the same price/performance trajectory they've been on even after Moore's Law expires. The new brain-inspired architecture will enable machines to do things that silicon chips can't. Traditional chips are good at making precise calculations on any problem that can be expressed in numbers; a neuromorphic system, by contrast, can identify patterns in visual or auditory data and adjust its predictions based on what it learns.
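That ability to adjust predictions based on experience is, at bottom, online learning. The toy Python fragment below illustrates the idea with a single artificial neuron that nudges its weights whenever it gets an example wrong; the data and learning rate are arbitrary illustrative choices, not anything specific to neuromorphic hardware.

# Toy illustration of "adjusting predictions based on what it learns":
# a single artificial neuron nudges its weights whenever it misclassifies
# an example. The data and learning rate are arbitrary illustrative values.

weights = [0.0, 0.0]
bias = 0.0
LEARNING_RATE = 0.1

def predict(features):
    total = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if total > 0 else 0

# Each example is (features, correct label); here the rule to learn is
# simply "label is 1 when the second feature is larger than the first".
examples = [([0.2, 0.9], 1), ([0.8, 0.1], 0), ([0.4, 0.7], 1), ([0.9, 0.3], 0)]

for _ in range(10):                        # several passes over the data
    for features, label in examples:
        error = label - predict(features)  # -1, 0, or +1
        if error:                          # wrong answer: adjust the weights
            for i, x in enumerate(features):
                weights[i] += LEARNING_RATE * error * x
            bias += LEARNING_RATE * error

print(predict([0.1, 0.8]))                 # expected: 1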

A research paper by Intel scientist Charles Augustine predicts that neuromorphic chips will be able to handle artificial intelligence tasks such as cognitive computing, adaptive artificial intelligence, sensing data, and associative memory. They will also use 15 to 300 times less energy than the best CMOS chips.

That's significant because today's AI services, such as Siri and Alexa, depend on cloud-based computing to perform such feats as responding to a spoken question or command. Smartphones run on chips that simply don't have the computing power to run the algorithms needed for AI, and even if they did, they would quickly drain the phone's battery.

The limits of Moore's Law threaten to derail efforts to build devices that can instantly recognize images, objects, and sounds and then use that knowledge in such applications as facial recognition, robot navigation, and autonomous vehicles.

IBM's neuromorphic chip, called TrueNorth, has already proven to be extremely adept at such AI tasks as image recognition and speech perception. A blog post by IBM Research revealed that TrueNorth classifies images at a rate of 1,200 to 2,600 frames per second, while consuming a mere 25-275 milliwatts of power. At an average of 6,000 frames per second per watt, it is far superior to the 160 frames per second per watt achieved by the best graphics processing unit on the market, the Tesla by NVIDIA.
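To put those figures in perspective, the efficiency metric is simply throughput divided by power. The short Python calculation below illustrates the arithmetic; the specific pairing of frame rate and power draw is an assumption chosen to match the quoted average, not a figure taken from IBM's post.

# Efficiency in frames per second per watt is just throughput divided by power.
# IBM's post quotes ranges of 1,200-2,600 fps and 25-275 mW and a 6,000 fps/W
# average; the specific pairing below is an illustrative assumption.

def fps_per_watt(frames_per_second, milliwatts):
    return frames_per_second / (milliwatts / 1000.0)

truenorth = fps_per_watt(1_200, 200)   # one plausible operating point: 6,000 fps/W
gpu = 160.0                            # quoted figure for the comparison GPU

print(f"TrueNorth ~{truenorth:.0f} fps/W vs. GPU ~{gpu:.0f} fps/W, "
      f"about {truenorth / gpu:.0f} times the work per watt")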

Neuromorphic chips use less energy and deliver better performance than conventional CPU chips because they are designed differently. Instead of passing information from one transistor to the next in a line of billions of transistors, a neuromorphic chip like TrueNorth consists of a million "neurons" that can pass information in any direction to any other neuron via 256 million connections, called "synapses."
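The structural difference can be sketched in a few lines of Python. The fragment below is purely illustrative, contrasting a fixed pipeline of stages with a graph of weighted connections in which any "neuron" may feed any other; it does not model any particular chip.

# Purely illustrative contrast between the two layouts described above;
# it does not model any particular chip.

from collections import defaultdict

# Conventional layout: each stage can only hand its result to the next stage.
pipeline = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

def run_pipeline(value):
    for stage in pipeline:
        value = stage(value)
    return value

# Neuromorphic-style layout: a graph of weighted connections ("synapses")
# in which any "neuron" may send information to any other, in any direction.
synapses = defaultdict(dict)            # synapses[source][target] = weight

def connect(source, target, weight):
    synapses[source][target] = weight   # no fixed ordering is imposed

connect(0, 1, 0.8)
connect(1, 2, 0.5)
connect(2, 0, 0.3)                      # a feedback loop the pipeline cannot express

print(run_pipeline(5))                  # ((5 + 1) * 2) - 3 = 9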

According to a different article in Wired, "Traditional CPUs process instructions based on 'clocked time': information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using 'spikes': bursts of electric current that can be sent whenever needed. Just like our own brains, the chip's neurons communicate by processing incoming flows of electricity, each neuron able to determine from the incoming spike whether to send current out to the next neuron."
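That event-driven behavior can be caricatured in code: each neuron accumulates incoming current and emits a spike only when its potential crosses a threshold, rather than producing output on every clock tick. The Python sketch below uses arbitrary illustrative thresholds, weights, and leak values.

# Event-driven caricature of the spiking behavior described in the quote:
# each neuron integrates incoming current and fires only when its membrane
# potential crosses a threshold; nothing is tied to a fixed clock tick.
# The threshold, leak, and weights are arbitrary illustrative values.

from collections import deque

THRESHOLD = 1.0
LEAK = 0.9                                    # potential decays between events

potentials = {0: 0.0, 1: 0.0, 2: 0.0}         # membrane potential per neuron
synapses = {0: [(1, 0.6), (2, 0.7)],          # (target neuron, weight) pairs
            1: [(2, 0.5)],
            2: []}

def inject(events):
    """Process (neuron, current) events, propagating spikes as new events."""
    queue = deque(events)
    while queue:
        neuron, current = queue.popleft()
        potentials[neuron] = potentials[neuron] * LEAK + current
        if potentials[neuron] >= THRESHOLD:   # fire, reset, and pass spikes on
            potentials[neuron] = 0.0
            for target, weight in synapses[neuron]:
                queue.append((target, weight))
            print(f"neuron {neuron} spiked")

inject([(0, 0.8), (0, 0.5)])   # the second input pushes neuron 0 over threshold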

DARPA and the Department of Energy (DOE) are funding much of the research into neuromorphic computing by pouring hundreds of millions of dollars into companies like IBM, Hewlett Packard, AMD, and Intel. Supercomputers will soon evolve from the petascale level, meaning they are measured in petaflops, or quadrillion calculations per second, to the exascale level, in which they will be measured in exaflops, or quintillion calculations per second.

An exascale supercomputer would have processing power roughly comparable to that of the human brain, and would be at least fifty times faster than the most powerful supercomputer in the U.S. (Oak Ridge National Laboratory's Titan). The DOE has set 2021 as the target date for the first exascale system; it has also mandated that one of the exascale supercomputers it is funding must use a novel architecture, and neuromorphic computing is a strong candidate to provide that new technology.
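The scale of that jump is easy to verify with a short calculation; Titan's roughly 17.6 petaflops is an approximate figure used here purely for illustration.

# "Peta" and "exa" are the SI prefixes for 10**15 and 10**18, so an exascale
# machine does 1,000 times the work of a one-petaflop machine per second.
# Titan's roughly 17.6 petaflops is an approximate figure used for illustration.

PETA = 10 ** 15        # quadrillion operations per second
EXA = 10 ** 18         # quintillion operations per second

titan_flops = 17.6 * PETA
exascale_flops = 1 * EXA

print(round(exascale_flops / titan_flops))   # about 57, i.e. "at least fifty times"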

Based on this analysis, we foresee the following developments:

First, by 2025, neuromorphic chips will be embedded in smartphones.

According to the research firm Research and Markets, the neuromorphic chip business will expand to $1.78 billion by the middle of the decade. The chips will bring true AI capabilities to handheld devices, enabling users to get real-time information without needing to access the cloud. Peter Suma, co-CEO of Applied Brain Research, envisions AI services that are constantly engaged in our lives, unlike Siri, which only responds to commands. As he explains, "Imagine a Siri that listens and sees all of your conversations and interactions. You'll be able to ask it for things like, 'Who did I have that conversation with about doing the launch for our new product in Tokyo?' or 'What was that idea for my wife's birthday gift that Melissa suggested?'" Privacy won't be an issue, because the data will all be stored locally, rather than in the cloud.

Samir Kumar, a business development director at Qualcomm's research lab, believes neuromorphic chips will enable smartphones to continually monitor what you do and where you go in order to offer help before you even ask for it. As he explains, "If you and your device can perceive the environment in the same way, your device will be better able to understand your intentions and anticipate your needs."

Second, neuromorphic processors will also help to drive the growth of the Internet of Things.

As we reported in our September 2015 issue, the IoT will produce an economic impact of up to $11.1 trillion per year by 2025, according to the McKinsey Global Institute. Neuromorphic chips will allow tens of billions of wearables, sensors, and devices to perform intelligently without an Internet connection. This will transform our lives in countless ways, including improving healthcare: As reported in the MIT Technology Review, "Medical sensors and devices could track individuals' vital signs and respond to treatments over time, learning to adjust dosages or even catch problems early."

Third, neuromorphic chips will play an important role in transportation, space exploration, defense, and manufacturing.

They will enable autonomous vehicles to navigate and to respond appropriately to objects, other vehicles, signs, and sirens. They will enable interstellar spacecraft to operate independently and with very low power consumption. Meanwhile, neuromorphic chips will be deployed in satellites for surveillance. IBM recently provided the DOE's Lawrence Livermore National Laboratory with a sixteen-chip TrueNorth array, called NS16e, which will be used for such applications as identifying cars from overhead with video from drones. According to the IBM Research blog, the laboratory is also studying how to use NS16e "to detect defects in additive manufacturing; and to supervise complex dynamical simulations, like physics problems in industrial design, to avoid failures."

Fourth, the advance of neuromorphic technology will allow engineers to design robots and prosthetic limbs that leverage artificial intelligence.

Robots will be able to respond to spoken commands, recognize their surroundings, and navigate independently, all while consuming very little energy. Meanwhile, according to nextplatform.com, Stanford's Brains in Silicon lab has developed a neuromorphic device with one million neurons called "Brainstorm." Bioengineering professor Kwabena Boahen predicts that Brainstorm will be used to enhance brain-machine interfaces: an algorithm will identify neural spikes in a paralyzed person's brain that indicate an intended action, and then direct a robotic arm to carry it out. And IBM researcher Dharmendra Modha believes that neuromorphic chips will be used to create glasses that help the blind to "see," using sensors that identify objects and then describe them to the wearer in spoken words.
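Boahen's description amounts to a decoding problem: classify a pattern of recorded neural spikes as an intended movement, then translate that into an arm command. The deliberately simplified Python sketch below illustrates the loop; the channel count, the spike-count templates, and the send_to_arm() interface are all hypothetical.

# Deliberately simplified sketch of the decode-and-act loop Boahen describes:
# classify a window of recorded neural spike counts as an intended movement,
# then issue the matching command to a robotic arm. The channel count, the
# spike-count templates, and send_to_arm() are all hypothetical.

import math

TEMPLATES = {                      # hypothetical per-channel spike-count patterns
    "reach_left":  [12, 3, 8, 1],
    "reach_right": [2, 11, 4, 9],
}

def decode(spike_counts):
    """Return the intended movement whose template best matches the observation."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda label: distance(spike_counts, TEMPLATES[label]))

def send_to_arm(command):
    print(f"robotic arm executes: {command}")   # stand-in for a real actuator API

send_to_arm(decode([11, 4, 7, 2]))              # one decoding cycle -> "reach_left"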


