Neuromorphic Chip Runs Computations Directly in Memory
The NeuRRAM neuromorphic chip brings AI closer to running on edge devices, performing tasks without relying on a network connection to a central server.
Technology Briefing

Transcript


Artificial intelligence applications abound in every corner of the world and every facet of our lives. They range from smart sensors in factories to rovers for space exploration to VR headsets, smart watches and smart earbuds. An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of AI applications — all with a fraction of the energy consumed by general-purpose computing platforms used for AI.

This so-called NeuRRAM neuromorphic chip brings AI closer to running on edge devices, where it can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. The NeuRRAM chip, described recently in Nature, delivers results just as accurate as those of conventional digital chips while being twice as energy efficient as state-of-the-art “compute-in-memory” chips. In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures.

As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition. Currently, AI applications on edge devices typically involve moving data from the device to the cloud, where the AI processes and analyzes it; the results are then sent back to the device. That is because most edge devices are battery-powered and can dedicate only a limited amount of power to computing.

By reducing the power consumption needed for AI inference at the edge, the NeuRRAM chip could lead to more robust, smarter, and more accessible edge devices, as well as smarter manufacturing. It could also improve data privacy, since transferring data from devices to the cloud comes with increased security risks. On conventional AI chips, moving data between memory and computing units is one of the major bottlenecks.

Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago. What is new with NeuRRAM is that its extreme efficiency now goes together with great flexibility across diverse AI applications, with almost no loss in accuracy compared with standard digital general-purpose computing platforms. The chip gives engineers a platform for addressing these efficiency and flexibility problems across the “technology stack,” from devices and circuits to algorithms.
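
To make the compute-in-memory idea concrete, here is a minimal Python sketch of how a resistive (RRAM-style) crossbar performs a matrix-vector multiplication in place. This is an illustrative toy model, not the NeuRRAM design: the dimensions, conductance range, noise level, and the differential-pair weight mapping are all assumptions chosen for clarity. Weights are stored as cell conductances, inputs are applied as row voltages, and by Ohm's and Kirchhoff's laws each column current is the dot product of the input with that column's weights, so no data moves between a separate memory and a compute unit.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not NeuRRAM specifications).
N_ROWS, N_COLS = 64, 16       # crossbar size: inputs x outputs
G_MIN, G_MAX = 1e-6, 1e-4     # programmable conductance range, in siemens
READ_NOISE = 0.01             # relative std. dev. of analog read noise
G_SPAN = G_MAX - G_MIN

def program_weights(w):
    """Map signed weights in [-1, 1] onto two non-negative conductances.

    A common differential scheme: each weight is represented as
    g_plus - g_minus, since a physical conductance cannot be negative.
    """
    w = np.clip(w, -1.0, 1.0)
    g_plus = G_MIN + G_SPAN * (1.0 + w) / 2.0
    g_minus = G_MIN + G_SPAN * (1.0 - w) / 2.0
    return g_plus, g_minus

def crossbar_matvec(v, g_plus, g_minus):
    """Compute y = v @ w entirely 'in memory'.

    Ohm's law gives a current v[r] * g[r, c] at every cell; Kirchhoff's
    current law sums those currents along each column, so the column
    currents physically ARE the dot products.
    """
    i_diff = v @ g_plus - v @ g_minus          # differential column currents
    # Analog reads are imperfect; model that as multiplicative noise.
    i_diff *= 1.0 + READ_NOISE * rng.standard_normal(i_diff.shape)
    return i_diff / G_SPAN                     # rescale back to weight units

w = rng.uniform(-1.0, 1.0, size=(N_ROWS, N_COLS))   # a layer's weight matrix
v = rng.uniform(0.0, 1.0, size=N_ROWS)              # input activations (volts)

g_p, g_m = program_weights(w)
y_analog = crossbar_matvec(v, g_p, g_m)
y_digital = v @ w                                   # conventional reference

err = np.max(np.abs(y_analog - y_digital)) / np.max(np.abs(y_digital))
print(f"max relative error vs. digital result: {err:.3%}")

The point of the sketch is the data-movement story: the weight matrix never leaves the array, and only the small input and output vectors cross the chip, which is where the energy savings over a conventional load-compute-store loop come from.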
