Neural Networks and Artificial Intelligence
Researchers have developed a chip that increases the speed of neural-network computations by three to seven times and reduces power consumption by 93 to 96 percent.
Technology Briefing
Technology Briefing is brought to you in association with Audio-Tech, publisher of critically acclaimed programs including Trends Magazine.

Subscribe to their monthly reports and learn about big ideas, new products, new management techniques, breakthrough concepts, and trailblazing technologies.
Transcript
Most recent advances in artificial-intelligence systems have come courtesy of neural networks. These are densely interconnected meshes of simple information processors that learn to perform tasks by analyzing huge sets of training data.

Until now, neural nets have been large, and their computations have been energy intensive, so they're not very practical for handheld devices. Most smartphone apps that rely on neural nets simply upload data to internet servers, which process it and send the results back to the phone.

But, according to new research presented at the International Solid-State Circuits Conference, MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times, while reducing power consumption 93 to 96 percent. That could make it practical to run neural networks locally on smartphones or even to embed them in household appliances.

Neural networks are typically arranged into layers. A single processing node in one layer of the network will generally receive data from several nodes in the layer below and pass data to several nodes in the layer above. Each connection between nodes has its own "weight," which indicates how large a role the output of one node will play in the computation performed by the next.

Training the network is a matter of setting those weights.
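
As a concrete illustration (not from the MIT work itself), here is a minimal Python sketch of one layer feeding the next; the array names and numbers are invented for this example:

    import numpy as np

    # Outputs of the nodes in the layer below, one value per node.
    inputs = np.array([0.5, -1.2, 0.8])

    # One weight per connection: row i holds the weights on the
    # connections feeding node i in the layer above. Training is the
    # process of choosing these numbers.
    weights = np.array([[ 0.3, -0.7,  0.1],
                        [-0.2,  0.9,  0.4]])

    # Each node above computes a weighted sum (a dot product) of the
    # outputs of the nodes below it.
    outputs = weights @ inputs
    print(outputs)  # one value per node in the layer above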

The MIT researchers' new chip improves efficiency by replicating the brain more faithfully than prior designs. In the chip, a node's input values are converted into electrical voltages and then multiplied by the appropriate weights. Only the combined voltages are converted back into a digital representation and stored for further processing.

The chip can thus calculate dot products for multiple nodes (16 at a time, in the prototype) in a single step, instead of shuttling data between a processor and memory for every computation.
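
A rough software caricature of that in-memory step, assuming the chip's arithmetic reduces to "combine scaled voltages, then digitize once per node" (the function name, step size, and values below are illustrative, not the chip's actual parameters):

    import numpy as np

    def in_memory_node(inputs, node_weights, adc_step=0.05):
        # Inputs act as voltages; each is scaled by its connection
        # weight, and the scaled voltages combine on a shared line,
        # so the summation costs no trips to memory.
        combined = float(np.dot(inputs, node_weights))
        # Only the combined value is converted back to digital form:
        # one coarse conversion per node rather than per operand.
        return round(combined / adc_step) * adc_step

    inputs = np.array([0.5, -1.2, 0.8])
    weight_rows = np.array([[ 0.3, -0.7,  0.1],
                            [-0.2,  0.9,  0.4]])
    # The prototype evaluates several such dot products in one step;
    # in software that is simply one call per node.
    print([in_memory_node(inputs, row) for row in weight_rows])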

One of the keys to the system is that all the weights are either 1 or -1. That means that they can be implemented within the memory itself as simple switches that either close a circuit or leave it open. Recent theoretical work suggests that neural nets trained with only two weight values should lose little accuracy, somewhere between 1 and 2 percent.
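
In software terms, binary weights mean the per-connection multiply disappears: each input is either passed through or sign-flipped, which is exactly what a closed or open switch can implement. A sketch (again with invented names and data):

    import numpy as np

    rng = np.random.default_rng(0)
    real_weights = rng.normal(size=(2, 3))  # ordinary trained weights

    # Keep only each weight's sign: every weight becomes +1 or -1,
    # so it can live in memory as a simple switch setting.
    binary_weights = np.sign(real_weights)

    inputs = np.array([0.5, -1.2, 0.8])
    # Dot products now involve no true multiplications, only
    # additions and subtractions of the inputs.
    print(binary_weights @ inputs)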

In experiments, the MIT researchers ran the full implementation of a neural network on a conventional computer and the binary-weight equivalent on their chip. Their chip's results were generally within 2 to 3 percent of the conventional network's.
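
The comparison below is only a toy stand-in for that experiment: it binarizes random weights (rescaled by each row's mean magnitude, a common binary-weight trick) and measures how far the outputs drift from full precision. It measures raw output deviation, not the classification accuracy the researchers reported, so its numbers will not match theirs.

    import numpy as np

    rng = np.random.default_rng(1)
    weights = rng.normal(size=(10, 64))   # full-precision layer
    inputs = rng.normal(size=64)

    full = weights @ inputs
    # Replace each weight with its sign, rescaled by the row's mean
    # magnitude so the binary outputs stay on a comparable scale.
    scale = np.abs(weights).mean(axis=1, keepdims=True)
    binary = (np.sign(weights) * scale) @ inputs

    deviation = np.abs(full - binary) / (np.abs(full) + 1e-9)
    print(f"median relative deviation: {np.median(deviation):.1%}")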

