Neural Networks and Artificial Intelligence
Researchers have developed a chip that increases the speed of neural-network computations by three to seven times and reduces power consumption by 93 to 96 percent.
Technology Briefing

Transcript


Most recent advances in artificial-intelligence systems have come courtesy of neural networks. These are densely interconnected meshes of simple information processors that learn to perform tasks by analyzing huge sets of training data.

Until now, neural nets have been large, and their computations have been energy intensive, so they're not very practical for handheld devices. Most smartphone apps that rely on neural nets simply upload data to internet servers, which process it and send the results back to the phone.

But, according to new research presented at the International Solid-State Circuits Conference, MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times, while reducing power consumption 93 to 96 percent. That could make it practical to run neural networks locally on smartphones or even to embed them in household appliances.

Neural networks are typically arranged into layers. A single processing node in one layer of the network will generally receive data from several nodes in the layer below and pass data to several nodes in the layer above. Each connection between nodes has its own "weight," which indicates how large a role the output of one node will play in the computation performed by the next.

Training the network is a matter of setting those weights.
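To make the weighted-sum idea concrete, here is a minimal sketch (a NumPy illustration with made-up sizes and random data, not the researchers' code): each node computes a dot product of its weights with the activations arriving from the layer below, and training amounts to adjusting the entries of the weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

inputs = rng.random(8)                 # outputs of 8 nodes in the layer below
weights = rng.standard_normal((4, 8))  # one row of weights per node in this layer

# Each of the 4 nodes in this layer takes a dot product of its weight row
# with the incoming activations; training is the process of tuning `weights`.
outputs = weights @ inputs
print(outputs)                         # 4 values, one per node, passed to the layer above
```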

The MIT researchers' new chip improves efficiency by replicating the brain more faithfully than prior designs. In the chip, a node's input values are converted into electrical voltages and then multiplied by the appropriate weights. Only the combined voltages are converted back into a digital representation and stored for further processing.

The chip can thus calculate dot products for multiple nodes (6 at a time, in the prototype) in a single step, instead of shuttling between a processor and memory for every computation.
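As a rough digital analogy (a hypothetical NumPy sketch, not the chip's actual analog circuitry), the difference is between accumulating one multiply at a time, which is what forces a conventional processor to shuttle data to and from memory, and letting the dot products for several nodes fall out of one combined operation, much as the chip sums voltages inside the memory array itself.

```python
import numpy as np

rng = np.random.default_rng(1)
inputs = rng.random(16)                 # node input values (analogous to the voltages)
weights = rng.standard_normal((6, 16))  # weights for 6 nodes, as in the prototype

# Conventional style: one multiply-accumulate at a time, fetching each
# weight and input separately (the memory-to-processor shuttling).
outputs_loop = np.zeros(6)
for node in range(6):
    for j in range(16):
        outputs_loop[node] += weights[node, j] * inputs[j]

# In-memory style: the dot products for all 6 nodes emerge from a single
# combined operation (the chip does this by summing analog voltages).
outputs_batched = weights @ inputs

assert np.allclose(outputs_loop, outputs_batched)
```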

One of the keys to the system is that all the weights are either 1 or -1. That means they can be implemented within the memory itself as simple switches that either close a circuit or leave it open. Recent theoretical work suggests that neural nets trained with only two weight values should lose little accuracy, somewhere between 1 and 2 percent.
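With weights restricted to 1 or -1, the multiply-accumulate step degenerates into simply adding or subtracting each input, which is why a single switch per weight suffices. A small illustrative sketch (assuming NumPy and random data, purely for exposition):

```python
import numpy as np

rng = np.random.default_rng(2)
inputs = rng.random(16)

# Binary weights: each is +1 or -1, so each can be stored as one switch setting.
binary_weights = rng.choice([-1.0, 1.0], size=16)

# With binary weights the dot product needs no multiplications at all:
# each input is added or subtracted according to the sign of its weight.
acc = 0.0
for w, x in zip(binary_weights, inputs):
    acc += x if w > 0 else -x

assert np.isclose(acc, binary_weights @ inputs)
```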

In experiments, the MIT researchers ran the full implementation of a neural network on a conventional computer and the binary-weight equivalent on their chip. The chip's results were generally within 2 to 3 percent of the conventional network's.

