Training Neural Networks to Perform Tasks Using Less Energy
An artificial neuron device developed by UC San Diego researchers may allow neural network computations to use significantly less energy and chip area than CMOS-based hardware.
Technology Briefing

Transcript


Training neural networks to perform tasks, such as recognizing images or navigating self-driving cars, currently requires lots of computing hardware and electricity. But thanks to a new artificial neuron device developed by researchers at the University of California San Diego, such neural network computations may eventually require between 100 and 1,000 times less energy and chip area than existing CMOS-based hardware.

Neural networks are a series of connected layers of artificial neurons, where the output of one layer provides the input to the next. That input is generated by applying a mathematical calculation called a non-linear activation function, a critical step in running a neural network. But applying this function requires a lot of computing power and circuitry because it involves transferring data back and forth between two separate units: the memory and an external processor.
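The layer-by-layer structure described above can be sketched in a few lines of NumPy. The weights, bias, and input values here are arbitrary illustrative numbers, not from the study; the point is that each layer is a weighted sum followed by a non-linear activation, and it is that activation step the new device computes in hardware:

```python
import numpy as np

def layer_forward(x, weights, bias, activation):
    """One neural-network layer: a weighted sum of the inputs
    followed by a non-linear activation function. In conventional
    hardware this step shuttles data between memory and a processor."""
    return activation(weights @ x + bias)

# Rectified linear unit: clamps negative values to zero.
relu = lambda z: np.maximum(z, 0.0)

# Illustrative input and layer parameters (arbitrary example values).
x = np.array([1.0, -2.0, 0.5])
w1 = np.array([[0.2, -0.1, 0.4],
               [0.7,  0.3, -0.5]])
b1 = np.array([0.1, -0.2])

# The output of this layer would feed the next layer's input.
hidden = layer_forward(x, w1, b1, relu)
```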

As recently reported in Nature Nanotechnology, the UC San Diego researchers have developed a nanometer-sized device that can efficiently carry out the activation function. Today, neural network computations in hardware get increasingly inefficient as the neural network models get larger and more complex.

To address this problem, the researchers developed a single nanoscale artificial neuron device that implements these computations in hardware in a way that is both very area-efficient and energy efficient. The new study was performed in collaboration with a DOE Energy Frontier Research Center, which focuses on developing hardware implementations of energy efficient artificial neural networks.

The device implements one of the most commonly used activation functions in neural network training, called a rectified linear unit. Implementing this function in hardware requires a device that can undergo a gradual change in resistance. And that's exactly what the UC San Diego researchers engineered their device to do: it can gradually switch from an insulating to a conducting state, and it does so with the help of a little bit of heat.
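For reference, the rectified linear unit itself is simple to state in software: it passes positive inputs through unchanged and clamps negative inputs to zero. This minimal sketch is purely illustrative and is the function the hardware device approximates with its gradual resistance change:

```python
def relu(x):
    """Rectified linear unit: returns x for positive inputs, 0 otherwise."""
    return x if x > 0 else 0.0
```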

This switching relies on what's called a Mott transition, which takes place in a nanometer-thick layer of vanadium dioxide. Above this layer sits a nanowire heater made of titanium and gold. When current flows through the nanowire, the vanadium dioxide layer slowly heats up, causing a slow, controlled switch from insulating to conducting. This device architecture is very interesting and innovative.

Typically, materials undergoing a Mott transition switch abruptly from insulating to conducting because current flows directly through the material. In this case, however, current flowing through a nanowire on top of the material heats it, inducing a very gradual resistance change. To build the hardware, the researchers first fabricated an array of these so-called neuron devices, along with a synaptic device array.

Then they integrated the two arrays on a custom printed circuit board and connected them together to create a hardware version of a neural network. To demonstrate that the integrated hardware system can perform activation functions that are essential for many types of deep neural networks, the researchers used the network to process an image.

It used a type of image processing called "edge detection," which identifies the outlines, or edges, of objects in an image. The researchers say the technology could be scaled up further to handle more complex tasks, such as facial and object recognition in self-driving cars. Right now, this artificial neuron device is a proof of concept.
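As a point of reference, edge detection in software can be as simple as measuring intensity differences between neighboring pixels. This toy sketch (not the researchers' implementation) flags the vertical edge in a tiny synthetic image whose left half is dark and right half is bright:

```python
import numpy as np

def edge_detect(img):
    """Toy edge detection: sum the absolute intensity differences
    between horizontally and vertically adjacent pixels."""
    gx = np.abs(np.diff(img, axis=1))  # horizontal intensity changes
    gy = np.abs(np.diff(img, axis=0))  # vertical intensity changes
    edges = np.zeros_like(img)
    edges[:, :-1] += gx
    edges[:-1, :] += gy
    return edges

# A 4x4 image: dark left half, bright right half -> one vertical edge.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
edges = edge_detect(img)  # nonzero only along the dark/bright boundary
```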

It's a tiny system in which the researchers stacked only one synapse layer with one activation layer. However, by stacking more of these together, engineers could make more complex systems for different applications.

