MIT chip cuts neural network power consumption by 95%

Neural networks are powerful things, but very power-hungry. Engineers at the Massachusetts Institute of Technology (MIT) have developed a new chip that cuts a neural network's energy consumption by 95%, which in theory could let neural networks run even on battery-powered mobile devices. Smartphones these days are becoming smarter and smarter, offering more and more services driven by artificial intelligence, such as virtual assistants and real-time translation. But the neural networks behind these services usually process the data in the cloud, with smartphones merely shuttling the data back and forth.

This is not ideal: it requires a lot of bandwidth, and it means sensitive data is transmitted and stored beyond the user's control. And the enormous amounts of energy that neural networks consume when running on GPUs simply cannot be supplied by a device running on a small battery.

The MIT engineers' chip cuts this power consumption by 95%, largely by drastically reducing the need to move data back and forth between a chip's memory and its processors.

Neural networks consist of thousands of interconnected artificial neurons arranged in layers. Each neuron receives inputs from several neurons in the layer below, and if the combined input exceeds a certain threshold, it passes its result on to several neurons in the layer above. The strength of each connection between neurons is set by a weight that is learned during training.

This means that for each neuron, the chip must fetch the input for a particular connection and that connection's weight from memory, multiply them, store the result, and then repeat the process for every input. A lot of data moves back and forth, and a lot of energy is wasted.
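
To make that bookkeeping concrete, here is a rough Python sketch of the work a conventional processor does for a single neuron; it is purely illustrative of the multiply-accumulate loop described above, not a model of the MIT chip itself.

```python
# Illustrative only: the per-connection multiply-accumulate work a
# conventional processor performs for one neuron. Every iteration stands
# in for a round trip to memory (fetch input, fetch weight, store result).

def neuron_output(inputs, weights, threshold=0.0):
    """Weighted sum of inputs followed by a simple threshold activation."""
    total = 0.0
    for x, w in zip(inputs, weights):  # one memory round trip per connection
        total += x * w                 # multiply-accumulate
    return 1.0 if total > threshold else 0.0

# A neuron with three incoming connections (values chosen arbitrarily).
print(neuron_output([0.5, 0.2, 0.9], [0.8, -0.4, 0.3]))  # -> 1.0 (sum ~0.59)
```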

The new MIT chip eliminates this by computing all of a neuron's inputs in parallel, inside memory, using analog circuits. This dramatically reduces the amount of data that needs to be moved around and leads to significant energy savings.
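
In software terms, the closest analogy is replacing the per-connection loop above with a single vectorized dot product. The sketch below is only that analogy, using the same toy numbers; it says nothing about the actual analog circuitry.

```python
import numpy as np

# Analogy only: the in-memory analog array evaluates a neuron's whole
# weighted sum in effectively one step, much like a single vectorized
# dot product instead of a per-connection loop.
inputs = np.array([0.5, 0.2, 0.9])
weights = np.array([0.8, -0.4, 0.3])

total = inputs @ weights               # all multiply-accumulates at once
output = 1.0 if total > 0.0 else 0.0
print(total, output)                   # ~0.59 1.0
```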

This approach requires the connection weights to be binary rather than continuous values, but previous theoretical work had shown that this does not greatly affect accuracy, and the researchers found the chip's results were within 2-3% of those of a conventional neural network running on a standard computer.
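
Binary weights are usually obtained by keeping only the sign of each trained weight. The following sketch shows that idea in general terms; the exact binarization scheme used in the MIT work may differ.

```python
import numpy as np

# A generic sign-based binarization of weights: only +1 / -1 survive.
# This illustrates binary-weight networks in general, not MIT's exact scheme.
real_weights = np.array([0.8, -0.4, 0.3])
binary_weights = np.where(real_weights >= 0, 1.0, -1.0)

inputs = np.array([0.5, 0.2, 0.9])
print(inputs @ real_weights)    # ~0.59, full-precision weighted sum
print(inputs @ binary_weights)  # 1.2, binary-weight approximation
```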

This is not the first time researchers have created chips that process data in memory to reduce power consumption, but it is the first time the approach has been used to run a powerful neural network of the kind known for image processing.

“The results show impressive specifications for the energy-efficient implementation of convolution operations within the memory array,” says Dario Gil, vice president of artificial intelligence at IBM.

“This definitely opens up the possibility of using more complex convolutional neural networks to classify images and video on the Internet of Things in the future.”

And this is not of interest only to R&D groups. The push to put AI on devices like smartphones, home appliances, and all sorts of IoT hardware is driving much of Silicon Valley toward low-power AI chips.

Apple has already integrated its Neural Engine into the iPhone X to power, for example, its face-recognition technology, and Amazon is rumored to be developing its own AI chips for the next generation of Echo digital assistants.

The big chipmakers are also increasingly leaning on machine learning, which pushes them to make their devices ever more energy-efficient. Earlier this year, ARM introduced two new chips: the Arm Machine Learning processor, which handles general AI tasks from translation to face recognition, and the Arm Object Detection processor, which detects, for example, faces in images.

Qualcomm's newest mobile chip, the Snapdragon 845, has a graphics processor largely geared toward AI. The company has also introduced the Snapdragon 820E, which is aimed at drones, robots, and industrial devices.

Looking further ahead, IBM and Intel are developing neuromorphic chips whose architecture is inspired by the human brain and its remarkable energy efficiency. This could, in theory, allow TrueNorth (IBM) and Loihi (Intel) to perform powerful machine learning using only a small fraction of the energy of conventional chips, but for now these projects remain purely experimental.

Making chips that can bring neural networks to life while sparing the battery will be very hard. But at the current pace of innovation, that “very hard” looks quite feasible.

