A new IBM technology accelerates AI training up to 4 times

The computational efficiency of artificial intelligence is something of a double-edged sword. On the one hand, a network needs to train quickly; on the other, the more “accelerated” it is, the more energy it consumes, to the point where it can become simply unprofitable. IBM may have found a way out: the company has demonstrated new methods of training AI that let it learn several times faster at the same level of resources and energy.

To achieve these results, IBM had to move away from 32- and 16-bit computation, developing 8-bit techniques along with a new chip designed to run them.

“The next generation of AI applications will require faster response times, larger workloads, and the ability to work with multiple data streams. To unleash the full potential of AI, we are completely redesigning our hardware. Scaling AI with new hardware solutions is part of IBM Research’s effort to move from narrow AI, often used to solve specific, well-defined tasks, to broad AI that reaches across disciplines,” said Jeffrey Welser, Vice President and lab director at IBM Research.

IBM presented its developments at NeurIPS 2018 in Montreal. The company’s engineers reported on two of them. The first, “Training Deep Neural Networks with 8-bit Floating Point Numbers,” describes how they lowered the arithmetic precision of training from 32 bits through 16 bits down to an 8-bit model while preserving accuracy. The researchers claim the technique speeds up the training of deep neural networks by 2-4 times compared to 16-bit systems. The second, “8-bit Precision In-Memory Multiplication with Projected Phase-Change Memory,” reveals a method that compensates for the low intrinsic precision of analog AI circuits, allowing them to consume 33 percent less power than a comparable digital AI system.
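To make the first idea concrete, here is a minimal sketch in plain NumPy of what reduced-precision training looks like: tensors are rounded to the values an 8-bit float could represent on the forward pass, while the master weights and gradients stay in full precision. The 1-5-2 bit layout and the toy regression are illustrative assumptions on my part; IBM’s actual recipe (including its accumulation and rounding tricks) is more involved.

```python
import numpy as np

def quantize_fp8(x, exp_bits=5, man_bits=2):
    """Round float32 values to the nearest value representable in a small
    floating-point format (default 1 sign, 5 exponent, 2 mantissa bits).
    A simulation: values stay in float32, but only take on the values
    an 8-bit float could hold."""
    x = np.asarray(x, dtype=np.float32)
    bias = 2 ** (exp_bits - 1) - 1                 # IEEE-style exponent bias
    min_exp = 1 - bias                             # smallest normal exponent
    max_exp = (2 ** exp_bits - 2) - bias           # largest normal exponent
    sign = np.sign(x)
    mag = np.abs(x)
    exp = np.floor(np.log2(np.maximum(mag, 2.0 ** min_exp)))
    exp = np.clip(exp, min_exp, max_exp)
    scale = 2.0 ** (exp - man_bits)                # spacing of the mantissa grid
    q = np.round(mag / scale) * scale              # round to the nearest grid point
    max_val = (2 - 2.0 ** -man_bits) * 2.0 ** max_exp
    q = np.minimum(q, max_val)                     # saturate instead of overflowing
    return sign * q

# Toy example: fit y = 3x with an 8-bit forward pass.
rng = np.random.default_rng(0)
X = rng.standard_normal(256).astype(np.float32)
y = 3.0 * X
w, lr = np.float32(0.0), 0.1
for step in range(100):
    wq = quantize_fp8(w)                 # weights as the 8-bit hardware sees them
    pred = quantize_fp8(wq * X)          # activations quantized too
    grad = np.mean(2 * (pred - y) * X)   # gradient kept in full precision
    w -= lr * grad                       # master weight stays full precision
print(f"learned w = {float(w):.3f}")     # close to 3.0 despite the 8-bit forward pass
```

The key design point this illustrates is that the cheap, low-precision arithmetic is confined to the forward pass, while a full-precision copy of the weights absorbs the small gradient updates that 8 bits could not resolve.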

“The improved precision achieved by our research team indicates that in-memory computing can deliver high-performance deep learning in low-power environments. Like our digital accelerators, our analog chips are designed to scale AI training and inference across visual, speech, and text datasets, and to extend to broad AI.”
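To see why analog precision is hard in the first place: in in-memory computing, each multiply is carried out by a physical memory cell whose stored value fluctuates from read to read. The toy model below uses a made-up Gaussian read-noise model, not the paper’s device physics, and shows a noisy “crossbar” matrix-vector product; averaging repeated reads is one naive way to recover precision, whereas IBM’s projected phase-change memory attacks the noise at the device level instead.

```python
import numpy as np

rng = np.random.default_rng(1)

def analog_matvec(W, x, noise=0.05):
    """Model a crossbar of memory cells computing W @ x in place:
    each stored weight is perturbed on every read (assumed noise model)."""
    W_read = W * (1 + noise * rng.standard_normal(W.shape))
    return W_read @ x

W = rng.standard_normal((4, 8)).astype(np.float32)
x = rng.standard_normal(8).astype(np.float32)

exact = W @ x
single = analog_matvec(W, x)                                   # one noisy read
averaged = np.mean([analog_matvec(W, x) for _ in range(64)], axis=0)

print("error, single read:  ", np.max(np.abs(single - exact)))
print("error, 64 reads avg: ", np.max(np.abs(averaged - exact)))
```

Averaging buys accuracy at the cost of extra reads and energy; the appeal of a more stable device is getting the accuracy without paying that cost.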

You can discuss this and other news in our Telegram chat.
