Fujitsu's Latest Initiative Improves GPU Memory Efficiency, Enabling High-Speed Learning for Larger Neural Networks
By apacciooutlook | Monday, December 03, 2018
KAWASAKI, JAPAN: Fujitsu Laboratories has announced the development of technology that streamlines the use of a GPU's internal memory. The initiative addresses a key limitation: GPU memory is small compared to that of an ordinary computer, which restricts the scale of neural networks capable of high-speed learning. The new technology supports the growing neural network scale needed to improve machine learning accuracy.
To exploit a GPU's high-speed calculation ability, the data used in a series of calculations must be stored in the GPU's internal memory, while parallelization methods that greatly reduce learning speed must be avoided. This creates an issue: the scale of the neural network that can be built is limited by memory capacity. The new technology uses GPUs for high-speed machine learning to support the huge volume of calculations needed for deep learning processing.
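The memory-capacity constraint above can be illustrated with a rough back-of-the-envelope calculation (a sketch, not Fujitsu's methodology; the layer sizes and the helper function are hypothetical). During training, every layer's weights, activations, and their gradient buffers must be resident in GPU memory simultaneously, so the required memory grows with network scale:

```python
# Hypothetical estimate of training-time GPU memory: weights, activations,
# and a matching gradient buffer for each must all fit in GPU memory at once.

def training_memory_bytes(layer_param_counts, activation_counts,
                          bytes_per_value=4):
    """Rough footprint: weights + weight gradients +
    activations + activation gradients, all resident on the GPU."""
    weights = sum(layer_param_counts)
    activations = sum(activation_counts)
    # each weight and each activation needs a gradient buffer of equal size
    return (2 * weights + 2 * activations) * bytes_per_value

# Illustrative (made-up) sizes for a small convolutional network:
params = [35_000, 300_000, 4_000_000]    # parameters per layer
acts = [2_000_000, 1_000_000, 250_000]   # activation values per layer

needed = training_memory_bytes(params, acts)
print(f"{needed / 1e9:.3f} GB required")  # grows linearly with network scale
```

Once this total exceeds the GPU's internal memory, the network can no longer be trained on that GPU without slower workarounds, which is the problem the new technology targets.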
Fujitsu implemented and evaluated the technology in Caffe, the open source deep learning framework, using the image-recognition neural networks AlexNet and VGGNet. By reducing the volume of internal GPU memory used, the technology increased the scale of neural network that can be trained by up to twice that of the previous technology, expanding scale at high speed on a single GPU and enabling the development of more accurate models.
The technology analyzes the structure of every layer of the neural network and changes the order of calculations so that the memory space allocated to larger data can be reused. The calculations that generate the error in the intermediate data from the weighted data, and the calculations that generate the error in the weighted data from the intermediate data, can be executed independently; by scheduling them so that memory resources are reused, the technology reduces the volume of memory required.
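The reuse idea can be sketched as follows (a minimal illustration, not Fujitsu's implementation; the `BufferPool` class and sizes are invented for the example). Because the two error calculations are independent, they can be reordered so that a large buffer's last use comes before the next large allocation, allowing the same memory region to be recycled:

```python
# Minimal sketch of memory reuse via calculation reordering (hypothetical).

class BufferPool:
    """Hands out buffers, preferring freed ones over new allocations."""
    def __init__(self):
        self.free = []        # list of (size, buffer_id) available for reuse
        self.next_id = 0
        self.allocated = 0    # total bytes ever newly allocated

    def acquire(self, size):
        # reuse the smallest free buffer that is large enough, if any
        fits = [b for b in self.free if b[0] >= size]
        if fits:
            buf = min(fits)
            self.free.remove(buf)
            return buf[1]
        self.allocated += size
        self.next_id += 1
        return self.next_id

    def release(self, size, buf_id):
        self.free.append((size, buf_id))

# Reordering so the large intermediate is released before the next
# large request lets one allocation serve both calculations:
pool = BufferPool()
a = pool.acquire(100)   # buffer for the first (larger) error calculation
pool.release(100, a)    # finished with it before the next allocation
b = pool.acquire(80)    # reuses buffer `a` instead of allocating anew
print(pool.allocated)   # prints 100: only 100 bytes allocated, not 180
```

Without the reordering, both buffers would be live at once and 180 bytes would be allocated; with it, peak allocation stays at 100, which is the kind of saving that lets a larger network fit in the same GPU memory.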