KAWASAKI, JAPAN: Fujitsu Laboratories has announced high-speed deep learning software that leverages supercomputer parallelization technology. Using 64 GPUs, the software attains 27 times the speed of a single GPU, so learning that previously took about a month can now be processed in a day, enabling the development of higher-quality learning models. Compared with existing parallelization technology, the 64-GPU implementation improves speed by 71%. The new technology limits the waiting time between processing batches even when the shared data varies in size.
Scheduling data sharing
The new technology automatically prioritizes data-sharing operations so that the data needed at the start of the next process is shared first. This allows multiple operations to run continuously, reducing the waiting time before the next process can begin.
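One way to picture this kind of scheduling is as a priority queue that orders pending share operations by when their results will be consumed. The sketch below is illustrative only; the actual Fujitsu scheduler is not public, and the layer names and sizes are made up.

```python
import heapq

def schedule_shares(pending):
    """Order data-sharing operations so that results needed earliest
    in the next process are transmitted first.

    pending: list of (needed_at_step, layer_name, size_bytes) tuples.
    Returns layer names in transmission order.
    """
    heap = list(pending)
    heapq.heapify(heap)  # min-heap keyed on needed_at_step
    order = []
    while heap:
        _needed_at, layer, _size = heapq.heappop(heap)
        order.append(layer)
    return order

# During backpropagation, gradients for the earliest layers are produced
# last, but the next forward pass consumes layer 0's weights first, so
# layer 0's share operation is given the highest priority.
ops = [(2, "layer2", 4_000_000), (0, "layer0", 1_000_000), (1, "layer1", 2_000_000)]
print(schedule_shares(ops))
```

Ordering transfers this way lets computation on early layers resume while later layers' data is still in flight, which is one plausible source of the reduced waiting time the article describes.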
Optimizing data size operations
The total operational time is reduced because the processing method adapts to the size of the data: each computer automatically shares large volumes of data and then continues with the same operation.
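A common way to optimize by data size is to switch collective-communication strategies at a size threshold: small messages are cheaper to aggregate on one node and broadcast, while large messages benefit from splitting the reduction across all nodes. The function below is a hedged sketch of that idea; the threshold value and strategy names are assumptions, not details from Fujitsu's announcement.

```python
def pick_share_strategy(size_bytes, threshold=256 * 1024):
    """Choose a data-sharing strategy based on message size
    (illustrative sketch; threshold and names are assumed)."""
    if size_bytes < threshold:
        # Small data: latency dominates, so batch transfers together,
        # aggregate on one node, and broadcast the result once.
        return "aggregate-then-broadcast"
    # Large data: bandwidth dominates, so let every node reduce a
    # slice in parallel, then gather the completed slices.
    return "reduce-scatter/all-gather"

print(pick_share_strategy(10_000))     # a small gradient tensor
print(pick_share_strategy(8_000_000))  # a large gradient tensor
```

Choosing the strategy per tensor means neither small nor large shared data stalls the pipeline, which matches the article's claim that waiting time stays low even when shared data sizes differ.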
The traditional method of deep learning uses multiple GPU-equipped computers networked and arranged in parallel. The problem with this method is that the benefit of parallelization becomes difficult to obtain once more than about ten computers are used at the same time.
Machine learning offers improved recognition accuracy compared with traditional technologies, and GPUs are widely used to achieve high-speed operation. Deep learning runs on multiple GPUs in parallel because the process demands enormous amounts of time and data.
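The standard way to use multiple GPUs in parallel for deep learning is data parallelism: each GPU computes gradients on its own slice of the batch, the gradients are averaged across all GPUs, and every replica applies the same update. The toy example below sketches this for a one-parameter model; it is purely illustrative and not Fujitsu's implementation.

```python
# Minimal data-parallelism sketch: each "GPU" handles one shard of the
# batch, and gradients are averaged before the weight update.

def local_gradient(shard, w):
    """Gradient of mean squared error for the model y = w * x,
    computed on one GPU's shard of the data."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(batch, w, n_gpus, lr=0.1):
    shards = [batch[i::n_gpus] for i in range(n_gpus)]  # split the batch
    grads = [local_gradient(s, w) for s in shards]      # computed in parallel
    avg_grad = sum(grads) / n_gpus                      # all-reduce (average)
    return w - lr * avg_grad                            # identical update everywhere

# Training data follows y = 2x, so w should converge toward 2.0.
batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w = 0.0
for _ in range(50):
    w = data_parallel_step(batch, w, n_gpus=4)
print(round(w, 3))  # converges toward 2.0
```

The averaging step is the communication bottleneck the article refers to: as more computers join, the cost of sharing gradients grows, which is why efficient scheduling of that sharing matters.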
Fujitsu aims to commercialize this new development and to improve the technology further for more powerful machine learning.