The Neural Compute Stick is a small, fanless deep learning device that lets developers learn AI programming at the edge. It is powered by a high-performance Vision Processing Unit (VPU) of the kind found in millions of smart security cameras, gesture-controlled drones, and industrial machine vision equipment.
Intel recently launched the Neural Compute Stick 2 (NCS 2), a hardware device based on a Vision Processing Unit (VPU). The device looks like a standard USB thumb drive and can be plugged into any Linux PC or a Raspberry Pi. NCS 2 has two Neural Compute Engines (NCE) for accelerating deep learning inference at the edge: it offloads complex mathematical computation to a dedicated chip embedded inside the device, where it executes faster. It also acts as a micro GPU for running machine-learning models, and NCS 2 is used in production for identifying and classifying objects. The VPU chip only understands an intermediate graph format, produced by converting a TensorFlow or Caffe model. The device thereby accelerates the development of neural network inference applications.
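The conversion to the graph format the VPU understands is done with the OpenVINO Model Optimizer. A minimal sketch of converting a frozen TensorFlow model to OpenVINO's Intermediate Representation (IR) might look like the following; the model filename and output directory are placeholders, and FP16 is used because the Myriad VPU runs half-precision models:

```shell
# Convert a trained TensorFlow model into OpenVINO IR (.xml graph + .bin weights),
# the format consumed by the NCS 2's VPU. Paths below are placeholders.
python3 mo.py \
    --input_model frozen_inference_graph.pb \
    --data_type FP16 \
    --output_dir ./ir_model
```

The resulting `.xml`/`.bin` pair can then be loaded by the OpenVINO inference runtime and executed on the stick.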
With NCS 2, computer vision applications can be developed with the OpenVINO toolkit, open-source software capable of delivering high compute and graphics performance. The OpenVINO toolkit allows developers to write their code once and deploy it across multiple architectures. It is built around convolutional neural networks (CNNs) and extends workloads across Intel hardware to maximize performance. The toolkit enables CNN-based deep learning inference at the edge and supports heterogeneous execution across computer vision accelerators, including CPU, GPU, FPGA, and the Intel Movidius Neural Compute Stick, through a common API. The Intel Movidius Myriad X VPU inside NCS 2 was designed to reduce barriers to developing, tuning, and deploying AI applications by delivering dedicated, high-performance deep neural network processing in a small form factor.
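The "write once, deploy across architectures" idea shows up in the API as a single device-name string. A minimal sketch using the classic OpenVINO Inference Engine Python API illustrates this; the model filenames are placeholders, and swapping `"MYRIAD"` (the NCS 2) for `"CPU"` or `"GPU"` retargets the same code:

```python
# Sketch: load an IR model and run it on the NCS 2 via OpenVINO's common API.
# Requires the OpenVINO toolkit and an attached NCS 2; filenames are placeholders.
from openvino.inference_engine import IECore

ie = IECore()

# Read the converted IR model (graph description + weights).
net = ie.read_network(model="model.xml", weights="model.bin")

# Target the Myriad VPU inside the Neural Compute Stick 2.
# Changing device_name to "CPU", "GPU", or "HETERO:FPGA,CPU"
# deploys the same code to a different accelerator.
exec_net = ie.load_network(network=net, device_name="MYRIAD")
```

Heterogeneous execution (e.g. `"HETERO:FPGA,CPU"`) lets unsupported layers fall back to another device, which is how the toolkit spreads one workload across Intel hardware.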
NCS 2 makes testing, tuning, and prototyping deep neural networks easier, with 8x greater performance than its predecessor. Developers can also use the Intel AI in production ecosystem to port prototypes to other form factors.