The Mustang-V100 Series from ICP Deutschland is a flexibly scalable solution for implementing deep learning inference at the edge, combining low power consumption (<15 W TDP, 2.5 W per VPU) with low latency. The Mustang-V100-MX4 PCIe AI accelerator card is a variant equipped with four Intel Movidius Myriad X MA2485 Vision Processing Units. The PCI Express card can be integrated into a wide range of embedded systems. Its multi-channel capability allows each VPU to be assigned a different DL topology for simultaneous computing, such as AlexNet, GoogLeNet, Tiny YOLO, SSD300, ResNet, SqueezeNet or MobileNet.
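As a rough illustration of how per-VPU assignment can be expressed, OpenVINO's MULTI plugin accepts a comma-separated list of device names. The sketch below only builds such a device string; the exact device names ("MYRIAD.0", "MYRIAD.1", ...) are an assumption and depend on the installed OpenVINO runtime and driver enumeration.

```python
# Hypothetical sketch: compose an OpenVINO MULTI-device string that
# spans several Myriad X VPUs on the card. Device names like
# "MYRIAD.0" are assumptions; actual names come from runtime enumeration.
def multi_device_string(num_vpus: int) -> str:
    """Return a MULTI plugin device string covering num_vpus VPUs."""
    return "MULTI:" + ",".join(f"MYRIAD.{i}" for i in range(num_vpus))

# For the four-VPU Mustang-V100-MX4 this would yield:
# "MULTI:MYRIAD.0,MYRIAD.1,MYRIAD.2,MYRIAD.3"
print(multi_device_string(4))
```

In practice the resulting string would be passed as the device argument when compiling a model with the OpenVINO runtime, letting the plugin distribute inference requests across the listed VPUs.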
Moreover, its compatibility with Intel's OpenVINO Toolkit optimizes the performance of the trained model and scales it to the target system at the edge, enabling streamlined integration without tedious trial and error. The Mustang-V100 Series is compatible with operating systems such as Ubuntu 16.04, CentOS 7.4 and Windows 10 IoT. It is actively cooled, with an operating temperature range of 5°C to 55°C. ICP Deutschland also offers a PCIe variant with 8 VPUs as well as variants based on the Mini-PCIe and M.2 bus.
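The OpenVINO workflow mentioned above typically converts a trained model into an intermediate representation before deployment. A conversion step might look like the following, shown only as a sketch; the model filename is a placeholder, and FP16 is chosen here because Myriad X devices generally expect half-precision models (flags reflect older OpenVINO Model Optimizer releases).

```shell
# Sketch of an OpenVINO Model Optimizer invocation (placeholder model name).
# --data_type FP16 targets the half-precision format used by Myriad X VPUs.
mo --input_model my_model.onnx --data_type FP16
```

The resulting IR files (.xml and .bin) would then be loaded by the OpenVINO runtime on the edge system and dispatched to the accelerator card.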