Neural network software framework extends support for AI
CEVA has released CDNN2 (CEVA Deep Neural Network), claimed to be the industry’s first software framework for embedded systems to automatically support networks generated by TensorFlow, Google’s software library for machine learning.
The framework enables localised, deep learning-based video analytics on camera devices in real time. This is claimed to significantly reduce data bandwidth and storage compared to running such analytics in the cloud, while lowering latency and increasing privacy. Coupled with the CEVA-XM4 intelligent vision processor, it offers time-to-market and power advantages, says the company, for implementing machine learning in embedded systems for smartphones, advanced driver assistance systems (ADAS), surveillance equipment, drones, robots and other camera-enabled smart devices.
This second-generation edition adds support for TensorFlow, as well as improved capabilities and performance for the latest network topologies and layers. It also supports fully convolutional networks, allowing any given network to work with any input resolution.
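The point about fully convolutional networks can be illustrated with a minimal sketch (this is not CEVA code, just a plain-Python illustration): a convolution's weights depend only on the kernel size, so the same filter slides over an input of any resolution; it is the fully connected layers that pin a network to a fixed input size.

```python
# Illustrative sketch: a convolution accepts inputs of any resolution,
# because its weights depend only on the kernel size, not the image size.

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D convolution over a list-of-lists image."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

edge = [[1, 0, -1]] * 3  # one 3x3 kernel, reused unchanged for every input size

small = [[float(x) for x in range(5)] for _ in range(5)]    # 5x5 input
large = [[float(x) for x in range(8)] for _ in range(10)]   # 10x8 input

# The same kernel works on both; the output size simply follows the input.
print(len(conv2d_valid(small, edge)), len(conv2d_valid(small, edge)[0]))  # 3 3
print(len(conv2d_valid(large, edge)), len(conv2d_valid(large, edge)[0]))  # 8 6
```

A network built only from such layers (convolution, pooling, upsampling) therefore runs unmodified at any input resolution, which is what CDNN2's fully convolutional support exploits.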
Using a set of enhanced APIs, CDNN2 is claimed to improve overall system performance, for example by offloading neural network tasks directly from the CPU to the CEVA-XM4. These enhancements, combined with the “push-button” capability that automatically converts pre-trained networks to run seamlessly on the CEVA-XM4, account for the time-to-market and power advantages for developing embedded vision systems. According to the company, this version generates an even faster network model for the CEVA-XM4 imaging and vision DSP, consuming significantly less power and memory bandwidth than CPU- and GPU-based systems.
It is intended to be used for object recognition, advanced driver assistance systems (ADAS), artificial intelligence (AI), video analytics, augmented reality (AR), virtual reality (VR) and similar computer vision applications. The software library is supplied as source code, extending the CEVA-XM4’s existing Application Developer Kit (ADK) and computer vision library, CEVA-CV. It is flexible and modular, capable of supporting either complete CNN implementations or specific layers for a breadth of networks, such as AlexNet, GoogLeNet, ResidualNet (ResNet), SegNet, VGG (VGG-19, VGG-16, VGG_S) and Network-in-Network (NiN). It supports the most advanced neural network layers, including convolution, deconvolution, pooling, fully connected, softmax, concatenation and upsample, as well as various inception models. All network topologies are supported, including MIMO (multiple input, multiple output), multiple layers per level and fully convolutional networks, in addition to linear networks (such as AlexNet).
The offline CEVA Network Generator converts a pre-trained neural network to an equivalent embedded-friendly network in fixed-point math at the push of a button. Deliverables include a hardware-based development kit, which allows developers not only to run their network in simulation but also on the CEVA development board in real time.
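The float-to-fixed-point step can be sketched as follows. This is a hedged illustration of the general technique, not the CEVA Network Generator's actual algorithm; the Q-format (12 fractional bits in a 16-bit word) and the rounding/saturation choices are illustrative assumptions.

```python
# Illustrative float -> fixed-point quantisation, as used when porting a
# pre-trained network to integer-only embedded hardware. NOT CEVA's tool.

def to_fixed_point(weights, frac_bits=12, word_bits=16):
    """Quantise floats to signed fixed-point integers (Q-format),
    rounding to nearest and saturating to the word's range."""
    scale = 1 << frac_bits
    lo = -(1 << (word_bits - 1))          # e.g. -32768 for 16-bit words
    hi = (1 << (word_bits - 1)) - 1       # e.g.  32767
    return [max(lo, min(hi, round(w * scale))) for w in weights]

def to_float(fixed, frac_bits=12):
    """Dequantise back to floats, exposing the quantisation error."""
    return [f / (1 << frac_bits) for f in fixed]

weights = [0.7071, -0.3333, 1.5, -2.0]    # hypothetical trained weights
q = to_fixed_point(weights)
print(q)            # [2896, -1365, 6144, -8192]
print(to_float(q))  # close to the originals, within 1/4096
```

The trade-off such a converter automates is choosing the fractional precision per layer so that quantisation error stays small while all values fit the hardware's integer word, which is why it operates on the pre-trained network offline.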