Voice cores from CEVA support TensorFlow Lite for Microcontrollers
CEVA's WhisPro speech recognition software now supports the open-source TensorFlow Lite for Microcontrollers framework, bringing machine learning to the edge. Google's TensorFlow Lite for Microcontrollers is already optimised for the CEVA-BX DSP cores, enabling low-power artificial intelligence (AI) in conversational and contextual-awareness applications, says CEVA.
The licensor of wireless connectivity and smart sensing technologies targets conversational AI and contextual-awareness applications with support for the cross-platform TensorFlow Lite for Microcontrollers framework, which is used to deploy tiny machine learning on power-efficient processors in edge devices.
Tiny machine learning brings AI to low-power, always-on, battery-operated IoT devices for on-device sensor data analytics in areas such as audio, voice, image and motion. Customers using TensorFlow Lite for Microcontrollers can use a unified processor architecture to run both the framework and the associated neural network workloads required to build intelligent connected products. CEVA's WhisPro speech recognition software and custom command models are integrated with the TensorFlow Lite framework to accelerate the development of small-footprint voice assistants and other voice-controlled IoT devices.
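As a rough illustration of what running such a workload looks like, the sketch below follows the standard TensorFlow Lite for Microcontrollers C++ flow (load a flatbuffer model, register operators, allocate a tensor arena, invoke). It is not CEVA's or WhisPro's actual integration: the model array `g_model`, the operator set and the arena size are placeholder assumptions, and the code only compiles against the TFLM library on a supported target.

```cpp
// Minimal TFLM inference sketch (assumptions: g_model is a compiled
// flatbuffer model array, and the model uses only the ops added below).
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model[];  // hypothetical model data (e.g. from xxd)

// Static working memory for tensors; size is a placeholder and must be
// tuned per model.
constexpr int kArenaSize = 16 * 1024;
static uint8_t tensor_arena[kArenaSize];

int RunInference() {
  const tflite::Model* model = tflite::GetModel(g_model);

  // Register only the operators the model needs to keep the binary small.
  tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddReshape();

  tflite::MicroInterpreter interpreter(model, resolver,
                                       tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Fill the input tensor with sensor/audio features, then run the model.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < input->bytes; ++i) input->data.int8[i] = 0;

  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Read back the classification scores from the output tensor.
  TfLiteTensor* output = interpreter.output(0);
  return output->data.int8[0];
}
```

The same interpreter loop runs unchanged whether the target is a Cortex-M class MCU or a DSP core with an optimised kernel library underneath, which is the portability point the framework trades on.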
The CEVA-BX DSP family is a high-level-programmable hybrid DSP/controller offering high efficiency across a broad range of signal-processing and control workloads in real-time applications. Its 11-stage pipeline and five-way VLIW micro-architecture provide parallel processing across dual scalar compute engines, load/store units and program control, reaching a CoreMark/MHz score of 5.5, which makes it suitable for real-time signal control. Support for SIMD instructions suits it to a variety of signal-processing applications, while its double-precision floating-point units efficiently handle contextual-awareness and sensor-fusion algorithms with a wide dynamic range. It also allows simultaneous processing of front-end voice, sensor fusion, audio processing and general DSP workloads alongside AI runtime inferencing.