Bluetooth LE Audio codec is first for power-sensitive audio, says Synopsys

Synopsys has developed a low complexity communication codec (LC3) with the Fraunhofer Institute for Integrated Circuits (IIS), optimised for Synopsys’ ARC processor IP.

The new codec is designed to comply with the forthcoming Bluetooth LC3 audio codec specification and is optimised to deliver high-quality audio and voice playback in battery-powered devices incorporating ARC EM and HS DSP processors, says Synopsys.

It has been added to Synopsys’ portfolio of DesignWare ARC audio codecs and post-processing software supporting popular audio standards. It also extends Synopsys’ DesignWare Bluetooth Low Energy IP offering.

The 32-bit DesignWare ARC EM and HS DSP processors are based on the scalable ARCv2DSP Instruction Set Architecture (ISA) and integrate RISC and DSP capabilities for a flexible processing architecture. The ARC EM DSP processors offer low power and what is claimed to be industry-leading performance efficiency while the multi-core-capable ARC HS DSP processors combine high-performance control and high-efficiency digital signal processing. All ARC processors are supported by the ARC MetaWare Development Toolkit, which includes a library of DSP functions to allow software engineers to rapidly implement algorithms from standard DSP building blocks. ARC processors and the LC3 codec can be combined with Synopsys’ Bluetooth 5.1-compliant DesignWare Bluetooth Low Energy IP to deliver power-efficient, high-quality wireless audio capability for smart IoT and other Bluetooth-enabled devices.
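To illustrate what such a "standard DSP building block" typically looks like, here is a plain, portable C sketch of a block FIR filter. This is not the MetaWare DSP library API; an optimised vendor library would map the inner multiply-accumulate loop onto the ARC EM/HS DSP instructions.

```c
#include <stddef.h>

/* Illustrative block FIR filter -- a typical DSP building block.
 * Plain portable C for illustration only, not the MetaWare DSP
 * library API.  The caller supplies len + taps - 1 input samples
 * so every output sample has a full window of history. */
void fir_block_q15(const short *coeff, size_t taps,
                   const short *input, short *output, size_t len)
{
    for (size_t n = 0; n < len; n++) {
        long acc = 0;                        /* 32-bit accumulator      */
        for (size_t k = 0; k < taps; k++) {
            acc += (long)coeff[k] * input[n + k];  /* multiply-accumulate */
        }
        output[n] = (short)(acc >> 15);      /* rescale Q15 result      */
    }
}
```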

The LC3 codec is an important feature of the forthcoming Bluetooth LE Audio specification from the Bluetooth Special Interest Group (SIG). It enables system on chip (SoC) designers to efficiently implement high-quality voice and audio streaming in a wide range of applications, including mobile, wearables and home automation.

The LC3 codec for ARC processors is based on an implementation by Fraunhofer IIS that is designed to meet Bluetooth SIG requirements. The LC3 codec, running on ARC EM and HS DSP processors, allows designers to rapidly integrate a complete, pre-verified hardware and software solution for voice and speech processing into Bluetooth-enabled devices requiring minimal energy consumption, explains Synopsys.
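As a rough illustration of how such a frame-based codec is typically driven from application code, the sketch below encodes 10ms frames at 48kHz. The function names and signatures are hypothetical placeholders, not the Synopsys/Fraunhofer API.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical frame-based LC3 encode loop (illustration only; the
 * actual Synopsys/Fraunhofer API will differ).  LC3 operates on short
 * frames -- here 10 ms at 48 kHz, i.e. 480 samples per channel. */
#define SAMPLE_RATE_HZ  48000
#define FRAME_MS        10
#define FRAME_SAMPLES   (SAMPLE_RATE_HZ * FRAME_MS / 1000)   /* 480 */

extern void  *lc3_encoder_create(int sample_rate_hz, int frame_ms);  /* hypothetical */
extern size_t lc3_encode_frame(void *enc, const int16_t *pcm,
                               uint8_t *out, size_t out_bytes);      /* hypothetical */

size_t encode_stream(const int16_t *pcm, size_t num_frames,
                     uint8_t *bitstream, size_t bytes_per_frame)
{
    void *enc = lc3_encoder_create(SAMPLE_RATE_HZ, FRAME_MS);
    size_t total = 0;

    for (size_t f = 0; f < num_frames; f++) {
        /* Each 10 ms PCM frame is compressed to a fixed byte budget
         * chosen from the target Bluetooth bit rate. */
        total += lc3_encode_frame(enc,
                                  pcm + f * FRAME_SAMPLES,
                                  bitstream + f * bytes_per_frame,
                                  bytes_per_frame);
    }
    return total;
}
```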

“The rapid growth of wearable devices requiring high-quality Bluetooth audio streaming is driving the need for power-efficient processor IP with DSP capabilities that can meet intensive computation requirements of voice and audio applications. Those applications require an optimised codec providing state-of-the-art voice and audio quality at minimum computational complexity,” said Manfred Lutzky, head of Audio for Communications at Fraunhofer IIS. “By porting the LC3 codec to the DSP-enhanced ARC processors, Synopsys is enabling customers to quickly implement LC3 codec functionality in their low-power SoCs. We look forward to continuing our collaboration with Synopsys so that the LC3 codec for ARC processors continues to incorporate the latest updates,” he added.

“The fact that the LC3 codec can provide very high-quality audio even at low bit rates makes it a key feature of the upcoming LE Audio standard,” said Mark Powell, chief executive officer of the Bluetooth SIG.

John Koeter, senior vice president of marketing for IP at Synopsys, said: “Designed to process high-quality audio streams and deliver superior sound, the LC3 codec for ARC processors provides designers with a certified codec that reduces the integration time and testing required to deliver superior quality audio for Bluetooth streaming applications.”

The Bluetooth LC3 codec is available now from Synopsys with DSP-enhanced ARC EMxD and HS4xD processors.

http://www.synopsys.com


i.MX applications processor has NPU for advanced ML at the edge

The latest member of NXP’s EdgeVerse portfolio has been launched at CES in Las Vegas (7 to 10 January). The i.MX 8M Plus heterogeneous application processor is the first in the i.MX family to integrate a dedicated neural processing unit (NPU) for advanced machine learning (ML) inference at the industrial and IoT (Internet-of-Things) edge.

The NPU delivers 2.3 tera operations per second (TOPS) and is combined with a quad-core Arm Cortex-A53 sub-system running at up to 2GHz. There is also an independent real-time sub-system with an 800MHz Cortex-M7, an 800MHz audio DSP for voice and natural language processing, dual camera image signal processors (ISPs) and a 3D GPU for rich graphics rendering. The i.MX 8M Plus will enable edge devices to make intelligent decisions locally by learning from and inferring inputs with little or no human intervention, says NXP. Target applications include people and object recognition for public safety, industrial machine vision, robotics, and hand gesture and emotion detection, combined with natural language processing for seamless human-to-device interaction with fast response times and high accuracy.
The applications processor is based on 14nm LPC FinFET process technology. The i.MX 8M Plus can execute multiple, highly complex neural networks simultaneously; these include multi-object identification, speech recognition of more than 40,000 English words and medical imaging. The NPU can process MobileNet, a popular image classification network, at over 500 images per second, adds NXP.
Developers can off-load machine learning inference functions to the NPU, allowing the Cortex-A and Cortex-M cores, DSP and GPUs to execute other system-level or user application tasks. The vision pipeline is anchored by dual integrated ISPs that support either two high-definition cameras for real-time stereo vision or a single 12Mpixel camera, and includes high dynamic range (HDR) and fisheye lens correction for real-time image processing in surveillance, smart retail applications, robot vision and home health monitors.
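A minimal sketch of how inference is commonly off-loaded to such an NPU from Linux, assuming TensorFlow Lite's C API with an NXP-supplied external delegate library; the delegate library path shown is an assumption for illustration, so consult NXP's eIQ documentation for the actual name and location.

```c
#include <stddef.h>
#include "tensorflow/lite/c/c_api.h"
#include "tensorflow/lite/delegates/external/external_delegate.h"

/* Minimal sketch: route a TensorFlow Lite model to the NPU through an
 * external delegate.  The delegate path below is an assumption. */
int run_on_npu(const char *model_path, const void *input, size_t in_bytes,
               void *output, size_t out_bytes)
{
    TfLiteModel *model = TfLiteModelCreateFromFile(model_path);
    if (!model) return -1;

    TfLiteExternalDelegateOptions dopts =
        TfLiteExternalDelegateOptionsDefault("/usr/lib/libvx_delegate.so");
    TfLiteDelegate *npu = TfLiteExternalDelegateCreate(&dopts);

    TfLiteInterpreterOptions *opts = TfLiteInterpreterOptionsCreate();
    TfLiteInterpreterOptionsAddDelegate(opts, npu);   /* off-load to NPU */

    TfLiteInterpreter *interp = TfLiteInterpreterCreate(model, opts);
    TfLiteInterpreterAllocateTensors(interp);

    TfLiteTensorCopyFromBuffer(TfLiteInterpreterGetInputTensor(interp, 0),
                               input, in_bytes);
    TfLiteInterpreterInvoke(interp);                  /* inference runs on the NPU */
    TfLiteTensorCopyToBuffer(TfLiteInterpreterGetOutputTensor(interp, 0),
                             output, out_bytes);

    TfLiteInterpreterDelete(interp);
    TfLiteInterpreterOptionsDelete(opts);
    TfLiteExternalDelegateDelete(npu);
    TfLiteModelDelete(model);
    return 0;
}
```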
For voice applications, the i.MX 8M Plus integrates a HiFi 4 DSP that enhances natural language processing with pre- and post-processing of voice streams. The Cortex-M7 domain can be used to run real-time response systems while the applications processor domain executes complex non-real-time applications. Overall system-level power consumption can be reduced by turning off the applications processor domain and keeping only the Cortex-M domain alive for wake word detection. For advanced multimedia and video processing, the processor can compress multiple video feeds using the H.265 or H.264 HD video encoder and decoder for cloud streaming or local storage, and supports 3D/2D graphics and Immersiv3D audio with Dolby Atmos and DTS:X.
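Conceptually, the wake-word gating described above can be pictured as a small always-on loop in the Cortex-M domain that only powers up the application-processor domain when a keyword is heard. The helpers below are hypothetical placeholders, not NXP APIs.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Conceptual wake-word gating loop for the always-on Cortex-M domain.
 * All helper functions are hypothetical placeholders, not NXP APIs. */
extern bool mic_read_frame(int16_t *pcm, size_t samples);    /* hypothetical mic driver     */
extern bool kws_detect(const int16_t *pcm, size_t samples);  /* hypothetical keyword spotter */
extern void wake_application_domain(void);                   /* hypothetical power-up call   */

void wake_word_task(void)
{
    int16_t frame[320];                       /* e.g. 20 ms of audio at 16 kHz */

    for (;;) {
        if (!mic_read_frame(frame, 320))
            continue;                         /* no new audio yet              */
        if (kws_detect(frame, 320)) {
            /* Only now is the high-power Cortex-A domain brought up. */
            wake_application_domain();
        }
    }
}
```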

In industrial scenarios, it can be used in machines that inspect, measure and precisely identify objects, and it enables predictive maintenance by detecting anomalies in machine operation. It can also help make factory human machine interfaces (HMIs) more intuitive and secure by combining accurate face recognition with voice/command recognition and gesture recognition. The i.MX 8M Plus integrates Gigabit Ethernet with time sensitive networking (TSN), which, combined with Cortex-M7 real-time processing, provides deterministic wired network connectivity and processing, NXP explains.
Other features for industrial use are error correction code (ECC) for internal memories and the DDR interface.
The family is expected to be qualified to meet the stringent industrial temperature range (-40 to +105 degrees C ambient).

NXP at CES 2020: booth CP-18
http://www.nxp.com  


“World’s smallest” 3D image sensor authenticates faces

At CES this week, Infineon will present what it claims is the world’s smallest 3D image sensor for face authentication and photo effects.

Infineon Technologies has collaborated with software and 3D time of flight (ToF) system specialist, pmdtechnologies, to develop what it claims is the world’s smallest and most powerful 3D image sensor. The Real3 chip measures 4.4 x 5.1mm and is the fifth generation of ToF depth sensors from Infineon.

Andreas Urschitz, president of the power management and multi-market division at Infineon, said: “We see great growth potential for 3D sensors, since the range of applications in the areas of security, image use and context-based interaction with the devices will steadily increase.” The 3D sensor also allows the device to be controlled via gestures, so that human-machine interaction is context-based and touch-free.

The ToF depth-sensing technology captures an accurate 3D image of faces, hand details or objects, which is required to verify that an image matches the original when authorising payment transactions on mobile phones via facial recognition. This function demands extremely reliable and secure capture and transmission of the high-resolution 3D image data; the same applies to securely unlocking devices with a 3D image. The Infineon 3D image sensor also achieves this in extreme lighting conditions, such as strong sunlight or darkness.
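As a general illustration of how continuous-wave (indirect) ToF sensing recovers depth, the sensor measures the phase shift between emitted and reflected modulated infrared light and converts it to distance with the standard relation d = c·φ / (4π·f_mod). The snippet below shows that relation; it is generic, not Infineon-specific code, and the 80MHz modulation frequency in the example is illustrative.

```c
#include <math.h>

/* Generic continuous-wave (indirect) time-of-flight relation, shown for
 * illustration only -- not Infineon-specific code. */
double tof_distance_m(double phase_rad, double mod_freq_hz)
{
    const double c = 299792458.0;                  /* speed of light, m/s */
    return c * phase_rad / (4.0 * M_PI * mod_freq_hz);
}

/* Example: a phase shift of pi/2 at an 80 MHz modulation frequency
 * corresponds to roughly 0.47 m, and the unambiguous range at 80 MHz
 * is c / (2 * f_mod), about 1.87 m. */
```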

The IRS2887C 3D image sensor also offers additional options for photography, such as enhanced autofocus, a bokeh effect for photo and video, and improved resolution in poor lighting conditions. Real-time full-3D mapping also allows authentic augmented reality experiences.

Production will begin in the middle of 2020.

In addition, Infineon Technologies offers an optimised illumination driver (IRS9100C).

Infineon Technologies provides semiconductors to “make life easier, safer and greener”. It has approximately 41,400 employees worldwide.

http://www.infineon.com/real3


Integrated IP and software streamline development of contextually-aware IoT devices

At CES this week, Ceva will demonstrate its SenslinQ integrated hardware IP and software platform, designed to streamline the development of contextually-aware IoT devices.

By aggregating sensor fusion, sound and connectivity technologies, the platform collects, processes and links data from multiple sensors to enable intelligent devices to understand their surroundings, explains the company.

Contextual awareness adds value and enhances the user experience of smartphones, laptops, augmented reality/virtual reality (AR/VR) headsets, robots, hearables and wearables. The SenslinQ platform centralises the workloads that require an intimate understanding of the physical behaviours and anomalies of sensors. It collects data from multiple sensors within a device, including microphones, radars, inertial measurement units (IMUs), environmental sensors and time of flight (ToF) sensors, and conducts front-end signal processing such as noise suppression and filtering on this data. It then applies algorithms to create “context enablers” such as activity classification, voice and sound detection, and presence and proximity detection. These context enablers can be fused on the device or sent wirelessly, via Bluetooth, Wi-Fi or NB-IoT, to a local edge computer or the cloud to determine the device’s context and adapt it to its environment.
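As a conceptual sketch of the data flow just described, raw sensor samples are filtered, classified into a context enabler, and then either fused locally or forwarded over a radio link. The types and functions below are hypothetical placeholders, not the SenslinQ API.

```c
#include <stddef.h>

/* Conceptual staged pipeline for a contextually-aware device
 * (hypothetical types and functions, not the SenslinQ API). */
typedef struct { float ax, ay, az; } imu_sample_t;

typedef enum { CTX_IDLE, CTX_WALKING, CTX_RUNNING, CTX_IN_VEHICLE } context_t;

extern size_t    imu_read(imu_sample_t *buf, size_t max);              /* hypothetical driver    */
extern void      lowpass_filter(imu_sample_t *buf, size_t n);          /* front-end processing   */
extern context_t classify_activity(const imu_sample_t *buf, size_t n); /* context enabler        */
extern void      radio_send(const void *msg, size_t len);              /* BLE / Wi-Fi / NB-IoT   */

void context_pipeline_step(void)
{
    imu_sample_t window[128];
    size_t n = imu_read(window, 128);              /* 1. collect sensor data        */
    lowpass_filter(window, n);                     /* 2. filter / suppress noise    */
    context_t ctx = classify_activity(window, n);  /* 3. derive the context enabler */
    radio_send(&ctx, sizeof ctx);                  /* 4. forward to edge or cloud   */
}
```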

The customisable hardware reference design is composed of an Arm or RISC-V microcontroller, CEVA-BX DSPs and a wireless connectivity island, such as the RivieraWaves Bluetooth, Wi-Fi or Dragonfly NB-IoT platforms, or other connectivity standards provided by the customer or third parties. These three components are connected using standard system interfaces.

The SenslinQ software comprises a portfolio of ready-to-use software libraries from CEVA and its ecosystem partners. Libraries include the Hillcrest Labs MotionEngine software packages for sensor fusion and activity classification in mobile, wearables and robots, the ClearVox front-end voice processing, WhisPro speech recognition, and DSP and artificial intelligence (AI) libraries. There are also third-party software components for active noise cancellation (ANC), sound sensing and 3D audio.

The accompanying SenslinQ framework provides Linux-based hardware abstraction layer (HAL) reference code and application programming interfaces (APIs) for data and control exchange between the multiple processors and sensors.

https://www.ceva-dsp.com

