Jacinto 7 processors have functional safety features to advance ADAS 

Deep learning capabilities and advanced networking characterise the Jacinto 7 processor platform that Texas Instruments launched at CES. The first two automotive devices in the platform, TDA4VM processors for advanced driver assistance systems (ADAS) and DRA829V processors for gateway systems, include specialised on-chip accelerators to segment and expedite data-intensive tasks, like computer vision and deep learning.

The TDA4VM and DRA829V processors also incorporate a functional safety microcontroller, making it possible for original equipment manufacturers (OEMs) and Tier-1 suppliers to support both ASIL-D safety-critical tasks and convenience features with one chip, points out Texas Instruments. The two devices share a single software platform, reducing system complexity and cost because developers can reuse their software investment across multiple vehicle domains.

The TDA4VM processor offers on-chip analytics combined with sensor pre-processing, enabling OEMs and Tier-1 suppliers to support front-camera applications that use high-resolution 8Mpixel cameras to see farther and add enhanced features, such as drive assist. The processor can also operate four to six 3Mpixel cameras simultaneously while fusing other sensing modalities, such as radar, lidar and ultrasonic, on one chip, adds Texas Instruments. This capability allows the TDA4VM to act as the centralised processor for ADAS and to handle features critical to automated parking, such as surround view, image processing for displays and enhanced vehicle perception.

The DRA829V processor integrates the computing functions required for modern vehicles. It is believed to be the first in the industry to incorporate an on-chip PCIe switch in addition to an eight-port gigabit time-sensitive networking (TSN)-enabled Ethernet switch for faster computing and communications throughout the vehicle.
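
For readers unfamiliar with TSN, a time-aware Ethernet switch forwards traffic according to a cyclic gate-control list that opens and closes per-priority transmission gates (the IEEE 802.1Qbv concept). The sketch below models such a schedule generically; it is not TI's configuration interface, and the values are purely illustrative.

```c
/* Generic model of a TSN (IEEE 802.1Qbv) gate-control list.
 * Not TI's switch configuration interface; values are illustrative. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  gate_states;   /* bitmask: bit n set = traffic class n may transmit */
    uint32_t interval_ns;   /* how long this entry stays active */
} gcl_entry_t;

int main(void)
{
    /* 1 ms cycle: 300 us reserved for class 0 (e.g. control traffic),
     * 700 us shared by the remaining best-effort classes. */
    const gcl_entry_t schedule[] = {
        { 0x01, 300000 },   /* only traffic class 0 open */
        { 0xFE, 700000 },   /* classes 1-7 open          */
    };
    uint64_t cycle_ns = 0;

    for (size_t i = 0; i < sizeof schedule / sizeof schedule[0]; i++)
        cycle_ns += schedule[i].interval_ns;

    printf("cycle time: %llu ns\n", (unsigned long long)cycle_ns);
    return 0;
}
```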

It also supports both ASIL-D safety-critical and non-safety-critical operations, enabling OEMs and Tier-1 suppliers to support mixed-criticality applications on one device. Higher on-chip bandwidth also allows developers to manage software development and validation in vehicles, supporting more dynamic updates and upgrades.

Pre-production TDA4VM and DRA829V processors are available now, together with Jacinto 7 processor development kits and the TDA4VMXEVM and DRA829VXEVM evaluation modules.

Volume production is expected to be available in the second half of 2020.

http://www.TI.com

Nordic Semiconductor prepares for Bluetooth LE Audio

This year, the Bluetooth Special Interest Group (SIG) will release the Bluetooth LE (Low Energy) Audio specification. To support this forthcoming specification, Nordic Semiconductor has partnered with Bluetooth LE stack developer, Packetcraft, to develop an LE Audio evaluation platform.

The platform demonstrates the benefits of LE Audio and is designed to support the new LE Audio specifications, which promise lower power consumption than classic Bluetooth audio (significantly extending battery life), improved audio quality and the ability to develop devices capable of both wireless data transfer and audio streaming. The technology also supports broadcast for audio sharing.

Nordic’s LE Audio evaluation platform comprises a hardware reference design based on its nRF52832 Bluetooth LE system on chip (SoC), Cirrus Logic’s CS47L35 smart codec with an integrated low power audio DSP, Packetcraft’s Bluetooth LE host stack and link layer supporting LE Audio, and an LE Audio software development kit (SDK). The platform allows developers to start evaluating the technology for Bluetooth LE Audio wireless speakers, over-the-ear headphones and true wireless earbuds.

It is designed to operate in either a source role (for example, audio source, voice call headset source, and peer-to-peer call) or a sink role (audio playback, headset playback, and peer-to-peer call). A pair of devices, one source and one sink, is required to complete an LE Audio link. The LE Audio solution also features Cirrus Logic’s SoundClear for uplink noise reduction and echo cancellation, playback enhancement, voice control, and hearing augmentation.
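
As a rough illustration of the source/sink pairing described above, the sketch below configures a device for one of the two roles. Every identifier in it (le_audio_role_t, audio_dev_init) is a hypothetical placeholder, not part of Nordic's or Packetcraft's published APIs.

```c
/* Minimal sketch of the source/sink pairing only.
 * All names here are hypothetical, not the Nordic/Packetcraft SDK. */
#include <stdbool.h>
#include <stdio.h>

typedef enum {
    LE_AUDIO_ROLE_SOURCE,  /* encodes and transmits audio (e.g. phone, dongle)  */
    LE_AUDIO_ROLE_SINK     /* receives and renders audio (e.g. earbud, speaker) */
} le_audio_role_t;

/* Hypothetical: configure the device for one of the two roles. */
static bool audio_dev_init(le_audio_role_t role)
{
    printf("configured as %s\n",
           role == LE_AUDIO_ROLE_SOURCE ? "source" : "sink");
    return true;
}

int main(void)
{
    /* A complete LE Audio link needs one device in each role;
     * the peer device would run the same code with LE_AUDIO_ROLE_SINK. */
    return audio_dev_init(LE_AUDIO_ROLE_SOURCE) ? 0 : 1;
}
```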

The board incorporates an acoustic connector that can accommodate up to six microphones or two speakers, a 3.5mm headset jack, a 3.5mm source jack, and a USB connector for charging, debug (using Nordic development tools) and acoustic tuning (using the Cirrus Logic WISCE platform). When wirelessly streaming audio from a source device to wireless (sink) earbuds, the LE Audio evaluation platform extends battery life by around 40 per cent compared with contemporary off-the-shelf classic Bluetooth solutions, claims Nordic Semiconductor.

http://www.nordicsemi.com

“World’s smallest” 3D image sensor authenticates faces

At CES this week, Infineon will present what it claims is the world’s smallest 3D image sensor for face authentication and photo effects.

Infineon Technologies has collaborated with software and 3D time of flight (ToF) system specialist, pmdtechnologies, to develop the sensor, which it also claims is the most powerful of its kind. The Real3 chip measures 4.4 x 5.1mm and is the fifth generation of ToF depth sensors from Infineon.

Andreas Urschitz, president of the power management and multi-market division at Infineon, said: “We see great growth potential for 3D sensors, since the range of applications in the areas of security, image use and context-based interaction with the devices will steadily increase.” The 3D sensor also allows the device to be controlled via gestures, so that human-machine interaction is context-based and touch-free.

The ToF depth-sensing technology captures an accurate 3D image of faces, hand details or objects, which is needed to confirm that a captured image matches the original when payment transactions are verified on mobile phones via facial recognition. This function requires extremely reliable and secure capture and transmission of the high-resolution 3D image data. The same applies to securely unlocking devices with a 3D image. The Infineon 3D image sensor performs these tasks even in extreme lighting conditions, such as strong sunlight or darkness.
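
For context, continuous-wave ToF sensors in general derive distance from the phase shift between emitted and reflected modulated infrared light. The snippet below shows only that basic arithmetic; it is not Infineon's pipeline, and the modulation frequency is an arbitrary example.

```c
/* Illustrative only: the arithmetic behind indirect (continuous-wave) ToF
 * depth sensing in general, not Infineon's specific implementation.
 * distance = (c / (4 * pi * f_mod)) * phase_shift */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SPEED_OF_LIGHT 299792458.0   /* m/s */

/* Convert a measured phase shift (radians) at a given modulation
 * frequency (Hz) into a distance in metres. */
static double tof_distance_m(double phase_rad, double f_mod_hz)
{
    return (SPEED_OF_LIGHT * phase_rad) / (4.0 * M_PI * f_mod_hz);
}

int main(void)
{
    /* Example: a pi/2 phase shift at 80 MHz modulation is roughly 0.47 m. */
    printf("%.3f m\n", tof_distance_m(M_PI / 2.0, 80e6));
    return 0;
}
```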

The IRS2887C 3D image sensor also has additional options for photo taking, such as enhanced autofocus, bokeh effect for photo and video and improved resolution in poor lighting conditions. Real-time full-3D mapping also allows authentic augmented reality experiences.

Production will begin in the middle of 2020.

In addition, Infineon Technologies offers an optimised illumination driver (IRS9100C).

Infineon Technologies provides semiconductors to “make life easier, safer and greener”. It has approximately 41,400 employees worldwide.

http://www.infineon.com/real3

Integrated IP and software streamline development of contextually-aware IoT devices

At CES this week, Ceva will demonstrate its SenslinQ integrated hardware IP and software platform, designed to streamline the development of contextually-aware IoT devices.

By aggregating sensor fusion, sound and connectivity technologies, the platform collects, processes and links data from multiple sensors to enable intelligent devices to understand their surroundings, explains the company.

Contextual awareness adds value and enhances the user experience of smartphones, laptops, augmented reality/virtual reality (AR/VR) headsets, robots, hearables and wearables. The SenslinQ platform centralises the workloads that require an intimate understanding of the physical behaviours and anomalies of sensors. It collects data from multiple sensors within a device, including microphones, radars, inertial measurement units (IMUs), environmental sensors and time of flight (ToF) sensors, and conducts front-end signal processing, such as noise suppression and filtering, on this data. It then applies algorithms to create “context enablers”, such as activity classification, voice and sound detection, and presence and proximity detection. These context enablers can be fused on the device or sent wirelessly via Bluetooth, Wi-Fi or NB-IoT to a local edge computer or the cloud, where the device’s environment is determined and the device adapted to it.
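
The sketch below walks through that flow in miniature: raw sensor samples pass through a front-end filtering stage and then a classification stage that produces a context enabler. It is a conceptual illustration only; none of the names correspond to CEVA's actual libraries or APIs.

```c
/* Conceptual sketch of the data flow described above, not CEVA's API:
 * raw samples -> front-end filtering -> a "context enabler" that is
 * either consumed locally or forwarded over a wireless link. */
#include <stddef.h>
#include <stdio.h>

typedef enum { CONTEXT_IDLE, CONTEXT_WALKING, CONTEXT_VOICE_DETECTED } context_t;

/* Front-end signal processing: a trivial moving average stands in for the
 * noise suppression and filtering stage. */
static float filter_samples(const float *samples, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++)
        sum += samples[i];
    return (n > 0) ? sum / (float)n : 0.0f;
}

/* Context-enabler stage: classify activity from the filtered feature. */
static context_t classify(float feature)
{
    if (feature > 0.8f) return CONTEXT_VOICE_DETECTED;
    if (feature > 0.3f) return CONTEXT_WALKING;
    return CONTEXT_IDLE;
}

int main(void)
{
    float imu_window[] = { 0.2f, 0.5f, 0.6f, 0.4f };
    context_t ctx = classify(filter_samples(imu_window, 4));

    /* The result could be used on-device or forwarded via BLE/Wi-Fi/NB-IoT. */
    printf("context enabler: %d\n", (int)ctx);
    return 0;
}
```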

The customisable hardware reference design comprises an Arm or RISC-V microcontroller, CEVA-BX DSPs and a wireless connectivity island, such as the RivieraWaves Bluetooth, Wi-Fi or Dragonfly NB-IoT platforms, or other connectivity standards provided by the customer or third parties. These three components are connected using standard system interfaces.

The SenslinQ software comprises a portfolio of ready-to-use software libraries from CEVA and its ecosystem partners. Libraries include the Hillcrest Labs MotionEngine software packages for sensor fusion and activity classification in mobile devices, wearables and robots, ClearVox front-end voice processing, WhisPro speech recognition, and DSP and artificial intelligence (AI) libraries. There are also third-party software components for active noise cancellation (ANC), sound sensing and 3D audio.

The accompanying SenslinQ framework provides Linux-based hardware abstraction layer (HAL) reference code and application programming interfaces (APIs) for data and control exchange between the multiple processors and sensors.
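
A HAL of this kind typically exposes open, read, control and close hooks that hide which processor or sensor sits behind each data stream. The structure below sketches only that shape; the names are hypothetical and are not the actual SenslinQ framework API.

```c
/* Sketch of a HAL-style interface: hypothetical names,
 * not the actual SenslinQ framework API. */
#include <stddef.h>
#include <stdint.h>

typedef struct senslinq_stream senslinq_stream_t;   /* opaque stream handle */

typedef struct {
    /* Open a named sensor or processor data stream (e.g. "imu0", "mic_array"). */
    senslinq_stream_t *(*open)(const char *name);
    /* Read up to 'len' bytes of sensor data; returns bytes read or <0 on error. */
    int (*read)(senslinq_stream_t *s, void *buf, size_t len);
    /* Send a control command (start, stop, set rate) to the stream. */
    int (*control)(senslinq_stream_t *s, uint32_t cmd, const void *arg);
    /* Release the stream. */
    void (*close)(senslinq_stream_t *s);
} senslinq_hal_ops_t;
```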

https://www.ceva-dsp.com
