Renesas develops new AI accelerator for lightweight AI models

Renesas has announced the development of embedded processor technology that enables higher speeds and lower power consumption in microprocessor units (MPUs) that realise advanced vision AI.

The newly developed technologies are a dynamically reconfigurable processor (DRP)-based AI accelerator that efficiently processes lightweight AI models, and a heterogeneous architecture technology that enables real-time processing through the cooperative operation of processor IPs such as the CPU.

Renesas produced a prototype of an embedded AI-MPU incorporating these technologies and confirmed its high-speed, low-power operation. The prototype achieved AI processing up to 16 times faster (130 TOPS) than before the introduction of the new technologies, along with world-class power efficiency (up to 23.9 TOPS/W at a 0.8 V supply).

Amid the recent spread of robots into factories, logistics, medical services, and stores, there is a growing need for systems that can operate autonomously in real time by detecting their surroundings with advanced vision AI. Because heat generation is severely restricted, particularly in embedded devices, AI chips must deliver both higher performance and lower power consumption. Renesas developed the new technologies to meet these requirements.

Pruning, which omits calculations that have little effect on recognition accuracy, is a typical technique for improving AI processing efficiency. However, the calculations that can be pruned are usually scattered randomly throughout an AI model. This mismatch between the parallelism of the hardware and the randomness of pruning makes processing inefficient.

To solve this issue, Renesas optimised its unique DRP-based AI accelerator (DRP-AI) for pruning. By analysing how pruning pattern characteristics and pruning methods relate to recognition accuracy in typical image-recognition AI models (CNNs), the company identified a hardware structure for an AI accelerator that achieves both high recognition accuracy and an efficient pruning rate, and applied it to the DRP-AI design. Renesas also developed software that reduces the weight of AI models optimised for this DRP-AI, converting randomly pruned model configurations into highly efficient parallel computations and thereby speeding up AI processing. In particular, the company’s highly flexible pruning support technology (flexible N:M pruning), which can dynamically change the number of processing cycles in response to changes in the local pruning rate within an AI model, allows the pruning rate to be finely controlled according to the power consumption, operating speed, and recognition accuracy required by users.
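
To make this concrete, below is a minimal sketch of N:M structured pruning, the general idea behind the approach described above: only N weights are kept within every group of M consecutive weights, so the sparsity pattern stays aligned with the hardware’s parallel datapath. It is a framework-agnostic NumPy illustration, not Renesas’ implementation; the function name and the 2:4 ratio are illustrative assumptions.

```python
import numpy as np

def nm_prune(weights: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Keep the n largest-magnitude weights in every group of m consecutive
    weights and zero the rest (generic N:M structured sparsity, for illustration)."""
    flat = weights.reshape(-1, m)                         # group weights m at a time
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]   # smallest-magnitude positions
    mask = np.ones_like(flat, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)          # clear the pruned positions
    return (flat * mask).reshape(weights.shape)

# Example: prune a 4 x 8 weight matrix to 2:4 sparsity
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)
print(nm_prune(w))   # every group of four weights now holds exactly two non-zeros
```

Because the non-zero weights are confined to fixed-size groups, the hardware can process them with a regular, parallel access pattern instead of chasing a random sparsity mask, which is exactly the mismatch a flexible N:M scheme is designed to avoid.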

This technology reduces the number of AI model processing cycles to as little as one-sixteenth that of models without pruning support, while consuming less than one-eighth of the power.

Robot applications require advanced vision AI processing to recognise the surrounding environment. Motion judgment and control, by contrast, require detailed conditional programming in response to changes in that environment, so CPU-based software processing is better suited than AI-based processing. The challenge has been that the CPUs in current embedded processors cannot fully control robots in real time. Renesas therefore introduced a dynamically reconfigurable processor (DRP), which handles complex processing, alongside the CPU and AI accelerator (DRP-AI). This led to the development of a heterogeneous architecture technology that enables higher speeds and lower power consumption in AI-MPUs by distributing and parallelising processes appropriately.

A DRP runs an application while dynamically changing the circuit connections between the arithmetic units inside the chip on each operating clock, according to the processing required. Since only the necessary arithmetic circuits operate, even for complex processing, lower power consumption and higher speeds are possible. For example, SLAM (Simultaneous Localisation and Mapping), a typical robot application, requires multiple robot position-recognition processes to run in parallel with environment recognition by vision AI. Renesas demonstrated SLAM operating in this way, with the DRP switching programs instantaneously while the AI accelerator and CPU ran in parallel, achieving about 17 times the operating speed and about 12 times the operating power efficiency of the embedded CPU alone.
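
As a conceptual illustration of that partitioning, the sketch below runs a vision-AI stage and a CPU-side pose-update stage as parallel workers connected by queues. It is a host-side Python analogy for distributing SLAM work across heterogeneous units, not Renesas’ DRP or DRP-AI software; both worker functions are hypothetical stand-ins.

```python
import queue
import threading

frames = queue.Queue(maxsize=4)    # camera frames awaiting AI inference
features = queue.Queue(maxsize=4)  # detections handed to the CPU-side tracker

def vision_ai_worker():
    """Stand-in for the AI accelerator: turns frames into landmark detections."""
    while True:
        frame = frames.get()
        if frame is None:                        # end-of-stream signal
            features.put(None)
            return
        features.put({"frame": frame, "landmarks": [frame % 7, frame % 11]})

def slam_cpu_worker():
    """Stand-in for CPU/DRP processing: updates the robot pose estimate."""
    pose = 0.0
    while True:
        item = features.get()
        if item is None:
            print(f"final pose estimate: {pose:.2f}")
            return
        pose += 0.1 * len(item["landmarks"])     # toy pose update per frame

threading.Thread(target=vision_ai_worker, daemon=True).start()
cpu = threading.Thread(target=slam_cpu_worker)
cpu.start()
for i in range(10):                              # feed ten dummy "frames"
    frames.put(i)
frames.put(None)
cpu.join()
```

The point of the analogy is simply that perception and pose estimation proceed concurrently rather than taking turns on a single CPU, which is the kind of distribution the heterogeneous architecture exploits.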

Renesas created a prototype test chip with these technologies and confirmed that the AI accelerator achieved world-class power efficiency of 23.9 TOPS per watt at a normal supply voltage of 0.8 V, and an operating power efficiency of 10 TOPS per watt for major AI models. It also demonstrated that AI processing is possible without a fan or heat sink.

These results help address heat generation caused by increased power consumption, which has been one of the challenges in implementing AI chips in embedded devices such as service robots and automated guided vehicles. Significantly reducing heat generation will help automation spread into industries such as the robotics and smart-technology markets. The technologies will be applied to Renesas’ RZ/V series of MPUs for vision AI applications.

https://renesas.com

u-blox releases versatile Wi-Fi 6 module for the mass market

u-blox has announced its new NORA-W4 module. With its range of wireless technologies (Wi-Fi 6, Bluetooth LE 5.3, Thread, and Zigbee), compact form factor (10.4 x 14.3 x 1.9 mm), and affordability, the NORA-W4 is ideal for IoT applications such as smart homes, asset tracking, healthcare, and industrial automation.

The NORA-W4 is a single-band, tri-radio Wi-Fi 6 module built on the Espressif ESP32-C6 System-on-Chip. It enables battery-powered IoT nodes to operate directly over Wi-Fi, which simplifies implementation and reduces system-level costs by reducing the need for a Bluetooth gateway, making it a perfect match for applications such as wireless battery-operated sensors.

The NORA-W4 uses Wi-Fi 6 technology optimised for IoT, which significantly reduces network congestion in environments such as factories, workplaces, and warehouses, thereby improving throughput and reducing latency. Fully backward compatible with Wi-Fi 4, the module can also be used where the Wi-Fi infrastructure has not yet been upgraded.

The u-blox NORA-W4 module supports the Matter protocol, as well as the Thread and Zigbee technologies designed for new smart home applications. Consequently, it offers interoperability with other Matter smart home devices.

The NORA-W4’s small form factor allows designers to meet tight device size constraints. Its compatibility with other u-blox NORA modules makes technology migration straightforward, such as transitioning from Wi-Fi 4 to Wi-Fi 6. In addition, the module includes enhanced security features such as secure boot, a trusted execution environment, and flash encryption.

The NORA-W4 is available in six variants, combining open CPU or u-connectXpress software, an antenna pin or a PCB antenna, and either 4MB or 8MB of flash memory. Early samples are available now, with volume production scheduled for H2 2024.

https://www.u-blox.com

ST expands into 3D depth sensing with latest time-of-flight sensors

ST has announced an all-in-one, direct Time-of-Flight (dToF) 3D LiDAR module with market-leading 2.3k resolution, and revealed an early design win for the world’s smallest 500k-pixel indirect Time-of-Flight (iToF) sensor.

The VL53L9 is a new direct ToF 3D LiDAR device with a resolution of up to 2.3k zones. Integrating dual-scan flood illumination, unique in the market, the LiDAR can detect small objects and edges, and captures both 2D infrared (IR) images and 3D depth-map information. It comes as a ready-to-use, low-power module with on-chip dToF processing, requiring no extra external components or calibration. The device delivers state-of-the-art ranging performance from 5 cm to 10 m.

The VL53L9’s suite of features elevates camera-assist performance, supporting everything from macro to telephoto photography. It enables features such as laser autofocus, bokeh, and cinematic effects for stills and video at 60fps (frames per second). Virtual reality (VR) systems can leverage its accurate depth and 2D images to enhance spatial mapping for more immersive gaming and other VR experiences such as virtual visits or 3D avatars. In addition, the sensor’s ability to detect the edges of small objects at short and ultra-long ranges makes it suitable for applications such as virtual reality and SLAM (simultaneous localisation and mapping).

ST is also announcing news about its VD55H1 ToF sensor, including the start of volume production and an early design win with Lanxin Technology, a China-based company focusing on mobile-robot deep-vision systems. MRDVS, a Lanxin subsidiary, has chosen the VD55H1 to add high-accuracy depth sensing to its 3D cameras. These high-performance, ultra-compact cameras with ST’s sensor inside combine the power of 3D vision and edge AI, delivering intelligent obstacle avoidance and high-precision docking in mobile robots.

In addition to machine vision, the VD55H1 is ideal for 3D webcams and PC applications, 3D reconstruction for VR headsets, and people counting and activity detection in smart homes and buildings. It packs 672 x 804 sensing pixels into a tiny chip and can accurately map a three-dimensional surface by measuring distance to over half a million points. ST’s stacked-wafer manufacturing process with backside illumination enables unparalleled resolution with a smaller die size and lower power consumption than alternative iToF sensors on the market. These characteristics give the sensor excellent credentials in 3D content creation for webcams and VR applications, including virtual avatars, hand modelling, and gaming.
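
To put the pixel count in perspective, 672 x 804 = 540,288, i.e. just over half a million range measurements per frame. The sketch below shows, in general terms, how a depth map of that resolution is back-projected into a 3D point cloud with a pinhole-camera model; the focal length and principal point are hypothetical values for illustration, not VD55H1 parameters.

```python
import numpy as np

H, W = 804, 672                     # sensor resolution: 672 x 804 = 540,288 pixels
fx = fy = 500.0                     # hypothetical focal length (pixels)
cx, cy = W / 2, H / 2               # hypothetical principal point

depth = np.full((H, W), 1.5, dtype=np.float32)   # dummy depth map: 1.5 m everywhere

# Back-project every pixel (u, v, depth) into 3D using the pinhole model
u, v = np.meshgrid(np.arange(W), np.arange(H))
x = (u - cx) * depth / fx
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

print(points.shape)                 # (540288, 3): over half a million 3D points
```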

First samples of the VL53L9 are already available for lead customers and mass production is scheduled for early 2025. The VD55H1 is in full production now.

ST will showcase a range of ToF sensors including the VL53L9 and explain more about its technologies at Mobile World Congress 2024, in Barcelona, February 26-29, at booth 7A61.

https://www.st.com

ST reveals cable-free connectivity for eUSB accessories, devices and industrial applications

New short-range wireless point-to-point transceiver ICs from STMicroelectronics remove the need for cables and connectors in consumer-friendly accessories and personal electronics like digital cameras, wearables, portable hard drives, and small gaming terminals. They also address data-transfer challenges in industrial applications such as rotating machinery.

As a cost-effective cable replacement, the ST60A3H0 and ST60A3H1 transceivers let designers create products with slim, aperture-free cases that can be stylish and water-resistant while allowing convenient wireless docking. Self-discovery with instant mating removes the need for manual pairing, while low power consumption preserves battery runtime. The devices operate in the 60GHz V-band and provide eUSB2, I2C, SPI, UART, and GPIO tunnelling.

Energy demand is minimal: the devices consume 130mW in eUSB rx/tx mode, just 90mW in UART, GPIO, and I2C modes, and 23µW in shutdown mode. As they can handle data exchanges at up to 480Mbit/s, consistent with the USB 2.0 High Speed specification, wireless connections can deliver cable-like speed and low latency.

The ST60A3H1 has an integrated antenna that eases final system design. It comes in a compact 3mm x 4mm VFBGA package. The ST60A3H0 is designed for connecting an external antenna, giving flexibility to address diverse applications. It has a smaller, 2.2mm x 2.6mm footprint.

In industrial environments, wireless connections with these transceivers deliver benefits including safe galvanic isolation and immunity to environmental hazards such as dust and humidity. The devices are also ideal for rotating machinery and instruments such as radars and lidars, as well as mobile equipment like robotic arms. Because the link is free from mechanical wear, its lifetime is not limited by the number of rotations, ensuring greater reliability than slip rings, particularly for high-data-rate signals, at a lower cost than fibre-optic rotating joints (FORJ).

The transceivers are easy to use, requiring no software drivers or protocol stack. In addition to enhancing end-user experiences, they enable fast, efficient contactless product testing and debugging, including loading firmware over the air (FOTA), both during manufacturing and after sales.

http://www.st.com/en/wireless-connectivity/60-ghz-contactless-products.html

About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily updates on news, new products and industry developments. To stay up to date, register to receive our weekly newsletters and keep informed of the latest technology news and new products from around the globe. Simply click this link to register: Smart Cities Registration