Robotics gains ground on perception performance

Initiatives to deliver a suite of perception technologies to the robotics operating system (ROS) developer community have been announced following an agreement between Nvidia and Open Robotics.

The agreement is to accelerate ROS 2 performance on Nvidia’s Jetson edge AI platform and GPU-based systems. These initiatives will reduce development time and improve performance for developers seeking to incorporate computer vision and AI / machine learning functionality into ROS-based applications.

“As more ROS developers leverage hardware platforms that contain additional compute capabilities designed to offload the host CPU, ROS is evolving to make it easier to efficiently take advantage of these advanced hardware resources,” explains Brian Gerkey, CEO of Open Robotics. “Working with an accelerated computing leader like Nvidia and its vast experience in AI and robotics innovation will bring significant benefits to the entire ROS community.”

Open Robotics will enhance ROS 2 to enable efficient management of data flow and shared memory across GPU and other processors present on the Nvidia Jetson edge AI platform. This will improve the performance of applications that have to process high bandwidth data from sensors such as cameras and lidars in real time, reports Open Robotics.
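The shared-memory principle behind this work can be illustrated outside ROS with Python's standard library: a producer writes a sensor frame into a named shared block and a consumer maps the same memory and reads it in place, with no serialisation or copying. This is a sketch of the concept only, not the ROS 2 or Jetson APIs; the buffer name and frame size are invented for the example.

```python
# Illustration only: the shared-memory idea behind zero-copy sensor transport,
# using Python's stdlib rather than the ROS 2 / Jetson mechanisms described above.
from multiprocessing import shared_memory

FRAME_BYTES = 640 * 480 * 3  # one hypothetical 8-bit RGB camera frame

# "Publisher": allocate a shared block and write pixel data in place.
shm = shared_memory.SharedMemory(create=True, size=FRAME_BYTES, name="cam_frame")
shm.buf[:4] = bytes([10, 20, 30, 40])  # write directly into the shared buffer

# "Subscriber": attach to the same block by name and read in place.
view = shared_memory.SharedMemory(name="cam_frame")
first_pixels = bytes(view.buf[:4])

print(first_pixels == bytes([10, 20, 30, 40]))  # True: same memory, no copy made

view.close()
shm.close()
shm.unlink()
```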

The two parties are also working to enable seamless simulation interoperability between Open Robotics' Ignition Gazebo and Nvidia's Isaac Sim on Omniverse. Isaac Sim already supports ROS 1 and ROS 2 and features an ecosystem of 3D content compatible with popular applications such as Blender and Unreal Engine 4.

With the two simulators connected, ROS developers can easily move robots and environments between Ignition Gazebo and Isaac Sim to run large-scale simulation and take advantage of high-fidelity dynamics, accurate sensor models and photo-realistic rendering to generate synthetic data for training and testing of AI models.

In addition to being a robotic simulator, Isaac Sim can generate synthetic data to train and test perception models. These capabilities will be more important as roboticists incorporate more perception features which will reduce the need for human intervention in the tasks they perform.

Isaac Sim generates synthetic datasets that are fed directly into Nvidia's TAO, an AI model adaptation platform, to adapt perception models to a robot's specific working environment. Validating that a robot's perception stack will perform in a given working environment can therefore begin before any real data is collected from the target surroundings.
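The appeal of synthetic data is that every sample carries its ground-truth label for free. A toy sketch of the idea, with randomised scene parameters standing in for a renderer, might look like the following; the parameter names and classes are invented for illustration and are not Isaac Sim or TAO APIs.

```python
# Toy sketch of synthetic-data generation: randomise scene parameters and emit
# a (parameters, label) pair per sample. Real pipelines render images in
# Isaac Sim; the fields here are illustrative only.
import random

def synth_sample(rng):
    """Generate one randomised 'scene' description with its ground-truth label."""
    obj = rng.choice(["pallet", "bin", "person"])
    return {
        "object": obj,                  # ground-truth class label, known by construction
        "x_m": rng.uniform(-2.0, 2.0),  # randomised object position (metres)
        "y_m": rng.uniform(-2.0, 2.0),
        "yaw_deg": rng.uniform(0, 360), # randomised orientation
        "lux": rng.uniform(50, 1000),   # randomised lighting level
    }

rng = random.Random(42)                 # seeded for reproducibility
dataset = [synth_sample(rng) for _ in range(1000)]

# Every sample is labelled with no human annotation effort.
print(len(dataset), all(s["object"] in {"pallet", "bin", "person"} for s in dataset))
```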

Software is expected to be released in the spring of 2022.

http://www.nvidia.com


Development kit brings a voice to Raspberry Pi

The hardware, open software and algorithms required to test, prototype and debug voice and audio functionality are provided in a development kit available from Knowles. The Raspberry Pi-based development kit enables voice integration for smart home, machine learning, consumer technology, industrial and other IoT applications.

The Knowles AISonic IA8201 Raspberry Pi development kit brings voice, audio edge processing and machine learning (ML) listening capabilities to devices and systems. It can be used by product designers and engineers as a single tool to streamline design, development and testing of technology for voice and audio integration.

The kit is designed to be a fast way to prototype innovations which address emerging use cases including contextually-aware voice, machine learning listening, and real-time audio processing. All of these require flexible development tools to accelerate the design process, minimise development costs and leverage new technological advances, says Vikram Shrivastava, senior director, IoT marketing at Knowles. “By selecting Raspberry Pi as the system host, we are opening up the ability to add voice and ML to the largest community of system developers that prefer a Linux or Android environment.”

The kit is built around the Knowles AISonic IA8201 audio edge processor OpenDSP, for low-power, high-performance audio processing. The audio edge processor combines two Tensilica-based, audio-centric DSP cores: one for high-power compute and AI / ML applications, and the other for very low-power, always-on processing of sensor inputs. The IA8201 has 1Mbyte of on-chip RAM, which allows high-bandwidth processing for advanced, always-on, contextually-aware machine learning use cases and provides memory for multiple algorithms.

Using the Knowles open DSP platform, the kit includes a library of on-board audio algorithms and AI / ML libraries. Far-field audio applications can be built using the available low-power voice wake, beamforming, custom keyword and background noise elimination algorithms from Knowles' algorithm partners, such as Amazon Alexa, Sensory, Retune and Alango, to support a wide range of voice and audio customisation. The kit also features a TensorFlow Lite Micro software development kit (SDK) for fast prototyping and product development for AI / ML applications. The TensorFlow Lite SDK allows models developed in larger cloud TensorFlow frameworks to be ported to an embedded platform, which usually has limited compute and lower power consumption at the edge, for example AI inference engines for verticals such as industrial and commercial.

The kit has options for either two or three pre-integrated Knowles Everest microphones and two microphone array boards to help select the appropriate algorithm configurations for the end application.
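The reason multi-microphone arrays matter for far-field voice capture is beamforming: aligning the channels on the direction of the talker so speech adds coherently while off-axis sound does not. A minimal delay-and-sum sketch of the generic technique (not Knowles' proprietary algorithms; the sample rate and delay are illustrative) is:

```python
# Minimal delay-and-sum beamformer sketch (stdlib only). This shows the
# generic two-microphone technique, not Knowles' algorithms; the sample
# rate and inter-mic delay are illustrative.
import math

FS = 16000          # sample rate (Hz)
DELAY = 3           # inter-mic arrival delay, in samples, for an off-axis source

# Simulate a 1 kHz tone arriving at mic 0 first, then DELAY samples later at mic 1.
tone = [math.sin(2 * math.pi * 1000 * n / FS) for n in range(200)]
mic0 = tone
mic1 = [0.0] * DELAY + tone[:-DELAY]

def delay_and_sum(a, b, steer_delay):
    """Delay channel a by steer_delay samples, then average with channel b."""
    delayed = [0.0] * steer_delay + a[:len(a) - steer_delay]
    return [(x + y) / 2 for x, y in zip(delayed, b)]

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

steered = delay_and_sum(mic0, mic1, DELAY)   # steered toward the source
unsteered = delay_and_sum(mic0, mic1, 0)     # not steered

print(rms(steered) > rms(unsteered))  # steering at the source boosts its level
```

Selecting two or three microphones and the array geometry changes the achievable delays, which is why the kit's interchangeable array boards help match the algorithm configuration to the end application.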

Developer support is available through the Knowles Solutions Portal for configuration tools, firmware and algorithms that are supplied as standard with the kit.

The Knowles IA8201 Raspberry Pi development kit is now available for order.

http://www.knowles.com


R-Car development kit assembles tools for deep learning in vehicle design

Software development and validation for smart camera and automated driving applications in passenger, commercial and off-road vehicles can be accelerated using the R-Car software development kit, says Renesas Electronics. The single, multi-OS software platform is easy for customers to access, learn, use, and install, claims the company.

Deep learning is being used by vehicle manufacturers to enable smart camera applications and automated driving systems. Most deep learning frameworks, however, are built for consumer or server applications, which do not operate under the same stringent constraints for functional safety, real-time responsiveness and low power consumption.

Optimised for use with Renesas' R-Car V3H and R-Car V3M SoCs, the R-Car SDK is built for rule-based automotive computer vision and AI-based functions. The simulation platform offers both AI and conventional hardware accelerators for accurate simulations in real time, and Renesas has confirmed that it will continue to strengthen this virtual platform. A full suite of PC-based development tools is delivered for both Windows and Linux, as well as multiple libraries, including support for deep learning, computer vision, video codecs and 3D graphics. The SDK supports Linux and multiple ASIL D-compliant operating systems (e.g., QNX, eMCOS and Integrity) in a single package.

A version of the e² studio is available for the R-Car V series, focusing on the creation of real-time computer vision applications for ADAS and automated driving. The open-source Eclipse-based development environment includes a full set of debug features and an e² studio GUI (graphical user interface) that allows users to customise and integrate third-party tools. It also supports bus monitoring and debug functionalities for image processing and deep learning subsystems.

Software samples, popular CNN networks, a workshop, and application notes are included for a quick start for development. The SDK is also suitable for benchmarking Renesas products and to select the most appropriate SoC for a target application.
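Benchmarking to select an SoC usually comes down to repeated timed inference runs with warm-up, reporting robust statistics rather than a single measurement. A generic latency-measurement harness of this kind (not a Renesas SDK tool; the workload function is a stand-in for a real inference call) might look like:

```python
# Generic inference-latency benchmarking sketch (stdlib only) -- the kind of
# measurement used to compare SoCs for a target application. This is not a
# Renesas SDK tool; workload() is a stand-in for a real inference call.
import statistics
import time

def workload():
    """Stand-in for one inference pass (replace with the real call)."""
    return sum(i * i for i in range(10_000))

def benchmark(fn, runs=50, warmup=5):
    """Return (median, p95) latency in milliseconds over `runs` timed calls."""
    for _ in range(warmup):          # warm caches before measuring
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    return statistics.median(samples), samples[int(0.95 * len(samples)) - 1]

median_ms, p95_ms = benchmark(workload)
print(f"median={median_ms:.3f} ms  p95={p95_ms:.3f} ms")
```

Because the same harness can wrap a PC simulation or a call into embedded hardware, it mirrors the SDK's PC-first, then-port workflow described below.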

The automatic installer ensures all the software libraries and the development environment can be launched quickly on a development workstation. Applications developed and designed on a PC can be seamlessly ported to embedded development hardware. Renesas’ R-Car partner ecosystem – the R-Car Consortium – will have access to the R-Car SDK.

The R-Car SDK is available now.

https://www.renesas.com


Wireless power receiver is engineered for charge sharing

Faster wireless charging and flexible charge sharing can enhance the use of portable and mobile devices in the home, office, industry, healthcare and in-car applications. The STWLC98 integrated wireless power receiver can be combined with the STWBC2-HP transmitter for a transmit-receive system capable of delivering up to 70W.

STMicroelectronics says that the STWLC98 can fully charge high-end smartphones, which contain high-capacity batteries, in just under 30 minutes. It can also be used for fast and convenient contactless charging, freeing the user from the restrictions of cables, sockets, and restrictive connections and allowing the designer to simplify enclosure designs, reduce cost and complexity and implement slimline styles, says the company.

The STWLC98 integrated wireless power receiver complies with the Qi EPP 1.3 wireless charging standard, commonly used in the smartphone industry. It has a 32-bit Arm Cortex-M3 core which supports built-in protection and its embedded OS simplifies Qi 1.3 certification.

The STWBC2-HP transmitter IC can work with ST’s STSAFE-A110 secure element to store official Qi certificates and provides authentication using cryptography. Support for the ST Super Charge (STSC) protocol enables fast charging up to the maximum power-transfer rate of 70W.

The STWLC98 features ST's proprietary Adaptive Rectifier Configuration (ARC) mode, which enhances the ping-up and power transfer spatial freedom of the system in both horizontal and vertical directions without any change in hardware or coil optimisation. Enabling ARC mode, which transforms the whole surface of the transmitter into a usable charging area, increases the ping-up distance by up to 50 per cent in all directions, says ST.

The STWLC98 works directly with the STWBC2-HP, which contains a USB-PD interface, digital buck/boost DC/DC converter, full-bridge inverter, three half-bridge drivers, and voltage, current, and phase sensors. Controlled by a Cortex-M0+ core, the STWBC2-HP executes a patented fast PID (proportional integral derivative) controller loop and also supports the STSC protocol.
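A PID loop of the kind the STWBC2-HP runs regulates the transmitted power by combining proportional, integral and derivative terms on the control error. The following is a generic discrete PID sketch, not ST's patented implementation; the gains and the first-order "plant" standing in for the power stage are illustrative.

```python
# Generic discrete PID controller sketch -- illustrates the control-loop
# technique named above, not ST's patented fast PID. Gains and the toy
# plant model are illustrative.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt            # accumulate steady-state error
        deriv = (err - self.prev_err) / self.dt   # rate of change of error
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Regulate a toy first-order "rectified voltage" plant toward 5.0 V.
pid = PID(kp=0.8, ki=2.0, kd=0.001, dt=0.001)
v = 0.0
for _ in range(5000):                 # 5 s of simulated 1 kHz control
    u = pid.step(5.0, v)
    v += (u - v) * 0.05               # simple first-order plant response

print(abs(v - 5.0) < 0.1)             # settled near the setpoint
```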

The 70W wireless charging chipset can be deployed in smartphones, tablets, laptops, power banks, True Wireless Stereo (TWS) devices, Bluetooth speakers, and AR / VR headsets. Designers can also extend fast and convenient wireless charging to medical equipment such as monitors and medicine pumps, as well as cordless power tools, mobile robots, drones and e-bikes. The chipset is also suited to in-cabin charging solutions and wireless charging of various modules on board a vehicle.

Built-in power management means the STWLC98 has an energy-saving low power standby mode and total end-to-end charging system efficiency which can exceed 90 per cent to meet stringent eco-design targets. The power charger chip features dedicated hardware and advanced algorithms that were developed to address challenges in ASK and FSK communication during high power delivery.
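An end-to-end figure above 90 per cent implies each conversion stage must be very efficient, since the stage efficiencies multiply. A quick illustration (the stage values are hypothetical, not ST's measured figures):

```python
# End-to-end efficiency is the product of the stage efficiencies. The
# stage values below are hypothetical, not ST's measurements.
stages = {
    "inverter": 0.97,        # transmitter full-bridge inverter
    "coil_link": 0.96,       # coil-to-coil magnetic link
    "rectifier": 0.97,       # receiver-side rectification
}

end_to_end = 1.0
for eff in stages.values():
    end_to_end *= eff

print(f"{end_to_end:.3f}")  # ~0.903, i.e. just over 90 per cent
```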

Safety features include foreign object detection (FOD), which leverages high-accuracy current-sense IP, Q-factor detection, and robust communication between transmitter and receiver.

The STWLC98 can also operate in high-efficiency transmitter mode to allow high power charge sharing between devices. Coupled with the STWLC98's embedded Q-factor detection (believed to be the first in a receiver device), this ensures safe operation in transmitter mode.

The PC-based graphical tool, ST Wireless Power Studio, is available for free download.

The STWLC98 is available now in a 4.3 x 3.9mm 90-bump, 0.4mm pitch WLCSP and the STWBC2-HP is available in an 8.0 x 8.0mm VFQFPN, 68-pin, 0.4mm pitch package.

http://www.st.com


About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily news updates, new products and industry news. To stay up-to-date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe. Simply click this link to register here: Smart Cities Registration