Robotics gains ground on perception performance

Initiatives to deliver a suite of perception technologies to the Robot Operating System (ROS) developer community have been announced following an agreement between Nvidia and Open Robotics.

The agreement is to accelerate ROS 2 performance on Nvidia’s Jetson edge AI platform and GPU-based systems. These initiatives will reduce development time and improve performance for developers seeking to incorporate computer vision and AI/machine learning functionality into ROS-based applications.

“As more ROS developers leverage hardware platforms that contain additional compute capabilities designed to offload the host CPU, ROS is evolving to make it easier to efficiently take advantage of these advanced hardware resources,” explains Brian Gerkey, CEO of Open Robotics. “Working with an accelerated computing leader like Nvidia and its vast experience in AI and robotics innovation will bring significant benefits to the entire ROS community.”

Open Robotics will enhance ROS 2 to enable efficient management of data flow and shared memory across GPU and other processors present on the Nvidia Jetson edge AI platform. This will improve the performance of applications that have to process high bandwidth data from sensors such as cameras and lidars in real time, reports Open Robotics.
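The zero-copy idea behind this shared-memory work can be illustrated with a plain-Python sketch. This is not the ROS 2 or Jetson API; `publish_frame`, `consume_frame` and the frame size are invented for illustration. The point is that a producer places a camera frame in a shared-memory segment and hands consumers only the segment’s name, so the pixel payload itself is never copied between processes:

```python
from multiprocessing import shared_memory

FRAME_BYTES = 640 * 480 * 3  # one RGB camera frame (illustrative size)

def publish_frame(pixels: bytes) -> str:
    """Producer: write a frame into a shared-memory segment and return
    its name. Consumers receive only this small handle, not the pixels."""
    shm = shared_memory.SharedMemory(create=True, size=FRAME_BYTES)
    shm.buf[:len(pixels)] = pixels
    name = shm.name
    shm.close()  # drop this handle; the segment itself stays alive
    return name

def consume_frame(name: str) -> int:
    """Consumer: attach to the same segment by name and process the
    pixels in place -- here, checksum a slice without copying the frame."""
    shm = shared_memory.SharedMemory(name=name)
    total = sum(shm.buf[:16])
    shm.close()
    shm.unlink()  # release the segment once the last consumer is done
    return total
```

The same principle, applied between ROS 2 nodes and GPU-resident buffers, is what avoids round-trips through the host CPU for high-bandwidth camera and lidar streams.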

The two parties are also working to enable seamless simulation interoperability between Open Robotics’ Ignition Gazebo and Nvidia’s Isaac Sim on Omniverse. Isaac Sim already supports ROS 1 and ROS 2 and features an ecosystem of 3D content with popular applications, such as Blender and Unreal Engine 4.

With the two simulators connected, ROS developers can easily move robots and environments between Ignition Gazebo and Isaac Sim to run large-scale simulation and take advantage of high-fidelity dynamics, accurate sensor models and photo-realistic rendering to generate synthetic data for training and testing of AI models.

In addition to being a robotic simulator, Isaac Sim can generate synthetic data to train and test perception models. These capabilities will become more important as roboticists incorporate more perception features that reduce the need for human intervention in the tasks robots perform.

Isaac Sim generates synthetic datasets which are fed directly into Nvidia’s TAO, an AI model adaptation platform, to adapt perception models for a robot’s specific working environment. Ensuring that a robot’s perception stack will perform in a given working environment can begin before any real data is collected from the target surroundings.
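The appeal of simulation-generated training data is that ground-truth labels come for free: the generator knows exactly where everything is in the scene. A toy sketch of the idea in pure Python (nothing to do with the actual Isaac Sim or TAO APIs; `synth_sample` and `synth_dataset` are invented names) renders a bright square at a random position and emits the exact bounding box alongside the image:

```python
import random

def synth_sample(width=64, height=64, rng=None):
    """Render a toy 'scene': a bright square on a dark background.
    Because the generator placed the square, the bounding-box label
    is known exactly -- no human annotation is needed."""
    rng = rng or random.Random()
    size = rng.randint(8, 16)
    x = rng.randint(0, width - size)
    y = rng.randint(0, height - size)
    image = [[0] * width for _ in range(height)]
    for row in range(y, y + size):
        for col in range(x, x + size):
            image[row][col] = 255
    label = {"x": x, "y": y, "w": size, "h": size}
    return image, label

def synth_dataset(n, seed=0):
    """Produce n labelled samples; seeding makes the dataset reproducible."""
    rng = random.Random(seed)
    return [synth_sample(rng=rng) for _ in range(n)]
```

A real pipeline would swap the toy renderer for a physically based simulator and feed the image/label pairs to a training framework, but the labelling-for-free property is the same.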

Software is expected to be released in the spring of 2022.

About Smart Cities

This news story is brought to you by the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily news updates, new products and industry news.