Neural graphics software development kits create metaverse content

To create the metaverse, an immersive, virtual-reality version of the internet, developers will build 3D objects for assembling scenes in games. Engineers will also use 3D objects for product design and for visual effects in gaming and film. When multiple objects and characters need to interact in a virtual world, simulating physics becomes critical. A robot in a virtual factory, for example, needs to look the same and have the same weight capacity and braking capability as its physical counterpart.

Nvidia has released software development kits (SDKs) to shorten the development process for 3D modelling used in transportation, healthcare, telecommunications and entertainment, as well as product design.

The SDKs include NeuralVDB, an update to the OpenVDB library, and Kaolin Wisp, an addition to Kaolin, Nvidia's PyTorch library for 3D deep learning. Kaolin Wisp is a research-oriented library for neural fields, establishing a common suite of tools and a framework to accelerate new research and reducing the time needed to test and implement new techniques from weeks to days.
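
For readers unfamiliar with the term, a neural field is simply a small network that maps spatial coordinates to quantities such as density, colour or signed distance. The following is a minimal sketch of that idea in plain PyTorch; it is illustrative only and does not use the Kaolin Wisp API, whose actual classes and utilities are not shown here.

```python
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    """Minimal coordinate MLP: maps 3D points to a signed distance value.

    Illustrative sketch only; libraries such as Kaolin Wisp layer positional
    encodings, feature grids, datasets and training loops on top of this.
    """
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.net(xyz)

# Query the field at a batch of random points inside the unit cube.
field = NeuralField()
points = torch.rand(1024, 3)   # (N, 3) coordinates
values = field(points)         # (N, 1) predicted signed distances
```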

They also include instant neural graphics primitives, which capture real-world objects and inspired Nvidia’s Instant NeRF, an inverse rendering model that turns a collection of still images into a digital 3D scene. This technique and the associated GitHub code accelerate the process by up to 1,000 times, said Nvidia.
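
Much of that speed-up comes from a multiresolution hash encoding, in which grid vertices are hashed into a small table of learned features, as described in the published Instant NGP work. The sketch below shows only that spatial hash; the prime constants follow the paper and should be treated as an assumption, and everything around them is simplified for illustration.

```python
import torch

# Per-dimension primes used by the Instant NGP spatial hash
# (treat the exact constants as an assumption to check against the paper).
PRIMES = (1, 2654435761, 805459861)

def hash_grid_index(ijk: torch.Tensor, table_size: int) -> torch.Tensor:
    """Map integer grid coordinates of shape (N, 3) to feature-table indices."""
    x = ijk.to(torch.int64)
    h = x[:, 0] * PRIMES[0]
    h = h ^ (x[:, 1] * PRIMES[1])
    h = h ^ (x[:, 2] * PRIMES[2])
    return h % table_size

# A 16,384-entry learnable feature table with 2 features per entry.
table = torch.nn.Parameter(torch.randn(2**14, 2) * 1e-4)
corners = torch.randint(0, 512, (1024, 3))                    # grid vertices to look up
features = table[hash_grid_index(corners, table.shape[0])]    # (1024, 2)
```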

Another new inverse rendering pipeline is 3D MoMa. It allows users to quickly create, from 2D images, a 3D object that can be imported into a graphics engine and modified with realistic materials, lighting and physics.

Building on the GauGAN AI model, GauGAN360 turns rough doodles into “photorealistic masterpieces”, said Nvidia. It generates 8K, 360-degree panoramas that can be ported into scenes in Omniverse, Nvidia’s version of the metaverse.

The Omniverse Avatar Cloud Engine (ACE) is a new collection of cloud APIs, microservices and tools to create, customise and deploy digital human applications. ACE is built on Nvidia’s Unified Compute Framework, allowing developers to integrate core Nvidia AI technologies seamlessly into their avatar applications.

In addition to NeuralVDB, a version of OpenVDB for volumetric data storage and higher resolution 3D data, Nvidia has introduced Omniverse Audio2Face, an AI technology that generates expressive facial animation from a single audio source. It is useful both for interactive real-time applications and as a traditional facial animation authoring tool.

Further work on avatars comes via ASE (Animation Skills Embedding), which uses deep learning to teach characters how to respond to new tasks and actions. The result is physically simulated characters that act in a more responsive and lifelike manner in unfamiliar situations.

The TAO toolkit is a framework that enables users to create accurate, high-performance pose estimation models, which use computer vision to evaluate what a person might be doing in a scene much more quickly than current methods.

In graphics research, Nvidia has also introduced a model linking image features, eye tracking and the quality of pixel rendering to a user’s reaction time. By predicting the combination of rendering quality, display properties and viewing conditions that gives the lowest latency, it should allow better performance in fast-paced, interactive computer graphics applications such as competitive gaming.

Following a collaboration with Stanford University, there are also holographic glasses for virtual reality. They are claimed to deliver full colour, 3D holographic images in a 2.5mm thick optical stack.

http://www.nvidia.com

Low-cost tags are compatible with Abeeway LoRaWAN trackers

An inexpensive tracking device has been developed by Troverlo with Actility and its subsidiary, Abeeway. The Troverlo host-powered tag can be read by Abeeway LoRaWAN (long-range wide area network) trackers.

The integrated tag, powered by location tracking and data collection service provider Troverlo, enables customers to track assets that were previously not feasible to track due to cost, said the company.

The Troverlo tags use a standard Wi-Fi chip to send out a beacon, similar to a Wi-Fi access point, that can be picked up by any device looking for a Wi-Fi connection. Because the Abeeway LoRaWAN trackers have built-in Wi-Fi sniffing capabilities, the tags work seamlessly with them: the Abeeway tracker “sees” Troverlo tags and reports their location and sensor data through the connected LoRaWAN gateway and the ThingPark X Location Engine. Troverlo tags only require a standard off-the-shelf Wi-Fi chip to be effective, said Actility, and are therefore available in different form factors, from standalone battery-powered tags to embedded tags built into equipment or products.
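
To illustrate the principle of resolving a tag's position from beacon sightings (this is not Troverlo's or Actility's actual algorithm), the sketch below combines the reported positions of the trackers that heard a tag into an RSSI-weighted centroid. All names and the weighting scheme are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One sighting of a tag's Wi-Fi beacon by a tracker or other observer."""
    tag_mac: str      # MAC address broadcast in the tag's beacon
    lat: float        # observer's latitude
    lon: float        # observer's longitude
    rssi_dbm: int     # received signal strength of the beacon

def estimate_position(observations: list) -> tuple:
    """RSSI-weighted centroid of observer positions (illustrative only).

    Stronger signals imply the observer is closer to the tag, so they get
    a larger weight. Real location engines use far richer models.
    """
    weights = [max(1.0, 100.0 + o.rssi_dbm) for o in observations]
    total = sum(weights)
    lat = sum(w * o.lat for w, o in zip(weights, observations)) / total
    lon = sum(w * o.lon for w, o in zip(weights, observations)) / total
    return lat, lon

sightings = [
    Observation("AA:BB:CC:DD:EE:01", 48.8566, 2.3522, -55),
    Observation("AA:BB:CC:DD:EE:01", 48.8570, 2.3530, -72),
]
print(estimate_position(sightings))
```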

The Troverlo tags are also tracked automatically outside of LoRaWAN connectivity through the Troverlo Global Observation Network. This means that if a tracked asset leaves the LoRaWAN area, it can still be tracked anywhere on the globe without any additional connectivity. Troverlo’s Global Observation Network consists of connected devices such as cell phones, Wi-Fi access points and telematics nodes. Compared with other connection methods, such as LTE, Troverlo tags can be tracked for one tenth of the price, said Actility.

The Troverlo/Abeeway tracking tag can be applied across any Actility implementation, including livestock management, where it can be used to monitor abnormal behaviour or locations. With relatively low margins, however, not all ranchers can afford to put a conventional tracker on each animal. Troverlo tags allow farmers or ranchers to track every animal and use the existing Abeeway trackers to backhaul the data.

Other application areas are logistics and transportation. The inexpensive Troverlo tags can be attached to every pallet or product being shipped so that it can be tracked as it moves from the warehouse to the truck to the customer site. Troverlo tags enable Abeeway fleet management to scale into more granular tracking of product movement, added Actility.

https://www.actility.com

Mouser adds Texas Instruments sensor evaluation module

Described as easy to use, the IWR6843LEVM is a 60GHz mmWave sensor evaluation platform for the IWR6843 with an FR4-based antenna. Available from distributor Mouser Electronics, the IWR6843LEVM may be used to evaluate both the IWR6843 and IWR6443 sensors.

The evaluation module enables access to point cloud data and power over USB interfaces, and supports direct connectivity to the MMWAVEICBOOST and DCA1000EVM development kits. The IWR6843LEVM contains everything required to develop software for the on-chip C67x digital signal processor (DSP) cores, hardware accelerators (HWAs) and low-power Arm Cortex-R4F controllers, confirmed Mouser.
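
As a rough illustration of what reading that point cloud stream over USB can look like on the host side, the sketch below scans a serial port for the frame sync pattern commonly used by TI's mmWave SDK demo firmware. The port name, baud rate and magic word are assumptions to verify against the MMWAVE-SDK documentation for the firmware in use; parsing of the frame header and TLVs is left out.

```python
import serial  # pyserial

# Frame sync pattern commonly emitted by TI's mmWave SDK demo output stream.
# Treat this, the port name and the baud rate as assumptions to verify
# against the MMWAVE-SDK documentation for your firmware version.
MAGIC_WORD = bytes([0x02, 0x01, 0x04, 0x03, 0x06, 0x05, 0x08, 0x07])

def read_frames(port: str = "/dev/ttyACM1", baud: int = 921600):
    """Yield raw byte frames delimited by the magic word (illustrative only)."""
    buf = bytearray()
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            buf.extend(ser.read(4096))
            start = buf.find(MAGIC_WORD)
            if start < 0:
                continue
            end = buf.find(MAGIC_WORD, start + len(MAGIC_WORD))
            if end < 0:
                continue
            yield bytes(buf[start:end])   # one complete frame: header + TLVs
            del buf[:end]

# for frame in read_frames():
#     print(len(frame), "bytes in frame")
```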

Texas Instruments’ IWR6843LEVM is supported by mmWave tools and software, namely mmWave studio (MMWAVE-STUDIO) and the mmWave software development kit (MMWAVE-SDK). Additional boards may be used to extend functionality. For example, the DCA1000EVM allows access to the sensor’s raw data via a low-voltage differential signalling (LVDS) interface, while the MMWAVEICBOOST development kit adds software development and trace capabilities via Code Composer Studio (CCSTUDIO). The IWR6843LEVM paired with MMWAVEICBOOST can interface with the MCU LaunchPad development kit ecosystem.

The sensor evaluation module has a four-receive (4RX), three-transmit (3TX) antenna array with a 120-degree azimuth field of view (FoV) and an 80-degree elevation FoV.

Discrete DC/DC power management includes onboard capability for power-consumption monitoring. 

http://www.mouser.com

NeuralVDB reduces memory footprint and adds AI 

Building on the development of OpenVDB, the open source C++ library for volumetric data, Nvidia has announced NeuralVDB, which brings the power of AI to OpenVDB.

It reduces memory footprint by up to 100 times, which allows professionals working in scientific computing and visualisation, medical imaging, rocket science and visual effects to interact with extremely large and complex datasets in real time.

In the last decade, explained Ken Museth, senior director of simulation technology at Nvidia, OpenVDB has moved out of the visual effects industry and into industrial design and scientific use cases where sparse volumetric data is prevalent.

NeuralVDB joins Nvidia’s NanoVDB, introduced last year, which added GPU support to OpenVDB. This accelerated performance and opened the door to real-time simulation and rendering, said Museth.

NeuralVDB builds on this GPU acceleration by adding machine learning to introduce compact neural representations that “dramatically reduce” its memory footprint. As a result, 3D data can be represented at even higher resolution and at a much larger scale than OpenVDB. Users can therefore handle massive volumetric datasets on devices like individual workstations and even laptops, said Nvidia.

NeuralVDB compresses a volume’s memory footprint by up to 100x compared with NanoVDB, which allows users to transmit and share large, complex volumetric datasets more efficiently.

To accelerate training by up to a factor of two, NeuralVDB allows the weights of one frame to be used for the subsequent one. NeuralVDB also enables users to achieve temporal coherency, or smooth encoding, by using the network results from the previous frame.
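
As a generic illustration of this warm-starting idea, not NeuralVDB's actual implementation, the PyTorch sketch below initialises each frame's network from the previous frame's weights before fine-tuning, rather than training every frame from scratch; the data and model here are placeholders.

```python
import copy
import torch
import torch.nn as nn

def make_frame_model() -> nn.Module:
    # Small stand-in for a per-frame neural volume representation.
    return nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))

def fit_frame(model: nn.Module, frame_data, steps: int = 200) -> nn.Module:
    """Fine-tune `model` on one frame's (coords, values) samples."""
    coords, values = frame_data
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(coords), values)
        loss.backward()
        opt.step()
    return model

# Placeholder animation: three frames of random (coords, values) samples.
frames = [(torch.rand(256, 3), torch.rand(256, 1)) for _ in range(3)]
prev = None
for frame_data in frames:
    # Warm-start from the previous frame's weights instead of from scratch,
    # which both speeds up training and keeps encodings temporally coherent.
    model = copy.deepcopy(prev) if prev is not None else make_frame_model()
    prev = fit_frame(model, frame_data)
```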

This combination reduces memory requirements, accelerates training and enables temporal coherency, offering new possibilities for scientific and industrial use cases, including massive, complex volume datasets for AI-enabled medical imaging and large-scale digital twin simulations.

http://www.nvidia.com

About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily news updates, new products and industry news. To stay up-to-date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe. Simply click this link to register here: Smart Cities Registration