Jetson AGX Orin developer kit increases TOPS for robotics and edge AI

At GTC 2022, Nvidia announced the Jetson AGX Orin developer kit, claiming it is the most powerful, compact and energy-efficient AI supercomputer available for advanced robotics, autonomous machines, and next-generation embedded and edge computing.

Jetson AGX Orin delivers 275 trillion operations per second (TOPS), exceeding the processing power of its predecessor, the Jetson AGX Xavier, by a factor of eight, while maintaining the same palm-sized form factor and pin compatibility. It is also offered at “a similar price”. The developer kit features an Nvidia Ampere architecture GPU, Arm Cortex-A78AE CPUs, next-generation deep learning and vision accelerators, high-speed interfaces, faster memory bandwidth and multi-modal sensor support to feed multiple, concurrent AI application pipelines.

“As AI transforms manufacturing, healthcare, retail, transportation, smart cities and other essential sectors of the economy, demand for processing continues to surge,” said Deepu Talla, vice president of Embedded and Edge Computing at Nvidia. “The availability of Jetson AGX Orin will supercharge the efforts of the entire industry as it builds the next generation of robotics and edge AI products.”

Customers using Jetson AGX Orin can leverage the full Nvidia CUDA-X accelerated computing stack, the Nvidia JetPack software development kit (SDK), pre-trained models from the company’s NGC catalogue and the latest frameworks and tools for application development and optimisation such as Isaac on Omniverse, Metropolis and the TAO Toolkit. According to Nvidia, this ecosystem reduces time and cost for production-quality AI deployments, and offers access to the largest, most complex models needed to solve robotics and edge AI challenges in 3D perception, natural language understanding or multi-sensor fusion.

The Jetson embedded computing partner ecosystem includes a range of services and products, including cameras and other multi-modal sensors, carrier boards, hardware design services, AI and system software, developer tools and custom software development.

The Nvidia Jetson AGX Orin developer kit is available now. Production modules will be available in Q4.

http://www.nvidia.com


50W GaN converter is compact for consumer and industrial applications

A single-switch topology, with current sensing and protection circuitry, is integrated in the VIPerGaN50 converter. According to STMicroelectronics, it simplifies building single-switch flyback converters up to 50W and integrates a 650V GaN power transistor for energy efficiency and miniaturisation.

The VIPerGaN50 is housed in a compact, low-cost 5.0 x 6.0mm package. The speed of the integrated GaN transistor allows a high switching frequency with a small and lightweight flyback transformer, said ST. Minimal additional external components are needed to design an advanced, high-efficiency switched mode power supply (SMPS), added the company.

Designers can leverage GaN wide-bandgap technology to meet increasingly stringent ecodesign codes that target global energy savings and net-zero carbon emissions. The converters are suitable for consumer and industrial applications such as power adapters, USB-PD chargers, and power supplies for home appliances, air conditioners, LED lighting equipment and smart meters.

The converter operates in multiple modes to maximise efficiency at all line and load conditions. At heavy load, quasi-resonant (QR) operation with zero voltage switching minimises turn-on losses and electromagnetic emissions. At reduced load, valley skipping limits switching losses and leverages ST’s proprietary valley lock to prevent audible noise. Frequency foldback with zero-voltage switching ensures the highest possible efficiency at light load, continued the company, with adaptive burst mode operation to minimise losses at very low load. Power management cuts standby power to below 30mW.

Built-in features include output over-voltage protection, brown-in and brown-out protection and input over-voltage protection. Input-voltage feedforward compensation is also provided, to minimise output peak-power variation, as well as embedded thermal shutdown, and frequency jittering to minimise EMI.

The VIPerGaN50 is in production now and available in a 5.0 x 6.0mm QFN package. It is in a free-sample programme and can be ordered online.

http://www.st.com 


GPU server simplifies and futureproofs, claims Super Micro Computer

Customisation is brought to AI, machine learning and high performance computing (HPC) applications with the Universal GPU server system, said Super Micro Computer. According to the company, the “revolutionary technology” simplifies large-scale GPU deployments and is a futureproof design that supports yet-to-be-announced technologies.

The Universal GPU server combines technologies supporting multiple GPU form factors, CPU choices, and storage and networking options, optimised to deliver configured, highly scalable systems for specific AI, machine learning and HPC workloads, with the thermal headroom for the next generation of CPUs and GPUs, said the company.

Initially, the Universal GPU platform will support systems that contain the third generation AMD EPYC 7003 processors with either the AMD Instinct MI250 GPUs or the Nvidia A100 Tensor Core 4-GPU, and the third generation Intel Xeon Scalable processors with built-in AI accelerators and the Nvidia A100 Tensor Core 4-GPU. These systems are designed with an improved thermal capacity to accommodate up to 700W GPUs.

The Supermicro Universal GPU platform is designed to work with a wide range of GPUs based on an open standards design. By adhering to an agreed-upon set of hardware design standards, such as Universal Baseboard (UBB) and OCP Accelerator Modules (OAM), as well as PCI-E and platform-specific interfaces, IT administrators can choose the GPU architecture best suited for their HPC or AI workloads. This approach will also simplify the installation, testing, production and upgrades of GPUs, said Super Micro Computer, and will allow IT administrators to choose the right combination of CPUs and GPUs to create an optimal system based on the needs of their users.

The 4U or 5U Universal GPU server will be available for accelerators that use the UBB standard, as well as PCI-E 4.0, and soon PCI-E 5.0. In addition, 32 DIMM slots and a range of storage and networking options are available, which can also be connected using the PCI-E standard. 

The Supermicro Universal GPU server can accommodate GPUs using baseboards in the SXM or OAM form factors that use high speed GPU-to-GPU interconnects such as Nvidia NVLink or the AMD xGMI Infinity fabric, or which directly connect GPUs via a PCI-E slot. All major current CPU and GPU platforms will be supported, confirmed the company.

The server is designed for maximum airflow and accommodates current and future CPUs and GPUs where the highest TDP (thermal design power) CPUs and GPUs are required for maximum application performance. Liquid cooling options (direct to chip) are available for the Supermicro Universal GPU server as CPUs and GPUs require increased cooling.

The modular design means specific subsystems of the server can be replaced or upgraded, extending the service life of the overall system and reducing the e-waste generated by complete replacement with every new CPU or GPU technology generation.

http://www.supermicro.com


Nvidia prepares for AI infrastructure with Hopper architecture

At GTC 2022, Nvidia announced its next generation accelerated computing platform “to power the next wave of AI data centres”. The Hopper architecture succeeds the Ampere architecture, launched in 2020.

The company also announced its first Hopper-based GPU, the Nvidia H100, equipped with 80 billion transistors. Described as the world’s largest and most powerful accelerator, the H100 has a Transformer Engine and a scalable NVLink interconnect for advancing gigantic AI language models, deep recommender systems, genomics and complex digital twins.

The Nvidia H100 GPU features major advances to accelerate AI, HPC, memory bandwidth, interconnect and communication, including nearly 5Tbytes per second of external connectivity. H100 is the first GPU to support PCIe Gen5 and the first to utilise HBM3, enabling 3Tbytes per second of memory bandwidth, claimed Nvidia. 

According to Jensen Huang, CEO, 20 H100 GPUs can sustain the equivalent of the entire world’s internet traffic, making it possible for customers to deliver advanced recommender systems and large language models running inference on data in real time. 

The Transformer Engine is built to speed up transformer networks by as much as six times compared with the previous generation without losing accuracy.

MIG technology allows a single GPU to be partitioned into seven smaller, fully isolated instances to handle different types of jobs. The Hopper architecture extends MIG capabilities by up to a factor of seven over the previous generation by offering secure multi-tenant configurations in cloud environments across each GPU instance.

According to the company, H100 is the first accelerator with confidential computing capabilities to protect AI models and customer data while they are being processed. Customers can also apply confidential computing to federated learning for privacy-sensitive industries like healthcare and financial services, as well as on shared cloud infrastructures. 

To accelerate the largest AI models, NVLink combines with an external NVLink Switch to extend NVLink as a scale-up network beyond the server, connecting up to 256 H100 GPUs at nine times higher bandwidth versus the previous generation using Nvidia HDR Quantum InfiniBand.

New DPX instructions accelerate dynamic programming by up to 40 times compared with CPUs and up to a factor of seven compared with previous generation GPUs, said Nvidia. This includes, for example, the Floyd-Warshall algorithm to find optimal routes for autonomous robot fleets and the Smith-Waterman algorithm used in sequence alignment for DNA and protein classification and folding. The Nvidia H100 will be available starting in Q3.
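For readers unfamiliar with the workloads DPX instructions target, the Floyd-Warshall algorithm cited by Nvidia is a classic dynamic-programming kernel. The following is a minimal plain-Python reference sketch of that algorithm only, not Nvidia's GPU implementation; a DPX-accelerated version would tile and parallelise the same loop nest on the H100.

```python
# Floyd-Warshall all-pairs shortest paths: the dynamic-programming kernel
# Nvidia cites as a target for the H100's DPX instructions.
# Plain-Python reference sketch only, not a GPU implementation.
INF = float("inf")

def floyd_warshall(dist):
    """dist: square matrix of edge weights (INF where no direct edge).
    Returns the matrix of shortest-path distances between all node pairs."""
    n = len(dist)
    d = [row[:] for row in dist]  # work on a copy
    for k in range(n):            # allow paths through intermediate node k
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Small example: a 4-node directed graph (e.g. waypoints for a robot fleet)
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
print(floyd_warshall(graph))
```

The inner comparison-and-update step is exactly the min-plus recurrence that dedicated dynamic-programming hardware can fuse into a single instruction, which is where the claimed speed-up comes from.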

https://www.nvidia.com


About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily news updates, new products and industry news. To stay up to date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe. Simply register here: Smart Cities Registration