Xilinx introduces Kria adaptive SOMs to add AI at the edge

Adaptive system-on-modules (SOMs) from Xilinx are production-ready, small form factor embedded boards that enable rapid deployment in edge-based applications. The Kria adaptive SOMs can be coupled with a software stack and pre-built applications. According to Xilinx, the SOMs are a new way of bringing adaptive computing to AI and software developers.

The initial product to be made available is the Kria K26 SOM, which specifically targets vision AI applications in smart cities and smart factories.

By allowing developers to start at a more evolved point in the design cycle compared to chip-down design, Kria SOMs can reduce time-to-deployment by up to nine months, says Xilinx.

The Kria K26 SOM is built on the Zynq UltraScale+ MPSoC architecture, which features a quad-core Arm Cortex-A53 processor, more than 250,000 logic cells and an H.264/H.265 video codec. It has 4Gbyte of DDR4 memory and 245 I/Os, which allow it to adapt to “virtually any sensor or interface”. It also delivers 1.4TOPS of AI compute performance, sufficient, says Xilinx, to create vision AI applications with more than three times the performance at lower latency and power compared with GPU-based SOMs. Target applications are smart vision systems, for example in security, traffic and city cameras, retail analytics, machine vision and vision-guided robotics.

The hardware is coupled with software for production-ready, vision-accelerated applications which eliminate all the FPGA hardware design work. Software developers can integrate custom AI models and application code, and can also modify the vision pipeline using familiar design environments, such as the TensorFlow, PyTorch or Caffe frameworks, as well as the C, C++, OpenCL and Python programming languages. These are enabled by the Vitis unified software development platform and libraries, adds Xilinx.
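To illustrate the framework-level entry point described above, a custom vision model might be defined in TensorFlow and saved for later quantisation and compilation. The toy architecture and file name below are illustrative assumptions, not a Xilinx-prescribed flow:

```python
# Minimal sketch: define and save a small vision model in TensorFlow.
# The architecture and file name are illustrative; a real Kria design
# would quantise and compile a model like this with the Vitis AI
# toolchain before deploying it to the K26's programmable logic.
import tensorflow as tf

def build_person_detector(input_shape=(224, 224, 3), num_classes=2):
    """Toy CNN classifier standing in for a custom vision AI model."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_person_detector()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.save("person_detector.h5")  # hand off to the quantiser/compiler
```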

The company has also opened an embedded app store for edge applications. There are apps for Kria SOMs from Xilinx and its ecosystem partners. Xilinx apps range from smart camera tracking and face detection to natural language processing with smart vision. They are open source and provided free of charge.

For further customisation and optimisation, embedded developers can draw on support for the standard Yocto-based PetaLinux. In a first collaboration between Xilinx and Canonical, there is also support for Ubuntu Linux, the Linux distribution favoured by AI developers. Customers can develop in either environment and take either approach to production. Both environments come pre-built with a software infrastructure and helpful utilities.

Finally, the Kria KV260 Vision AI starter kit is purpose-built to support accelerated vision applications available in the Xilinx App Store. The company claims developers can be “up and running in less than an hour with no knowledge of FPGAs or FPGA tools”. When a customer is ready to move to deployment, they can seamlessly transition to the Kria K26 production SOM, including commercial and industrial variants.

Xilinx has published an SOM roadmap with a range of products, from cost-optimised SOMs for size- and cost-constrained applications to higher-performance modules that will offer developers more real-time compute capability per watt.

The Kria K26 SOM and the KV260 Vision AI starter kit can be ordered now from Xilinx and its network of worldwide distributors. The KV260 Vision AI starter kit is available immediately, the commercial-grade Kria K26 SOM ships in May 2021 and the industrial-grade K26 SOM ships this summer. Ubuntu Linux on Kria K26 SOMs is expected to be available in July 2021.

http://www.xilinx.com/kria

> Read More

AI controller and software detect humans, using less energy

Claimed to be the lowest power person-detection system, the combination of Maxim Integrated’s MAX78000 artificial intelligence (AI) microcontroller and Aizip’s Visual Wake Words (VWW) model brings human-figure detection to IoT imaging and video with a power budget of just 0.7 mJ per inference. This, says Maxim, is a 100-fold improvement on the performance of conventional software solutions, making it “the most economical and efficient IoT person-detection solution available”.

That 0.7 mJ per inference allows 13 million inferences from a single AA/LR6 battery. The resulting low-power network provides longer operation for battery-powered IoT systems that require human-presence detection, including building energy management and smart security cameras.
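A quick sanity check of those figures: at 0.7 mJ per inference, 13 million inferences corresponds to roughly 9 kJ of delivered energy, which is in the right range for an AA cell. The battery energy figure below is an assumption for illustration, not a Maxim number:

```python
# Sanity check of the battery-life claim: 0.7 mJ per inference versus
# the usable energy of an AA/LR6 cell. The ~9 kJ figure is an assumed
# value (roughly 2500 mAh at 1.5 V minus conversion losses); the true
# number varies with chemistry, load profile and regulator efficiency.
ENERGY_PER_INFERENCE_J = 0.7e-3   # 0.7 mJ, per Maxim's figure
AA_USABLE_ENERGY_J = 9.0e3        # assumption, not a vendor figure

inferences = AA_USABLE_ENERGY_J / ENERGY_PER_INFERENCE_J
print(f"~{inferences / 1e6:.1f} million inferences")  # ~12.9 million
```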

The MAX78000 low-power, neural network accelerated microcontroller executes AI inferences at less than 1/100th the energy of conventional software solutions to improve run-time for battery-powered edge AI applications, continues Maxim. The mixed-precision VWW network is part of the Aizip Intelligent Vision Deep Neural Network (AIV DNN) series for image and video applications, and was developed with Aizip’s proprietary design automation tools to achieve human-presence detection accuracy of more than 85 per cent.

The extreme model compression delivers accurate smart vision with a memory-constrained, low-cost AI-accelerated microcontroller and cost-sensitive image sensors, says Maxim.

The MAX78000 microcontroller and MAX78000EVKIT# evaluation kit are available now directly from Maxim Integrated’s website and through authorised distributors.

AIV DNN series models, tools and services are available directly from Aizip.

Aizip develops AI models for IoT applications. Based in Silicon Valley, Aizip provides design services with superior performance, quick turnaround time and “excellent return on investment (ROI)”.

Maxim Integrated offers a broad portfolio of semiconductors, tools and support to deliver efficient power, precision measurement, reliable connectivity and robust protection, along with intelligent processing, to designers in application areas such as automotive, communications, consumer, data centre, healthcare, industrial and the IoT.

http://www.maximintegrated.com

> Read More

IoT kit demonstrates value of narrowband technology

Pre-integrated, pre-tested and pre-secure products are available in the IoT Network kit from Wittra, which takes customers “straight to the ‘Proof of Value’,” says the Swedish company.

The kit is based on open standards to ensure interoperability and ease of integration, enabling users to collect, communicate and control assets. Devices run on a 6LoWPAN IP-based true mesh radio network which uses the sub-GHz spectrum, providing long range and good penetration of structures for robust and reliable data delivery in any setting.

Wittra says its positioning technology is a simple, practical approach to tracking and monitoring assets, and enables total asset visibility in environments that were not previously considered possible using narrowband technology.

The IoT Network kit contains the Wittra gateway, sensor tags, mesh routers and all the associated accessories, ensuring an IoT project is up and running in hours by deploying the Wittra solution via the cloud-based application programming interface (API), claims the company.
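As a purely hypothetical illustration of working with such a cloud-based API, the sketch below polls sensor readings over REST. The endpoint, authentication scheme and JSON fields are invented placeholders, not Wittra’s documented interface:

```python
# Hypothetical sketch of polling tag sensor data over a cloud REST API.
# The base URL, token handling and JSON fields are illustrative
# placeholders and do not reflect Wittra's actual API.
import requests

API_BASE = "https://api.example-wittra-cloud.com/v1"  # placeholder URL
TOKEN = "YOUR_API_TOKEN"                              # placeholder credential

def fetch_tag_readings(tag_id: str) -> dict:
    """Fetch the latest readings for one sensor tag (hypothetical schema)."""
    resp = requests.get(
        f"{API_BASE}/tags/{tag_id}/readings",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

readings = fetch_tag_readings("tag-0001")
print(readings.get("temperature"), readings.get("position"))
```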

Each tag contains several sensors, covering temperature, acceleration, gyroscope, magnetometer and positioning; additional sensors can be added for humidity, ambient light and air pressure. The network’s range is extended by Wittra’s mesh routers, creating a multi-hop, self-forming and self-healing true mesh network.

Products are available through Mouser and Premier Farnell.

Wittra Sweden develops technologies and solutions within the IoT. Since 2012, Wittra has built up a substantial portfolio of intellectual property (IP) and rights. The pre-integrated, pre-tested and pre-secure solutions allow users to collect, communicate and control their assets even in the toughest of environments, claims the company.

http://www.wittra.se

> Read More

OmniVision releases automotive image sensors for Nvidia Drive AGX AI platform

Image sensors from OmniVision Technologies are compatible with Nvidia’s Drive AGX artificial intelligence (AI) computing platform for autonomous vehicles and advanced driver assistance systems (ADAS).

The company announced at the (virtual) Nvidia GTC21 event that it has joined the Nvidia Drive autonomous vehicle development ecosystem.

The first three qualified image sensor families offer distinct benefits to automotive system designers. The OV2311 features 2Mpixel resolution for driver monitoring system (DMS) applications, and was the industry’s first automotive-grade imager to offer a global shutter for minimal driver-motion artefacts. A new 5Mpixel version of the image sensor adds OmniVision’s Nyxel near-infrared (NIR) technology for the best image capture in low- to no-light conditions, says OmniVision. Nyxel technology is claimed to achieve the world’s highest quantum efficiency for DMS, of 36 per cent at the invisible 940nm NIR wavelength, providing the clearest driver images for use by AI software algorithms.

The second family is the OX08B40, believed to be the industry’s first automotive image sensor with 140dB high dynamic range (HDR), LED flicker mitigation (LFM) and 8.3Mpixel resolution. The image sensor enables superior front-view capture regardless of external lighting conditions, claims OmniVision, whose on-chip HALE (HDR and LFM engine) combination algorithm simultaneously provides industry-leading LFM and HDR over the entire automotive temperature range.

The OX03C10 and OX03F10 are claimed to be the only automotive viewing camera image sensors to combine a large 3.0 micron pixel with the HALE algorithm for minimised motion artefacts and the best LFM performance. The OX03C10 also has the lowest power consumption of any 2.5Mpixel LFM image sensor, says OmniVision, and comes in the industry’s smallest package, enabling cameras that run continuously at 60 frames per second to be fitted in even the tightest confines of automotive designs.

The OX03F10 extends the vertical resolution to 1920 x 1536 pixels, providing the high image quality needed when feeding surround view system (SVS) captures into autonomous machine vision systems. The OX03F10’s wider array is also essential in e-mirror applications, providing greater coverage and eliminating blind spots.

OmniVision’s OV2311 and a new 5Mpixel sensor are the exclusive driver monitoring system (DMS) image sensors for the Nvidia Drive Hyperion evaluation architecture.

Paul Wu, automotive staff marketing manager at OmniVision, said: “We are excited to become an Nvidia Drive ecosystem partner, with three of our premium automotive image sensor families and many more to be added. Our goal is to accelerate time to market for autonomous and ADAS applications by reducing the development effort and cost”. The company will also provide sensors for the driver and occupant monitoring systems on the next-generation Nvidia Drive Hyperion platform, which combines hardware and software.

The Nvidia Drive AGX AI computing platform offers automotive designers the flexibility to use Nvidia hardware with its full-stack Drive software or to develop custom software. The platform can be used for in-cabin monitoring, for autonomous driving applications, or for both. OmniVision offers platform software drivers for its image sensors, as well as complete camera modules that customers can connect to the Nvidia Drive platform for immediate evaluation. OmniVision also provides custom tuning services.
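As a rough illustration of what evaluating a connected camera module can look like on a Linux-based development platform, the generic sketch below grabs a single frame from a camera exposed as a V4L2 device using OpenCV. The device index and resolution are assumptions, and this is not OmniVision’s or Nvidia’s driver API:

```python
# Generic sketch: capture one frame from an evaluation camera module
# exposed as a V4L2 device. The device index and resolution are
# illustrative; vendor-specific drivers and tuning are not shown.
import cv2

cap = cv2.VideoCapture(0)                     # assumed V4L2 device index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)       # e.g. OX03F10-class width
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1536)      # e.g. OX03F10-class height

ok, frame = cap.read()
if ok:
    cv2.imwrite("frame.png", frame)           # save one frame to inspect
cap.release()
```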

“Adding innovative suppliers like OmniVision to our open AV development platform is a key element of accelerating automotive breakthroughs with Nvidia Drive,” said Glenn Schuster, senior director of sensor ecosystems at Nvidia.

All three initial Nvidia Drive AGX platform-compatible sensor families are available now for sampling and mass production, featuring advanced ASIL functional safety and AEC-Q100 Grade 2 certification. The platform drivers and evaluation camera modules are also available now.

http://www.ovt.com

> Read More

About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily updates on news and new products. To stay up to date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe. Simply register here: Smart Cities Registration