Xilinx introduces Kria adaptive SOMs to add AI at the edge

Adaptive system-on-modules (SOMs) from Xilinx are production-ready, small-form-factor embedded boards that enable rapid deployment in edge-based applications. The Kria adaptive SOMs can be coupled with a software stack and pre-built applications. According to Xilinx, the SOMs offer a new way of bringing adaptive computing to AI and software developers.

The initial product to be made available is the Kria K26 SOM. It specifically targets vision AI applications in smart cities and smart factories.

By allowing developers to start at a more evolved point in the design cycle compared to chip-down design, Kria SOMs can reduce time-to-deployment by up to nine months, says Xilinx.

The Kria K26 SOM is built on top of the Zynq UltraScale+™ MPSoC architecture, which features a quad-core Arm Cortex-A53 processor, more than 250,000 logic cells, and an H.264/H.265 video codec. It has 4Gbytes of DDR4 memory and 245 I/Os, which allow it to adapt to “virtually any sensor or interface”. It also delivers 1.4TOPS of AI compute performance, sufficient to create vision AI applications with more than three times higher performance at lower latency and power compared with GPU-based SOMs. Target applications are smart vision systems, for example in security, traffic and city cameras, retail analytics, machine vision, and vision-guided robotics.

Hardware is coupled with software for production-ready, vision-accelerated applications which eliminate all the FPGA hardware design work. Software developers can integrate custom AI models and application code. There is also the option to modify the vision pipeline using design environments such as the TensorFlow, PyTorch or Caffe frameworks, as well as the C, C++, OpenCL, and Python programming languages. These are enabled by the Vitis unified software development platform and libraries, adds Xilinx.

The company has also opened an embedded app store for edge applications. There are apps for Kria SOMs from Xilinx and its ecosystem partners. Xilinx apps range from smart camera tracking and face detection to natural language processing with smart vision. They are open source and provided free of charge.

For further customisation and optimisation, embedded developers can draw on support for the standard Yocto-based PetaLinux. A first-time collaboration between Xilinx and Canonical also provides support for Ubuntu Linux, the distribution widely used by AI developers. Customers can develop in either environment and take either approach to production. Both environments will come pre-built with a software infrastructure and helpful utilities.

Finally, the Kria KV260 Vision AI starter kit is purpose-built to support accelerated vision applications available in the Xilinx App Store. The company claims developers can be “up and running in less than an hour with no knowledge of FPGAs or FPGA tools”. When a customer is ready to move to deployment, they can seamlessly transition to the Kria K26 production SOM, including commercial and industrial variants.

Xilinx has published an SOM roadmap with a range of products, from cost-optimised SOMs for size- and cost-constrained applications to higher-performance modules that will offer developers more real-time compute capability per watt.

Kria K26 SOMs and the KV260 Vision AI Starter Kit can be ordered now from Xilinx and its network of worldwide distributors. The KV260 starter kit is available immediately; the commercial-grade Kria K26 SOM ships in May 2021 and the industrial-grade K26 SOM this summer. Ubuntu Linux on Kria K26 SOMs is expected to be available in July 2021.

http://www.xilinx.com/kria


AI controller and software detect humans, using less energy

Claimed to be the lowest-power person-detection system, the combination of Maxim Integrated’s MAX78000 artificial intelligence (AI) microcontroller and Aizip’s Visual Wake Words model brings human-figure detection to IoT imaging and video with a power budget of just 0.7 mJ per inference. This, says Maxim, is a 100-fold improvement on the performance of conventional software, making it “the most economical and efficient IoT person-detection solution available”.

At 0.7 mJ per inference, the MAX78000 neural-network microcontroller running Aizip’s Visual Wake Words (VWW) model can deliver 13 million inferences from a single AA/LR6 battery. The low-power network provides longer operation for battery-powered IoT systems that require human-presence detection, including building energy management and smart security cameras.
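The 13-million-inference figure can be sanity-checked with simple energy arithmetic. The cell capacity and conversion-efficiency figures below are illustrative assumptions, not from the article: a typical AA/LR6 alkaline cell holds roughly 2,000 to 3,000 mAh at a nominal 1.5 V.

```python
# Back-of-envelope check of the "13 million inferences per AA cell" claim.
ENERGY_PER_INFERENCE_J = 0.7e-3  # 0.7 mJ per inference, from the article


def inferences_per_cell(capacity_mah, nominal_v=1.5):
    """Ideal inference count from one cell, ignoring regulator losses.

    capacity_mah and nominal_v are assumed typical alkaline-cell figures.
    """
    energy_j = capacity_mah / 1000 * 3600 * nominal_v  # mAh -> J
    return energy_j / ENERGY_PER_INFERENCE_J


low = inferences_per_cell(2000)   # ~15.4 million inferences
high = inferences_per_cell(3000)  # ~23.1 million inferences
```

The ideal figures land at 15 to 23 million inferences, so the quoted 13 million is plausible once regulator and standby losses are taken into account.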

The mixed-precision VWW network is part of the Aizip Intelligent Vision Deep Neural Network (AIV DNN) series for image and video applications and was developed with Aizip’s proprietary design automation tools to achieve human-presence detection accuracy of greater than 85 per cent.

The extreme model compression delivers accurate smart vision with a memory-constrained, low-cost AI-accelerated microcontroller and cost-sensitive image sensors, says Maxim.

The MAX78000 microcontroller and MAX78000EVKIT# evaluation kit are available now directly from Maxim Integrated’s website and through authorised distributors.

AIV DNN series models, tools and services are available directly from Aizip.

Aizip develops AI models for IoT applications. Based in Silicon Valley, Aizip provides design services with superior performance, quick turnaround time, and “excellent ROI” (return on investment).

Maxim Integrated offers a broad portfolio of semiconductors, tools and support to deliver efficient power, precision measurement, reliable connectivity and robust protection along with intelligent processing. Its products serve designers in application areas such as automotive, communications, consumer, data centre, healthcare, industrial and the IoT.

http://www.maximintegrated.com


Infineon combines MEMS and automotive expertise for Xensiv microphone

High-performance, low-noise MEMS microphones are increasingly popular inside and outside of vehicles, driven by demand for voice quality and hands-free operation. Infineon says it has combined its expertise in the automotive industry with its MEMS microphone technical know-how to develop the Xensiv IM67D130A, claimed to be the first microphone on the market to be qualified for automotive applications.

The Xensiv IM67D130A microphone has a wide operating temperature range of -40 to +105 degrees C for use in harsh automotive environments. The high acoustic overload point (AOP) of 130dB sound pressure level (SPL) allows the microphone to capture distortion-free audio signals in loud environments, making it effective whether placed inside or outside the vehicle. The IM67D130A can be used for in-cabin applications such as hands-free systems, emergency calls, in-cabin communication and active noise cancellation (ANC). For exterior applications, it can be used in, for example, siren or road condition detection. Its use allows sound to be a further, complementary sensor for advanced driver assistance systems (ADAS) and predictive maintenance.

The high signal-to-noise ratio (SNR) of 67dB, combined with a low distortion level, is designed for optimum speech quality and speech intelligibility in speech recognition applications. The microphones have tight sensitivity matching, allowing optimised beamforming algorithms for multi-microphone arrays, added Infineon.
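The quoted dB figures translate to physical quantities with the standard SPL conversion. A minimal sketch, assuming the conventional 20 µPa reference pressure and the microphone-industry convention of referencing SNR to 94 dB SPL (1 Pa):

```python
P_REF = 20e-6  # 20 µPa, the standard 0 dB SPL reference pressure


def spl_to_pascal(db_spl):
    """Convert a sound pressure level in dB SPL to pressure in pascals."""
    return P_REF * 10 ** (db_spl / 20)


aop_pa = spl_to_pascal(130)  # the 130dB AOP corresponds to ~63 Pa

# Microphone SNR is conventionally referenced to 94 dB SPL (1 Pa),
# so a 67dB SNR implies an equivalent noise floor of about 27 dB SPL.
noise_floor_db_spl = 94 - 67
```

On these assumptions, the 130dB overload point corresponds to a peak pressure of roughly 63 Pa, which is why the part can sit next to a siren without clipping.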

The Xensiv MEMS microphone IM67D130A is qualified to the AEC-Q103-003 standard for automotive applications and is available now in a PG-LLGA-5-4 package.

http://www.infineon.com


MxFE digitiser targets aerospace and defence applications

From Analog Devices comes a 16-channel, mixed-signal front-end (MxFE) digitiser for aerospace and defence applications, including phased array radars, electronic warfare, and ground-based satellite communications.

The new digitiser includes four AD9081 or four AD9082 software-defined, direct RF sampling transceivers. It is designed to accelerate customer development by providing reference RF signal chains, software architectures, power supply designs, and application example code.

ADI has also introduced a digitising card to complement the platform and facilitate system-level calibration algorithms and demonstration of power-up phase determinism.

The ADQUADMXFE1EBZ 16-channel, mixed-signal front-end digitiser offers 16 RF receive (Rx) channels (32 digital Rx channels) and 16 RF transmit (Tx) channels (32 digital Tx channels). It provides application-specific examples as Matlab application scripts and a GUI, and has flexible clock distribution.

The ADQUADMXFE-CAL digitising card provides both individual adjacent-channel loopback and combined-channel loopback options, combined Tx and Rx channel outputs via SMA connectors, and on-board log power detectors with AD5592R digitisation.

Analog Devices is a global high-performance semiconductor company dedicated to solving tough engineering challenges.

http://www.analog.com


About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily news updates, new products and industry news. To stay up-to-date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe. Simply click this link to register here: Smart Cities Registration