AI and robotics starter kit

Nvidia has created a developer kit to teach AI. The Jetson Nano 2GB developer kit is aimed at a new generation of students, educators and hobbyists, the company said at last week’s GTC.

The Jetson Nano 2GB developer kit is supported by Nvidia’s free online training and AI certification programmes, to supplement the many open-source projects, how-tos and videos contributed by thousands of developers in the Jetson community.

The kit is the latest offering in Nvidia’s Jetson AI at the Edge platform, which ranges from entry-level AI devices to advanced platforms for fully autonomous machines.

It is supported by the company’s JetPack software development kit (SDK), which comes with Nvidia container runtime and a full Linux software development environment. This allows developers to package their applications for Jetson with all its dependencies into a single container that is designed to work in any deployment. And it is powered by the same CUDA-X accelerated computing stack used to create breakthrough AI products in such fields as self-driving cars, industrial IoT, healthcare and smart cities.

The Jetson Nano 2GB developer kit can run a diverse set of AI models and frameworks and provides a scalable platform for learning and creating AI applications as they evolve.

It has been endorsed by organisations, enterprises, educators and partners in the embedded computing ecosystem. For example, Jim McGregor, principal analyst at Tirias Research, said: “Nvidia’s Jetson is driving the biggest revolution in industrial AIoT. With the new Jetson Nano 2GB, NVIDIA opens up AI learning and development to a broader audience, using the same software stack as its data center AI computing platform.”

Matthew Tarascio, vice president of Artificial Intelligence at Lockheed Martin, said: “Acquiring new technical skills with a hands-on approach to AI learning becomes critical as AIoT drives the demand for interconnected devices and increasingly complex industrial applications. We’ve used the Nvidia Jetson platform as part of our ongoing efforts to train and prepare our global workforce for the AI revolution.”

Emilio Frazzoli, professor of Dynamic Systems and Control at ETH Zurich, said: “The Duckietown educational platform provides a hands-on, scaled down, accessible version of real-world autonomous systems. Integrating Nvidia’s Jetson Nano power in Duckietown enables unprecedented, affordable access to state-of-the-art compute solutions for learning autonomy.”

It has also been used as part of the STEM curriculum at the Boys & Girls Clubs of Western Pennsylvania in the USA.

The Jetson Nano 2GB Developer Kit will be available at the end of the month for $59 through NVIDIA’s distribution channels.

http://www.nvidia.com

Image sensor minimises distortion in machine vision and mixed reality applications

A combination of a high dynamic range and pixel design means that the AR0234CS CMOS global shutter image sensor by ON Semiconductor delivers image clarity at 120 frames per second.
The 2.3 Mpixel CMOS image sensor with global shutter technology is designed for a variety of applications, including machine vision cameras, augmented reality (AR), virtual reality (VR) and mixed reality (MR) headsets, autonomous mobile robots (AMRs) and barcode readers.

The AR0234CS captures 1080p video and single frames at up to 120 frames per second. It is claimed to have industry-leading shutter efficiency, producing crisp, clear images by minimising frame-to-frame distortion in high-speed scenes and reducing the motion artefacts that other image sensors experience.

The pixel architecture delivers high dynamic range to support lighting conditions from the darkness of night to bright sunlight. Low noise and improved low-light response make it suitable for applications spanning consumer, commercial and industrial IoT, while the extended operating temperature range makes it deployable in challenging outdoor conditions.
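
As a rule of thumb, a sensor’s dynamic range can be expressed in decibels as 20·log10 of the ratio between the largest and smallest resolvable signal. A minimal sketch of the arithmetic (the 4000:1 ratio here is illustrative, not an AR0234CS specification):

```python
import math

def dynamic_range_db(max_signal: float, min_signal: float) -> float:
    """Dynamic range in dB from the ratio of largest to smallest resolvable signal."""
    return 20 * math.log10(max_signal / min_signal)

# Illustrative example: a sensor resolving signals spanning a 4000:1 ratio
print(f"{dynamic_range_db(4000, 1):.1f} dB")  # ~72.0 dB
```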

As manufacturers automate production using vision-based systems, the need for quality image sensing is increasing, explained Gianluca Colli, vice president and general manager of the Industrial and Consumer Sensor Division (ICSD) Group at ON Semiconductor. As a result, they are demanding optimum size, performance and power in image sensors, he continued.

The sensor also has programmable regions of interest with an on-chip histogram, auto exposure control and a 5 x 5 statistics engine, as well as fully integrated strobe illumination control, a flexible row and column skip mode, horizontal and vertical mirroring, windowing and pixel binning.

Together with the AP1302 image signal processor (ISP), the AR0234CS delivers a comprehensive camera system that can be designed and developed quickly for fast time-to-market, says ON Semiconductor. System designers can access the DevSuite software to evaluate features and capabilities, configure and tune the sensor, and provide a ready-made output that is usable for further image processing.

The AR0234CS is offered in colour and mono variants, with 0 or 28 degree chief ray angle (CRA).

Samples and development hardware are available now through local ON Semiconductor sales support representatives and authorised distributors.

http://www.onsemi.com

ToF sensor module detects moving objects, says Omron

The 3D precision time of flight (ToF) ranging sensor module delivers positioning, autonomous-guidance and proximity sensing for a wide range of applications. Gabriele Fulco, European product marketing manager of sensors at Omron, described the B5L-A2S-U01-010 as a mechanical eye capable of accurately detecting the surrounding environment. It is expected to contribute to the more widespread use of autonomous robots as well as the automation of various other machinery and equipment.

The B5L sensor module can be fitted to moving objects such as autonomous mobile robots, to provide real-time contextual information such as guidance, collision avoidance, and cliff detection. Alternatively, located in a fixed position, the sensor can accurately detect moving objects in the field of view, making it equally suitable for use in automated packaging equipment, security systems, intruder detection, and patient monitoring and elderly-care systems.

The Omron B5L-A2S-U01-010 operates on the proven ToF principle, calculating distances to objects in real time by measuring the round-trip time taken for near-infrared radiation from the module’s emitter to be reflected from objects in the field of view and returned to the receiver. The B5L’s optical design technology delivers stable measurement of three-dimensional distance information across a wide area, even in sub-optimal conditions such as under sunlight, says Omron. Running at up to 20 frames per second, its specifications are optimised for long periods of continuous operation, allowing it to be used as an embedded sensor in various instruments.
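
The underlying ToF arithmetic is straightforward: distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function here is illustrative, not Omron’s API):

```python
# Illustrative time-of-flight distance calculation (not Omron's API).
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to a reflecting object, from the emitter-to-receiver round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A 4m range (the B5L's stated maximum) corresponds to a round trip of about 26.7ns:
round_trip = 2 * 4.0 / SPEED_OF_LIGHT
print(f"{round_trip * 1e9:.1f} ns")          # ~26.7 ns
print(f"{tof_distance(round_trip):.2f} m")   # 4.00 m
```

The nanosecond-scale timing involved is why precise optical design and temperature compensation matter for stable readings.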

The 103 x 43mm module is fitted with a 24V DC power connector and a Micro-USB communication port, offering easy integration and flexibility for embedded systems designers. The 940nm near-infrared emitter and 240 x 320 pixel receiving array give ranging information for the entire field of view and measure the absolute distance to objects from 0.5m to 4m, with ±2 per cent precision at a detection distance of 2m. Built-in temperature compensation simplifies integration into autonomous robots and other equipment, as it eliminates the need to design separate compensation processing for different environments.

The wide viewing angle and 0 to 50 degrees C operating temperature range mean that the B5L-A2S-U01-010 can be used in various indoor applications.

The B5L-A2S-U01-010 is in production now and available directly from Omron Electronic Components Europe or through its network of European distributors.

http://components.omron.eu

> Read More

Neural network accelerator chip enables IoT AI in battery-powered devices

Maxim Integrated has developed the MAX78000, a low power microcontroller with a neural network accelerator that cuts energy consumption and latency by a factor of more than 100, enabling complex embedded inference decisions at the IoT edge.

It moves AI to the edge without performance compromises in battery-powered internet of things (IoT) devices, says the company. It is able to execute AI inferences at less than 1/100th the energy of software solutions to dramatically improve the run-time for battery-powered AI applications. It also enables new, complex AI use cases.

The MAX78000 executes inferences 100x faster than software running on low power microcontrollers and at a fraction of the cost of FPGAs or GPUs, continues Maxim.

The conventional approach of gathering data from sensors, cameras and microphones, sending it to the cloud to execute an inference, and then returning an answer to the edge is challenging because of poor latency and energy performance. Low power microcontrollers can implement simple neural networks, but latency suffers and only simple tasks can run at the edge. By integrating a dedicated neural network accelerator with a pair of microcontroller cores, the MAX78000 overcomes these limitations, enabling machines to see and hear complex patterns with local, low power AI processing that executes in real time.

Applications such as machine vision, audio and facial recognition can be made more efficient as the MAX78000 can execute inferences at less than 1/100th the energy required by a microcontroller. The MAX78000’s specialised hardware is designed to minimise the energy consumption and latency of convolutional neural networks (CNNs). Hardware runs with minimal intervention from any microcontroller core, making operation extremely streamlined. Energy and time are only used for the mathematical operations that implement a CNN. To get data from the external world into the CNN engine efficiently, customers can use one of the two integrated microcontroller cores: the low power Arm Cortex-M4 core, or the even lower power RISC-V core.
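
The workload such a CNN accelerator offloads is dominated by convolution, i.e. large numbers of multiply-accumulate operations. A minimal pure-Python sketch of a single-channel 2D convolution, shown only to illustrate the operation (this is not Maxim’s implementation):

```python
# A minimal valid-mode 2D convolution (no padding, stride 1) in pure Python,
# illustrating the multiply-accumulate workload that a CNN accelerator such as
# the one in the MAX78000 executes in dedicated hardware.

def conv2d(image, kernel):
    """Convolve one 2D channel with a small kernel; returns the output feature map."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0  # accumulate kernel-weighted neighbourhood values
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
diff_kernel = [[1, 0],
               [0, -1]]   # a simple diagonal-difference filter
print(conv2d(image, diff_kernel))  # [[-4, -4], [-4, -4]]
```

Each output value costs kernel-height × kernel-width multiply-accumulates, and a full CNN layer repeats this across many channels and filters, which is why executing it in dedicated hardware rather than in a general-purpose core saves so much time and energy.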

The MAX78000EVKIT# includes audio and camera inputs, as well as out-of-the-box demos for large-vocabulary keyword spotting and facial recognition. Complete documentation helps engineers train networks for the MAX78000 in the tools they already use, either TensorFlow or PyTorch.

The MAX78000 is available from authorised distributors. The MAX78000EVKIT# evaluation kit is also available now.

http://www.maximintegrated.com

About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily updates on new products and industry news. To stay up to date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe. Simply click this link to register: Smart Cities Registration