Adlink takes pallets to the edge with robotic communication

Digital and analogue assets can communicate and collaborate, says Adlink, using its Edge and Vortex DDS edge IoT products. 

The company’s edge IoT products can add intelligence to conveyors. They also find missing inventory, automate bin picking, determine fill levels and connect robots, all in real time. Daniel Collins, Adlink IoT director for North America said: “By bringing artificial intelligence (AI) to the edge we’re helping to automate warehouse logistics in a quick and cost-effective way that increases productivity and employee ergonomics. One of our customers decreased the time it takes to build a pallet by 41 per cent, increasing total daily throughput by 200 per cent without disrupting the way employees are used to working.”

The Adlink Edge and Adlink Vortex DDS products make pallets and robots intelligent so they can communicate with the world around them, enabling customers to improve pallet profitability, quality inspection, automation and productivity across the distribution centre and manufacturing floor.
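
Vortex DDS implements the OMG Data Distribution Service, a data-centric publish/subscribe middleware standard. The sketch below is a minimal plain-Python stand-in for that pattern, showing how a conveyor gateway might publish pallet telemetry that a robot controller subscribes to; the topic name, message fields and Bus class are invented for illustration and are not the Vortex DDS API.

```python
# Illustrative publish/subscribe sketch standing in for DDS middleware such as
# Adlink Vortex DDS. This is not the Vortex DDS API; the topic name and message
# fields are assumptions made for the example.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class PalletStatus:
    pallet_id: str
    fill_level: float   # 0.0 .. 1.0
    location: str

class Bus:
    """Minimal in-process topic bus standing in for DDS discovery and transport."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, sample):
        for cb in self._subs[topic]:
            cb(sample)

bus = Bus()

# A robot controller subscribes to pallet telemetry...
bus.subscribe("PalletStatus",
              lambda s: print(f"robot sees {s.pallet_id} at {s.fill_level:.0%} in {s.location}"))

# ...and an edge gateway on the conveyor publishes samples as they arrive.
bus.publish("PalletStatus", PalletStatus("PAL-0042", 0.83, "dock-3"))
```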

The company will debut its Edge IoT Smart Pallet Experience at Pack Expo, Las Vegas (23 to 25 September) in the Las Vegas Convention Center, Level 1, Lower South Hall booth 6387.

The immersive experience will demonstrate machine vision, AI and robotics technology for warehouse distribution and manufacturing with partners Amazon Web Services (AWS), Intel’s IoT group, and robot maker Rover Robotics. A game-like exhibit will encourage attendees to use hands-on edge IoT technology, stacking pallets against the clock for a chance to win a $100 Visa gift card.

Adlink Technology specialises in edge computing and has a mission to reduce the complexity of building IoT systems. Adlink provides the link between analogue and digital assets, making machine connectivity simple through edge hardware and software that combine into edge IoT solutions for the manufacturing, networking and communications, medical, transportation, power, oil and gas, and government and defence industries.

Its edge IoT portfolio includes embedded building blocks and intelligent computing platforms, fully featured edge platforms, data connectivity and extraction devices, secure software for data movement, and edge IoT apps to monitor, manage and analyse data-streaming assets and devices.

(Picture credit: AlexLMX)

http://www.adlinktech.com

Intel Xeon Scalable processors are equipped for AI training

Up to 56 processor cores per socket and built-in artificial intelligence training acceleration distinguish the next generation of Intel Xeon Scalable processors. Codenamed Cooper Lake, the processors will be available from the first half of next year. The high core-count processors will use the Intel Xeon Platinum 9200 series capabilities for high-performance computing (HPC) and AI customers.

The processors will deliver twice the processor core count (up to 56 cores), higher memory bandwidth, and higher AI inference and training performance compared with the standard second-generation Intel Xeon Platinum 8200 platforms, confirms Intel. The family will be the first x86 processors to deliver built-in AI training acceleration, through new bfloat16 support added to Intel Deep Learning (DL) Boost.
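
A bfloat16 value keeps float32’s eight-bit exponent but only seven mantissa bits, so it preserves dynamic range while halving storage and memory bandwidth. The NumPy sketch below is a minimal illustration of that relationship, assuming simple truncation of the low 16 bits rather than the round-to-nearest behaviour real hardware typically implements; it is not Intel code.

```python
# Sketch of the bfloat16 format used by Intel DL Boost's training extension:
# same 8-bit exponent as float32, but only 7 mantissa bits, so a float32 value
# can be reduced to bfloat16 by keeping its top 16 bits. Truncation is used here
# as a simplification of the rounding real hardware performs.
import numpy as np

def float32_to_bfloat16_bits(x: np.ndarray) -> np.ndarray:
    """Return the upper 16 bits of each float32 value (the bfloat16 bit pattern)."""
    bits = x.astype(np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def bfloat16_bits_to_float32(b: np.ndarray) -> np.ndarray:
    """Expand a bfloat16 bit pattern back to float32 by zero-filling the low bits."""
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.14159265, -0.001234, 12345.678], dtype=np.float32)
roundtrip = bfloat16_bits_to_float32(float32_to_bfloat16_bits(x))
print(x)          # original float32 values
print(roundtrip)  # same dynamic range, roughly 2-3 decimal digits of precision
```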

Intel DL Boost augments the existing Intel Advanced Vector Extensions 512 (Intel AVX-512) instruction set. This “significantly accelerates inference performance for deep learning workloads optimised to use vector neural network instructions (VNNI),” said Jason Kennedy, director of Datacenter Revenue Products and Marketing at Intel.

He cites workloads such as image classification, language translation, object detection and speech recognition, which benefit from the accelerated performance. Early tests have shown image recognition running 11 times faster on a similar configuration than on current-generation Intel Xeon Scalable processors, reports Intel. Current projections estimate a 17-fold inference throughput gain with Intel Optimized Caffe ResNet-50 and Intel DL Boost for CPUs.
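
For readers unfamiliar with VNNI, the extension fuses the multiply and accumulate steps of 8-bit integer dot products, which previously required three AVX-512 instructions, into a single instruction. The sketch below models one 32-bit accumulator lane of that operation in plain Python; the function name and values are illustrative only, not an Intel API, and integer overflow behaviour is ignored.

```python
# Scalar model of one 32-bit lane of the AVX-512 VNNI VPDPBUSD operation:
# four unsigned 8-bit activations are multiplied by four signed 8-bit weights,
# and the sum of the products is added to a 32-bit accumulator in one step.
# Names here are illustrative, not an Intel API.
def vpdpbusd_lane(acc: int, activations: list, weights: list) -> int:
    assert len(activations) == len(weights) == 4
    assert all(0 <= a <= 255 for a in activations)    # u8 inputs
    assert all(-128 <= w <= 127 for w in weights)     # s8 inputs
    return acc + sum(a * w for a, w in zip(activations, weights))

# Without VNNI this multiply-widen-accumulate sequence takes three vector
# instructions; with VNNI it is a single fused instruction.
print(vpdpbusd_lane(0, [10, 20, 30, 40], [1, -2, 3, -4]))   # -100
```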

The processor family will be platform-compatible with the 10nm Ice Lake processor.

The Intel Xeon Platinum 9200 processors are available for purchase today as part of pre-configured systems from select OEMs, including Atos, HPE, Lenovo, Penguin Computing, Megware and authorised Intel resellers.

http://www.intel.com

Aaeon uses Jetson TX2 for AI edge computing

Embedded computer specialist Aaeon has introduced the Boxer-8170AI for artificial intelligence (AI) at the network edge.

The embedded computer is based on Nvidia’s Jetson TX2 and has four PoE LAN ports and four USB 3.0 ports.

At the heart of the computer is the Nvidia Jetson TX2 six-core processor, which pairs the dual-core Denver 2 and quad-core Arm Cortex-A57 CPUs in a single system on chip (SoC). The design has up to 256 CUDA cores, which provide the speed and performance to power AI at the edge, says Aaeon. The Boxer-8170AI comes with 8Gbyte LPDDR4 memory and 32Gbyte eMMC storage on-board. It supports AI frameworks such as TensorFlow and Caffe, as well as AI inference software from developers and customers.
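
Aaeon positions the Boxer-8170AI as a ready-made target for frameworks such as TensorFlow. As a rough illustration, the sketch below shows how an inference loop for one attached camera might look on a Jetson-class device; the model file, input resolution and camera index are hypothetical placeholders, not Aaeon or Nvidia sample code.

```python
# Minimal sketch of an AI inference workload on a Jetson-class device such as
# the Boxer-8170AI, using TensorFlow (one of the frameworks Aaeon lists).
# The model path, input size and camera index are hypothetical placeholders.
import cv2                      # capture frames from an attached camera
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("defect_classifier.h5")   # hypothetical model file

cap = cv2.VideoCapture(0)       # first attached camera
ok, frame = cap.read()
if ok:
    # Resize and normalise the frame to the model's assumed 224x224 input shape.
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    scores = model.predict(x[np.newaxis, ...])   # runs on the TX2 GPU where available
    print("predicted class:", int(np.argmax(scores)))
cap.release()
```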

The Boxer-8170AI’s four PoE LAN ports each have their own dedicated chip. This provides higher bandwidth and stability for each port, allowing PoE cameras to operate individually on dedicated connections, explains Aaeon. The Boxer-8170AI supports a maximum output of 60W for up to four PoE cameras. It can be used for a range of AI solutions incorporating PoE cameras, such as smart retail, virtual fences and access control.

I/O features include four USB 3.0 ports, allowing additional cameras or devices to be connected to the system. The Boxer-8170AI also features two COM ports for easy integration into industrial systems, two HDMI ports, and remote on/off. It connects to networks with a Gigabit LAN port, while two antenna ports allow it to connect to wireless networks or act as an artificial intelligence of things (AIoT) gateway. The Boxer-8170AI also features an SD card slot and USB on-the-go (OTG) for easy maintenance.

For operation in harsh environments, the Boxer-8170AI has a fanless design and all-aluminium chassis to protect the system from dust, vibration and other hazards. The Boxer-8170AI operates in temperatures from -20 to +50 degrees C and has an input voltage range of 12 to 24V DC.

The module is only 48 mm thick, allowing it to fit into almost any constrained space where AI edge applications need to be deployed.

Established in 1992, Aaeon designs and manufactures professional IoT solutions. It is committed to innovative engineering and provides reliable computing platforms, including industrial motherboards and systems, industrial displays, rugged tablets, embedded controllers, network appliances and related accessories, as well as integrated solutions.

The company is an Associate Member of the Intel Internet of Things Solutions Alliance.

http://www.aaeon.com

Speech inference is optimised for Intel FPGA PAC to cut power demand

To reduce electricity consumption and data centre infrastructure, Myrtle announces that its artificial intelligence (AI) technology can run on the new, high-performance Intel FPGA Programmable Acceleration Card (Intel FPGA PAC) D5005 accelerator. The result is to reduce costs and remove growth constraints for businesses offering speech services such as transcription, translation, synthesis or voice assistance in on-premise or cloud-based data centres, says the AI specialist.

Intel and Myrtle have worked together to optimise a recurrent neural network (RNN) for speech inference on the Intel FPGA PAC D5005. The optimised design can run more than 4,000 voice channels concurrently on one FPGA, delivering a six-fold improvement in performance per watt compared with general-purpose GPUs, at one thirtieth of a GPU’s latency, reports Myrtle.

“The industry has to take new approaches to produce machine learning solutions that meet customers’ stringent latency, power and cost constraints”, said Peter Baldwin, CEO, Myrtle. He added that these performance metrics on Intel’s latest PAC will allow customers to preserve their investment in hardware as machine learning models evolve.

Myrtle specialises in hardware-software codesign. Its work on the quantisation, sparsity and compression of machine learning models has been recognised by the MLPerf consortium. Myrtle dominates MLPerf speech transcription and has open-sourced its code to help the industry benchmark new edge and data centre hardware more consistently, says the company.
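
As an illustration of the two techniques named above, the sketch below applies magnitude pruning and symmetric int8 quantisation to a single weight matrix with NumPy. It shows the general idea only, not Myrtle’s implementation; the 90 per cent sparsity target and the quantisation scheme are assumptions.

```python
# Illustrative sketch of weight sparsity and quantisation applied to an RNN
# weight matrix. General technique only, not Myrtle's implementation; the 90%
# sparsity target and symmetric int8 scheme are assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)   # dense FP32 weights

# Magnitude pruning: zero out the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.90)
W_sparse = np.where(np.abs(W) >= threshold, W, 0.0).astype(np.float32)

# Symmetric int8 quantisation of the surviving weights.
scale = np.abs(W_sparse).max() / 127.0
W_int8 = np.clip(np.round(W_sparse / scale), -127, 127).astype(np.int8)

print(f"non-zero weights: {np.count_nonzero(W_int8) / W_int8.size:.1%}")
print(f"storage: {W.nbytes // 1024} KiB FP32 -> {W_int8.nbytes // 1024} KiB int8 "
      "(before sparse encoding)")
```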

Myrtle creates high-performance, energy-efficient computing solutions for deep learning inferencing on next-generation data centre hardware. Myrtle’s RNN technology enables companies to cost-efficiently implement and scale speech applications on cloud or on-premise infrastructure.

Myrtle is a partner in Intel’s design solutions network (DSN).

http://www.myrtle.ai

About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily news updates, new products and industry news. To stay up-to-date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe. Simply click this link to register here: Smart Cities Registration