Speech inference is optimised for Intel FPGA PAC to cut power demands

To reduce electricity consumption and data centre infrastructure, Myrtle has announced that its artificial intelligence (AI) technology can run on the new, high-performance Intel FPGA Programmable Acceleration Card (Intel FPGA PAC) D5005 accelerator. The AI specialist says this reduces costs and removes growth constraints for businesses offering speech services such as transcription, translation, synthesis or voice assistance in on-premise or cloud-based data centres.

Intel and Myrtle have worked together to optimise a recurrent neural network (RNN) for speech inference on the Intel FPGA PAC D5005. The optimised design can run more than 4,000 voice channels concurrently on a single FPGA, delivering a six-fold improvement in performance per watt compared with general-purpose GPUs, at one thirtieth of the latency, reports Myrtle.
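
For context, the kind of workload being accelerated is recurrent inference of this general shape. Below is a minimal, illustrative numpy sketch of one LSTM time step batched over many voice channels; the layer sizes, variable names and model structure are assumptions for illustration, not Myrtle’s actual network.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step for a batch of concurrent voice channels.

    x: (channels, features)  acoustic features for the current frame
    h, c: (channels, hidden) recurrent state carried per channel
    W: (features, 4*hidden), U: (hidden, 4*hidden), b: (4*hidden,)
    """
    gates = x @ W + h @ U + b                     # all four gates in one matmul
    i, f, g, o = np.split(gates, 4, axis=1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

# Illustrative sizes only: 4,000 concurrent channels, 80 features, 512 hidden units.
channels, features, hidden = 4000, 80, 512
rng = np.random.default_rng(0)
x = rng.standard_normal((channels, features))
h = np.zeros((channels, hidden))
c = np.zeros((channels, hidden))
W = rng.standard_normal((features, 4 * hidden)) * 0.01
U = rng.standard_normal((hidden, 4 * hidden)) * 0.01
b = np.zeros(4 * hidden)
h, c = lstm_step(x, h, c, W, U, b)
```

The per-frame matrix arithmetic repeated across thousands of channels is the part that an FPGA implementation parallelises; latency then depends on how quickly each frame’s step completes rather than on batching across requests.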

“The industry has to take new approaches to produce machine learning solutions that meet customers’ stringent latency, power and cost constraints,” said Peter Baldwin, CEO, Myrtle. He added that these performance metrics on Intel’s latest PAC will allow customers to preserve their investment in hardware as machine learning models evolve.

Myrtle specialises in hardware-software codesign. Its work on the quantisation, sparsity and compression of machine learning models has been recognised by the MLPerf consortium. The company says it leads MLPerf speech transcription and has open sourced its code to help the industry benchmark new edge and data centre hardware more consistently.
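
Purely as an illustration of the techniques named above, and not Myrtle’s actual method, the sketch below applies simple magnitude-based pruning (sparsity) and symmetric int8 quantisation to a weight matrix; the sparsity level and tensor size are arbitrary examples.

```python
import numpy as np

def prune_magnitude(w, sparsity=0.9):
    """Zero out the smallest-magnitude weights until the target sparsity is reached."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantise_int8(w):
    """Symmetric per-tensor int8 quantisation: returns int8 weights and a float scale."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((512, 512))
w_sparse = prune_magnitude(w, sparsity=0.9)   # 90 per cent of weights removed
q, scale = quantise_int8(w_sparse)            # 8-bit storage, dequantise as q * scale
print("non-zero fraction:", np.count_nonzero(w_sparse) / w_sparse.size)
print("max dequantisation error:", np.max(np.abs(q * scale - w_sparse)))
```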

Myrtle creates high-performance, energy-efficient computing solutions for deep learning inferencing on next-generation data centre hardware. Myrtle’s RNN technology enables companies to cost-efficiently implement and scale speech applications on cloud or on-premise infrastructure.

Myrtle is a partner in Intel’s design solutions network (DSN).

http://www.myrtle.ai


Secure flash memory enhances secure data storage in self-driving cars

Macronix’s secure flash memory has been integrated into Nvidia’s next-generation autonomous driving platforms.

The automotive-grade ArmorFlash memory is being used on the Nvidia Drive AGX Xavier and Drive AGX Pegasus autonomous vehicle computing platforms.

The ArmorFlash memory provides secure data storage for artificial intelligence (AI)-based systems, from Level 2+ advanced driver assistance systems (ADAS) through to Level 5 autonomous driving.

“Our efforts in conjunction with NVIDIA are singularly focused on elevating the security of data in AI-based autonomous driving applications and ultimately, to enhance the safety of drivers,” said Anthony Le, vice president of marketing, Macronix America.

The ArmorFlash memory on the Drive AGX Xavier and Pegasus platforms can provide trusted identification, authentication and encryption features for autonomous driving security requirements.

ArmorFlash offers a combination of mature security technologies, including unique ID, authentication and encryption features. This blend of features enables superior levels of security in a high-density memory device to prevent data from being compromised, claims Macronix.

The ArmorFlash device provides trusted NVM storage of encrypted and integrity-protected assets. The ArmorFlash supports a secure communication channel and protocol with the Nvidia Xavier system on a chip (SoC) via cryptographic operations, integrity checks and additional measures against certain security protocol attacks.
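
The protections described, encrypted storage whose integrity is checked on read-back, follow a familiar authenticated-encryption pattern. As a generic illustration only, and not Macronix’s or Nvidia’s actual protocol, the sketch below shows an AES-GCM round trip using the third-party Python `cryptography` package; the asset and metadata values are hypothetical.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical asset and metadata; in a real system the key would live in secure hardware.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per write
asset = b"calibration map v1.2"             # data to be stored encrypted
metadata = b"block=0x0040,version=3"        # stored in the clear but bound to the ciphertext

ciphertext = aead.encrypt(nonce, asset, metadata)   # confidentiality plus integrity tag

# On read-back, any tampering with ciphertext, nonce or metadata raises InvalidTag.
recovered = aead.decrypt(nonce, ciphertext, metadata)
assert recovered == asset
```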

The global ADAS market is expected to exceed $67 billion by 2025, fuelled by a compound annual growth rate of 19 per cent, according to Grand View Research. The research company attributes the growth to increasing government initiatives mandating driver assistance systems to lower road accidents, and cites expanding adoption of ADAS in small cars as a factor boosting market demand.

http://www.macronix.com


Skyline RFID inlays and tags can be used on metal

Rain radio frequency identification (RFID) inlays can be used on metal surfaces and have a read range of 6m.

The Skyline Rain RFID inlays and tags have an optimised antenna and spacer-based design. A customised ultra high frequency (UHF) inlay is based on NXP’s UCODE 7xm IC with 448-bit EPC memory and extended user memory of 2kbits. The resulting transponder is then folded and applied around a synthetic spacer developed and provided by identytag of Bad Berleburg, Germany.

The advanced antenna design, the IC’s long read range, its high interference rejection for reliable operation in dense-reader and noisy environments, and the optimised spacer material combine to give a 6m on-metal read range from a compact tag with a die-cut size of 54 x 25 x 1.8mm.

The inlay is permanently attached to the spacer and a layer of strong and resilient RA-33 adhesive is applied. According to Smartrac, this provides “excellent adhesion” to a range of surfaces. As a finished tag, Skyline’s surface is printable with thermal transfer printers.

The RFID tag can be used for tracking metallic assets, items and components in industrial environments such as automotive, mechanical engineering and aviation. Smartrac’s Skyline inlays and tags comply with VDA recommendations for the automotive industry and are supported by leading automation companies globally.

Smartrac and identytag have completed initial product volumes and will ramp up production in the second half of 2019.

Smartrac provides both ready-made and customised products. It makes products smart and enables businesses to digitise, identify, authenticate, track and complement products. Products are used in a wide array of applications such as animal identification, automation, automotive, brand protection, customer experience, industry, library and media management, logistics, retail and supply chain management.

Based in Amsterdam, the Netherlands, Smartrac has research and development centres, production facilities and a sales network, complemented by its IoT platform, Smart Cosmos. Smartrac embeds intelligence into physical products for an ecosystem of connected things. The company has also received ARC Quality Certification from Auburn University’s RFID Lab for the design and manufacturing of its RFID inlays.

http://www.smartrac-group.com


Xilinx introduces PCIe Gen 4 card for critical data centre workloads

Xilinx has introduced what it believes to be the industry’s first adaptable compute, network and storage accelerator card, delivering “dramatic improvements” in throughput, latency and power efficiency for critical data centre workloads.

The Alveo U50 has been added to the Alveo data centre accelerator card range. It is claimed to be the industry’s first low profile adaptable accelerator with PCIe Gen 4 support. According to the company, it boosts a range of critical compute, network and storage workloads in a single, reconfigurable platform.

The Alveo U50 is a programmable, low-profile, low-power accelerator built for scale-out architectures and domain-specific acceleration of any server deployment, on-premise, in the cloud and at the edge, says Xilinx. It is intended to meet the challenges of emerging dynamic workloads such as cloud microservices and delivers 10 to 20 times improvements in throughput, latency and power efficiency. The principle is to move compute closer to the data, helping developers identify and eliminate latency and prevent data bottlenecks, thereby accelerating networking and storage.

The U50 is powered by the Xilinx UltraScale+ architecture. It is the first in the Alveo family to be packaged in a half-height, half-length form factor and low 75W power envelope. The card features high-bandwidth memory (HBM2), 100Gbit per second networking connectivity, and support for the PCIe Gen 4 and CCIX interconnects.

The 8Gbyte of HBM2 delivers over 400Gbits per second data transfer speeds and the QSFP ports provide up to 100Gbits per second network connectivity. The high-speed networking I/O also supports advanced applications such as NVM Express over Fabrics (NVMe-oF), disaggregated computational storage and specialised financial services applications.

Applications for the Alveo U50 range from machine learning inference, video transcoding and data analytics to computational storage, electronic trading and financial risk modelling. For deep learning inference acceleration, such as speech translation, the Alveo U50 delivers up to 25 times lower latency, 10 times higher throughput and better power efficiency per node compared with GPU-only implementations.

For data analytics acceleration (database query), the Alveo U50 runs the TPC-H query benchmark to deliver four times higher throughput per hour and a three-fold reduction in operational costs compared with in-memory CPU processing. For computational storage acceleration (compression), it delivers 20 times more compression/decompression throughput, faster Hadoop and big data analytics, and over 30 per cent lower cost per node compared with CPU-only nodes.

In electronic trading applications, the U50 delivers 20 times lower latency and sub-500 nanosecond trading times, compared with CPU-only latency of 10 microseconds. In financial modelling applications, the Alveo U50 runs Monte Carlo simulations and delivers seven times greater power efficiency compared with GPU-only performance, reports Xilinx.
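
To illustrate the class of workload meant by Monte Carlo financial modelling, and not Xilinx’s benchmark kernel, here is a minimal numpy sketch that prices a European call option by simulating terminal prices under geometric Brownian motion; all parameters are arbitrary examples.

```python
import numpy as np

def monte_carlo_call_price(s0, strike, rate, vol, maturity, n_paths=1_000_000, seed=0):
    """Estimate a European call price by averaging discounted simulated payoffs."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)                      # one normal draw per path
    s_t = s0 * np.exp((rate - 0.5 * vol**2) * maturity + vol * np.sqrt(maturity) * z)
    payoff = np.maximum(s_t - strike, 0.0)
    return np.exp(-rate * maturity) * payoff.mean()

# Illustrative parameters only.
print(monte_carlo_call_price(s0=100.0, strike=105.0, rate=0.02, vol=0.25, maturity=1.0))
```

The embarrassingly parallel path simulation is why accelerators suit this workload: each path is independent, so throughput and power efficiency scale with how many paths the hardware can evaluate concurrently.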

Xilinx will be showcasing the Alveo U50 at the Flash Memory Summit 2019 (6 to 8 August) at the Santa Clara Convention Center in Santa Clara, California, USA.

The Alveo U50 is sampling now with OEM system qualifications in process. General availability is scheduled for Q3 2019.

http://www.xilinx.com

