Intel ships 10nm Agilex FPGAs for networking, 5G and data analytics

Shipments have commenced for the Intel Agilex field programmable gate arrays (FPGAs). The devices are being used by early access program customers to develop advanced solutions for networking, 5G and accelerated data analytics.

Participants in the early access program include Colorado Engineering, Mantaro Networks, Microsoft and Silicom.

Dan McNamara, Intel senior vice president and general manager of the Networking and Custom Logic Group, said that the Agilex FPGA family leverages architecture, packaging, process technology, developer tools and a fast path to power reduction with eASIC technology to enable new levels of heterogeneous computing, system integration and processor connectivity. He added that it will be the first 10nm FPGA to provide cache-coherent, low latency connectivity to Intel Xeon processors via the upcoming Compute Express Link.

The Agilex FPGAs are expected to provide the agility and flexibility demanded by data-centric, 5G-fuelled operations, where networking throughput must increase and latency must decrease. Intel Agilex FPGAs deliver “significant gains in performance and inherent low latency,” says the company. They are reconfigurable and have reduced power consumption, together with computation and high-speed interfacing capabilities that enable smarter, higher bandwidth networks to be created. They also contribute to delivering real-time actionable insights via accelerated artificial intelligence (AI) and other analytics performed at the edge, in the cloud and throughout the network.

According to Doug Burger, technical fellow, Azure Hardware Systems at Microsoft, the software company has been working closely with Intel on the development of the Agilex FPGAs and is planning to use them in accelerating real-time AI, networking and other applications/infrastructure across Azure Cloud Services, Bing and other data centre services.

The Intel Agilex family combines the second-generation HyperFlex FPGA fabric, built on Intel’s 10nm process, with heterogeneous 3D system-in-package (SiP) technology based on Intel’s proven embedded multi-die interconnect bridge (EMIB) technology. The new fabric delivers up to 40 per cent higher performance or up to 40 per cent lower total power compared with Intel Stratix 10 FPGAs. The SiP technology means Intel can integrate analogue, memory, custom computing, custom I/O and Intel eASIC device tiles into a single package along with the FPGA fabric.

Intel also says they are the only FPGAs to support hardened bfloat16, with up to 40TFLOPS of digital signal processor (DSP) performance. The move to PCIe Gen 5 also allows bandwidth to scale beyond what PCIe Gen 4 offers.
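
The bfloat16 format itself is simply an IEEE-754 float32 with the bottom 16 mantissa bits dropped, keeping the full 8-bit exponent and therefore float32’s dynamic range. A minimal Python sketch of the conversion (illustrative only, not Intel’s hardware implementation):

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to its top 16 bits: sign, 8-bit exponent, 7 mantissa bits."""
    f32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return f32_bits >> 16

def from_bfloat16_bits(b: int) -> float:
    """Expand 16 bfloat16 bits back to a float value (lower mantissa bits zeroed)."""
    return struct.unpack(">f", struct.pack(">I", (b & 0xFFFF) << 16))[0]

# The exponent is untouched, so range survives; only precision is lost.
assert to_bfloat16_bits(1.0) == 0x3F80
assert from_bfloat16_bits(to_bfloat16_bits(3.14159)) == 3.140625
```

That trade of precision for range is why the format is popular for neural-network training, where gradients span many orders of magnitude.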

Transceiver data rates of up to 112Gbits per second support high-speed networking requirements for 400GE and beyond. There is also support for memory options such as current DDR4, upcoming DDR5, HBM and Intel Optane DC persistent memory.

Design development support for Intel Agilex FPGAs is available today via Intel Quartus Prime Design Software.

http://www.intel.com


“World’s largest chip” has more compute cores for data access

Claimed to be the largest chip in the world, the Cerebras wafer scale engine (WSE) measures 215 x 215mm (8.5 x 8.5 inch). At 46,225mm2 the chip is 56x larger than the biggest graphics processing unit (GPU) ever made, claims Cerebras.

It has 400,000 cores and 18Gbytes of on-chip SRAM. The large silicon area, exceeding that of the largest graphics processing unit, enables the WSE to provide more compute cores, tightly coupled memory for efficient data access, and an extensive high bandwidth communication fabric that lets groups of cores work together, claims Cerebras.

The WSE contains 400,000 sparse linear algebra (SLA) cores. Each core is flexible, programmable, and optimised for the computations that underpin most neural networks. Programmability ensures the cores can run all algorithms for constantly changing machine learning operations.

The cores on the WSE are connected via the Swarm communication fabric in a 2D mesh with 100 petabytes (Pbytes) per second of bandwidth. The Swarm on-chip communication fabric delivers breakthrough bandwidth and low latency at a fraction of the power draw of traditional techniques used to cluster GPUs, says Cerebras. It is fully configurable. Software configures all the cores on the WSE to support the precise communication required for training the user-specified model. For each neural network, Swarm provides an optimised communication path.

The 18Gbytes of on-chip memory is accessible within a single clock cycle and provides 9Pbytes per second of memory bandwidth. This is 3,000 times more capacity and 10,000 times greater bandwidth than the leading competitor, claims Cerebras. The WSE provides more cores and more local memory, and enables fast, flexible computation at lower latency and with less energy than GPUs, concludes Cerebras.

https://www.cerebras.net/technology/


Optiga Trust M secures automated, cloud connected devices

To improve the security and performance of cloud connected devices and services, Infineon has launched the Optiga Trust M.

It helps manufacturers to enhance the security of their devices, says Infineon, and improves overall system performance. The single chip securely stores unique device credentials and enables devices to connect to the cloud up to 10 times faster than software-only alternatives, claims the company. It is intended for industrial and building automation, smart homes and consumer electronics, and for anywhere that hardware-based trust anchors are critical for connected applications and smart services, from a robotic arm in a smart factory to automated air conditioning in the home.

The growth of cloud connectivity and AI-based applications means that zero-touch provisioning of devices to the network or cloud is gaining traction. Critical assets, such as certificates and key pairs that identify a device, are injected into the Optiga Trust M at the factory. The turnkey set-up reduces the design, integration and deployment effort for embedded systems by providing a cryptographic toolbox, a protected I2C interface and open source code on GitHub. The security controller is certified to CC EAL6+ (high) and provides advanced asymmetric cryptography. It has a lifetime of 20 years and can be securely updated in the field.
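
As a rough illustration of why a factory-injected credential enables zero-touch provisioning, the sketch below models challenge-response device authentication in Python. Note the substitution: the real Optiga Trust M holds asymmetric ECC keys in hardware, but Python’s standard library has no ECC, so a symmetric HMAC secret stands in, and every name here is hypothetical.

```python
import hashlib
import hmac
import secrets

# Hypothetical stand-in for the factory-injected credential. A real
# Optiga Trust M stores asymmetric ECC keys in tamper-resistant hardware,
# so nothing secret ever leaves the chip; HMAC substitutes here.
DEVICE_SECRET = secrets.token_bytes(32)   # "injected" at manufacture

def device_respond(challenge: bytes) -> bytes:
    """Device side: prove possession of the credential without revealing it."""
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

def cloud_verify(challenge: bytes, response: bytes, registered: bytes) -> bool:
    """Cloud side: recompute the expected response from the registered credential."""
    expected = hmac.new(registered, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)       # fresh nonce per connection attempt
assert cloud_verify(challenge, device_respond(challenge), DEVICE_SECRET)
```

Because the credential is registered with the cloud before the device ever powers on, no manual pairing step is needed in the field, which is the point of zero-touch provisioning.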

Infineon’s Optiga family combines hardware security controllers with software to increase the overall security of embedded systems, including IoT end nodes, edge gateways and cloud servers, from basic device authentication to Java card-based programmable components.

The Optiga Trust M is available now. Evaluation kits are also available.

http://www.infineon.com


Intel Xeon Scalable processors are equipped for AI training

Up to 56 processor cores per socket and built-in artificial intelligence training acceleration distinguish the next generation of Intel Xeon Scalable processors. Codenamed Cooper Lake, the processors will be available from the first half of next year. The high core-count processors will bring the Intel Xeon Platinum 9200 series capabilities to high performance computing (HPC) and AI customers.

The processors will deliver twice the processor core count (up to 56 cores), higher memory bandwidth, and higher AI inference and training performance compared with the standard second-generation Intel Xeon Platinum 8200 platforms, confirms Intel. It will be the first x86 processor family to deliver built-in AI training acceleration, through new bfloat16 support added to Intel Deep Learning (DL) Boost.

Intel DL Boost augments the existing Intel Advanced Vector Extensions 512 (Intel AVX-512) instruction set. This “significantly accelerates inference performance for deep learning workloads optimised to use vector neural network instructions (VNNI),” said Jason Kennedy, director of Datacenter Revenue Products and Marketing at Intel.
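
To illustrate what VNNI fuses: its core instruction, VPDPBUSD, multiplies four unsigned 8-bit values by four signed 8-bit values and adds the sum into a 32-bit accumulator lane in a single instruction, where the older AVX-512 sequence needed three. A scalar Python model of one lane (a sketch only; the real instruction processes 16 such lanes per 512-bit register):

```python
def vpdpbusd_lane(acc: int, a4, b4) -> int:
    """Scalar model of one 32-bit lane of AVX-512 VNNI's VPDPBUSD:
    four unsigned int8 times four signed int8, summed into an int32
    accumulator with wraparound (the saturating variant is VPDPBUSDS)."""
    assert len(a4) == len(b4) == 4
    assert all(0 <= ua <= 255 for ua in a4)
    assert all(-128 <= sb <= 127 for sb in b4)
    total = (acc + sum(ua * sb for ua, sb in zip(a4, b4))) & 0xFFFFFFFF
    # Reinterpret the masked result as a signed 32-bit integer.
    return total - (1 << 32) if total >= (1 << 31) else total

# One fused step of a quantised int8 dot product:
assert vpdpbusd_lane(0, [1, 2, 3, 4], [5, 6, 7, -8]) == 6
```

Quantised inference reduces each neural-network dot product to many of these fused multiply-accumulate steps, which is where the claimed speed-ups come from.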

He cites workloads such as image classification, language translation, object detection and speech recognition, which can be lightened by the accelerated performance. Early tests have shown image recognition running 11 times faster on a similar configuration than with current-generation Intel Xeon Scalable processors, reports Intel. Current projections estimate a 17 times faster inference throughput with Intel Optimized Caffe ResNet-50 and Intel DL Boost for CPUs.

The processor family will be platform-compatible with the 10nm Ice Lake processor.

The Intel Xeon Platinum 9200 processors are available for purchase today as part of pre-configured systems from select OEMs, including Atos, HPE, Lenovo, Penguin Computing, Megware and authorised Intel resellers.

http://www.intel.com

About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily news updates, new products and industry news. To stay up-to-date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe. Simply click this link to register here: Smart Cities Registration