“World’s largest chip” has more compute cores and faster data access

Claimed to be the largest chip in the world, the Cerebras wafer scale engine (WSE) measures 215 x 215mm (8.5 x 8.5 inch). At 46,225mm², the chip is 56x larger than the biggest graphics processing unit (GPU) ever made, claims Cerebras.

It has 400,000 cores and 18Gbyte of on-chip SRAM. The large silicon area enables the WSE to provide more compute cores than the largest GPU, tightly coupled memory for efficient data access, and an extensive high-bandwidth communication fabric that lets groups of cores work together, claims Cerebras.

The WSE contains 400,000 sparse linear algebra (SLA) cores. Each core is flexible, programmable, and optimised for the computations that underpin most neural networks. Programmability ensures the cores can run all algorithms in the constantly changing machine learning field.

The cores on the WSE are connected via the Swarm communication fabric in a 2D mesh with 100 petabits per second of bandwidth. The Swarm on-chip communication fabric delivers breakthrough bandwidth and low latency at a fraction of the power draw of the traditional techniques used to cluster GPUs, says Cerebras. It is fully configurable: software configures all the cores on the WSE to support the precise communication required for training the user-specified model, so each neural network gets an optimised communication path.

The 18Gbyte of on-chip memory is accessible within a single clock cycle and provides 9 Pbytes per second of memory bandwidth. This is 3,000 times more capacity and 10,000 times greater bandwidth than the leading competitor, claims Cerebras. The WSE provides more cores and more local memory, and enables fast, flexible computation at lower latency and with less energy than GPUs, concludes Cerebras.

https://www.cerebras.net/technology/

Optiga Trust M secures automated, cloud connected devices

To improve the security and performance of cloud connected devices and services, Infineon has launched the Optiga Trust M.

It helps manufacturers to enhance the security of their devices, says Infineon, and improves overall system performance. The single chip securely stores unique device credentials and enables devices to connect to the cloud up to 10 times faster than software-only alternatives, claims the company. It is intended for industrial and building automation, smart homes and consumer electronics, and anywhere that hardware-based trust anchors are critical for connected applications and smart services, from a robotic arm in a smart factory to automated air conditioning in the home.

The growth of cloud connectivity and AI-based applications means that zero-touch provisioning of devices to the network or cloud is gaining more and more traction. With Optiga Trust M, critical assets such as certificates and key pairs that identify a device are injected into the chip at the factory. The turnkey set-up minimises the design, integration and deployment effort for embedded systems by providing a cryptographic toolbox, a protected I2C interface and open source code on GitHub. The high-end security controller is certified to CC EAL6+ (high) and provides advanced asymmetric cryptography. It has a lifetime of 20 years and can be securely updated in the field.
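
The factory-injected key pair is what lets a device prove its identity during zero-touch provisioning. As a rough illustration of the principle only (this is not Infineon's API, and it runs entirely in software using the open-source Python cryptography package, whereas the real chip keeps the private key inside the secure element), the device signs a random challenge and the cloud verifies it against the registered public key:

```python
# Conceptual sketch of challenge-response device authentication.
# Not Infineon's API; requires the 'cryptography' package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# "Factory" step: a device key pair is created and its public half is
# registered with the cloud service (in hardware this key never leaves the chip).
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()

# "Field" step: the cloud sends a random challenge, the device signs it.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Cloud side: verification succeeds only for the genuine device
# (raises InvalidSignature otherwise).
registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("device identity verified")
```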

Infineon’s Optiga family combines hardware security controllers with software to increase the overall security of embedded systems, including IoT end nodes, edge gateways and cloud servers, and ranges from basic device authentication chips to Java Card-based programmable components.

The Optiga Trust M is available now. Evaluation kits are also available.

http://www.infineon.com

Intel Xeon Scalable processors are equipped for AI training

Up to 56 processor cores per socket and built-in artificial intelligence training acceleration distinguish the next generation of Intel Xeon Scalable processors. Codenamed Cooper Lake, the processors will be available from the first half of next year. The high core-count processors will bring the capabilities of the Intel Xeon Platinum 9200 series to high performance computing (HPC) and AI customers.

The processors will deliver twice the processor core count (up to 56 cores), higher memory bandwidth, and higher AI inference and training performance than the standard second-generation Intel Xeon Platinum 8200 platforms, confirms Intel. The family will be the first x86 processors to deliver built-in AI training acceleration, through new bfloat16 support added to Intel Deep Learning (DL) Boost.
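
bfloat16 is a 16-bit floating-point format that keeps float32's 8-bit exponent but truncates the mantissa to 7 bits, halving the memory and bandwidth per value while preserving numeric range. The sketch below is purely illustrative of the format, not Intel's implementation:

```python
# Illustrative only: a float32 can be converted to bfloat16 by keeping the
# upper 16 bits of its encoding (round toward zero). Not Intel's implementation.
import numpy as np

def float32_to_bfloat16_bits(x):
    """Return the 16-bit bfloat16 encoding of a float32 value."""
    bits = np.frombuffer(np.float32(x).tobytes(), dtype=np.uint32)[0]
    return np.uint16(bits >> 16)

def bfloat16_bits_to_float32(b):
    """Expand a bfloat16 bit pattern back to float32 by zero-filling the mantissa."""
    bits = np.uint32(b) << np.uint32(16)
    return np.frombuffer(np.uint32(bits).tobytes(), dtype=np.float32)[0]

x = np.float32(3.14159265)
bf = float32_to_bfloat16_bits(x)
print(x, "->", bfloat16_bits_to_float32(bf))   # 3.1415927 -> 3.140625
```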

Intel DL Boost augments the existing Intel Advanced Vector Extensions 512 (Intel AVX-512) instruction set. This “significantly accelerates inference performance for deep learning workloads optimised to use vector neural network instructions (VNNI),” said Jason Kennedy, director of Datacenter Revenue Products and Marketing at Intel.
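
VNNI speeds up the core operation of quantised inference: multiplying pairs of 8-bit values and accumulating the products into a 32-bit result. The sketch below only emulates that arithmetic pattern in NumPy to show what is being accelerated; the speed-up itself comes from the CPU instruction, not from this code.

```python
# Emulation of the int8 multiply / int32 accumulate pattern that VNNI fuses.
import numpy as np

# Quantised 8-bit activations and weights, as a VNNI-optimised layer would use.
activations = np.random.randint(0, 256, size=64).astype(np.uint8)
weights = np.random.randint(-128, 128, size=64).astype(np.int8)

# Widen to 32 bits and accumulate; VNNI performs this multiply-accumulate
# in a single instruction across a full vector register.
acc = int(np.dot(activations.astype(np.int32), weights.astype(np.int32)))
print("int8 dot product accumulated in 32 bits:", acc)
```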

He cites workloads such as image classification, language translation, object detection and speech recognition, which can be lightened using the accelerated performance. Early tests have shown image recognition running 11 times faster on a similar configuration than on current-generation Intel Xeon Scalable processors, reports Intel. Current projections estimate a 17-times faster inference throughput benefit with Intel Optimized Caffe ResNet-50 and Intel DL Boost for CPUs.

The processor family will be platform-compatible with the 10nm Ice Lake processor.

The Intel Xeon Platinum 9200 processors are available for purchase today as part of pre-configured systems from select OEMs, including Atos, HPE, Lenovo, Penguin Computing, Megware and authorised Intel resellers.

http://www.intel.com

Secure flash memory enhances data storage in self-driving cars

Macronix’s secure flash memory has been integrated in Nvidia’s next-generation autonomous driving platforms.

The automotive-grade ArmorFlash memory is being used on the Nvidia Drive AGX Xavier and Drive AGX Pegasus autonomous vehicle computing platforms.

The ArmorFlash memory provides secure data storage for artificial intelligence (AI)-based systems, from Level 2+ advanced driver assistance systems (ADAS) through to Level 5 autonomous driving.

“Our efforts in conjunction with NVIDIA are singularly focused on elevating the security of data in AI-based autonomous driving applications and ultimately, to enhance the safety of drivers,” said Anthony Le, vice president of marketing, Macronix America.

The ArmorFlash memory on the Drive AGX Xavier and Pegasus platforms can provide trusted identification, authentication and encryption features for autonomous driving security requirements.

ArmorFlash offers a combination of mature security technologies, including unique ID, authentication and encryption features. This blend of features enables superior levels of security in a high-density memory device to prevent data from being compromised, claims Macronix.

The ArmorFlash device provides trusted NVM storage of encrypted and integrity-protected assets. It supports a secure communication channel and protocol with the Nvidia Xavier system on a chip (SoC) via cryptographic operations, integrity checks and additional measures against certain security protocol attacks.
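
Macronix does not publish the protocol details, but authenticated encryption such as AES-GCM illustrates the general idea of storing data that is both encrypted and integrity-protected. The sketch below is purely conceptual, uses the Python cryptography package, and is not Macronix's implementation:

```python
# Conceptual sketch (not Macronix's protocol): authenticated encryption gives
# both confidentiality and integrity for stored data.
# Requires the 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # conceptually shared by SoC and memory
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # must be unique per write
plaintext = b"calibration map v7"
ciphertext = aesgcm.encrypt(nonce, plaintext, b"sector-42")   # stored in flash

# On read-back, any tampering with the ciphertext or the associated data
# ("sector-42") makes decryption fail instead of returning corrupted data.
assert aesgcm.decrypt(nonce, ciphertext, b"sector-42") == plaintext
```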

The global ADAS market is expected to exceed $67 billion by 2025, fuelled by a compounded annual growth rate of 19 per cent, according to Grand View Research. The research company attributes the growth to increasing government initiatives mandating driver assistance systems to lower road accidents, and cites the expanding adoption of ADAS in small cars as a factor boosting market demand.

http://www.macronix.com
