Intel Xeon Scalable processors are equipped for AI training
Up to 56 processor cores per socket and built-in artificial intelligence training acceleration distinguish the next generation of Intel Xeon Scalable processors. Codenamed Cooper Lake, the processors will be available from the first half of next year. The high core-count processors will bring the capabilities of the Intel Xeon Platinum 9200 series to high performance computing (HPC) and AI customers.
The next-generation Intel Xeon Scalable processors will deliver twice the processor core count (up to 56 cores), higher memory bandwidth, and higher AI inference and training performance compared with the standard Intel Xeon Platinum 8200 platforms, Intel confirms. The family will be the first x86 processors to deliver built-in AI training acceleration through new bfloat16 support added to Intel Deep Learning (DL) Boost.
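To make the bfloat16 format concrete, the sketch below models it in plain Python: bfloat16 is simply the top 16 bits of an IEEE-754 float32, keeping the full 8-bit exponent range but only 7 mantissa bits. This is an illustrative model, not Intel code; the function names are invented for this example, and real hardware typically uses round-to-nearest-even rather than the plain truncation shown here.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to bfloat16 by keeping the top 16 bits
    (1 sign bit, 8 exponent bits, 7 mantissa bits). Hardware commonly
    rounds to nearest-even; plain truncation is shown for clarity."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Expand bfloat16 bits back to a float32 by zero-padding the mantissa."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# bfloat16 keeps float32's dynamic range but only ~3 decimal digits of
# precision -- a trade-off that works well for deep-learning training,
# where range matters more than fine-grained precision.
pi32 = 3.141592653589793
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(pi32))  # 3.140625
```

The halved storage and bandwidth per value, with no loss of exponent range, is what lets bfloat16 arithmetic roughly double training throughput relative to float32.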
Intel DL Boost augments the existing Intel Advanced Vector Extensions 512 (Intel AVX-512) instruction set. This "significantly accelerates inference performance for deep learning workloads optimised to use vector neural network instructions (VNNI)," said Jason Kennedy, director of Datacenter Revenue Products and Marketing at Intel.
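The speed-up from VNNI comes from fusing the multiply-and-accumulate sequence of int8 inference into a single instruction. The hypothetical Python model below sketches the semantics of one 32-bit lane of the AVX-512 VNNI `VPDPBUSD` instruction, which replaces a three-instruction sequence in plain AVX-512; it is a behavioural illustration, not vendor code.

```python
def vpdpbusd_lane(acc: int, activations, weights) -> int:
    """Model one 32-bit lane of AVX-512 VNNI VPDPBUSD: multiply four
    unsigned 8-bit activations by four signed 8-bit weights and
    accumulate the products into a signed 32-bit accumulator.
    Pre-VNNI AVX-512 needed three instructions (VPMADDUBSW,
    VPMADDWD, VPADDD) for the same dot-product step."""
    total = acc
    for a, w in zip(activations, weights):
        assert 0 <= a <= 255, "activations are unsigned 8-bit"
        assert -128 <= w <= 127, "weights are signed 8-bit"
        total += a * w
    # Wrap to signed 32-bit, as the non-saturating form of the
    # instruction does in hardware.
    total &= 0xFFFFFFFF
    return total - 0x100000000 if total >= 0x80000000 else total

# Four int8 multiply-accumulates collapse into one lane-level operation:
result = vpdpbusd_lane(0, [1, 2, 3, 4], [10, -20, 30, -40])  # -100
```

A 512-bit register holds 16 such lanes, so one instruction performs 64 int8 multiply-accumulates, which is where the quoted inference gains originate.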
He cites workloads such as image classification, language translation, object detection, and speech recognition, which benefit from the accelerated performance. Early tests have shown image recognition running 11 times faster on a similar configuration than with current-generation Intel Xeon Scalable processors, reports Intel. Current projections estimate a 17-times inference throughput gain with Intel Optimized Caffe ResNet-50 and Intel DL Boost for CPUs.
The processor family will be platform-compatible with the 10nm Ice Lake processor.
The Intel Xeon Platinum 9200 processors are available for purchase today as part of pre-configured systems from select OEMs, including Atos, HPE, Lenovo, Penguin Computing, and Megware, as well as from authorised Intel resellers.