Wind River develops reference system based on Intel FlexRAN for 5G vRAN

Wind River and Intel are jointly developing a 5G virtual radio access network (vRAN) reference system which integrates the Intel FlexRAN reference software on systems using third generation Intel Xeon Scalable processors with built-in AI acceleration. The reference system also includes Intel Ethernet 800 series network adapters and the Intel vRAN Dedicated Accelerator ACC100, integrated with Wind River Studio.

The collaboration will also optimise new capabilities for the next generation Intel Xeon Scalable processors (code-named Sapphire Rapids).

Wind River Studio provides a cloud-native platform for the development, deployment, operations, and servicing of mission-critical intelligent systems. Its cloud infrastructure capabilities include a fully cloud-native, Kubernetes container-based architecture based on the StarlingX open source project for the deployment and management of distributed edge networks at scale. Studio is characterised by low latency and a small infrastructure footprint, giving operators the flexibility to grow new business and further improve total cost of ownership, explained Wind River. Intel FlexRAN is a vRAN reference implementation for virtualised cloud-enabled radio access networks.

Operators are pursuing vRAN for greater agility following the shift to cloud-native approaches, driven by 5G. There is the potential for vRAN to act as an anchor tenant for far-edge cloud but challenges include system integration, operational complexity for multi-vendor vRAN and edge cloud server performance.

Using the reference design, prospective customers and vRAN solution providers will be able to review evaluation packages consisting of Intel FlexRAN reference software pre-integrated with Wind River Studio to simplify and accelerate customer trials and deployments of 5G vRAN.

“As 5G opens up new opportunities across industries, there will be an increasing need to put greater intelligence and compute to the edges of the network, where new use cases evolve and thrive. As such, operators must adopt a low-latency far-edge cloud architecture to enable these new use cases in an intelligent, AI-first world,” said Kevin Dallas, president and CEO at Wind River.

“As operators realise the benefits and embrace a virtualised, software-defined architecture, our products, ecosystem, and years of experience help accelerate this shift to flexible, agile networks,” said Dan Rodriguez, Intel corporate vice president and general manager, Network Platforms Group.

Wind River delivers software for intelligent systems. Its technology has been powering the safest, most secure devices in the world since 1981. Wind River offers a comprehensive portfolio, supported by world-class global professional services and support and a broad partner ecosystem. The company says its software and expertise are accelerating digital transformation of mission-critical intelligent systems that will increasingly demand greater compute and AI capabilities while delivering the highest levels of security, safety, and reliability.

http://www.windriver.com

Intel addresses data centres with Sapphire Rapids processor

Another launch at this year’s Intel Architecture Day was the next generation Intel Xeon Scalable processor (code-named Sapphire Rapids). The processor delivers substantial compute performance across dynamic and increasingly demanding data centre uses and is workload-optimised to deliver high performance on elastic compute models such as cloud, microservices and artificial intelligence (AI).

The processor is based on a tiled, modular SoC architecture that leverages Intel’s embedded multi-die interconnect bridge (EMIB) packaging technology, making it scalable while maintaining the benefits of a monolithic CPU interface. Sapphire Rapids provides a single balanced unified memory access architecture, with every thread having full access to all resources on all tiles, including caches, memory and I/O. According to Intel, the processor offers consistent low latency and high cross-section bandwidth across the entire SoC.

Sapphire Rapids is built on Intel 7 process technology and features Intel’s new Performance-core microarchitecture (see softei news 23 August), which is designed for speed and pushes the limits of low latency and single-threaded application performance. Sapphire Rapids delivers the industry’s broadest range of data centre-relevant accelerators, including new instruction set architecture and integrated IP to increase performance across a broad range of customer workloads and usages.

The processor integrates acceleration engines including the Intel Accelerator Interfacing Architecture (AIA), which supports efficient dispatch, synchronisation and signalling to accelerators and devices. There is also Intel Advanced Matrix Extensions (AMX), a workload acceleration engine that delivers massive acceleration to the tensor processing at the heart of deep learning algorithms. It can provide an increase in computing capabilities with 2K INT8 and 1K BF16 operations per cycle, said Intel.
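As an illustration of how AMX appears to software, the sketch below shows a minimal INT8 tile multiply written with the AMX intrinsics from <immintrin.h>. It is a hedged, illustrative example rather than code from the announcement: it assumes a Sapphire Rapids class CPU, a Linux kernel new enough to grant AMX tile state, and GCC or Clang with -mamx-tile -mamx-int8; the matrix sizes and values are chosen only to keep the example self-checking.

```c
/* Minimal, illustrative AMX INT8 tile multiply (not from the announcement).
 * Build: gcc -O2 -mamx-tile -mamx-int8 amx_demo.c  (Sapphire Rapids or later) */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#define ARCH_REQ_XCOMP_PERM 0x1023  /* Linux: request use of an XSAVE feature */
#define XFEATURE_XTILEDATA  18      /* the AMX tile data state component      */

/* 64-byte tile configuration block consumed by _tile_loadconfig(). */
struct tile_config {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];             /* bytes per row of each tile register */
    uint8_t  rows[16];              /* rows in each tile register          */
};

int main(void)
{
    /* On Linux a process must ask the kernel for AMX tile state first. */
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) {
        fprintf(stderr, "AMX tile data not available\n");
        return 1;
    }

    /* Tile 0: 16x16 int32 accumulator C; tiles 1 and 2: 16x64 int8 A and B. */
    struct tile_config cfg = {0};
    cfg.palette_id = 1;
    cfg.rows[0] = 16; cfg.colsb[0] = 16 * sizeof(int32_t);
    cfg.rows[1] = 16; cfg.colsb[1] = 64;
    cfg.rows[2] = 16; cfg.colsb[2] = 64;
    _tile_loadconfig(&cfg);

    static int8_t  a[16][64], b[16][64];
    static int32_t c[16][16];
    memset(a, 1, sizeof a);             /* every A element = 1 */
    memset(b, 2, sizeof b);             /* every B element = 2 */

    _tile_zero(0);                      /* clear the accumulator tile        */
    _tile_loadd(1, a, 64);              /* load A, 64-byte row stride        */
    _tile_loadd(2, b, 64);              /* load B, 64-byte row stride        */
    _tile_dpbssd(0, 1, 2);              /* C += A * B on signed int8 data    */
    _tile_stored(0, c, 16 * sizeof(int32_t));
    _tile_release();

    printf("c[0][0] = %d\n", c[0][0]);  /* 64 products of 1*2 -> expect 128  */
    return 0;
}
```

A single tile instruction here performs a 16 x 16 x 64 multiply-accumulate, which is the kind of dense tensor work behind the per-cycle throughput figures Intel quotes.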

In tests using early Sapphire Rapids silicon, optimised internal matrix-multiply micro-benchmarks ran over seven times faster using the Intel AMX instruction set extensions than a version of the same micro-benchmark using Intel AVX-512 VNNI instructions. This is significant for performance gains across AI workloads for both training and inference.
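For context on that comparison, AVX-512 VNNI operates on 512-bit vector registers rather than two-dimensional tiles. The hedged sketch below shows the core VPDPBUSD primitive through its intrinsic; it is illustrative only (values and build flags are assumptions, such as -mavx512f -mavx512bw -mavx512vnni on an AVX-512 VNNI capable part) and is not taken from Intel's benchmark.

```c
/* Illustrative AVX-512 VNNI dot product (not Intel's micro-benchmark).
 * Build: gcc -O2 -mavx512f -mavx512bw -mavx512vnni vnni_demo.c */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    __m512i acts    = _mm512_set1_epi8(1);    /* 64 unsigned int8 activations */
    __m512i weights = _mm512_set1_epi8(2);    /* 64 signed int8 weights       */
    __m512i acc     = _mm512_setzero_si512(); /* 16 int32 accumulators        */

    /* VPDPBUSD: multiply u8 by s8, sum each group of four products and
     * accumulate into the corresponding int32 lane (64 MACs per instruction). */
    acc = _mm512_dpbusd_epi32(acc, acts, weights);

    int32_t lanes[16];
    _mm512_storeu_si512(lanes, acc);
    printf("lane 0 = %d\n", lanes[0]);        /* 4 * (1 * 2) = 8 */
    return 0;
}
```

Each VPDPBUSD performs 64 int8 multiply-accumulates per 512-bit register, against 16 x 16 x 64 for a full AMX tile operation, which illustrates the difference in work per instruction that the tile-based approach exploits.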

The Intel Data Streaming Accelerator (DSA) is designed to offload the most common data movement tasks, reducing the overhead of these tasks to deliver increased overall workload performance. It can move data among the CPU, memory and caches, as well as all attached memory, storage and network devices.

The processor is built to drive industry technology transitions with advanced memory and next generation I/O, including PCIe 5.0, CXL 1.1, DDR5 and HBM technologies. An Infrastructure Processing Unit (IPU) is a programmable networking device designed to enable cloud and communication service providers to reduce overhead and free up performance for CPUs. Intel’s IPU-based architecture separates infrastructure functions from tenant workloads, which allows tenants to take full control of the CPU. The cloud operator can offload infrastructure tasks to the IPU to make full use of the CPU. IPUs can manage storage traffic, which reduces latency while efficiently using storage capacity via a diskless server architecture. The IPU gives users a secure, programmable and stable way to allocate resources and balance processing and storage.

Mount Evans is Intel’s first ASIC IPU. It integrates learnings from multiple generations of FPGA-based SmartNICs and offers high performance network and storage virtualisation offload while maintaining a high degree of control. It provides a programmable packet processing engine for firewalls and virtual routing, along with a hardware-accelerated NVMe storage interface scaled up from Intel Optane technology to emulate NVMe devices. Intel QuickAssist technology provides advanced crypto and compression acceleration.

http://www.intel.com

Intel introduces two x86 core architectures at Intel Architecture Day 2021

Intel has introduced two x86 core architectures – the Efficient-core and Performance-core microarchitectures.

The Efficient-core microarchitecture, previously code-named Gracemont, is designed for throughput efficiency, enabling scalable multi-threaded performance for multi-tasking. It is, says Intel, the company’s most efficient x86 microarchitecture for multi-core workloads, with throughput that scales with the number of cores.

It also delivers a wide frequency range. The microarchitecture allows Efficient-core to run at low voltage to reduce overall power consumption, while creating the power headroom to operate at higher frequencies. As a result, the microarchitecture can ramp up performance for more demanding workloads.

The Efficient-core includes a variety of advances to optimise workloads while conserving processing power and to improve instructions per cycle (IPC). For example, it has a 5,000-entry branch target cache for more accurate branch prediction and a 64kbyte instruction cache to keep useful instructions close without expending memory subsystem power.

It also includes Intel’s first on-demand instruction length decoder that generates pre-decode information and Intel’s clustered out-of-order decoder to decode up to six instructions per cycle while maintaining energy efficiency. Other features include a wide back end with five-wide allocation and eight-wide retire, a 256-entry out-of-order window and 17 execution ports.

Technology advances include Intel Control-flow Enforcement Technology and Intel Virtualization Technology Redirect Protection. The Efficient-core microarchitecture also supports the AVX ISA, together with new extensions to support integer artificial intelligence (AI) operations.

Intel says that, in single-threaded performance, the Efficient-core achieves 40 per cent more performance at the same power compared with the Skylake CPU core. Alternatively, it can deliver the same performance while consuming 40 per cent less power. For throughput performance, four Efficient-cores offer 80 per cent more performance while still consuming less power than two Skylake cores running four threads, or the same throughput performance while consuming 80 per cent less power.

The second microarchitecture to be launched at Intel Architecture Day is the Performance-core (previously code-named Golden Cove). This microarchitecture is the highest performing CPU core built by Intel and is designed for speed with low latency and single-threaded application performance. It has been introduced to address the fact that workloads are growing in terms of code footprint, demand more execution capabilities and have growing data sets and data bandwidth requirements. The Performance-core microarchitecture is intended to provide “a significant boost in general purpose performance and better support for large code footprint applications,” said the company.

The Performance-core features a wider, deeper and smarter architecture than earlier microarchitectures, with six decoders, an eight-wide micro-op cache, six-wide allocation and 12 execution ports. It also has bigger physical register files and a deeper, 512-entry re-order buffer.

Other advances are improved branch prediction accuracy, reduced effective L1 latency and full-line-write predictive bandwidth optimisations in the L2 cache.

Together, these features lower latency and advance single-threaded application performance, delivering a geomean improvement of around 19 per cent across a range of workloads over the current 11th Gen Intel Core processor architecture (Cypress Cove) at ISO frequency.

There is also an increase in execution parallelism. For deep learning inference and training, there are the Intel Advanced Matrix Extensions (AMX) to accelerate AI, with dedicated hardware and a new instruction set architecture to perform matrix multiplication operations “significantly faster”. The reduced latency is accompanied by increased support for large data and large code footprint applications.

http://www.Intel.com

Chip scale atomic clock (CSAC) is designed for mission-critical military projects

Precise timing is required for advanced military platforms, ocean-bottom survey systems and remote sensing applications, says Microchip, as it introduces the SA65 chip scale atomic clock (CSAC).

The CSACs are used to ensure stable and accurate timing even when global navigation satellite system (GNSS) time signals are unavailable. Microchip’s SA65 CSAC is environmentally rugged and offers double the frequency stability of the earlier SA.45s CSAC over a wider temperature range. It also warms up faster at cold temperatures. The SA65 has an operating temperature range of -40 to 80 degrees C and a storage temperature range of -55 to 105 degrees C. Its warm-up time of two minutes at -40 degrees C is 33 per cent faster than that of the SA.45s.

The SA65 CSAC is intended to be portable for military applications such as assured position, navigation and timing (A-PNT) and command, control, communications, computers, cyber, intelligence, surveillance and reconnaissance (C5ISR), which require precise frequencies generated by a low size, weight and power (SWaP) atomic clock.

The CSAC also has improvements such as fast warm-up to frequency after a cold start, temperature stability over a wide operating range, and frequency accuracy and stability, all of which extend operation when GNSS is denied or unavailable, says Microchip.

Claimed to be the world’s lowest-power commercial atomic clock, the CSAC provides precise timing for portable and battery-powered applications requiring continuous operation and holdover in GNSS-denied environments. The SA65 is form-, fit- and function-compatible with the SA.45s, which minimises risk and redesign costs when upgrading to improve performance and environmental insensitivity.

The CSAC family of atomic clocks is supported by Developer Kit 990-00123-000, as well as associated software, a user guide and technical support.

Microchip Technology provides smart, connected and secure embedded control solutions and development tools. The company’s solutions serve more than 120,000 customers across the industrial, automotive, consumer, aerospace and defence, communications and computing markets. It is headquartered in Chandler, Arizona, USA.

http://www.microchip.com
