5G front end modules help improve 5G call quality and internet speed

Front end modules and pre-drivers developed by NXP are claimed to improve 5G network coverage and quality. The BTS7202 RX front end modules and BTS6403 / BTS6305 pre-drivers offer high output power, improved linearity and reduced noise.

The BTS7202 RX front end modules (FEMs) and BTS6403 / BTS6305 pre-drivers target 5G massive MIMO (multiple-input, multiple-output) radios operating at up to 20W per channel. The devices were developed and implemented in NXP’s SiGe (silicon germanium) process. They operate with modest current consumption, said NXP, to reduce operational costs for mobile network operators (MNOs). They are also claimed to offer improved linearity and a reduced noise figure to support better 5G signal quality.

The BTS7202 RX FEMs and BTS6403/6305 pre-drivers complement NXP’s power amplifier solutions for 32T32R radios. The BTS7202 RX FEMs feature a switch capable of handling up to 20W of power leaking from transmit line-ups, reducing system complexity. The BTS6305 pre-drivers also integrate a balun to reduce costs. 

As 5G networks are adopted, MNOs are increasingly leveraging 32T32R solutions to improve massive MIMO coverage in less dense urban and suburban areas. Achieving the total radiated power needed for strong 5G coverage with 32T32R radios requires higher power devices that raise the power level of each channel.
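
As a purely illustrative calculation (the 32-channel count comes from the 32T32R configuration above; the per-channel power figures are hypothetical examples, not NXP specifications), total conducted power scales linearly with per-channel power, so raising the power of each channel raises the array total in proportion:

```python
import math

def array_power(channels: int, watts_per_channel: float):
    """Total conducted power of a phased array, in watts and dBm."""
    total_w = channels * watts_per_channel
    total_dbm = 10 * math.log10(total_w * 1000)  # convert W to mW for dBm
    return total_w, round(total_dbm, 1)

# 32 transmit channels (32T32R) at a hypothetical 8 W per channel
print(array_power(32, 8))    # (256, 54.1)  -> 256 W, ~54.1 dBm

# The same array with each channel driven at the 20 W the parts support
print(array_power(32, 20))   # (640, 58.1)  -> 640 W, ~58.1 dBm
```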

Doeco Terpstra, vice president and general manager, smart antenna solutions, radio power, NXP, said: “Our customers recognise that a higher power solution offers a way for network operators to address the power needs of 32-antenna solutions without compromising network quality.”

NXP Semiconductors specialises in secure connectivity solutions for embedded applications in the automotive, industrial and IoT, mobile, and communication infrastructure markets.

http://www.nxp.com


Image sensor is adaptive for all vehicle occupants

In-car safety and comfort are advanced with the VD/VB1940, a hybrid driver monitoring system (DMS) and interior monitoring sensor announced by STMicroelectronics.

Leading automotive markets are starting to mandate driver monitoring systems (DMS), reported STMicroelectronics. While a DMS promises greater road safety by assessing driver alertness, ST said that its next-generation dual image sensor monitors the full vehicle interior, i.e. the driver and all passengers. The sensor enables new applications such as passenger safety-belt checks, vital-sign monitoring, child-left detection, gesture recognition and high-quality video/picture recording.

The image sensor uses ST’s second-generation 3D-stacked back-side illuminated (BSI) wafer technology, which maximises the optical area and on-chip processing in relation to die size. This lets the sensor perform sophisticated algorithms locally for optimal performance in both colour and near-infra-red (NIR) imaging, saving power and relieving demand for an external co-processor.

Algorithms performed on-chip include Bayer conversion and HDR merging for optimal image quality and frame rate. On-chip Bayerisation processing enables the user to reshuffle the colour pixels of the RGB-NIR 4x4 pattern into an RGGB format compatible with a range of SoCs. Local processing also handles independent colour and NIR pixel-exposure optimisation for optimum image quality in both modes, as well as smart upscaling to maximise NIR image resolution by capturing extra NIR information from RGB pixels.
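
ST has not published the details of its on-chip Bayerisation, but the general idea of re-shuffling an RGB-NIR mosaic into a standard RGGB pattern can be sketched as below. The 4x4 layout, the neighbourhood interpolation and the use of NumPy here are illustrative assumptions only, not the sensor's actual pipeline:

```python
import numpy as np

# Illustrative 4x4 RGB-IR layout (NOT ST's actual pattern) and the
# standard RGGB Bayer tile that downstream SoC ISPs expect.
RGB_IR_TILE = np.array([["B", "G", "R", "G"],
                        ["G", "I", "G", "I"],
                        ["R", "G", "B", "G"],
                        ["G", "I", "G", "I"]])
RGGB_TILE = np.array([["R", "G"],
                      ["G", "B"]])

def tile_map(tile, shape):
    """Tile a small CFA pattern over a full-resolution frame."""
    reps = (shape[0] // tile.shape[0] + 1, shape[1] // tile.shape[1] + 1)
    return np.tile(tile, reps)[:shape[0], :shape[1]]

def remosaic_to_rggb(raw, window=2):
    """Re-shuffle an RGB-IR mosaic into an RGGB mosaic.

    Where the source pixel already carries the target colour it is copied;
    everywhere else the value is the mean of same-colour pixels in a small
    neighbourhood (crude nearest-neighbour interpolation, for illustration).
    """
    src = tile_map(RGB_IR_TILE, raw.shape)
    dst = tile_map(RGGB_TILE, raw.shape)
    out = np.zeros_like(raw, dtype=float)
    h, w = raw.shape
    for y in range(h):
        for x in range(w):
            if src[y, x] == dst[y, x]:
                out[y, x] = raw[y, x]
            else:
                y0, y1 = max(0, y - window), min(h, y + window + 1)
                x0, x1 = max(0, x - window), min(w, x + window + 1)
                patch = raw[y0:y1, x0:x1]
                mask = src[y0:y1, x0:x1] == dst[y, x]
                out[y, x] = patch[mask].mean()
    return out

frame = np.random.randint(0, 1024, (8, 8)).astype(float)  # fake 10-bit mosaic
rggb = remosaic_to_rggb(frame)
```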

The VD/VB1940 sensor combines the sensitivity and high resolution of infra-red sensing with high dynamic range (HDR) colour imaging in a single component. It can capture frames alternately in rolling-shutter and global-shutter modes. With 5.1Mpixels, it captures the HDR colour images needed for an occupant monitoring system (OMS) in addition to the high-quality NIR images typically captured by standard DMS sensors. A DMS uses NIR imaging to analyse driver head and eye movements in all lighting conditions.
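
The head- and eye-movement analysis itself runs downstream of the sensor, on a host processor or dedicated DMS device. As a rough conceptual sketch of that kind of processing (not ST's, or any DMS vendor's, actual algorithm), a single-channel NIR frame can be scanned for a face and eyes with OpenCV's stock cascade detectors; the input file name is a placeholder:

```python
import cv2

# Pre-trained Haar cascades shipped with OpenCV; a production DMS would use
# far more robust, NIR-trained models -- this is only a conceptual sketch.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def analyse_frame(nir_frame):
    """Return face and eye bounding boxes found in a single-channel NIR frame."""
    faces = face_cascade.detectMultiScale(nir_frame, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        roi = nir_frame[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        results.append({"face": (x, y, w, h),
                        "eyes": [(x + ex, y + ey, ew, eh) for (ex, ey, ew, eh) in eyes]})
    return results

# "driver_nir.png" is a hypothetical 8-bit NIR capture from the sensor
frame = cv2.imread("driver_nir.png", cv2.IMREAD_GRAYSCALE)
if frame is not None:
    print(analyse_frame(frame))
```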

The sensor is offered both as bare wafers (VD1940) and packaged in BGAs (VB1940); samples are available now and mass production is planned to align with model-year 2024 vehicles now being designed.

Qualified to AEC-Q100, the VD/VB1940 is ISO 26262 compliant to facilitate use in functional-safety systems up to ASIL-B.

http://www.st.com


Intel introduces socketed SoCs for edge and AI

Intel has announced its 12th gen Intel Core SoC processors, a line-up of purpose-built edge processors intended to enhance graphics and AI performance at the IoT edge.

According to Intel, the purpose-built edge SoC processors mark an industry first: a socketed SoC with high-performance integrated graphics and media processing for visual compute workloads. They have a compact footprint and a wide operating thermal design power (TDP) range for small form factor, fanless designs.

The SoC has been developed in response to the increased volume of data created at the edge, which needs to be processed and analysed. Digital transformation at the edge requires increased processing power and AI inference performance to future-proof AI workloads, said the company. The 12th gen Intel Core SoC processors for the IoT edge include manageability features, such as Intel vPro options for remote control, that are required for managing and servicing systems deployed at the IoT edge.

The 12th gen Intel Core SoC processors deliver up to four times faster graphics, as measured by 3DMark, the benchmarking tool, and up to 6.6 times faster GPU image classification inference performance compared with 10th gen Intel Core desktop processors in a 12 to 65W design. 

The 12th gen Intel Core SoC processors for the IoT edge include Intel Thread Director, which intelligently directs the operating system to assign the right workload to the right core. With up to 14 cores and 20 threads, the SoC processors also reach up to 1.32 times faster single-thread performance and up to 1.27 times faster multi-thread performance compared with 10th gen Intel Core desktop processors, said Intel.
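
Thread Director works transparently through the operating system scheduler and requires no application changes. For comparison, a minimal sketch of steering work by hand on Linux, by pinning the current process to a chosen set of logical CPUs, is shown below; the core ID sets are hypothetical, since the mapping of performance and efficient cores varies by SKU and platform:

```python
import os

# Hypothetical mapping for illustration only: which logical CPU IDs belong to
# performance (P) cores and efficient (E) cores differs per SKU and firmware.
P_CORES = {0, 1, 2, 3, 4, 5, 6, 7}
E_CORES = {8, 9, 10, 11}

def pin_to(cores, pid=0):
    """Restrict the given process (0 = current) to a set of logical CPUs (Linux only)."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

if __name__ == "__main__":
    # e.g. keep a latency-sensitive inference loop on the performance cores
    print("now limited to:", pin_to(P_CORES))
```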

The SoC processors also support AI inferencing and machine vision, with up to 96 graphics execution units for a high degree of parallelisation in AI workloads. AI acceleration on the CPU via Intel Deep Learning Boost provides additional inferencing performance. The processors also support Intel Distribution of OpenVINO toolkit optimisations and cross-architecture inferencing. The integrated graphics, enhanced visual compute and AI capabilities can enable imaging and pattern recognition for healthcare and create new opportunities for point-of-sale retail, said Jeni Panhorst, vice president, Data Platforms, and general manager of the Network & Edge Platforms division.
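
OpenVINO itself is documented by Intel, but the model path, input shape and output handling below are placeholder assumptions; this is only a minimal sketch of compiling an image-classification model for the integrated GPU, falling back to the CPU when no GPU plugin is available:

```python
import numpy as np
from openvino.runtime import Core  # OpenVINO 2022+ Python API

core = Core()
# "model.xml" is a placeholder for an IR model produced by OpenVINO's converter
model = core.read_model("model.xml")

# Prefer the integrated GPU if the runtime reports one, otherwise use the CPU
device = "GPU" if any(d.startswith("GPU") for d in core.available_devices) else "CPU"
compiled = core.compile_model(model, device)

# Placeholder input: one 224x224 RGB image in NCHW layout
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([image])[compiled.output(0)]
print(device, "top class:", int(np.argmax(result)))
```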

Other market sectors likely to expand remote control and manageability of systems at the edge, in response to changing supply and demand, include banking, hospitality and education. For industrial manufacturing, the processors can enhance industrial PCs, edge servers, advanced controllers, machine vision systems and virtualised control platforms, while in healthcare they can deliver enhanced ultrasound imaging, medical carts, endoscopy and clinical devices at the edge.

http://www.intel.com


Intel engineers Flex series GPUs for the intelligent visual cloud

The data centre graphics processing unit (GPU) codenamed Arctic Sound-M has been unveiled as the Flex series. The Flex series GPUs are designed to meet the requirements of intelligent visual cloud workloads, said Intel. The Flex 170 is designed for maximum peak performance, while the Flex 140 is designed for maximum density.

The Flex series GPUs can process up to 68 simultaneous cloud gaming streams and handle workloads without requiring separate, discrete solutions or reliance on silos or proprietary environments, said the company. This helps lower and optimise the total cost of ownership for diverse cloud workloads such as media delivery, cloud gaming, AI, metaverse and other emerging visual cloud use cases.

“We are in the midst of a pixel explosion driven by more consumers, more applications and higher resolutions,” explained Jeff McVeigh, Intel vice president and general manager of the Super Compute Group. “Today’s data centre infrastructure is under intense pressure to compute, encode, decode, move, store and display visual information.”

The Flex series GPUs feature what is claimed to be the first hardware-based AV1 encoder in a data centre GPU. In the case of the Flex series 140 GPU, for example, Intel claims five times the media transcode throughput and twice the decode throughput at half the power of the Nvidia A10. According to Intel, the series also delivers more than a 30 per cent bandwidth improvement to reduce total cost of ownership, and has broad support for popular media tools, APIs, frameworks and the latest codecs, including HEVC, AVC and VP9.

The GPUs are powered by Intel’s Xe-HPG architecture and can provide scaling of AI inference workloads from media analytics to smart cities to medical imaging between CPUs and GPUs without “locking developers into proprietary software”.

The video processing demands of video conferencing, streaming, and social media have transformed the compute resource requirements of the data centre. The increase in media processing, media delivery, AI visual inference, cloud gaming and desktop virtualisation has presented a challenge for an industry largely dependent on proprietary, licensed coding models, such as CUDA for GPU programming, said Intel.

The Flex series GPU software stack includes support for oneAPI and OpenVINO. Developers can use Intel’s oneAPI tools, including the Intel oneAPI Video Processing Library (oneVPL) and Intel VTune Profiler, to deliver accelerated applications and services. This open alternative to proprietary language lock-in exposes the performance of the hardware through a set of tools that complement existing languages and parallel models. It allows users to develop open, portable code that takes maximum advantage of various combinations of Intel CPUs and GPUs, and means developers are not tied to proprietary programming models, which can be financially or technically restrictive, said Intel.

The Flex series GPU media architecture is powered by up to four Xe media engines for streaming density, delivering up to 36 1080p60 transcode streams per card. It is also capable of delivering eight 4K60 transcode streams per card.

When scaled to 10 cards in a 4U server configuration, it can support up to 360 HEVC-to-HEVC 1080p60 transcode streams.

Leveraging the Intel Deep Link Hyper Encode feature, the Flex series 140 GPU, with two devices on a single card, can meet the industry’s one-second delay requirement while providing 8K60 real-time transcode, reported Intel. This capability is available for AV1 and HEVC HDR formats.

To meet the growth in Android cloud gaming, the GPUs are validated on nearly 90 of the most popular Google Play Android game titles. A single Flex series 170 GPU can achieve up to 68 streams of 720p30 while a single Flex series 140 GPU can achieve up to 46 streams of 720p30 (measured on select game titles).

When scaled across six Flex series 140 GPU cards, a system can achieve up to 216 streams of 720p30.

Systems featuring Flex series GPUs will be available from providers including Dell Technologies, HPE, H3C, Inspur, Lenovo and Supermicro. Solutions with the Flex series GPU will ramp over the coming months, starting with media delivery and Android cloud gaming workloads, followed by Windows cloud gaming, AI and VDI workloads.

http://www.intel.com

