White chip LEDs save space in IoT devices and drones

Compact, high luminous intensity white chip LEDs from Rohm are optimised for applications that require high brightness white light emission. The CSL1104WB LEDs are intended for IoT devices, drones, and other battery-equipped applications.

High luminous intensity (2.0cd) white LEDs are being increasingly adopted to improve visibility in a range of applications in the consumer electronics and automotive sectors. At the same time, the emergence of applications that mount multiple LEDs in a small space – for example IoT devices and drones – demands high density mounting. This makes it difficult to achieve high brightness in a compact footprint, explains Rohm.

The CSL1104WB series achieves a high luminous intensity of 2.0cd in an ultra-compact 1608 size (1.6 x 0.8mm, or 1.28mm²), which was previously difficult to achieve. The result is the same luminosity as the current mainstream 3528 size PLCC package (3.5 x 2.8mm, or 9.8mm²) but in an 87 per cent smaller footprint.
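The footprint saving follows directly from the quoted package dimensions; a quick check, using only the nominal sizes above:

```python
# Quick check of the footprint comparison, using the nominal
# package dimensions quoted in the text (all values in mm).
csl1104wb_area = 1.6 * 0.8   # 1608-size package: 1.28 mm^2
plcc_3528_area = 3.5 * 2.8   # 3528-size PLCC package: 9.8 mm^2

reduction = 1 - csl1104wb_area / plcc_3528_area
print(f"CSL1104WB footprint: {csl1104wb_area:.2f} mm^2")
print(f"3528 PLCC footprint: {plcc_3528_area:.2f} mm^2")
print(f"Footprint reduction: {reduction:.0%}")   # ~87 per cent
```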

Colour variation is significantly improved, simplifying the colour adjustment process by ensuring accurate white colour chromaticity, says Rohm. This not only contributes to space savings but also improves design flexibility and visibility through high density mounting of high luminosity LEDs. Qualification under AEC-Q102, the automotive reliability standard developed specifically for optical devices, is planned, enabling smooth adoption in industrial equipment and automotive applications that are exposed to harsh environments.

Rohm is committed to expanding its line-up of 1608 size white chip LEDs from low to high brightness.

Rohm Semiconductor develops and manufactures a large product range, from SiC diodes and MOSFETs and analogue ICs such as gate drivers and power management ICs, to power transistors, diodes and passive components. The company has manufacturing plants in Japan, Korea, Malaysia, Thailand, the Philippines, and China.

Lapis Technology and Lapis Semiconductor (formerly OKI Semiconductor), SiCrystal and Kionix are companies of the Rohm Semiconductor Group. Rohm Semiconductor Europe has its head office near Dusseldorf, Germany, serving the EMEA region (Europe, Middle East and Africa).

http://www.rohm.com/eu


Synopsys bundles design engines for hyper-convergent ICs

For memory, artificial intelligence (AI), automotive and 5G applications, the PrimeSim design environment provides comprehensive analysis and improved productivity, says Synopsys.

At the SNUG World international user conference, Synopsys unveiled PrimeSim Continuum, a unified workflow for circuit simulation technologies that accelerates the creation and signoff of hyper-convergent designs. PrimeSim Continuum is built on next-generation Spice and FastSpice architectures and offers the industry’s only proven GPU acceleration technology, claims Synopsys. It provides design teams with 10X runtime improvements at golden signoff accuracy, says the company. PrimeSim Continuum combines PrimeSim Spice, PrimeSim Pro, PrimeSim HSpice and PrimeSim XA. PrimeWave delivers a seamless simulation experience across all PrimeSim engines with comprehensive analysis, improved productivity and ease of use.

“PrimeSim Continuum represents a revolutionary breakthrough in circuit simulation innovation with heterogeneous compute acceleration on GPU/CPU, setting a new bar for EDA solutions,” said Sassine Ghazi, chief operating officer (COO) at Synopsys. The PrimeSim Continuum technologies complement the company’s Custom Design Platform and Verification Continuum, continued Ghazi.

Today’s hyper-convergent SoCs consist of larger and faster embedded memories, analogue front-end devices and complex I/O circuits that communicate at 100Gb+ data rates with the DRAM stack connected on the same piece of silicon in a system-in-package (SiP) design. Verifying complex designs at advanced technology process nodes means contending with increased parasitics, process variability and reduced margins, reports Synopsys. The result is more simulations with longer runtimes at higher accuracy, impacting overall time-to-results, quality-of-results and cost-of-results. PrimeSim Continuum addresses the systemic complexity of such hyper-convergent designs with a unified workflow of signoff-quality simulation engines tuned for analogue, mixed-signal, RF, custom digital and memory designs, says Synopsys. PrimeSim Continuum uses next-generation Spice and FastSpice architectures and heterogeneous computing to optimise the use of CPU and GPU resources and improve time-to-results and cost-of-results.

The PrimeSim Pro simulator represents a next-generation FastSpice architecture for fast and high-capacity analysis of modern DRAM and flash memory designs.

The PrimeSim Spice simulator’s next-generation architecture with GPU technology delivers the significant performance improvements needed for comprehensive analysis of analogue and RF designs while meeting signoff accuracy requirements.

PrimeSim Continuum integrates PrimeSim Spice and PrimeSim Pro with the PrimeSim HSpice simulator for foundation IP and signal integrity, and with the PrimeSim XA simulator for SRAM and mixed-signal verification. PrimeWave delivers a seamless experience by providing a consistent and flexible environment across all PrimeSim Continuum engines, optimising design set-up, analysis and post-processing, says Synopsys.

PrimeSim Continuum is available now.

https://www.synopsys.com


Xilinx introduces Kria adaptive SOMs to add AI at the edge

Adaptive system-on-modules (SOMs) from Xilinx are production-ready, small form factor embedded boards that enable rapid deployment in edge-based applications. The Kria adaptive SOMs can be coupled with a software stack and pre-built applications. According to Xilinx, the SOMs are a new method of bringing adaptive computing to AI and software developers.

The initial product to be made available is the Kria K26 SOM. It specifically targets vision AI applications in smart cities and smart factories.

By allowing developers to start at a more evolved point in the design cycle compared to chip-down design, Kria SOMs can reduce time-to-deployment by up to nine months, says Xilinx.

The Kria K26 SOM is built on the Zynq UltraScale+ MPSoC architecture, which features a quad-core Arm Cortex-A53 processor, more than 250,000 logic cells, and an H.264/H.265 video codec. It has 4Gbyte of DDR4 memory and 245 I/Os, which allow it to adapt to “virtually any sensor or interface”. There is also 1.4TOPS of AI compute performance, sufficient to create vision AI applications with more than three times higher performance at lower latency and power compared to GPU-based SOMs, says Xilinx. Target applications are smart vision systems, for example in security, traffic and city cameras, retail analytics, machine vision, and vision guided robotics.

The hardware is coupled with software in production-ready, vision-accelerated applications that eliminate all the FPGA hardware design work. Software developers can integrate custom AI models and application code, and can modify the vision pipeline using frameworks such as TensorFlow, PyTorch or Caffe, as well as the C, C++, OpenCL, and Python programming languages. These are enabled by the Vitis unified software development platform and libraries, adds Xilinx.
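As an illustration only, and not Xilinx’s documented flow, a custom vision model for the K26 might begin as an ordinary PyTorch network that is later quantised and compiled with the Vitis tools; the sketch below uses an arbitrary, hypothetical network and stops at ONNX export:

```python
# Minimal sketch of a custom vision model a developer might hand to the
# Vitis tools for the K26 SOM. The network, its dimensions and its name
# are illustrative only; quantisation and compilation for the device are
# not shown here.
import torch
import torch.nn as nn

class TinyDetector(nn.Module):          # hypothetical example model
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),     # global pooling to a 32-element vector
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = TinyDetector().eval()
dummy = torch.randn(1, 3, 224, 224)      # example input resolution
torch.onnx.export(model, dummy, "tiny_detector.onnx", opset_version=13)
```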

The company has also opened an embedded app store for edge applications. There are apps for Kria SOMs from Xilinx and its ecosystem partners. Xilinx apps range from smart camera tracking and face detection to natural language processing with smart vision. They are open source and provided free of charge.

For further customisation and optimisation, embedded developers can draw on support for the standard Yocto-based PetaLinux. In a first collaboration between Xilinx and Canonical, there is also support for Ubuntu Linux, the distribution widely used by AI developers. Customers can develop in either environment and take either approach to production. Both environments will come pre-built with a software infrastructure and helpful utilities.

Finally, the Kria KV260 Vision AI starter kit is purpose-built to support accelerated vision applications available in the Xilinx App Store. The company claims developers can be “up and running in less than an hour with no knowledge of FPGAs or FPGA tools”. When a customer is ready to move to deployment, they can seamlessly transition to the Kria K26 production SOM, including commercial and industrial variants.

Xilinx has published an SOM roadmap with a range of products, from cost-optimised SOMs for size and cost-constrained applications to higher performance modules that will offer developers more real-time compute capability per Watt.

Kria K26 SOMs and the KV260 Vision AI starter kit can be ordered now from Xilinx and its network of worldwide distributors. The starter kit is available immediately, with the commercial-grade Kria K26 SOM shipping in May 2021 and the industrial-grade K26 SOM shipping this summer. Ubuntu Linux on Kria K26 SOMs is expected to be available in July 2021.

http://www.xilinx.com/kria


AI controller and software detect humans, using less energy

Claimed to be the lowest power person detection system, the combination of Maxim Integrated’s MAX78000 artificial intelligence (AI) microcontroller and Aizip’s Visual Wake Words model brings human-figure detection to IoT imaging and video with a power budget of just 0.7 mJ per inference. This, says Maxim, is a 100-fold improvement over conventional software solutions, making it “the most economical and efficient IoT person-detection solution available”.

The MAX78000 neural-network microcontroller detects people in an image using Aizip’s Visual Wake Words (VWW) model at just 0.7 mJ per inference. This allows 13 million inferences from a single AA/LR6 battery, an energy figure 100 times lower than that of conventional software solutions. The low-power network provides longer operation for battery-powered IoT systems that require human-presence detection, including building energy management and smart security cameras.
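The inference-count claim can be sanity-checked with simple arithmetic; the sketch below assumes roughly 9kJ of usable energy from an alkaline AA/LR6 cell, which is an assumed figure rather than one quoted by Maxim:

```python
# Sanity check of the "13 million inferences per AA cell" claim.
# Usable battery energy is an assumption (~2.5 Wh alkaline AA), not a Maxim figure.
energy_per_inference_j = 0.7e-3      # 0.7 mJ per inference (from the article)
usable_battery_energy_j = 9_000      # ~9 kJ usable from an AA/LR6 cell (assumed)

inferences = usable_battery_energy_j / energy_per_inference_j
print(f"Inferences per AA cell: {inferences:,.0f}")   # ~12.9 million, consistent with the claim
```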

The MAX78000 low-power, neural-network-accelerated microcontroller executes AI inferences at less than 1/100th the energy of conventional software solutions to improve run-time for battery-powered edge AI applications, continues Maxim. The mixed-precision VWW network is part of the Aizip Intelligent Vision Deep Neural Network (AIV DNN) series for image and video applications and was developed with Aizip’s proprietary design automation tools to achieve greater than 85 per cent human-presence accuracy.

The extreme model compression delivers accurate smart vision with a memory-constrained, low-cost AI-accelerated microcontroller and cost-sensitive image sensors, says Maxim.

The MAX78000 microcontroller and MAX78000EVKIT# evaluation kit are available now directly from Maxim Integrated’s website and through authorised distributors.

AIV DNN series models, tools and services are available directly from Aizip.

Aizip develops AI models for IoT applications. Based in Silicon Valley, Aizip provides design services with superior performance, quick turnaround time, and "excellent return on investment (ROI)".

Maxim Integrated offers a broad portfolio of semiconductors, tools and support to deliver efficient power, precision measurement, reliable connectivity and robust protection along with intelligent processing. Its products serve designers in application areas such as automotive, communications, consumer, data centre, healthcare, industrial and the IoT.

http://www.maximintegrated.com

