Embedded Vision Processor IP prepares for AI-intensive edge applications

Integrating a deep neural network (DNN) accelerator, a vector digital signal processor (DSP) and a vector floating point unit (FPU), the DesignWare ARC EV7x Vision Processors' heterogeneous architecture delivers up to 35 tera operations per second (TOPS) for artificial intelligence systems-on-chip (AI SoCs), Synopsys explains.

The DesignWare ARC EV7x Embedded Vision Processors, with their DNN accelerator, provide sufficient performance for AI-intensive edge applications.

The ARC EV7x Vision Processors integrate up to four enhanced vector processing units (VPUs) and a DNN accelerator with up to 14,080 MACs to deliver up to 35 TOPS in 16-nm FinFET process technologies under typical conditions, four times the performance of the ARC EV6x processors, reports Synopsys.
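
As a rough sanity check on those headline figures (a derived estimate, not a Synopsys specification, since the release does not state a clock frequency), counting each MAC as two operations implies a clock in the region of 1.2 GHz, as this short Python sketch shows:

macs = 14_080                    # maximum MAC count quoted for the DNN accelerator
ops_per_mac = 2                  # assumption: one MAC = a multiply plus an add
target_ops_per_s = 35e12         # 35 TOPS
implied_clock_hz = target_ops_per_s / (macs * ops_per_mac)
print(f"Implied clock: {implied_clock_hz / 1e9:.2f} GHz")   # about 1.24 GHz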

Each EV7x VPU includes a 32-bit scalar unit and a 512-bit-wide vector DSP and can be configured for 8-, 16-, or 32-bit operations to perform simultaneous multiply-accumulates on different streams of data.
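
For illustration only (this is ordinary Python, not ARC ISA code), the lane counts at each supported element width follow directly from the 512-bit vector width:

VECTOR_WIDTH_BITS = 512
for element_bits in (8, 16, 32):
    lanes = VECTOR_WIDTH_BITS // element_bits
    print(f"{element_bits}-bit elements -> {lanes} parallel lanes per vector DSP")
# 8-bit elements -> 64 lanes; 16-bit -> 32 lanes; 32-bit -> 16 lanes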

The optional DNN accelerator scales from 880 to 14,080 MACs and employs a specialized architecture for faster memory access, higher performance, and better power efficiency than alternative neural network IP. In addition to supporting convolutional neural networks (CNNs), the DNN accelerator supports batched long short-term memories (LSTMs) for applications that require time-based results (such as predicting the location of a pedestrian based on their observed path and speed). The vision engine and the DNN accelerator work on tasks in parallel, for example in autonomous vehicle and advanced driver assistance system (ADAS) applications where multiple cameras and vision algorithms operate concurrently.
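
To make the LSTM use case concrete, the short Keras sketch below predicts the next (x, y) position of a pedestrian from an observed track; the sequence length, layer sizes and data are illustrative assumptions rather than a Synopsys reference design (TensorFlow is one of the frameworks the mapping tools accept, per the tooling description below).

import numpy as np
import tensorflow as tf

SEQ_LEN, FEATURES = 20, 2            # 20 observed (x, y) samples per pedestrian track
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, FEATURES)),
    tf.keras.layers.LSTM(64),        # recurrent layer; batched LSTMs are the workload named above
    tf.keras.layers.Dense(2),        # predicted next (x, y) position
])
model.compile(optimizer="adam", loss="mse")

tracks = np.random.rand(32, SEQ_LEN, FEATURES).astype("float32")   # dummy batch of observed tracks
next_positions = model.predict(tracks)                              # shape (32, 2)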

The combination of the vision engine, the DNN accelerator and high-productivity programming tools makes the ARC EV7x Embedded Vision Processors suitable for applications as varied as ADAS, video surveillance, smart home, and augmented reality (AR) and virtual reality (VR).

The processors optimise the execution of linear algebra and matrix maths operations. In navigation systems, for example, this accelerates simultaneous localisation and mapping (SLAM), providing real-time tracking for AR/VR, localisation for autonomous driving, and more accurate environmental maps.
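
As a worked illustration of the matrix maths SLAM leans on, the NumPy sketch below transforms landmark points observed in the robot frame into the map frame using a 2-D rigid-body pose; the values and variable names are invented for the example, and on the EV7x this class of computation would target the vector DSP and FPU.

import numpy as np

theta = np.deg2rad(15.0)                          # estimated robot heading
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # 2x2 rotation matrix
t = np.array([1.2, 0.4])                          # estimated robot position

landmarks_robot = np.array([[2.0, 0.5],
                            [3.1, -0.2]])         # points seen in the robot frame
landmarks_map = landmarks_robot @ R.T + t         # batched matrix multiply plus translation
print(landmarks_map)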

To speed application software development for ARC EV7x Vision Processors, Synopsys' MetaWare EV development toolkit provides a software programming environment based on common embedded vision standards, including OpenVX and OpenCL C. The mapping tools support the Caffe and TensorFlow frameworks, as well as the ONNX neural network interchange format.
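
As one example of how a trained network could reach tools that accept the ONNX format, the sketch below exports a Keras CNN using the open-source tf2onnx package; the release names the supported formats rather than a specific export flow, so this particular tooling is an assumption.

import tensorflow as tf
import tf2onnx

model = tf.keras.applications.MobileNetV2(weights=None)    # any Keras CNN stands in here
spec = (tf.TensorSpec((1, 224, 224, 3), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path="mobilenet_v2.onnx")   # ONNX file for downstream mapping tools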

ASIL B- and ASIL D-compliant versions of the new processors, the ARC EV7xFS portfolio, accelerate ISO 26262 certification of automotive SoCs. The functional safety-enhanced processors offer hardware safety features, safety monitors, and lockstep capabilities that enable designers to achieve stringent levels of functional safety and fault coverage without significant impact on power or performance.

A hybrid option enables system architects to select the required safety level, up to ASIL D, in software post-silicon.

The ARC EV7x Embedded Vision Processors, the DNN accelerator option with up to 14,080 MACs, and the MetaWare EV software are expected to be available to lead customers in Q1 2020. The DNN accelerator option with up to 3,520 MACs is available now.

https://www.synopsys.com/designware
