Automotive Ethernet switch is “key component” for high-speed networks

At CES, in Las Vegas this week, NXP Semiconductors will introduce an Ethernet switch that it describes as a “key component” of vehicle networks, with high levels of performance, safety and security.

The SJA1110 multi-gigabit Ethernet switch is designed to help vehicle manufacturers deliver the high-speed networks required for connected vehicles. It is believed to be the first automotive Ethernet switch with built-in safety capabilities, offering integrated 100BASE-T1 PHYs, hardware-assisted security and multi-gigabit interfaces. It is optimised for integration with NXP’s S32G vehicle network processor and is designed to be used with the VR5510 power management IC in vehicle networks.

It has been introduced to support the Ethernet-based sensors, actuators and processing units, as well as the service-oriented gateways and domain controllers, required for over-the-air updates and data-driven applications in connected vehicle networks. These networks must be scalable and move data quickly and securely, and must also deliver functional safety in case of failure.

The SJA1110 Ethernet switch is aligned to the latest time-sensitive networking (TSN) standards and offers integrated 100BASE-T1 PHYs, hardware-assisted security and safety capabilities along with multi-gigabit interfaces.

It enables customers to meet ASIL requirements and implement dedicated failure-detection mechanisms for predictive maintenance.

The SJA1110 switch processes every Ethernet frame reaching the electronic control unit (ECU), validating it against hardware-based security rules that collect statistics and can trigger escalations when a frame does not conform to specification.
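The SJA1110's rule engine is implemented in hardware and its actual rule format is not public, but the behaviour described above can be sketched in plain Python: each frame is checked against a per-port rule set, pass/drop statistics are counted, and a non-conforming frame triggers an escalation. All names and rule fields here are hypothetical.

```python
# Illustrative model only -- not NXP code. Validates frames against a
# hypothetical rule set, counts statistics, and escalates on violations.
from collections import Counter

# Hypothetical rule set for one switch port.
RULES = {
    "allowed_src": {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"},
    "allowed_ethertype": {0x0800, 0x88F7},   # IPv4, IEEE 1588 (PTP)
    "max_len": 1522,                         # max tagged Ethernet frame
}

stats = Counter()

def escalate(frame: dict) -> None:
    """Placeholder for notifying the host processor of a violation."""
    stats["escalations"] += 1

def validate_frame(frame: dict) -> bool:
    """Return True if the frame conforms; update statistics either way."""
    ok = (frame["src"] in RULES["allowed_src"]
          and frame["ethertype"] in RULES["allowed_ethertype"]
          and frame["len"] <= RULES["max_len"])
    stats["pass" if ok else "drop"] += 1
    if not ok:
        escalate(frame)
    return ok
```

In the real device this check runs at line rate per frame; the host can read the statistics counters to implement the predictive-maintenance and escalation behaviour the article mentions.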

It is available in four hardware- and software-compatible variants, supplied with a set of NXP original software and an open integrated controller.

NXP Semiconductors enables secure connections for a smarter world and provides secure connectivity solutions for embedded applications in the automotive, industrial and IoT, mobile, and communication infrastructure markets. The company has approximately 30,000 employees in more than 30 countries.

http://www.nxp.com

> Read More

4D lidar chip holds promise of mass scale autonomous vehicles

The Aeries frequency modulated continuous wave (FMCW) lidar-on-chip sensing system provides high range performance and low cost to take autonomous driving to mass scale, claims Aeva.

The Aeries system integrates key elements of a lidar sensor into a miniaturised photonics chip, says Aeva. The company claims that its 4D lidar-on-chip significantly reduces the size and power of the device while achieving full range performance of over 300m for low-reflectivity objects and the ability to measure instant velocity for every point; this is a first for the autonomous vehicle industry, says Aeva. The lidar-on-chip will cost less than $500 at scale, compared with the tens of thousands of dollars for today’s lidar sensors.

“One of the biggest roadblocks to bringing autonomous vehicles to the mainstream has been the lack of a high-performance and low-cost LiDAR that can scale to millions of units per year,” said Soroush Salehian, Aeva’s co-founder. “From the beginning we’ve believed that the only way to truly address this problem was to build a unique LiDAR-on-chip sensing system that can be manufactured at silicon scale.”

A distinguishing feature of Aeva’s approach is that the Aeries sensing system measures the instant velocity of every point on objects beyond 300m. Aeva’s lidar is also free from interference from other sensors or sunlight, and it operates at only a fraction of the optical power typically required to achieve long-range performance. These factors contribute to the safety and scalability of autonomous driving.
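The instant-velocity claim follows from the general FMCW principle: with a triangular chirp, the beat frequencies measured on the up-ramp and down-ramp split into a range term and a Doppler term, so a single measurement yields both distance and radial velocity per point. A hedged numerical sketch of that textbook relationship (not Aeva's implementation; all parameter values are illustrative):

```python
# Textbook triangular-chirp FMCW recovery of range and radial velocity.
# Up-ramp beat = f_range - f_doppler; down-ramp beat = f_range + f_doppler.
C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1550e-9  # illustrative FMCW lidar wavelength, m

def range_and_velocity(f_beat_up, f_beat_down, bandwidth, ramp_time):
    """Recover range (m) and radial velocity (m/s) from the two beat tones.

    bandwidth: chirp excursion in Hz; ramp_time: one ramp duration in s.
    """
    f_range = (f_beat_up + f_beat_down) / 2.0     # range-induced shift, Hz
    f_doppler = (f_beat_down - f_beat_up) / 2.0   # Doppler shift, Hz
    rng = C * ramp_time * f_range / (2.0 * bandwidth)
    vel = WAVELENGTH * f_doppler / 2.0            # positive = closing target
    return rng, vel
```

For a target at 300m closing at 10m/s, with a 1GHz chirp over 10µs, the two beat tones recover both quantities exactly; time-of-flight lidar, by contrast, would need two frames to infer the velocity.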

Aeva also differs from other FMCW approaches in providing multiple millions of points per second for each beam, resulting in high-fidelity data that the company claims is unprecedented.

Co-founder Mina Rezk explains: “A key differentiator of our approach is breaking the dependency between maximum range and points density, which has been a barrier for time-of-flight and FMCW lidars… Our 4D lidar integrates multiple beams on a chip, each… capable of measuring more than two million points per second at distances beyond 300m.”

Porsche SE, a majority voting shareholder of the Volkswagen Group, has recently invested in Aeva, expanding the existing partnership between Aeva and Audi’s self-driving unit (Audi is part of the Volkswagen Group). Aeva points out that this is Porsche’s only investment in lidar technology to date.

Alex Hitzinger, senior vice president of Autonomous Driving at VW Group and CEO of VW Autonomy, said: “Together we are looking into using Aeva’s 4D lidar for our VW ID Buzz AV, which is scheduled to launch in 2022/23.”

Aeries meets the final production requirements for autonomous driving robo-taxis and large volume ADAS customers and will be available for use in development vehicles in the first half of 2020.

Aeva will unveil Aeries at CES 2020 (7 to 10 January) at the Las Vegas Convention Center (Booth 7525, Tech East, North Hall).

http://www.Aeva.com

> Read More

Sensor fusion software places control in the palm of consumer’s hand

Motion-based gesture control, 3D motion tracking and pointing are enabled by Ceva software that can be used in wireless, handheld controllers for consumer devices.

The Hillcrest Labs MotionEngine Air algorithms and software stack will be demonstrated at CES in Las Vegas (7 to 10 January 2020). The production-ready software delivers low power, motion-based gesture control, 3D motion tracking and pointing for consumer handheld devices in high volume markets, such as smartphone and PC stylus pens, smart TV and over-the-top (OTT) remote controls, game controllers, augmented reality and virtual reality (AR and VR) controllers, and PC peripherals.

Ceva reports that advances in low-power inertial sensors and Bluetooth Low Energy, combined with the MotionEngine Air sensor fusion software, yield sub-mA power draw for the whole system, delivering precise, interactive and intuitive motion control for always-on, always-aware devices.

MotionEngine Air software is optimised for use in embedded devices. It provides precise pointing, gesture and motion control algorithms, including a cursor for point-and-click control of an onscreen user interface and gestures such as flick, shake, tap, tilt, rotate and circle for intuitive user interface control. Six-axis sensor fusion enables 3D motion tracking for gaming and VR, while motion events (pick-up, flip and stability detection) enable power savings. The software also features patented orientation compensation and adaptive tremor removal.
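The six-axis fusion idea can be illustrated with the simplest such algorithm, a complementary filter: the gyroscope gives fast but drifting angle updates, the accelerometer gives noisy but gravity-referenced tilt, and blending the two tracks orientation stably. This is a minimal sketch of the general technique, not Ceva/Hillcrest Labs code; production stacks use more elaborate fusion.

```python
# Minimal complementary filter for one tilt (pitch) axis.
# Gyro integration is trusted short-term; the accelerometer's gravity
# reference corrects long-term drift.
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One pitch-angle update. Angles/rates in radians, dt in seconds."""
    gyro_angle = angle + gyro_rate * dt          # integrate angular rate
    accel_angle = math.atan2(accel_x, accel_z)   # tilt from gravity vector
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```

Called once per sensor sample, the filter keeps roughly 98% of the gyro estimate and nudges it 2% toward the accelerometer reading each step, which is what allows sub-mA tracking without an expensive full Kalman filter.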

The software stack has host drivers and sensor management to streamline software integration with popular operating systems including Android, Windows, MacOS and Linux.

The MotionEngine Air software is flexible with a low power and small memory footprint and can run on a variety of processors, including Arm Cortex-M, RISC-V and CEVA-BX and CEVA-TeakLite families of DSPs. It can be delivered in multiple configurations – including one that requires an accelerometer plus gyroscope (IMU) and a gesture and motion event-based one that requires only an accelerometer.

MotionEngine Air software is pre-qualified for use with sensors from the leading inertial sensor suppliers, says Ceva.

The company will demonstrate its MotionEngine Air software and evaluation board during CES 2020 at its hospitality suite in the Westgate Hotel.

http://www.ceva-dsp.com 

> Read More

Partners to develop AI hardware and software for autonomous vehicles

Videantis, which provides automotive deep learning, computer vision and video coding solutions, has announced that it will partner with the Fraunhofer Institute for Integrated Circuits IIS, Infineon and other leading companies and universities to develop an artificial intelligence (AI) ASIC and software development tools specifically for intelligent autonomous vehicles.

The Videantis AI multi-core processor platform and tool flow has been selected for the KI-Flex autonomous driving chip project.

Autonomous driving relies on fast and reliable processing and merging of data from several lidar, camera and radar sensors in the vehicle. This data can provide an accurate picture of the traffic conditions and environment to allow the vehicle to make intelligent decisions when driving. The process of intelligently analysing these volumes of sensor data requires high-performance, efficient, and versatile compute solutions.
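The merging step described above can be illustrated with the standard inverse-variance fusion rule: independent estimates of the same quantity (say, an obstacle's position from lidar, camera and radar) are combined with weights proportional to each sensor's confidence. This is a generic one-dimensional sketch of the technique, not the KI-Flex design; real AV stacks fuse full state vectors with Kalman-style filters.

```python
# Inverse-variance (least-squares) fusion of independent scalar estimates.
# Each input is (value, variance); lower variance means higher trust.
def fuse(estimates):
    """Return (fused value, fused variance) for a list of (value, variance)."""
    inv_vars = [1.0 / var for _, var in estimates]
    fused_var = 1.0 / sum(inv_vars)                       # always <= smallest input
    fused_val = fused_var * sum(v / var for v, var in estimates)
    return fused_val, fused_var
```

Note that the fused variance is smaller than any single sensor's variance, which is why combining lidar, camera and radar gives a more accurate picture than any one sensor alone.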

Under the KI-Flex project, Videantis, Fraunhofer IIS and partners will develop a powerful system-on-chip and associated software development tools. The AI multi-core processor will run the algorithms used for sensor signal processing and sensor data fusion, enabling the vehicle’s exact position and environment to be determined. The chip will integrate next-generation Videantis processors and technology to run these demanding artificial intelligence algorithms in real time and with low power consumption, and will be supported by an automated tool flow from Videantis that automatically distributes and maps AI workloads onto the parallel architecture.

Under the KI-Flex program, Videantis will integrate its multi-core processor into a software-programmable and reconfigurable chip that processes the sensor data using AI-based methods for autonomous driving.

The project, which runs until August 2022, is funded by the German Federal Ministry of Education and Research (BMBF). Fraunhofer IIS leads the project consortium, which comprises Ibeo Automotive Systems, Infineon Technologies, Videantis, Technical University of Munich (Chair of Robotics, Artificial Intelligence and Real-time Systems), Fraunhofer Institute for Open Communication Systems FOKUS, Daimler Center for Automotive IT Innovations (DCAITI, Technical University of Berlin) and FAU Erlangen-Nürnberg (Chair of Computer Science 3: Computer Architecture).

Videantis is headquartered in Hannover, Germany. It provides deep learning, computer vision and video processor IP for flexible computer vision, imaging and multi-standard hardware/software video coding for the automotive, mobile, consumer, and embedded markets. Based on a unified processor platform approach that is licensed to chip manufacturers, Videantis provides tailored solutions to meet the specific needs of its customers. Core competencies are deep camera and video application knowledge combined with SoC design and system architecture expertise. Target applications are advanced driver assistance systems (ADAS) and autonomous driving, mobile phones, augmented reality / virtual reality (AR/VR), IoT, gesture interfacing, computational photography, in-car infotainment, and over-the-top (OTT) TV.

http://www.videantis.com

> Read More

About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily news updates, new products and industry news. To stay up-to-date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe. Simply click this link to register here: Smart Cities Registration