“World’s smallest” 3D image sensor authenticates faces

At CES this week, Infineon will present what it claims is the world’s smallest 3D image sensor for face authentication and photo effects.

Infineon Technologies has collaborated with software and 3D time of flight (ToF) system specialist pmdtechnologies to develop what it claims is the world’s smallest and most powerful 3D image sensor. The Real3 chip measures 4.4 x 5.1mm and is the fifth generation of ToF depth sensors from Infineon.

Andreas Urschitz, president of the power management and multi-market division at Infineon, said: “We see great growth potential for 3D sensors, since the range of applications in the areas of security, image use and context-based interaction with the devices will steadily increase.” The 3D sensor also allows the device to be controlled via gestures, so that human-machine interaction is context-based and touch-free.

The ToF depth-sensing technology captures an accurate 3D image of faces, hand details or objects, which is required to verify that an image matches the original when authorising payment transactions on mobile phones via facial recognition. This function requires extremely reliable and secure capture and transmission of the high-resolution 3D image data. The same applies to securely unlocking devices with a 3D image. The Infineon 3D image sensor also works in extreme lighting conditions such as strong sunlight or darkness.
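The depth measurement behind such sensors can be illustrated with a back-of-the-envelope calculation: in indirect ToF, depth is recovered from the phase shift of modulated infrared light. The sketch below shows the principle only; the modulation frequency and function names are hypothetical, not Infineon specifications.

```python
# Illustrative sketch of the indirect time-of-flight (ToF) depth principle.
# The 60 MHz modulation frequency is a hypothetical example value.
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Depth in metres from the measured phase shift of the returned signal."""
    # Round-trip time inferred from phase: t = phase / (2*pi*f)
    round_trip = phase_shift_rad / (2 * math.pi * mod_freq_hz)
    return C * round_trip / 2  # halve: light travels out and back

# e.g. a quarter-cycle phase shift at 60 MHz modulation -> ~0.62 m
print(round(tof_depth(math.pi / 2, 60e6), 3))
```

A real sensor measures this phase per pixel, which is why the same hardware yields a full depth map usable for both face matching and bokeh-style photo effects.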

The IRS2887C 3D image sensor also has additional options for photography, such as enhanced autofocus, a bokeh effect for photos and video, and improved resolution in poor lighting conditions. Real-time full-3D mapping also allows authentic augmented reality experiences.

Production will begin in the middle of 2020.

In addition, Infineon Technologies offers an optimised illumination driver (IRS9100C).

Infineon Technologies provides semiconductors to “make life easier, safer and greener”. It has approximately 41,400 employees worldwide.

http://www.infineon.com/real3


Integrated IP and software platform streamlines contextually-aware IoT devices

At CES this week, Ceva will demonstrate its SenslinQ integrated hardware IP and software platform, designed to streamline the development of contextually-aware IoT devices.

The platform aggregates sensor fusion, sound and connectivity technologies to collect, process and link data from multiple sensors, enabling intelligent devices to understand their surroundings, explains the company.

Contextual awareness adds value and enhances the user experience of smartphones, laptops, augmented reality/virtual reality (AR/VR) headsets, robots, hearables and wearables. The SenslinQ platform centralises the workloads that require an intimate understanding of the physical behaviours and anomalies of sensors. It collects data from multiple sensors within a device, including microphones, radars, inertial measurement units (IMUs), environmental sensors, and time of flight (ToF) sensors, and conducts front-end signal processing such as noise suppression and filtering on this data. It applies algorithms to create “context enablers” such as activity classification, voice and sound detection, and presence and proximity detection. These context enablers can be fused on a device or sent wirelessly via Bluetooth, Wi-Fi or NB-IoT, to a local edge computer or the cloud to determine and adapt the device to its environment.
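The flow described above can be sketched in miniature: raw sensor samples are filtered on-device, then reduced to a semantic "context enabler" label that can be forwarded over a wireless link. All names and thresholds below are illustrative and are not part of the SenslinQ API.

```python
# Hypothetical sketch of a context-enabler pipeline: front-end filtering
# followed by a toy activity classifier. Thresholds are invented for
# illustration only.
from statistics import mean

def moving_average(samples, window=3):
    """Front-end signal processing step: simple noise suppression."""
    return [mean(samples[max(0, i - window + 1):i + 1])
            for i in range(len(samples))]

def classify_activity(accel_magnitudes):
    """Reduce accelerometer magnitudes (in g) to a context-enabler label."""
    smoothed = moving_average(accel_magnitudes)
    energy = mean(abs(x - 1.0) for x in smoothed)  # deviation from 1 g
    if energy < 0.05:
        return "stationary"
    return "walking" if energy < 0.5 else "running"

print(classify_activity([1.0, 1.01, 0.99, 1.0]))  # prints "stationary"
print(classify_activity([1.8, 0.3, 1.6, 0.4]))    # prints "walking"
```

In the platform's architecture, a label like this would be the compact payload sent over Bluetooth, Wi-Fi or NB-IoT, rather than the raw sensor stream.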

The customisable hardware reference design is composed of an Arm or RISC-V microcontroller, CEVA-BX DSPs and a wireless connectivity island, such as the RivieraWaves Bluetooth, Wi-Fi or Dragonfly NB-IoT platforms, or other connectivity standards provided by the customer or third parties. These three components are connected using standard system interfaces.

The SenslinQ software comprises a portfolio of ready-to-use software libraries from CEVA and its ecosystem partners. Libraries include the Hillcrest Labs MotionEngine software packages for sensor fusion and activity classification in mobile, wearables and robots, the ClearVox front-end voice processing, WhisPro speech recognition, and DSP and artificial intelligence (AI) libraries. There are also third-party software components for active noise cancellation (ANC), sound sensing and 3D audio.

The accompanying SenslinQ framework provides Linux-based hardware abstraction layer (HAL) reference code and application programming interfaces (APIs) for data and control exchange between the multiple processors and sensors.

https://www.ceva-dsp.com


Automotive Ethernet switch is “key component” for high speed networks

At CES, in Las Vegas this week, NXP Semiconductors will introduce an Ethernet switch that it describes as a “key component” of vehicle networks, with high levels of performance, safety and security.

The SJA1110 multi-gigabit Ethernet switch is designed to help vehicle manufacturers deliver the high-speed networks required for connected vehicles. It is believed to be the first automotive Ethernet switch with built-in safety capabilities, offering integrated 100BASE-T1 PHYs, hardware-assisted security as well as multi-gigabit interfaces. It is optimised for integration with NXP’s S32G vehicle network processor and is designed to be used with the VR5510 power management IC in vehicle networks.

It has been introduced to support the Ethernet-based sensors, actuators and processing units, as well as the service-oriented gateways and domain controllers, required for over-the-air updates and data-driven applications in connected vehicle networks. These networks must be scalable and move data quickly and securely. They must also deliver functional safety in case of failure.

The SJA1110 Ethernet switch is aligned with the latest time-sensitive networking (TSN) standards and offers integrated 100BASE-T1 PHYs, hardware-assisted security and safety capabilities along with multi-gigabit interfaces.

It enables customers to meet ASIL requirements and implement dedicated failure-detection mechanisms for predictive maintenance.

The SJA1110 switch processes every Ethernet frame reaching the engine control unit (ECU) by validating it against hardware-based security rules which collect statistics and can trigger escalations if something does not conform to specification.
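The rule-checking concept described above can be sketched as follows. This is purely illustrative software mirroring the idea of whitelist-style rules with statistics and escalation; the actual SJA1110 implements its rules in hardware, and the rule format shown here is invented.

```python
# Illustrative sketch of rule-based frame validation with statistics and
# escalation. The ALLOWED rule tuple (EtherType, min length, max length)
# is a hypothetical format, not the SJA1110's actual rule encoding.
from collections import Counter

ALLOWED = {("0x0800", 64, 1518)}  # permit IPv4 frames of standard length

stats = Counter()

def check_frame(ethertype: str, length: int) -> bool:
    """Return True if the frame conforms; update statistics either way."""
    for etype, lo, hi in ALLOWED:
        if ethertype == etype and lo <= length <= hi:
            stats["ok"] += 1
            return True
    stats["violation"] += 1  # a real switch could escalate to the host here
    return False

print(check_frame("0x0800", 512))   # prints True
print(check_frame("0x88B5", 512))   # prints False - EtherType not whitelisted
```

The key design point is that every frame is checked inline at wire speed, so non-conforming traffic is counted and flagged before it reaches the ECU's software.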

It is available in four hardware- and software-compatible variants, with a set of NXP original software and an open integrated controller.

NXP Semiconductors enables secure connections for a smarter world, providing secure connectivity solutions for embedded applications in the automotive, industrial and IoT, mobile, and communication infrastructure markets. The company has approximately 30,000 employees in more than 30 countries.

http://www.nxp.com


RoboSense claims solid-state lidar is a world-first

RoboSense will demonstrate the world’s first smart, solid-state lidar (light detection and ranging) system, the RS-LiDAR-M1Smart (smart sensor version), at CES 2020 in Las Vegas (7 to 10 January) at Booth 6138, LVCC North Hall, along with an on-vehicle public road test.

The RS-LiDAR-M1Simple is smaller, with dimensions of 110 x 50 x 120mm (4.3 x 1.9 x 4.7in), half the size of the previous version. It is equipped with an artificial intelligence (AI) perception algorithm that takes advantage of lidar’s potential to transform conventional 3D lidar sensors into a full data analysis and comprehension system, providing semantic-level structured environment information in real time for autonomous vehicle decision making.

The RS-LiDAR-M1 family inherits the performance advantages of traditional mechanical lidar, and meets every automotive-grade requirement, says RoboSense, including intelligence, low cost, stability, simplified structure and small size.

“The RS-LiDAR-M1 is an optimal choice for the serial production of self-driving cars,” said Mark Qiu, chief operating officer at RoboSense. “The sooner solid-state lidar is used, the sooner production will be accelerated to mass-market levels,” he added.

The system has 125 laser beams for a field of view of 120 degrees; this is the largest field of view among released MEMS solid-state lidar products worldwide, claims the company. RoboSense uses 905nm lasers, which are cost-efficient, automotive grade and small in size, instead of expensive 1550nm lasers. It also reports a detection range of 150m at 10 per cent reflectivity, which is also the longest detection range for MEMS solid-state lidar, says the company. The frame rate of the RS-LiDAR-M1 is increased to 15Hz, which reduces the point cloud distortion caused by target movement.
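The figures quoted above can be put in perspective with a back-of-the-envelope calculation: the round-trip time of light at the claimed 150m range, and how a higher frame rate reduces how far a target moves within one frame (the source of the point cloud distortion mentioned). The function names and example speed are illustrative.

```python
# Back-of-the-envelope check on the lidar figures quoted in the article.
C = 299_792_458.0  # speed of light, m/s

def round_trip_time_us(range_m: float) -> float:
    """Time in microseconds for a laser pulse to reach a target and return."""
    return 2 * range_m / C * 1e6

def movement_per_frame_m(speed_mps: float, frame_rate_hz: float) -> float:
    """How far a target moves between successive frames."""
    return speed_mps / frame_rate_hz

print(round(round_trip_time_us(150), 3))  # ~1.0 microsecond at 150 m
print(movement_per_frame_m(30, 15))       # 2.0 m per frame at 15 Hz
print(movement_per_frame_m(30, 10))       # 3.0 m per frame at 10 Hz
```

At roughly one microsecond per 150m return, the optical flight time is negligible; it is the frame interval that dominates distortion, which is why raising the rate from 10Hz to 15Hz helps.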

The solid-state lidar’s size has been halved and is one-tenth that of a conventional 64-beam mechanical lidar. The RS-LiDAR-M1 can be easily embedded into the car’s body while still maintaining the vehicle’s appearance, confirmed RoboSense.

The RS-LiDAR-M1 uses VDA6.3 as the basis for project management, and the development of all modules undergoes a complete V-model closed loop. RoboSense implemented the IATF16949 quality management system and ISO26262 functional safety standards, combining ISO16750 test requirements and other automotive-grade reliability specifications to verify the RS-LiDAR-M1 products.

Based on the AEC-Q100 standard and the characteristics of the MEMS micro-mirror, a total of ten verification test groups were designed, covering temperature, humidity, packaging process, electromagnetic compatibility, mechanical vibration and shock, and lifetime. The cumulative test time for all test samples has now exceeded 100,000 hours.

The longest-running prototype has been tested for more than 300 days, while the total road test mileage exceeds 150,000km with no degradation found in various testing scenarios, reports RoboSense.

The RS-LiDAR-M1 was tested for rain and fog under different light and wind speed conditions. It met the standards and the final mass-produced RS-LiDAR-M1 will adapt to all climatic and working conditions.

As a solid-state product, the RS-LiDAR-M1 has minimal wear and tear compared with movable mechanical structures, eliminating potential optoelectronic device failures due to mechanical rotation. The characteristics of solid state also provide a reasonable internal layout, heat dissipation and stability, adds RoboSense, another advantage over mechanical lidar.

The RS-LiDAR-M1Smart is a comprehensive system with sensor hardware, AI point cloud algorithm, and chipsets, for an end-to-end environment perception system. RoboSense’s AI perception algorithm injects the sensor with structured semantic-level comprehensive information, focusing on the perception of moving objects.

The coin-sized module processes the optical-mechanical system results to meet autonomous driving performance and mass production requirements, explains the company. The part count has been reduced from hundreds to dozens compared with traditional mechanical lidar, reducing cost and production time.

The scalability and layout flexibility of the optical module ensures subsequent MEMS lidar products and supports customisation for different application cases.

The smart sensor version of the RS-LiDAR-M1 is currently available for key customers who have purchased the solid-state LiDAR A-C sample kit, and will be available to all customers after Q1 2020.

http://www.robosense.ai


About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily news updates, new products and industry news. To stay up-to-date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe. Simply click this link to register here: Smart Cities Registration