Automotive Ethernet switch is “key component” for high-speed networks

At CES in Las Vegas this week, NXP Semiconductors will introduce an Ethernet switch that it describes as a “key component” of vehicle networks, with high levels of performance, safety and security.

The SJA1110 multi-gigabit Ethernet switch is designed to help vehicle manufacturers deliver the high-speed networks required for connected vehicles. It is believed to be the first automotive Ethernet switch with built-in safety capabilities, offering integrated 100BASE-T1 PHYs, hardware-assisted security and multi-gigabit interfaces. It is optimised for integration with NXP’s S32G vehicle network processor and is designed to be used with the VR5510 power management IC in vehicle networks.

It has been introduced to support the Ethernet-based web of sensors, actuators and processing units, and the service-oriented gateways and domain controllers, required for over-the-air updates and data-driven applications in connected vehicle networks. These networks must be scalable and move data quickly and securely, and they must also deliver functional safety in case of failure.

The SJA1110 Ethernet switch is aligned to the latest time-sensitive networking (TSN) standards and offers integrated 100BASE-T1 PHYs, hardware-assisted security and safety capabilities along with multi-gigabit interfaces.

It enables customers to meet ASIL requirements and implement dedicated failure-detection mechanisms for predictive maintenance.

The SJA1110 switch processes every Ethernet frame reaching the electronic control unit (ECU), validating it against hardware-based security rules that collect statistics and can trigger escalations if a frame does not conform to specification.
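To make that rule-based filtering concrete, the sketch below models in software how per-frame security rules might collect statistics and escalate non-conforming frames. The FrameRule structure, check_frame function and rule fields are hypothetical simplifications for illustration only; they are not NXP’s hardware rule engine or its configuration interface.

```python
# Illustrative sketch of per-frame validation against security rules.
# All names and fields are hypothetical, not an NXP API.
from dataclasses import dataclass


@dataclass
class FrameRule:
    name: str
    allowed_src_macs: set    # frames must originate from these MACs
    allowed_ethertypes: set  # e.g. {0x0800 (IPv4), 0x88F7 (PTP)}
    max_length: int = 1522   # drop oversized frames
    passed: int = 0          # statistics collected per rule
    dropped: int = 0


def check_frame(rule: FrameRule, src_mac: str, ethertype: int, length: int,
                escalate=print) -> bool:
    """Validate one frame, update the rule's counters and escalate on a violation."""
    ok = (src_mac in rule.allowed_src_macs
          and ethertype in rule.allowed_ethertypes
          and length <= rule.max_length)
    if ok:
        rule.passed += 1
    else:
        rule.dropped += 1
        escalate(f"[{rule.name}] non-conforming frame from {src_mac}")
    return ok


# Example: a rule guarding the port of a camera ECU
cam_rule = FrameRule("camera_port", {"00:11:22:33:44:55"}, {0x0800})
check_frame(cam_rule, "de:ad:be:ef:00:01", 0x0800, 900)  # dropped and escalated
```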

It is available in four hardware- and software-compatible variants, with a set of original NXP software and an open integrated controller.

NXP Semiconductors enables secure connections for a smarter world, providing secure connectivity solutions for embedded applications in the automotive, industrial and IoT, mobile, and communication infrastructure markets. The company has approximately 30,000 employees in more than 30 countries.

http://www.nxp.com


RoboSense claims solid-state lidar is a world-first

RoboSense will demonstrate the world’s first smart, solid-state lidar (light detection and ranging) system, the RS-LiDAR-M1Smart (smart sensor version), at CES 2020 in Las Vegas (7 to 10 January), at Booth 6138, LVCC North Hall, together with an on-vehicle public road test.

The RS-LiDAR-M1Simple is smaller: with dimensions of 110 x 50 x 120mm (4.3 x 1.9 x 4.7in), it is half the size of the previous version. The Smart version is equipped with an artificial intelligence (AI) perception algorithm that takes advantage of lidar’s potential, transforming a conventional 3D lidar sensor into a full data analysis and comprehension system that provides semantic-level structured environment information in real time for autonomous vehicle decision making.

The RS-LiDAR-M1 family inherits the performance advantages of traditional mechanical lidar, and meets every automotive-grade requirement, says RoboSense, including intelligence, low cost, stability, simplified structure and small size.

“The RS-LiDAR-M1 is an optimal choice for the serial production of self-driving cars,” said Mark Qiu, chief operating officer at RoboSense. “The sooner solid-state lidar is used, the sooner production will be accelerated to mass-market levels,” he added.

The system has 125 laser beams covering a field of view of 120 degrees, which the company claims is the largest field of view of any released MEMS solid-state lidar worldwide. RoboSense uses 905nm lasers, which are cost-efficient, automotive grade and small, instead of expensive 1550nm lasers. It also reports a ranging capability of 150m at 10 per cent reflectivity, which it says is the longest detection range of any MEMS solid-state lidar. The frame rate of the RS-LiDAR-M1 has been increased to 15Hz, which reduces the point cloud distortion caused by target movement.

The solid-state lidar is half the size of the previous version and one-tenth the size of a conventional 64-beam mechanical lidar. The RS-LiDAR-M1 can be easily embedded into the car’s body while still maintaining the vehicle’s appearance, confirmed RoboSense.

The RS-LiDAR-M1 uses VDA 6.3 as the basis for project management, and the development of every module undergoes a complete V-model closed loop. RoboSense implemented the IATF 16949 quality management system and ISO 26262 functional safety standard, combined with ISO 16750 test requirements and other automotive-grade reliability specifications, to verify RS-LiDAR-M1 products.

Based on the AEC-Q100 standard and the characteristics of the MEMS micro-mirror, a total of 10 verification test groups was designed, covering temperature, humidity, packaging process, electromagnetic compatibility, mechanical vibration and shock, and lifetime. The cumulative test time for all test samples has now exceeded 100,000 hours.

The longest-running prototype has been tested for more than 300 days, while the total road test mileage exceeds 150,000km with no degradation found in various testing scenarios, reports RoboSense.

The RS-LiDAR-M1 was tested for rain and fog under different light and wind-speed conditions. It met the required standards, and RoboSense says the final mass-produced RS-LiDAR-M1 will adapt to all climatic and working conditions.

As a solid-state product, the RS-LiDAR-M1 suffers minimal wear and tear compared with movable mechanical structures, eliminating potential optoelectronic device failures caused by mechanical rotation. Its solid-state design also allows a more rational internal layout, better heat dissipation and greater stability, adds RoboSense, a further advantage over mechanical lidar.

The RS-LiDAR-M1Smart is a comprehensive system combining sensor hardware, an AI point cloud algorithm and chipsets to form an end-to-end environment perception system. RoboSense’s AI perception algorithm provides the sensor with structured, semantic-level information, focusing on the perception of moving objects.

The coin-sized module processes the results of the optical-mechanical system to meet autonomous driving performance and mass-production requirements, explains the company. The part count has been reduced from hundreds to dozens compared with traditional mechanical lidar, cutting cost and production time.

The scalability and layout flexibility of the optical module support subsequent MEMS lidar products and allow customisation for different application cases.

The smart sensor version of the RS-LiDAR-M1 is currently available for key customers who have purchased the solid-state LiDAR A-C sample kit, and will be available to all customers after Q1 2020.

http://www.robosense.ai


Wireless VR/AR haptic glove allows gamers to “feel” digital objects

At CES next week, BeBop Sensors will announce the Forte Data Glove, claimed to be the first virtual reality (VR) haptic glove integrated and exclusively designed for Oculus Quest, Oculus Link, Oculus Rift S, Microsoft Windows Mixed Reality, HTC Vive Cosmos, HTC Vive Pro, HTC Focus Plus, and Varjo VR headset technology. It is also the first haptic glove for the HTC Cosmos and for the Microsoft Windows mixed reality headsets, including HP, Lenovo, Acer, Dell, and Samsung, through integration with the HP Reverb. In addition, it is believed to be the first haptic VR glove to fully support Oculus Quest Link, which allows Oculus Quest to leverage the graphics capabilities and processing power of a VR computer for higher end VR interaction, says BeBop Sensors.

Described as the first affordable, all-day wireless VR/AR (augmented reality) data glove, the VR headset/data glove fits in a small bag for portability and requires almost no set-up, bringing VR enterprise training, maintenance and gaming to new areas. The Forte Data Glove ushers in the next generation of VR, says BeBop Sensors, by allowing people to do real practical things in the virtual world with natural hand interactions to feel different textures and surfaces.

A nine-degrees-of-freedom inertial measurement unit (IMU) is integrated to provide low-drift, reliable, pre-blended accelerometer and gyro sensor data. Six haptic actuators are located on the four fingertips, the thumb and the palm.

Up to 16 haptic sound files can reside on the glove and new files can be rapidly uploaded over Bluetooth or USB.

The sensors are fast, operating at 160Hz with instantaneous (sub-six-millisecond) response. Touch feedback gives the user more realistic and safer training for business, as well as an enhanced VR gaming experience, says the company.
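As an illustration of how a host application might consume such a data stream, the sketch below polls fingertip readings at the quoted 160Hz rate and triggers one of the 16 stored haptic clips when a fingertip crosses a contact threshold. All names (read_glove_frame, play_haptic_clip) and values are hypothetical placeholders, not BeBop Sensors’ SDK.

```python
# Purely illustrative, hypothetical API - not BeBop Sensors' SDK.
import time

FRAME_PERIOD = 1.0 / 160   # 160Hz sensor rate quoted in the article
TOUCH_THRESHOLD = 0.6      # hypothetical normalised bend/pressure level


def read_glove_frame():
    """Placeholder: one frame of normalised fingertip pressure readings."""
    return {"index_tip": 0.7, "thumb_tip": 0.2}   # dummy values


def play_haptic_clip(actuator: str, clip_slot: int):
    """Placeholder: trigger one of the 16 pre-loaded haptic clips (slots 0-15)."""
    print(f"actuator={actuator} clip={clip_slot}")


for _ in range(160):                                # roughly one second of polling
    frame = read_glove_frame()
    for finger, value in frame.items():
        if value > TOUCH_THRESHOLD:
            play_haptic_clip(finger, clip_slot=3)   # e.g. a "button press" clip
    time.sleep(FRAME_PERIOD)
```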

Hand tracking ties natively into each system’s tracking, with top-of-the-line finger tracking supplied by BeBop Sensors’ fabric sensors. Haptic sensations include hitting buttons, turning knobs and opening doors in VR/AR.

The universal open-palm design fits most people, and the glove is hygienic, breathable and cleanable, with waterproof sensors.

The glove targets enterprise, as well as location-based entertainment (LBE) gaming markets, including VR enterprise training, VR medical trials/rehabilitation, robotics and drone control, VR CAD design and review and gaming.

BeBop Sensors will be at CES in Las Vegas (7 to 10 January 2020), Booth 22032, LVCC South Hall.

http://www.bebopsensors.com


Perception software runs in sensors of autonomous vehicles

AEye will showcase its adaptive sensing platform, believed to be the first commercially available 2D/3D perception system designed to run in the sensors of autonomous vehicles, at CES 2020 (7 to 10 January). It combines deterministic and artificial intelligence (AI)-driven perception to deliver classification at speed and at range, motion forecasting and collision avoidance capabilities.

Its advent means that basic perception can be distributed to the edge of the sensor network. This allows autonomous vehicle designers to use sensors not only to search for and detect objects, but also to acquire, classify and track them. The ability to collect this information in real time both enables and enhances existing centralised perception software platforms, says AEye, by reducing latency, lowering costs and helping to secure functional safety.

A perception system at the sensor level can potentially deliver more depth, nuance and critical information than a 2D image-based system, says the company, improving prediction for advanced driver assistance systems (ADAS) and autonomous vehicles.

This in-sensor perception system is based on AEye’s flexible iDAR platform that enables intelligent and adaptive sensing. The iDAR platform is based on biomimicry and replicates the perception of human vision through a combination of lidar, fused camera and AI. It is the first system to take a fused approach to perception – leveraging iDAR’s Dynamic Vixels, which combine 2D camera data (pixels) with 3D lidar data (voxels) inside the sensor, explains AEye. The software-definable perception platform allows for disparate sensor modalities to complement each other, enabling the camera and lidar to work together to make each sensor more powerful, while providing “informed redundancy” for functional safety.
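To make the pixel/voxel fusion idea concrete, the sketch below shows one simplified way to build such a fused element by projecting a lidar point into a camera image and attaching the colour of the pixel it lands on. The Vixel class, the pinhole projection and the field names are assumptions made for this illustration; they do not represent AEye’s actual Dynamic Vixel format.

```python
# Conceptual sketch of a fused camera/lidar element; not AEye's data format.
from dataclasses import dataclass
import numpy as np


@dataclass
class Vixel:
    xyz: np.ndarray    # 3D lidar point (metres, sensor frame)
    intensity: float   # lidar return intensity
    rgb: tuple         # colour of the camera pixel the point projects to
    uv: tuple          # pixel coordinates in the camera image


def fuse_point(point_xyz, intensity, image, K):
    """Project one lidar point into the image using intrinsics K and attach colour."""
    x, y, z = point_xyz
    if z <= 0:
        return None                         # point is behind the camera
    u = int(K[0, 0] * x / z + K[0, 2])
    v = int(K[1, 1] * y / z + K[1, 2])
    h, w = image.shape[:2]
    if not (0 <= u < w and 0 <= v < h):
        return None                         # outside the camera's field of view
    return Vixel(np.array(point_xyz), intensity, tuple(image[v, u]), (u, v))


# Example with a dummy 640x480 image and simple pinhole intrinsics
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
image = np.zeros((480, 640, 3), dtype=np.uint8)
vixel = fuse_point((1.0, 0.2, 5.0), 0.8, image, K)
```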

Delivering perception at speed and at range has been a challenge for the autonomous vehicle industry. The reliability of detection and classification has to be improved, while extending the range at which objects can be detected, classified and tracked. The sooner an object can be classified and its trajectory accurately forecasted, the more time the vehicle has to brake, steer or accelerate in order to avoid collisions.

Rather than trying to capture as much data as possible, which requires time and power to process, second generation autonomous vehicle systems collect, manage and transform data into actionable information. The iDAR platform allows for applications ranging from ADAS safety augmentation, such as collision avoidance, to selective autonomy (highway lane change), to fully autonomous use cases in closed-loop geo-fenced or open-loop scenarios.

Engineers can now experiment using software-definable sensors without waiting for the next generation of hardware. They can adapt shot patterns in less than a second and simulate impact to find the optimal performance. They can also customise features or power usage through modular design, for instance using a smaller laser and no camera to create a specialised ADAS system, or mixing and matching short and long range lidar with camera and radar for more advanced, cost-sensitive 360 degree systems. Unlike with the industry’s previous generations of sensors, OEMs and Tier 1s can now also move algorithms into the sensors when it is appropriate, advises AEye.
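As a purely hypothetical illustration of what a software-definable shot pattern could look like, the configuration below sketches a sparse background scan plus a dense, frequently revisited region of interest. The keys and values are invented for this example and do not represent AEye’s API.

```python
# Hypothetical shot-pattern configuration, for illustration only.
scan_pattern = {
    "frame_rate_hz": 20,
    "background": {                # full field of view, sparse sampling
        "azimuth_deg": (-60, 60),
        "elevation_deg": (-15, 15),
        "point_spacing_deg": 0.4,
    },
    "regions_of_interest": [       # dense revisit around a tracked object
        {
            "azimuth_deg": (-5, 5),
            "elevation_deg": (-2, 2),
            "point_spacing_deg": 0.1,
            "revisit_rate_hz": 100,
        },
    ],
}
```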

AEye’s system more quickly and accurately searches, detects and segments objects and, as it acquires specific objects, validates that classification with velocity and orientation information. This enables the system to forecast the object’s behaviour, including inferring intent. By capturing better information faster, the system enables more accurate, timely, reliable perception, using far less power than traditional perception solutions, explains AEye.

The iDAR platform will be available via a software reference library which includes identification of objects (e.g. cars, pedestrians) in the 3D point cloud and camera. The system accurately estimates their centroids, width, height and depth to generate 3D bounding boxes for the objects.

It will also classify the type of objects detected to understand the motion characteristics of these objects. The segmentation function will further classify each point in the scene to identify the specific objects to which those points belong. This is especially important for accurately identifying finer details, such as lane divider markings on the road.
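A minimal sketch of the kind of output described above, a classified object with a centroid and a 3D bounding box derived from the points segmented to it, is shown below. The structures and the axis-aligned box fit are illustrative assumptions, not the reference library’s actual interface.

```python
# Hypothetical detection structure; not AEye's reference library interface.
from dataclasses import dataclass
import numpy as np


@dataclass
class Detection3D:
    label: str               # e.g. "car", "pedestrian"
    centroid: np.ndarray     # (x, y, z) in metres
    size: tuple              # (width, height, depth) of the bounding box
    yaw: float               # orientation relative to the vehicle, radians


def bounding_box_from_points(points: np.ndarray, label: str) -> Detection3D:
    """Fit an axis-aligned 3D box around the points segmented to one object."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    centroid = (mins + maxs) / 2.0
    w, h, d = (maxs - mins)
    return Detection3D(label, centroid, (float(w), float(h), float(d)), yaw=0.0)


# Example: two extreme points of a segmented "car" cluster
car_points = np.array([[0.0, 0.0, 10.0], [1.8, 1.5, 14.5]])
detection = bounding_box_from_points(car_points, "car")
```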

Tracking objects through space and time helps the vehicle monitor objects that could intersect its path.

For range and orientation, identifying where an object is relative to the vehicle, and how it is oriented, helps the vehicle contextualise the scene around it.

Leveraging the benefits of agile lidar to capture the speed and direction of an object’s motion relative to the vehicle provides the foundation for motion forecasting: predicting where the object will be at different times in the future. This helps the vehicle assess the risk of collision and chart a safe course.
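A simple way to picture this is a constant-velocity forecast: given an object’s measured position and velocity relative to the vehicle, project both paths forward and check the minimum predicted separation. The sketch below uses this deliberately simplified model; production systems use richer motion models and uncertainty estimates.

```python
# Constant-velocity motion forecast and a naive collision-risk check.
import numpy as np


def forecast_positions(position, velocity, horizon_s=3.0, step_s=0.1):
    """Return predicted (t, x, y) samples assuming constant velocity."""
    times = np.arange(0.0, horizon_s + step_s, step_s)
    preds = position + times[:, None] * velocity
    return list(zip(times, preds[:, 0], preds[:, 1]))


def min_separation(ego_path, obj_path):
    """Smallest predicted distance between the ego vehicle and the object."""
    return min(np.hypot(ex - ox, ey - oy)
               for (_, ex, ey), (_, ox, oy) in zip(ego_path, obj_path))


# Example: object 30m ahead travelling at 10m/s while the ego vehicle does 15m/s
ego = forecast_positions(np.array([0.0, 0.0]), np.array([15.0, 0.0]))
obj = forecast_positions(np.array([30.0, 0.0]), np.array([10.0, 0.0]))
print(f"minimum predicted separation: {min_separation(ego, obj):.1f} m")
```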

AEye’s iDAR software reference library will be available in Q1 2020.

http://www.aeye.ai


About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily news updates, new products and industry news. To stay up-to-date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe. Simply register here: Smart Cities Registration