Wireless VR/AR haptic glove allows gamers to “feel” digital objects

At CES next week, BeBop Sensors will announce the Forte Data Glove, claimed to be the first virtual reality (VR) haptic glove designed and integrated exclusively for Oculus Quest, Oculus Link, Oculus Rift S, Microsoft Windows Mixed Reality, HTC Vive Cosmos, HTC Vive Pro, HTC Focus Plus and Varjo VR headsets. It is also the first haptic glove for the HTC Cosmos and, through integration with the HP Reverb, for the Microsoft Windows Mixed Reality headsets from HP, Lenovo, Acer, Dell and Samsung. In addition, it is believed to be the first haptic VR glove to fully support Oculus Quest Link, which allows Oculus Quest to leverage the graphics capabilities and processing power of a VR computer for higher-end VR interaction, says BeBop Sensors.

Described as the first affordable, all-day wireless VR/AR (augmented reality) data glove, the headset/data glove combination fits in a small bag for portability and requires almost no setup, bringing VR enterprise training, maintenance and gaming to new settings. The Forte Data Glove ushers in the next generation of VR, says BeBop Sensors, by allowing people to do real, practical things in the virtual world, using natural hand interactions and feeling different textures and surfaces.

A nine-degree-of-freedom inertial measurement unit (IMU) is integrated to provide low-drift, reliable, pre-blended accelerometer and gyroscope data. Six haptic actuators are located on the four fingertips, the thumb and the palm.

Up to 16 haptic sound files can reside on the glove and new files can be rapidly uploaded over Bluetooth or USB.
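
BeBop Sensors does not publish the glove's programming interface, but the description suggests a simple mental model: 16 addressable haptic clip slots that the host can overwrite over Bluetooth or USB and trigger on any of the six actuators. The Python sketch below is purely illustrative; all names and behaviour are hypothetical.

```python
# Illustrative model of the glove's on-board haptic store (hypothetical API;
# BeBop Sensors has not published its actual protocol or function names).

NUM_SLOTS = 16  # up to 16 haptic "sound" files resident on the glove
ACTUATORS = ("index", "middle", "ring", "pinky", "thumb", "palm")  # six actuators

class HapticStore:
    def __init__(self):
        self.slots = [None] * NUM_SLOTS  # waveform data per slot

    def upload(self, slot: int, waveform: bytes) -> None:
        """Replace a resident haptic file (sent over Bluetooth or USB in practice)."""
        if not 0 <= slot < NUM_SLOTS:
            raise ValueError("slot must be 0-15")
        self.slots[slot] = waveform

    def trigger(self, slot: int, actuator: str) -> None:
        """Play a resident haptic clip on one of the six actuators."""
        if actuator not in ACTUATORS:
            raise ValueError(f"unknown actuator {actuator!r}")
        if self.slots[slot] is None:
            raise ValueError(f"slot {slot} is empty")
        print(f"playing slot {slot} on {actuator} actuator")

store = HapticStore()
store.upload(0, b"\x00\x7f\xff\x7f\x00")  # toy waveform
store.trigger(0, "palm")                  # e.g. a door-handle 'thunk'
```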

The sensors are fast, operating at 160Hz with near-instantaneous (sub-six-millisecond) response. The touch feedback gives the user more realistic and safer training for business, and enhanced VR gaming experiences, says the company.
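
As a quick sanity check on those figures (an inference from the quoted numbers, not a vendor specification): a 160Hz sensor rate gives a 6.25ms frame period, so a sub-six-millisecond response means the haptic path reacts within a single sensor frame.

```python
# Sanity arithmetic on the quoted figures.
sample_rate_hz = 160
frame_period_ms = 1000 / sample_rate_hz  # 6.25 ms between sensor frames
print(f"frame period: {frame_period_ms} ms")  # a sub-6 ms response fits inside one frame
```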

Hand tracking ties natively into each headset system's tracking, with top-of-the-line finger tracking supplied by BeBop Sensors' fabric sensors. Haptic effects include hitting buttons, turning knobs and opening doors, providing touch sensations in VR/AR.

The universal open-palm design fits most people, and the glove is hygienic and breathable, with waterproof sensors that allow it to be cleaned.

The glove targets enterprise as well as location-based entertainment (LBE) gaming markets, including VR enterprise training, VR medical trials/rehabilitation, robotics and drone control, VR CAD design and review, and gaming.

BeBop Sensors will be at CES in Las Vegas (7 to 10 January 2020), Booth 22032, LVCC South Hall.

http://www.bebopsensors.com

Perception software runs in sensors of autonomous vehicles

AEye will showcase its adaptive sensing platform, believed to be the first commercially available 2D/3D perception system designed to run in the sensors of autonomous vehicles, at CES 2020 (7 to 10 January). The platform combines deterministic and artificial intelligence (AI)-driven perception to deliver speed and range classification, motion forecasting and collision avoidance capabilities.

Its advent means that basic perception can be distributed to the edge of the sensor network. This allows autonomous vehicle designers to use sensors not only to search for and detect objects, but also to acquire and, ultimately, classify and track them. The ability to collect this information in real time both enables and enhances existing centralised perception software platforms, says AEye, by reducing latency, lowering costs and securing functional safety.

A perception system at the sensor level can potentially deliver more depth, nuance and critical information than a 2D image-based system, says the company, improving prediction for advanced driver assistance systems (ADAS) and autonomous vehicles.

This in-sensor perception system is based on AEye’s flexible iDAR platform that enables intelligent and adaptive sensing. The iDAR platform is based on biomimicry and replicates the perception of human vision through a combination of lidar, fused camera and AI. It is the first system to take a fused approach to perception – leveraging iDAR’s Dynamic Vixels, which combine 2D camera data (pixels) with 3D lidar data (voxels) inside the sensor, explains AEye. The software-definable perception platform allows for disparate sensor modalities to complement each other, enabling the camera and lidar to work together to make each sensor more powerful, while providing “informed redundancy” for functional safety.
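
Dynamic Vixels are AEye's proprietary data type and their internals are not public; the sketch below only illustrates the general idea of pairing a 2D camera sample (pixel) with the corresponding 3D lidar return (voxel) in a single record at the sensor.

```python
from dataclasses import dataclass

# Illustrative only: "Dynamic Vixels" is AEye's proprietary construct; this
# sketch shows the general idea of fusing camera and lidar samples per point.

@dataclass
class Pixel:
    u: int; v: int          # image coordinates
    r: int; g: int; b: int  # colour

@dataclass
class Voxel:
    x: float; y: float; z: float  # position, metres
    intensity: float              # lidar return strength

@dataclass
class Vixel:
    pixel: Pixel
    voxel: Voxel

# Fuse one camera sample with the lidar return it corresponds to
v = Vixel(Pixel(640, 360, 200, 30, 30), Voxel(12.4, -0.8, 1.1, 0.7))
print(v)
```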

Delivering perception at speed and at range has been a challenge for the autonomous vehicle industry. The reliability of detection and classification has to be improved, while extending the range at which objects can be detected, classified and tracked. The sooner an object can be classified and its trajectory accurately forecasted, the more time the vehicle has to brake, steer or accelerate in order to avoid collisions.

Rather than trying to capture as much data as possible, which requires time and power to process, second generation autonomous vehicle systems collect, manage and transform data into actionable information. The iDAR platform allows for applications ranging from ADAS safety augmentation, such as collision avoidance, to selective autonomy (highway lane change), to fully autonomous use cases in closed-loop geo-fenced or open-loop scenarios.

Engineers can now experiment using software-definable sensors without waiting for the next generation of hardware. They can adapt shot patterns in less than a second and simulate the impact to find optimal performance. They can also customise features or power usage through modular design, for instance using a smaller laser and no camera to create a specialised ADAS system, or mixing and matching short and long range lidar with camera and radar for more advanced, cost-sensitive 360 degree systems. Unlike with previous generations of sensors, OEMs and Tier 1s can now also move algorithms into the sensors where appropriate, advises AEye.

AEye's system searches, detects and segments objects more quickly and accurately and, as it acquires specific objects, validates their classification with velocity and orientation information. This enables the system to forecast an object's behaviour, including inferring intent. By capturing better information faster, the system enables more accurate, timely and reliable perception, using far less power than traditional perception solutions, explains AEye.

The iDAR platform will be available via a software reference library, which includes identification of objects (e.g. cars, pedestrians) in the 3D point cloud and camera imagery. The system accurately estimates their centroids, width, height and depth to generate 3D bounding boxes for the objects.
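
As an illustration of that last step (a generic construction, not AEye's implementation), an axis-aligned 3D bounding box can be reconstructed from an estimated centroid and the object's width, height and depth:

```python
import numpy as np

# Generate the eight corners of an axis-aligned 3D box from its
# centroid and extents (width, height, depth).
def box_corners(centroid, w, h, d):
    offsets = np.array([[sx * w / 2, sy * h / 2, sz * d / 2]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    return np.asarray(centroid) + offsets

print(box_corners((10.0, 2.0, 0.9), 4.5, 1.8, 1.5))  # a car-sized box
```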

It will also classify the type of objects detected, to understand their motion characteristics. The segmentation function will further classify each point in the scene to identify the specific object those points belong to. This is especially important for accurately identifying finer details, such as lane divider markings on the road.
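
As a toy illustration of what per-point segmentation output looks like (not AEye's classifier), each lidar point carries a class label that downstream code can filter on, which is what makes fine structure such as lane markings separable from the surrounding road surface:

```python
import numpy as np

# Per-point labels alongside a point cloud; filtering by label isolates
# fine structure such as lane divider markings.
points = np.array([[5.0, 0.1, 0.0],   # on the lane divider
                   [5.0, 1.8, 0.0],   # road surface
                   [5.2, 0.1, 0.0]])  # on the lane divider
labels = np.array(["lane_marking", "road", "lane_marking"])

print(points[labels == "lane_marking"])
```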

Tracking objects through space and time helps the vehicle monitor objects that could intersect its path.

For range and orientation, identifying where an object is relative to the vehicle, and how it is oriented, helps the vehicle contextualise the scene around it.

Leveraging agile lidar to capture the speed and direction of an object's motion relative to the vehicle provides the foundation for motion forecasting: predicting where the object will be at different times in the future. This helps the vehicle assess the risk of collision and chart a safe course.
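
A minimal sketch of what motion forecasting means in practice, assuming a simple constant-velocity model (AEye's actual forecasting models are not published): with per-point velocity available, the predicted position is p(t) = p0 + v·t, and a shrinking range to the ego vehicle flags a potential collision.

```python
import numpy as np

# Constant-velocity motion forecast: p(t) = p0 + v * t.
def forecast(p0, v, t):
    return np.asarray(p0) + np.asarray(v) * t

p0 = np.array([30.0, -4.0])  # object 30 m ahead, 4 m to the right
v = np.array([-10.0, 1.0])   # closing at 10 m/s, drifting 1 m/s leftward
for t in (0.5, 1.0, 2.0):
    pos = forecast(p0, v, t)
    print(f"t={t} s  position={pos}  range={np.linalg.norm(pos):.1f} m")
```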

AEye’s iDAR software reference library will be available in Q1 2020.

http://www.aeye.ai

4D lidar chip holds promise of mass scale autonomous vehicles

The Aeries frequency modulated continuous wave (FMCW) lidar-on-chip sensing system combines high range performance with low cost, to take autonomous driving to mass scale, claims Aeva.

The Aeries system integrates the key elements of a lidar sensor into a miniaturised photonics chip, says Aeva. The company claims that its 4D lidar-on-chip significantly reduces the size and power of the device while achieving full range performance of over 300m for low-reflectivity objects, plus the ability to measure instantaneous velocity for every point; this is a first for the autonomous vehicle industry, says Aeva. The lidar-on-chip will cost less than $500 at scale, compared with tens of thousands of dollars for today's lidar sensors.

“One of the biggest roadblocks to bringing autonomous vehicles to the mainstream has been the lack of a high-performance and low-cost LiDAR that can scale to millions of units per year,” said Soroush Salehian, Aeva’s co-founder. “From the beginning we’ve believed that the only way to truly address this problem was to build a unique LiDAR-on-chip sensing system that can be manufactured at silicon scale.”

A distinguishing feature of Aeva's approach is that the Aeries sensing system measures the instantaneous velocity of every point on objects beyond 300m. Aeva's lidar is also free from interference from other sensors or sunlight, and it operates at only a fraction of the optical power typically required to achieve long-range performance. These factors increase the safety margin and scalability of autonomous driving.
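
The per-point velocity measurement follows from standard coherent-detection physics (this is the general FMCW principle, not Aeva's specific implementation): the Doppler shift of the returned light encodes radial velocity as v = f_d × λ / 2. A minimal sketch, assuming a 1550nm operating wavelength:

```python
# Radial velocity from Doppler shift in FMCW lidar: v = f_d * wavelength / 2.
WAVELENGTH = 1.55e-6  # metres; 1550 nm is a common FMCW lidar band (assumed)

def radial_velocity(doppler_shift_hz: float) -> float:
    return doppler_shift_hz * WAVELENGTH / 2

# A 12.9 MHz Doppler shift corresponds to roughly 10 m/s closing speed.
print(f"{radial_velocity(12.9e6):.2f} m/s")
```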

Aeva also differs from other FMCW approaches in being able to provide multiple millions of points per second for each beam, resulting in high-fidelity data that the company claims is unprecedented.

Co-founder Mina Rezk explains: “A key differentiator of our approach is breaking the dependency between maximum range and points density, which has been a barrier for time-of-flight and FMCW lidars . . . Our 4D lidar integrates multiple beams on a chip, each . . . capable of measuring more than two million points per second at distances beyond 300m.”

Porsche SE, a majority voting shareholder of the Volkswagen Group, has recently invested in Aeva, expanding the existing partnership between Aeva and Audi’s self-driving unit (Audi is part of the Volkswagen Group). Aeva points out that this is Porsche’s only investment in lidar technology to date.

Alex Hitzinger, senior vice president of Autonomous Driving at VW Group and CEO of VW Autonomy, said: “Together we are looking into using Aeva’s 4D lidar for our VW ID Buzz AV, which is scheduled to launch in 2022/23.”

Aeries meets the final production requirements of autonomous driving robo-taxi and large-volume ADAS customers, and will be available for use in development vehicles in the first half of 2020.

Aeva will unveil Aeries at CES 2020 (7 to 10 January) at the Las Vegas Convention Center (Booth 7525, Tech East, North Hall).

http://www.Aeva.com

Sensor fusion software places control in the palm of consumer’s hand

Software from Ceva enables motion-based gesture control, together with 3D motion tracking and pointing, for wireless, handheld controllers for consumer devices.

The Hillcrest Labs MotionEngine Air algorithms and software stack will be demonstrated at CES in Las Vegas (7 to 10 January 2020). The production-ready software delivers low power, motion-based gesture control, 3D motion tracking and pointing for consumer handheld devices in high volume markets, such as smartphone and PC stylus pens, smart TV and over-the-top (OTT) remote controls, game controllers, augmented reality and virtual reality (AR and VR) controllers, and PC peripherals.

Ceva reports that advances in low-power inertial sensors and Bluetooth Low Energy, combined with the MotionEngine Air sensor fusion software, yield sub-mA power draw for the whole system, delivering precise, interactive and intuitive always-on, always-aware motion control.

MotionEngine Air software is optimised for use in embedded devices. It provides precise pointing, gesture and motion-control algorithms; a cursor for point-and-click control of an onscreen user interface; gestures such as flick, shake, tap, tilt, rotate and circle for intuitive user interface control; six-axis sensor fusion enabling 3D motion tracking for gaming and VR, as sketched below; motion events (pick-up, flip and stability detection) to enable power savings; and patented orientation compensation and adaptive tremor removal.
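
Ceva's fusion algorithms are proprietary, so as a generic illustration of what six-axis (accelerometer plus gyroscope) fusion does, the sketch below uses a basic complementary filter: fast-but-drifting gyro integration is blended with noisy-but-absolute accelerometer tilt.

```python
import math

# Generic complementary filter for pitch (not Ceva's MotionEngine algorithm).
ALPHA = 0.98  # weighting toward the gyro over one step; assumed tuning value

def fuse_pitch(pitch, gyro_rate, accel, dt):
    """pitch in rad, gyro_rate in rad/s, accel = (ax, ay, az) in g, dt in s."""
    ax, ay, az = accel
    accel_pitch = math.atan2(-ax, math.hypot(ay, az))  # tilt from gravity
    gyro_pitch = pitch + gyro_rate * dt                # integrated rate
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch

pitch = 0.0
for _ in range(100):  # 100 samples at 100 Hz
    pitch = fuse_pitch(pitch, 0.1, (0.0, 0.0, 1.0), 0.01)
print(f"fused pitch: {math.degrees(pitch):.2f} deg")
```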

The software stack has host drivers and sensor management to streamline software integration with popular operating systems, including Android, Windows, macOS and Linux.

The MotionEngine Air software is flexible, with low power consumption and a small memory footprint, and can run on a variety of processors, including Arm Cortex-M, RISC-V, and the CEVA-BX and CEVA-TeakLite families of DSPs. It can be delivered in multiple configurations, including one that requires an accelerometer plus a gyroscope (IMU) and a gesture- and motion-event-based configuration that requires only an accelerometer.

MotionEngine Air software is pre-qualified for use with sensors from the leading inertial sensor suppliers, says Ceva.

The company will demonstrate its MotionEngine Air software and evaluation board during CES 2020 at its hospitality suite in the Westgate Hotel.

http://www.ceva-dsp.com 

About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what's new in the Smart City Electronics industry, with daily news updates, new products and industry news. To stay up to date, register to receive our weekly newsletters and keep yourself informed on the latest technology news and new products from around the globe.