Sector News
Solid-state lidar improves detection distance for vehicles
At this week’s CES 2019 (8 to 12 January) in Las Vegas, USA, RoboSense will demonstrate an upgraded version of its MEMS solid-state lidar, an automotive-grade version designed for the mass production of autonomous vehicles. The RS-LiDAR-M1 features patented MEMS technology and, the company says, provides vehicle environment awareness at a level that fully supports Level 5 driverless operation. The company also claims a breakthrough in the measurement range limit of 905nm lidar, with a detection distance of 200m. As a result, it says, the upgraded optical system and signal processing technology can now clearly recognise even small objects, such as railings and fences.
The first-generation MEMS solid-state lidar, the RS-LiDAR-M1Pre, was launched at last year’s CES and was deployed on the Cainiao unmanned logistics vehicle in May 2018. This year the company will be showcasing the potential of its MEMS optomechanical system design, with improvements in detection distance, resolution, field of view (FoV) and reliability.
The RS-LiDAR-M1 MEMS optomechanical lidar provides an increased horizontal field of view compared with the previous generation, reaching 120 degrees, so only a few units are needed to cover the full 360 degrees around the vehicle. Since five 120-degree sensors give 600 degrees of combined coverage, there is enough overlap to double-cover the forward sector: with only five RS-LiDAR-M1s there is no blind zone around the car, and dual lidar sensing redundancy is provided in front of the car for Level 5, i.e. fully driverless, operation.
The company believes that the battle between 1550nm and 905nm lidar is one of cost versus performance. When aiming for a low-cost 905nm lidar, the key technical difficulty is achieving sufficient measurement range. The RS-LiDAR-M1 achieves what RoboSense describes as a breakthrough in the measurement range limit of 905nm lidar, with a detection distance of 200m.
The final output point cloud demonstrates the RS-LiDAR-M1’s improved detection capability: the upgraded optical system and signal processing technology can now clearly recognise even small objects, such as railings and fences.
Developer toolbox supports neural networks on STM32 microcontrollers at the edge
Driving artificial intelligence (AI) to edge and node embedded devices, STMicroelectronics has introduced the STM32 neural network developer toolbox.
AI uses trained artificial neural networks to classify data signals from motion and vibration sensors, environmental sensors, microphones and image sensors, more quickly and efficiently than conventional handcrafted signal processing.
The STM32Cube.AI extension (X-Cube-AI) software tool generates optimised code to run neural networks on STM32 microcontrollers. It can be downloaded inside ST’s STM32CubeMX MCU configuration and software code-generation ecosystem.
Today, the tool supports the Caffe, Keras (with TensorFlow backend), Lasagne and ConvnetJS frameworks, as well as integrated development environments (IDEs) including those from Keil, IAR and System Workbench.
The FP-AI-Sensing1 software function pack provides examples of code to support end-to-end motion (human-activity recognition) and audio (audio-scene classification) applications based on neural networks. This function pack leverages ST’s SensorTile reference board to capture and label the sensor data before the training process. The board can then run inferences of the optimised neural network.
The ST Bluetooth low energy (BLE) Sensor mobile app acts as the SensorTile’s remote control and display.
The toolbox, which consists of the STM32Cube.AI mapping tool and application software examples running on the small-form-factor, battery-powered SensorTile hardware, together with the partner program and dedicated community support, offers a fast and easy path to neural network implementation on STM32 devices.
The extension is supplied with ready-to-use software function packs containing code examples for human activity recognition and audio scene classification that are immediately usable with ST’s reference sensor board and mobile app.
Developer support is provided through qualified partners in the ST Partner Program and a dedicated AI/machine learning (ML) STM32 community, the company says.
ST explains that developers can use STM32Cube.AI to convert pre-trained neural networks into C code that calls functions in optimised libraries able to run on STM32 microcontrollers.
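To illustrate the workflow, the sketch below shows in outline how application code typically calls such a network once it has been generated. It assumes a model exported under the default name “network”; the nn_init()/nn_run() wrappers are purely illustrative, and the exact headers, macros and function signatures come from the files the tool generates, so they vary with the chosen network name and tool version and should be checked against ST’s own examples.

/* Minimal sketch, assuming a network generated by STM32Cube.AI (X-Cube-AI)
 * under the default name "network". Symbol names follow the generated-code
 * pattern but are illustrative, not a verbatim ST example. */
#include "network.h"       /* generated network API */
#include "network_data.h"  /* generated weights/activations descriptors */

static ai_handle network = AI_HANDLE_NULL;
static ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE]; /* scratch RAM for the runtime */
static ai_float in_data[AI_NETWORK_IN_1_SIZE];   /* e.g. a pre-processed window of sensor samples */
static ai_float out_data[AI_NETWORK_OUT_1_SIZE]; /* e.g. per-class scores */

int nn_init(void)
{
    /* Instantiate the network and bind its weights and activation buffer. */
    ai_error err = ai_network_create(&network, AI_NETWORK_DATA_CONFIG);
    if (err.type != AI_ERROR_NONE)
        return -1;

    const ai_network_params params = AI_NETWORK_PARAMS_INIT(
        AI_NETWORK_DATA_WEIGHTS(ai_network_data_weights_get()),
        AI_NETWORK_DATA_ACTIVATIONS(activations));

    return ai_network_init(network, &params) ? 0 : -1;
}

int nn_run(void)
{
    /* Wrap the application buffers in the generated I/O descriptors
       and run a single inference pass. */
    ai_buffer input[AI_NETWORK_IN_NUM]   = AI_NETWORK_IN;
    ai_buffer output[AI_NETWORK_OUT_NUM] = AI_NETWORK_OUT;
    input[0].data  = AI_HANDLE_PTR(in_data);
    output[0].data = AI_HANDLE_PTR(out_data);

    return (ai_network_run(network, input, output) == 1) ? 0 : -1;
}

In a typical sensing application, the input buffer would be filled with a pre-processed window of sensor data and the output scores handed to the application’s classification logic, as in the function pack examples described above.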
Accompanying software function packs include example code for human activity recognition and audio scene classification. These code examples are immediately usable with the ST SensorTile reference board and the ST BLE Sensor mobile app.
ST will demonstrate applications developed using STM32Cube.AI running on STM32 microcontrollers in a private suite at CES, the Consumer Electronics Show, in Las Vegas (8 to 12 January).


