Smart sensors tackle transportation of the future

RoboSense has partnered with Horizon Robotics, Cainiao Network, Sensible 4 and AutoX to introduce the Smart Sensor System designed for autonomous passenger cars, low-speed autonomous vehicles, high-speed robot taxis, and vehicle-to-road (V2R) infrastructure. The RoboSense Smart Sensor System combines lidar hardware, artificial intelligence (AI) point cloud algorithms, and an IC design into a smart sensor system that collects and interprets environment information.

At the Shanghai Auto Show, RoboSense announced the RS-Bpearl, a super wide angle blind-spot lidar, and the RS-Ruby, a super high-resolution lidar, for the sensor system.

The smart sensor system combines the RoboSense RS-LiDAR-M1 MEMS-based smart sensor with the RS-Bpearl super wide angle blind-spot lidar, the RS-Ruby super high-resolution lidar and RoboSense AI perception algorithms. RoboSense chose MEMS technology for its high integration, high performance, high reliability, ease of production and low cost. The solid state lidar system uses 905nm lasers. RoboSense’s RS-LiDAR-M1Pre debuted at CES 2018, and the upgraded RS-LiDAR-M1 introduced a wide 120 degree field of view (FoV), a long range of 200m and AI algorithm innovations. The RS-LiDAR-M1 now meets OEM requirements and delivers safety for L3+ autonomous passenger cars.

Scheduled for mass production in 2021, the RoboSense RS-LiDAR-M1 for OEMs will be a smart sensor. RoboSense and Horizon Robotics will build customised chips for the lidar environment perception algorithms, called RS-LiDAR-Algorithms. These algorithms will be integrated into the IC and embedded in the lidar hardware. The RS-LiDAR-M1 will interpret 3D point cloud data instantaneously and output target-level environment perception results to autonomous vehicles in real time.
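To make the phrase “target-level environment perception results” concrete, the following minimal sketch shows the kind of compact object list a smart lidar could hand to a vehicle’s planner instead of raw points. The class, fields and thresholds are hypothetical illustrations, not RoboSense’s actual interface.

```python
# Hypothetical sketch only: the class, fields and thresholds are illustrative
# assumptions, not the RS-LiDAR-M1 output format.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Target:
    """One perceived object: a 'target-level' result rather than raw points."""
    label: str                                 # e.g. "car", "pedestrian"
    position_m: Tuple[float, float, float]     # x, y, z in the vehicle frame
    velocity_mps: Tuple[float, float, float]   # estimated velocity vector
    size_m: Tuple[float, float, float]         # bounding-box length, width, height
    confidence: float                          # 0.0 to 1.0

def planner_input(targets: List[Target], max_range_m: float = 200.0) -> List[Target]:
    """Keep confident targets within the sensor's nominal 200m range."""
    return [t for t in targets
            if t.confidence > 0.5
            and (t.position_m[0] ** 2 + t.position_m[1] ** 2) ** 0.5 <= max_range_m]
```

The benefit for the vehicle is that the planner consumes a short list of classified objects rather than millions of raw range measurements per second.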

In another project, RoboSense and Cainiao Network have launched a smart sensor system for unmanned, low-speed vehicles without blind spots. Previously, these unmanned vehicles used single laser beam lidar, which delivered unsatisfactory environment perception because of blind spots caused by the limitations of its vertical FoV.

The RS-Bpearl has a wide 360 x 90 degree FoV, a detection range of 30m (at 10 per cent reflectivity) and a blind spot of just 100mm. The compact size of the sensor makes it easy to mount on the side of the vehicle body to detect the vehicle’s surroundings, including blind spots. The RS-Bpearl’s modular design also dramatically reduces cost while adding scope for customisation, says RoboSense.
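As a rough geometric illustration of those figures, a 360 x 90 degree field of view with a 30m range and a 100mm blind spot can be expressed as a simple visibility test. The sketch below assumes a hemispherical coverage pattern measured upward from the sensor’s horizon; the mounting orientation and frame conventions are assumptions, not RoboSense specifications.

```python
import math

# Assumed geometry: hemispherical 360 x 90 degree coverage measured upward from
# the sensor's horizon; range limits follow the published RS-Bpearl figures.
MIN_RANGE_M = 0.1        # 100mm blind spot
MAX_RANGE_M = 30.0       # detection range at 10 per cent reflectivity
VERTICAL_FOV_DEG = 90.0

def visible(x: float, y: float, z: float) -> bool:
    """Return True if a point (sensor frame, metres) falls inside the assumed FoV."""
    rng = math.sqrt(x * x + y * y + z * z)
    if not (MIN_RANGE_M <= rng <= MAX_RANGE_M):
        return False
    elevation_deg = math.degrees(math.asin(z / rng))   # 0 = horizon, 90 = straight up
    return 0.0 <= elevation_deg <= VERTICAL_FOV_DEG    # horizontal coverage is a full 360 degrees

print(visible(2.0, 1.0, 0.5))    # nearby point above the horizon -> True
print(visible(0.03, 0.0, 0.05))  # inside the 100mm blind spot -> False
```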

Robot taxis need lidar with a higher vertical resolution to achieve a longer detection range, which led RoboSense to develop the 128-beam RS-Ruby lidar. It offers 0.1 degree resolution, three times finer than the previous RS-LiDAR-32, and a detection range three times longer. One RS-Ruby lidar sits on top of the vehicle as the core sensor, covering 360 degree perception, and two more on the vehicle’s sides detect blind spots.
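A quick back-of-the-envelope calculation shows why finer angular resolution supports longer-range detection: at 0.1 degree spacing, adjacent returns at 200m are only about 0.35m apart, so a distant car still reflects several points. The 200m figure below is purely illustrative.

```python
import math

def point_spacing_m(range_m: float, angular_resolution_deg: float) -> float:
    """Approximate gap between adjacent returns at a given range (small-angle approximation)."""
    return range_m * math.radians(angular_resolution_deg)

print(point_spacing_m(200.0, 0.1))   # ~0.35m between returns at 0.1 degree
print(point_spacing_m(200.0, 0.3))   # ~1.05m at a coarser 0.3 degree
```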

Vehicle-to-road (V2R) systems allow vehicles to work with road conditions to co-ordinate and optimise the vehicle’s driving path, improving safety and road efficiency. RoboSense partnered with Sensible 4 to launch the world’s first autonomous driving shuttle bus for all weather conditions, the GACHA. Equipped with RoboSense’s cold-resistant 16-beam mechanical lidar environment perception system, the GACHA works on snow- and ice-covered roads in harsh winters and other severe weather conditions. RoboSense and Finland-based Sensible 4 will continue to deepen their collaboration on future autonomous driving applications.

http://www.robosense.ai

> Read More

Sub-miniature tact switch targets IoT in the home

Electromechanical switch manufacturer, C&K, has introduced a sub-miniature PTS815 tact switch aimed at design engineers developing home automation and IoT electronic devices.

It has a footprint of just 4.2 x 3.2mm and is 2.5mm thick, saving space in compact designs and freeing up board area for other components.

With a large actuation surface for easier integration, the switch is suited to applications such as home automation, IoT devices and e-cigarettes, as well as control systems for items such as drones, e-bikes and robot vacuum cleaners.

The PTS815 is manufactured in surface-mount technology (SMT) format and uses a hard actuator, ensuring that it can be fully integrated into standard processes, with no need for an additional interface button, reducing time and costs for the end equipment manufacturer.

It is available in the versions most commonly used by the market, meeting the needs of many commercial applications while maintaining rigid quality control. The PTS815 remains reliable for more than 100,000 cycles, reports C&K.

According to the company, the PTS815 offers easier integration thanks to its choice of actuation forces, the audible click of the hard actuator and its large actuation surface.

Three operating-force levels are available: 180gf, 250gf and 400gf. The PTS815 switch is rated at 50mA, has a bounce time of less than 10ms and an operating temperature range of -20 to +70 degrees C.
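Because the specified bounce time is under 10ms, firmware reading the switch would typically accept a new state only after the contact has been stable for slightly longer than that. The sketch below is a generic, host-side debounce illustration under that assumption; the 15ms window and the class itself are illustrative, not C&K code.

```python
class Debouncer:
    """Accept a new switch state only after it has been stable longer than the bounce time."""

    def __init__(self, stable_ms: float = 15.0, initial: bool = False):
        self.stable_ms = stable_ms    # assumed margin above the <10ms specified bounce
        self.state = initial          # last accepted (debounced) state
        self._candidate = initial     # most recent raw level seen
        self._since_ms = 0.0          # time at which the raw level last changed

    def sample(self, now_ms: float, raw: bool) -> bool:
        """Feed one timestamped raw reading; return the debounced state."""
        if raw != self._candidate:
            self._candidate = raw     # level changed: restart the stability timer
            self._since_ms = now_ms
        elif raw != self.state and (now_ms - self._since_ms) >= self.stable_ms:
            self.state = raw          # stable long enough: accept the new state
        return self.state
```

Fed with timestamped readings from a GPIO poll loop, the class yields one clean transition per press or release.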

The PTS815 is now available to OEMs.

Founded in 1928, C&K produces electromechanical switches and has custom design capabilities. C&K offers more than 55,000 standard products and 8.5 million switch combinations to companies that design, manufacture and distribute electronics products. Used in automotive, industrial, IoT, wearables, medical, telecoms, consumer products, aerospace and point-of-sale terminals, C&K products include tactile, pushbutton, snap-acting, toggle, rocker, detect, DIP, keyswitch, navigation, rotary, slide, switch lock, thumbwheel, smart card readers, high-rel connectors and custom assemblies.

http://www.ckswitches.com 

> Read More

Operations and visualisation software maximises system investments

The latest version of Yokogawa Electric’s real-time operations management and visualisation software, Fast/Tools R10.04, is available from today.

Part of Yokogawa’s OpreX Control and Safety System family, the flexible and scalable supervisory control and data acquisition (SCADA) software is suitable for use in smart IoT-enabled applications through to enterprise-wide integrated operations spanning multiple sites and subsystems. Yokogawa adds that Fast/Tools R10.04 will help customers derive maximum value from their investments over the entire system lifecycle.

The IoT, big data analytics and the cloud are driving the convergence of information, operational, and engineering technologies (IT, OT, and ET) in unmanned, remote-controlled, and enterprise-wide operations.

Sharing data on operational capacity and conditions can shorten production and delivery cycles as well as support production and maintenance planning and forecasting.

Fast/Tools R10.04 supports third-party digital industrial ecosystems and has information models that simplify and enhance sub-system integration with the Centum VP integrated production control system.

To integrate with Centum VP, Yokogawa provides an enhanced graphics integration tool, together with support for industry-standard interfaces and protocols, including additional support for Mitsubishi MELSEC iQ-R series controllers and the FINS protocol used by Omron programmable controllers.

To make it easier to incorporate Fast/Tools into an existing IT infrastructure, authentication and authorisation services are handled as one centralised process for all system users. This means that log-on and log-off activity is visible in one window, improving the audit trail. The Fast/Tools encryption process has also been improved to provide more secure communication between server and clients.

The Fast/Tools graphics editor is able to generate native displays to the HTML5 standard for PC-based and tablet or smartphone applications. An enhanced graphics editor simplifies the task of creating screens that deliver the data operators require, says Yokogawa.

The IIoT-enabled SCADA software improves visualisation, collaboration, decision support, configuration, communication, security, deployment, connectivity, operational alarm management/analysis, and history data storage/management. It can be set up to run on systems that employ Yokogawa’s advanced dual-redundant technology, enabling automatic switch-over to a back-up system in the event of a hardware failure.
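The dual-redundant arrangement can be pictured as a standby server that promotes itself when the primary’s heartbeat stops. The following sketch is a generic illustration of that failover pattern under an assumed five-second detection window; it is not Yokogawa’s implementation or API.

```python
import time

HEARTBEAT_TIMEOUT_S = 5.0   # assumed failure-detection window, not a Yokogawa figure

class StandbyMonitor:
    """Promote this node to active if the primary's heartbeat goes quiet."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False                    # standby until a switch-over occurs

    def on_heartbeat(self) -> None:
        """Call whenever a heartbeat message arrives from the primary."""
        self.last_heartbeat = time.monotonic()

    def check(self) -> bool:
        """Return True once this node has taken over as the active server."""
        if not self.active and time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.active = True                 # automatic switch-over
        return self.active
```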

https://www.yokogawa.com.eu

> Read More

FPGA claims world-leading CNN performance

Optimised for the Intel Arria 10 GX architecture, the Omnitek deep learning processing unit achieves 135 giga-operations per second per Watt at full 32-bit floating point accuracy when running the VGG-16 convolutional neural network (CNN) in an Arria 10 GX 1150. This, says Omnitek, is a world-leading performance for a mid-range SoC FPGA.

Omnitek has designed the deep learning processing unit around a mathematical framework that combines low-precision fixed-point maths with floating point maths to achieve high compute density with zero loss of accuracy.
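The general idea of mixing low-precision fixed-point arithmetic with floating point while keeping accuracy can be sketched as quantising weights and activations to 8-bit integers, multiplying in integer form and rescaling in floating point. The snippet below illustrates that generic technique with NumPy; it is not Omnitek’s framework, and the layer sizes and random seed are arbitrary.

```python
import numpy as np

def quantise(x: np.ndarray, bits: int = 8):
    """Symmetric fixed-point quantisation: integer values plus a float scale factor."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    return np.round(x / scale).astype(np.int32), scale

# Toy fully connected layer y = W @ a, computed in low precision and rescaled in float.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128)).astype(np.float32)
a = rng.standard_normal(128).astype(np.float32)

Wq, w_scale = quantise(W)
aq, a_scale = quantise(a)
y_fixed = (Wq @ aq).astype(np.float32) * (w_scale * a_scale)   # integer multiply, float rescale
y_float = W @ a

print(np.max(np.abs(y_fixed - y_float)))   # small residual quantisation error
```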

Scalable across a wide range of Arria 10 GX and Stratix 10 GX devices, the processing unit can be tuned for low cost or high performance in either embedded or data centre applications.

It is software programmable in C/C++ or Python using standard frameworks such as TensorFlow, enabling it to be configured for a range of standard CNN models, including GoogLeNet, ResNet-50 and VGG-16, as well as for custom models. No FPGA design expertise is required to do this, adds Omnitek.
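As an illustration of the framework-level workflow mentioned above, the snippet below builds a standard pretrained VGG-16 in TensorFlow/Keras and runs a dummy image through it. Compiling the model for the deep learning processing unit would go through Omnitek’s own tools, whose interface is not assumed or shown here.

```python
import numpy as np
import tensorflow as tf

# Standard pretrained VGG-16 from Keras applications (weights download on first use).
model = tf.keras.applications.VGG16(weights="imagenet")

# Run a dummy image through the float model; deployment onto the FPGA-based
# processing unit would go through the vendor's toolchain, which is not shown here.
image = np.random.uniform(0, 255, (1, 224, 224, 3)).astype(np.float32)
image = tf.keras.applications.vgg16.preprocess_input(image)
predictions = model.predict(image)
print(tf.keras.applications.vgg16.decode_predictions(predictions, top=3))
```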

Omnitek’s deep learning processing unit can be configured to provide optimal compute performance for CNNs, RNNs, MLPs and other neural network topologies and for as-yet unknown algorithms and optimisation techniques.

Omnitek was formed in 1998 to design intelligent video and vision systems based on programmable FPGAs and SoCs. Its technology enables customised vision and artificial intelligence (AI) inferencing capabilities on FPGAs for customers across a range of end markets. Omnitek’s IP addresses demanding application requirements in areas such as video conferencing, projection and display, and medical vision systems.

Today, it was announced that Intel had acquired Omnitek. “Omnitek’s technology is a great complement to our FPGA business,” said Dan McNamara, Intel senior vice president and general manager of the Programmable Solutions Group. “Their deep, system-level FPGA expertise and high performance video and vision related technology have made them a trusted partner for many of our most important customers. Together, we will deliver leading FPGA solutions for video, vision and AI inferencing applications on Intel FPGAs and speed time-to-market for our existing customers while winning new ones.”

http://www.omnitek.tv

> Read More

About Smart Cities

This news story is brought to you by smartcitieselectronics.com, the specialist site dedicated to delivering information about what’s new in the Smart City Electronics industry, with daily news updates, new products and industry news. To stay up-to-date, register to receive our weekly newsletters and keep yourself informed about the latest technology news and new products from around the globe. Simply register here: Smart Cities Registration