Sensor Fusion to the Rescue

Sensor fusion for robust obstacle detection

The effects of weather on Sensible 4’s positioning and navigation are, in the end, rather minor. Positioning is based on measuring large, stationary shapes such as buildings, which is straightforward even in low visibility.

The biggest challenge caused by weather is generating an accurate snapshot of the traffic situation, specifically obstacle detection and tracking (ODTS).

Sensor Fusion Combines the Powers of the Various Sensors

The best way to enable autonomous driving in bad weather is to combine sensor data, a technique known as sensor fusion. When the cameras see one thing, the LiDAR sensors another, and the radar produces a third image, sensor fusion combines these partial information streams into a picture that is more than the sum of its parts.
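As a rough illustration of the idea, the sketch below merges obstacle candidates reported by different sensors: detections that land close together are treated as the same object, and agreement between sensors raises the combined confidence above what any single stream provides. The data layout, clustering radius, and confidence rule are assumptions for the example, not Sensible 4's implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float           # position along the road, metres
    y: float           # lateral position, metres
    confidence: float  # sensor-specific confidence, 0..1
    source: str        # "camera", "lidar" or "radar"

def fuse(detections, radius=1.0):
    """Greedily cluster detections within `radius` metres of a cluster
    centroid, then combine member confidences with a probabilistic OR,
    so multi-sensor agreement yields a higher fused confidence."""
    clusters = []
    for d in detections:
        for c in clusters:
            cx = sum(m.x for m in c) / len(c)
            cy = sum(m.y for m in c) / len(c)
            if (d.x - cx) ** 2 + (d.y - cy) ** 2 <= radius ** 2:
                c.append(d)
                break
        else:
            clusters.append([d])
    fused = []
    for c in clusters:
        miss = 1.0
        for m in c:
            miss *= (1.0 - m.confidence)   # all sensors wrong simultaneously
        fused.append((sum(m.x for m in c) / len(c),
                      sum(m.y for m in c) / len(c),
                      1.0 - miss,
                      sorted({m.source for m in c})))
    return fused
```

For instance, a camera detection at (10.1, 0.0) with confidence 0.6 and a LiDAR detection at (10.3, 0.1) with confidence 0.5 merge into one obstacle with fused confidence 0.8, while a distant radar-only return stays a separate, lower-confidence obstacle.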

When dealing with sensor fusion, engineers often end up working with neural networks and the constraints of machine learning. For example, if the ODTS classifier, the machine vision software that classifies objects detected around the vehicle, is trained only on pictures taken in good weather and lighting conditions, it may not perform well in bad weather or darkness.

Then again, if this camera-based classification is aided by accurate object-size information from LiDAR, the classification becomes easier, but this increases the computational requirements.
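One simple way such size information could help, sketched below under assumed class size ranges (these priors and class names are invented for the example, not taken from the article): classes whose plausible size is inconsistent with the LiDAR measurement are suppressed, and the remaining camera scores are renormalised.

```python
# Assumed plausible longest-dimension range per class, in metres.
SIZE_PRIORS = {
    "pedestrian": (0.3, 1.2),
    "cyclist":    (1.2, 2.2),
    "car":        (3.0, 5.5),
}

def reweight(camera_scores, lidar_size):
    """Zero out classes whose size prior contradicts the LiDAR
    measurement, then renormalise the surviving camera scores."""
    weighted = {
        cls: (score if SIZE_PRIORS[cls][0] <= lidar_size <= SIZE_PRIORS[cls][1]
              else 0.0)
        for cls, score in camera_scores.items()
    }
    total = sum(weighted.values())
    if total == 0.0:   # size matches no prior: fall back to camera alone
        return dict(camera_scores)
    return {cls: s / total for cls, s in weighted.items()}
```

In a dark frame the camera alone might be nearly undecided, say scores of 0.4, 0.35 and 0.25 across the three classes; a LiDAR-measured size of 4.2 m then rules out everything but "car".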

Also, whatever methods are used for sensor fusion, they need to be fast enough for the vehicle to make real-time decisions based on the traffic situation.
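In practice this means each fusion cycle has a hard time budget. A minimal sketch, assuming a 50 ms (20 Hz) cycle for illustration; the budget and the fallback signal are not figures from the article:

```python
import time

CYCLE_BUDGET_S = 0.050  # assumed 20 Hz fusion loop

def run_cycle(fuse_fn, detections):
    """Run one fusion cycle and flag whether it met its time budget,
    so the planner can fall back to a more conservative action when
    the obstacle picture arrives late."""
    start = time.monotonic()
    obstacles = fuse_fn(detections)
    elapsed = time.monotonic() - start
    return obstacles, elapsed <= CYCLE_BUDGET_S
```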

In the end, bad weather always makes driving difficult, whether there’s a human or a machine behind the wheel: decisions become less reliable. When the weather is bad, everyone should simply slow down and be more careful in traffic.