At the centre of it all is our LiDAR-based positioning, which enables self-driving vehicles to operate in any kind of weather or environment. Our technology is truly one of a kind and brings several unique benefits to our customers. The software filters out airborne outliers such as snow, rain and fog – and allows autonomous vehicles to drive on roads without lane markers or landmarks.
Our full-stack solution consists of four modules: the positioning stack, obstacle detection, the control stack, and fleet operation.
The commercial Autonomous Driving software product will be launched in 2022.
Read more about the DAWN.
The magic behind Sensible 4’s solution is our smart, LiDAR-based positioning software. LiDAR is a laser-based optical sensor that helps self-driving cars detect the world around them and act accordingly in a safe and smart manner.
To achieve truly accurate positioning in all conditions, we use our proprietary 3D mapping and map-based localization algorithms. We generate a 3D map of the surroundings from 3D LiDAR data. However, instead of using raw LiDAR data or recognising physical features from the data, we present the environment as something we call “volumetric probabilistic distributions.”
This gives us very humanlike positioning: we look at the surroundings with our 3D LiDAR and derive where we are. In practice, this is done by matching the LiDAR measurements to the probabilistic distributions (the 3D map). Probabilistic mapping and map matching bring several competitive advantages. Airborne outliers such as snow and rain are automatically filtered out without any pre-filtering.
Small changes in the environment such as snow banks, foliage, parked cars and so on are tolerated. The same map that is created in summertime can also be used during winter.
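The idea of scoring LiDAR measurements against a probabilistic 3D map can be sketched as follows. This is a minimal illustration, not Sensible 4's actual algorithm: the voxel size, occupancy values and `scan_likelihood` function are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch: the map is a dict from integer voxel indices to
# occupancy probabilities (a crude stand-in for "volumetric
# probabilistic distributions").
VOXEL_SIZE = 0.5  # metres; illustrative value only

def voxel_index(point):
    """Quantize a 3D point into an integer voxel index."""
    return tuple(np.floor(np.asarray(point) / VOXEL_SIZE).astype(int))

def scan_likelihood(scan_points, voxel_map, outlier_prob=0.01):
    """Score how well a LiDAR scan matches the probabilistic map.

    Points that fall in unmapped air (e.g. snowflakes or raindrops) hit
    voxels with near-zero occupancy and contribute a large log-penalty,
    so airborne outliers are filtered out implicitly by the matching.
    """
    total = 0.0
    for p in scan_points:
        occupancy = voxel_map.get(voxel_index(p), outlier_prob)
        total += np.log(occupancy)
    return total

# Toy usage: a mapped wall of occupied voxels plus one "snowflake".
wall = {voxel_index([5.0, y, z]): 0.9
        for y in np.arange(0, 2, 0.5) for z in np.arange(0, 2, 0.5)}
hit_on_wall = [np.array([5.1, 0.6, 0.6])]   # consistent with the map
snowflake = [np.array([2.0, 1.0, 5.0])]     # empty air, scores poorly
```

A localizer would evaluate this score over many candidate poses and keep the best-matching one; points that land in empty air simply never help any candidate win.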
The Sensible 4 Positioning Stack can map and localize in urban, suburban, rural and highway surroundings. The positioning fuses together different inputs, such as Global Navigation Satellite System (GNSS) and radar, with other methods to provide global position accuracy.
Although 3D LiDAR is the most important sensor for positioning, our algorithm also works with radar data alone.
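One common way to fuse position estimates from several sources is inverse-variance weighting, where more confident sensors dominate the result. The sketch below is an illustrative assumption about how such fusion could look, not Sensible 4's implementation; the `fuse_estimates` function and the variance values are invented for the example.

```python
import numpy as np

def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of independent 2D position fixes.

    `estimates` is a list of (position_xy, variance) pairs, e.g. from
    GNSS, LiDAR map matching and radar. Lower-variance (more confident)
    sources pull the fused result towards themselves.
    """
    weights = np.array([1.0 / var for _, var in estimates])
    positions = np.array([pos for pos, _ in estimates])
    fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
    fused_var = 1.0 / weights.sum()  # fused estimate is more certain
    return fused, fused_var

# Toy usage: GNSS is noisy (5 m^2), LiDAR matching is precise (0.1 m^2).
gnss = (np.array([100.0, 50.0]), 5.0)
lidar = (np.array([101.0, 50.5]), 0.1)
fused, var = fuse_estimates([gnss, lidar])
```

Because the LiDAR fix is far more certain, the fused position lands close to it while still benefiting from the GNSS measurement's global anchoring.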
Our self-driving technology utilizes a set of sensors to detect and recognize obstacles even when visibility is low. Our Obstacle Detection is based on multimodal sensor data and on our positioning system, which provides both an accurate position of the vehicle and a 3D model of the surrounding environment.
Several sensors – LiDARs, radars and cameras – accumulate the probability of each detection. The positioning system’s 3D map helps us reject outliers, such as snow and rain, as well as extract obstacles from the background while tracking the speed of moving obstacles.
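Accumulating a detection's probability across independent sensors is often done in log-odds space, so that agreeing sensors reinforce each other. This is a generic textbook sketch under that assumption, not the product's actual fusion rule; `fuse_detections` and the example probabilities are hypothetical.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def fuse_detections(sensor_probs, prior=0.5):
    """Accumulate per-sensor obstacle probabilities in log-odds space.

    Each sensor (LiDAR, radar, camera) independently reports its own
    probability that an obstacle is present. Sensors that agree push
    the fused probability up; uncertain sensors (p near the prior)
    contribute almost nothing.
    """
    log_odds = logit(prior)
    for p in sensor_probs:
        log_odds += logit(p) - logit(prior)
    return 1.0 / (1.0 + math.exp(-log_odds))  # back to a probability

# In fog the camera is unsure (0.5), but LiDAR (0.8) and radar (0.7)
# both see the obstacle, so the fused probability still ends up high.
fused = fuse_detections([0.8, 0.7, 0.5])
```

This shape of rule is why multimodal detection degrades gracefully: losing one modality to bad visibility lowers confidence but does not blind the system.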
The detected obstacles are classified through deep learning by category, position, size and speed, and their future motion is predicted. Finally, all the acquired observations are integrated into a multi-object traffic tracker, which provides the best available situational-awareness prediction for the control system.
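A multi-object tracker like the one described can be sketched in its simplest form: each track keeps a constant-velocity motion model and new detections are associated to tracks by nearest neighbour. This is a deliberately minimal assumption-laden illustration (the `Track` class, gate distance and greedy association are all invented here), far simpler than a production traffic tracker.

```python
import numpy as np

class Track:
    """One tracked obstacle with a constant-velocity motion model."""
    def __init__(self, pos):
        self.pos = np.asarray(pos, dtype=float)
        self.vel = np.zeros(2)

    def update(self, detection, dt):
        """Refresh position and re-estimate velocity from the new fix."""
        new_pos = np.asarray(detection, dtype=float)
        self.vel = (new_pos - self.pos) / dt
        self.pos = new_pos

    def predict(self, dt):
        """Predicted future position, feeding situational awareness."""
        return self.pos + self.vel * dt

def associate(tracks, detections, gate=2.0):
    """Greedy nearest-neighbour association of detections to tracks.

    Detections farther than `gate` metres from every track are left
    unassigned (they would spawn new tracks in a full system).
    """
    assignments = {}
    for i, det in enumerate(detections):
        dists = [np.linalg.norm(t.pos - np.asarray(det)) for t in tracks]
        j = int(np.argmin(dists))
        if dists[j] < gate and j not in assignments.values():
            assignments[i] = j
    return assignments

# Toy usage: one pedestrian moving 1 m/s along x.
ped = Track([0.0, 0.0])
ped.update([1.0, 0.0], dt=1.0)   # velocity becomes (1, 0)
future = ped.predict(dt=2.0)     # extrapolated two seconds ahead
```

Real trackers replace the greedy step with global assignment and the motion model with a filter, but the structure – predict, associate, update – is the same.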
Self-driving vehicles need to control traction. In our technology, this is done by getting measurements from all wheels.
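Per-wheel measurements make it possible to estimate longitudinal slip, the standard quantity in traction control. The sketch below assumes a simple slip-ratio definition and an invented threshold; it illustrates the idea rather than Sensible 4's control stack.

```python
def slip_ratio(wheel_speed, vehicle_speed):
    """Longitudinal slip ratio of one wheel during traction.

    wheel_speed: circumferential speed of the tyre (m/s)
    vehicle_speed: true ground speed of the vehicle (m/s)
    Positive values mean the wheel turns faster than the ground passes
    under it, i.e. it is losing grip.
    """
    if vehicle_speed <= 0.0:
        return 0.0
    return (wheel_speed - vehicle_speed) / vehicle_speed

def slipping_wheels(wheel_speeds, vehicle_speed, threshold=0.15):
    """Indices of wheels whose slip exceeds a traction threshold."""
    return [i for i, w in enumerate(wheel_speeds)
            if slip_ratio(w, vehicle_speed) > threshold]

# Front-left wheel spinning on ice while the other three grip.
flags = slipping_wheels([12.0, 10.1, 10.0, 9.9], vehicle_speed=10.0)
```

A traction controller would react to the flagged wheels by reducing torque or braking them individually; measuring all four wheels is what makes the per-wheel comparison possible.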
The S4 Positioning Stack provides an automatic traversability index map, which makes it possible for the vehicle to deviate from its route when needed.
As of today, an SAE Level 4 self-driving system still needs humans – but only as a fallback mechanism. This is why our solution includes a control and monitoring system for a whole fleet of self-driving cars.
A secure wireless connection lets the operator communicate with the vehicles smoothly. All relevant data is visualized and processed locally, with only the situational-awareness view transmitted for the operator to check.
The remote operator station provides a 24/7 fleet view, an alarm system and the option of direct remote control of the fleet in safe mode, including the ability to re-route vehicles. We optimize fleet routes in real time.