Map Data
Ontario, Canada, aims to improve the testing process for map data validation.
In autonomous vehicles, raw map data generated from LiDAR-like sensors has gaps: during data collection, environmental discontinuities can be introduced, such as breaks in road geometry or inaccurate curvature and banking. This is due to the limited fidelity of the sensors and to the quality of the post-processing applied to the raw data. For the data to be useful, it has to be examined and validated against other sources, e.g. cameras and satellite imagery. A methodology to evaluate raw map data and improve its quality (identifying and filling gaps) would help raise confidence in map data for autonomous vehicles.
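As a concrete illustration, one piece of such a methodology could be a pass that scans sampled road geometry for implausible jumps. The sketch below assumes map roads are available as polylines of 3D points, which is a simplifying assumption; the names find_gaps and max_step are illustrative, not from any particular map format:

```python
import numpy as np

def find_gaps(centerline: np.ndarray, max_step: float = 1.0):
    """Flag discontinuities in a road centerline given as an (N, 3) array of
    points in metres. Consecutive samples farther apart than max_step are
    reported as gaps, to be reviewed against other sources (e.g. satellite
    imagery) and eventually filled."""
    steps = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
    return [(i, float(d)) for i, d in enumerate(steps) if d > max_step]

# Example: a centerline with a 5 m hole between samples 2 and 3.
line = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [7, 0, 0], [8, 0, 0]], float)
print(find_gaps(line))  # [(2, 5.0)]
```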
Calibrating Stereo Cameras from LiDAR Readings
Point clouds are collections of 3D points that approximate the surfaces of real-world objects. They are commonly used to represent the outputs of depth sensors such as LiDARs and stereo cameras. In this respect the two sensor categories are complementary: point clouds generated from LiDAR readings are more reliable but tend to be sparser (due to the challenge of stacking multiple laser emitter / sensor pairs in a single device), whereas those derived from stereo camera depth maps can be denser but are often subject to larger estimation errors.
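To make the comparison concrete, here is a minimal sketch of how a stereo depth map is typically back-projected into a point cloud, assuming a calibrated pinhole camera with known intrinsics fx, fy, cx, cy:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H x W, metres) into an (N, 3) point
    cloud using the pinhole camera model:
        X = (u - cx) * Z / fx,   Y = (v - cy) * Z / fy,   Z = depth[v, u]
    Pixels with no valid depth estimate (<= 0) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]
```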
Combining LiDARs and stereo cameras in a heterogeneous sensor solution is a cost-effective way to generate rich and robust sensor streams. In particular, matching environment features captured by either sensor makes it possible to identify potential camera calibration issues and correct them. This is exemplified in the picture above, where superimposing the point clouds generated by the LiDAR (red lines) and the stereo camera reveals a large misalignment (of almost 2 m) between distance estimates for a passing vehicle.
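One simple way to quantify such a misalignment, sketched below, is to measure nearest-neighbour residuals between the two point clouds over a shared region. The 0.5 m threshold and the function names are illustrative assumptions, not values from the setup described above:

```python
import numpy as np
from scipy.spatial import cKDTree

def alignment_residuals(lidar_pts, stereo_pts):
    """For each LiDAR point, find the nearest stereo point and return the
    distances between them. Consistently large residuals over a shared
    region suggest a calibration problem (like the ~2 m offset above)."""
    tree = cKDTree(stereo_pts)
    dists, _ = tree.query(lidar_pts)
    return dists

def needs_recalibration(lidar_pts, stereo_pts, threshold=0.5):
    """Flag the sensor pair for recalibration when the typical residual
    exceeds an (illustrative) threshold in metres."""
    return np.median(alignment_residuals(lidar_pts, stereo_pts)) > threshold
```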
An automated test suite that enabled efficient testing of multiple stereo vision solutions, exploiting their functional similarities and differences, would allow quicker improvements to the quality of combined LiDAR / stereo camera sensor setups, and therefore better performance in sensor-based tasks.
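A sketch of what such a suite could look like, using pytest's parametrization to run shared checks across interchangeable stereo solutions; the matchers, fixture, and error budget below are illustrative stand-ins, not real implementations:

```python
import numpy as np
import pytest

# Hypothetical stereo matchers under test; real solutions would replace
# these stubs. Each takes (left, right) images and returns a depth map.
def matcher_a(left, right):
    return np.full(left.shape, 10.0)

def matcher_b(left, right):
    return np.full(left.shape, 10.2)

GROUND_TRUTH_DEPTH = np.full((4, 4), 10.0)  # illustrative reference scene

@pytest.mark.parametrize("matcher", [matcher_a, matcher_b])
def test_depth_accuracy(matcher):
    """Shared check run against every solution: mean absolute depth error
    on a known scene must stay within an (illustrative) 0.5 m budget."""
    left = right = np.zeros((4, 4))
    depth = matcher(left, right)
    assert np.mean(np.abs(depth - GROUND_TRUTH_DEPTH)) < 0.5
```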
Driving-Time Map Validation
High-fidelity 3D maps containing depth and color information for road lanes, signs, buildings, etc. can turn localization into an almost trivial problem of matching current sensor inputs to recorded data. Unfortunately, changes to the environment (e.g. new or closed streets, changes to signs, billboards and other landmarks) may not be reflected in the map by the time a vehicle comes across them. Additionally, map data may contain gaps and other defects. Both problems impair localization, and therefore the reliability of the autonomous driving system.
The localization system must therefore constantly check map features against current sensor inputs, flagging any large discrepancies for later evaluation (e.g. by sending notifications to a cloud backend). If the differences are such that localization can no longer be guaranteed, the autonomous driving system must be notified so that control can be returned to the human driver.
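A minimal sketch of this decision logic, assuming map and sensor landmarks have already been matched up; the thresholds and return values are illustrative choices, not validated parameters:

```python
import numpy as np

def check_map_against_sensors(map_landmarks, observed_landmarks,
                              flag_threshold=0.5, abort_threshold=2.0):
    """Compare expected landmark positions from the map with positions
    currently observed by the sensors (both (N, 3) arrays, matched by
    index for simplicity). Returns one of:
      'ok'         - map and sensors agree
      'flag'       - report the discrepancy to the cloud backend
      'disengage'  - localization no longer guaranteed; hand control
                     back to the human driver."""
    errors = np.linalg.norm(map_landmarks - observed_landmarks, axis=1)
    worst = errors.max()
    if worst > abort_threshold:
        return "disengage"
    if worst > flag_threshold:
        return "flag"
    return "ok"
```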