Safety and formal proofs in robotics

Formal proofs and verification in robotics are a difficult subject to tackle, due to the uncertain nature of the environment and the question of what even constitutes a model sufficient to make valid proofs.

I stumbled upon research regarding safety in robotics, in particular about obstacle avoidance and surgery, which I have yet to read in detail but which seems interesting.

Now that the NTSB, the US government agency that investigates transportation accidents, has released its final report analyzing the causes of Uber’s 2018 autonomous car accident, the search for a Driver’s License Test for Robotic Cars based on formal proofs is once again highly relevant.

As a very brief summary of the technical causes: unreliable object detection and erroneous trajectory prediction appear to have been major contributing factors. The Vehicle Automation Report goes into some detail about the system design; I will highlight a few points below.

Using LIDAR and RADAR sensor data, the system performed object detection, assigning each object one of four possible classes (vehicle, cyclist, pedestrian, or “other”). Depending on the class, it assumed typical behavior: whether the object could cross the road (pedestrians only cross in designated areas), whether it moves along a constrained path (vehicles and cyclists in the same lane as the autonomous car move in the same direction), or whether it can move at all (“other” objects are assumed to stay in a fixed place).
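To make these class-dependent assumptions concrete, here is a minimal sketch of how a mapping from detected class to expected behavior might look. The class names follow the report, but the `BehaviorAssumption` structure and its fields are my own illustration, not Uber’s actual code.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ObjectClass(Enum):
    VEHICLE = auto()
    CYCLIST = auto()
    PEDESTRIAN = auto()
    OTHER = auto()


@dataclass
class BehaviorAssumption:
    may_cross_road: bool   # pedestrians: only at designated crossings
    follows_lane: bool     # vehicles/cyclists in our lane move with traffic
    assumed_static: bool   # "other" objects are treated as fixed in place


# Hypothetical mapping of detected class to assumed behavior.
ASSUMED_BEHAVIOR = {
    ObjectClass.VEHICLE:    BehaviorAssumption(may_cross_road=False, follows_lane=True,  assumed_static=False),
    ObjectClass.CYCLIST:    BehaviorAssumption(may_cross_road=False, follows_lane=True,  assumed_static=False),
    ObjectClass.PEDESTRIAN: BehaviorAssumption(may_cross_road=True,  follows_lane=False, assumed_static=False),
    ObjectClass.OTHER:      BehaviorAssumption(may_cross_road=False, follows_lane=False, assumed_static=True),
}
```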

When the object detection system reclassifies an object, the software discards all previously recorded waypoints (positions) of that object and begins the trajectory prediction anew. Because of the unexpected silhouette of a person pushing a bike, the system could not assign a consistent class; it therefore repeatedly changed its trajectory prediction and re-evaluated the possibility of a collision, depending on the object’s current class and thus its expected behavior.
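Continuing the sketch above, the problematic reset logic might look roughly like this: the tracked history is tied to the current class, so every reclassification throws the history away and the trajectory prediction starts over with no motion estimate. The `TrackedObject` type and its methods are my own guess at the structure, purely for illustration.

```python
@dataclass
class TrackedObject:
    object_class: ObjectClass
    waypoints: list  # recent observed positions, used to extrapolate a trajectory

    def update(self, new_class: ObjectClass, position) -> None:
        if new_class != self.object_class:
            # Reclassification discards all history: the next trajectory
            # prediction starts from a single point, with no velocity estimate.
            self.waypoints = []
            self.object_class = new_class
        self.waypoints.append(position)

    def predicted_trajectory(self):
        # With fewer than two waypoints there is no velocity estimate,
        # so no trajectory can be extrapolated yet.
        if len(self.waypoints) < 2:
            return None
        # (trajectory extrapolation from the waypoint history would go here)
        return self.waypoints[-2:]
```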

On top of that, objects that are not expected to move (“other” objects that are not in the same lane as the autonomous car) or that are not expected to cross the road are not considered for collision avoidance at all, to prevent constant braking or overly slow driving.
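The filtering described above might look something like the following check, again only a guess at the structure rather than the actual implementation, building on the earlier sketch: objects assumed to be static and outside the ego vehicle’s path, or not expected to cross the road, simply never enter the set of obstacles considered for braking or steering.

```python
def relevant_for_collision_avoidance(obj: TrackedObject, in_ego_path: bool) -> bool:
    """Hypothetical filter deciding whether an object is considered when
    planning braking or steering. Objects assumed static and outside the
    ego vehicle's path, or not expected to cross the road, are dropped to
    avoid constant braking or overly slow driving."""
    behavior = ASSUMED_BEHAVIOR[obj.object_class]
    if behavior.assumed_static and not in_ego_path:
        return False
    if not in_ego_path and not behavior.may_cross_road:
        return False
    return True
```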

All of that led to the collision not being detected in time, and therefore to no steering or braking reaction until the collision was imminent.

While Uber claims to have fixed these issues, other, less obviously dangerous modeling errors could have been made. A systematic, verifiable approach in safety-critical domains is certainly desirable.

This should not stop experimentation or exploration, but testing should happen in less safety-critical environments first. Having reliable, simple backup systems that provably take over to avoid crashes would make systems based on machine learning and ad-hoc logic much more acceptable.
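As an illustration of what such a backup could look like (my own sketch of a generic runtime monitor, not something taken from the report or from any particular vendor): a small, independently verifiable component that ignores object classes entirely and simply commands emergency braking whenever anything measured ahead comes closer than the current stopping distance.

```python
def stopping_distance(speed: float, max_deceleration: float, reaction_time: float) -> float:
    """Worst-case distance (meters) needed to come to a full stop from `speed` (m/s)."""
    return speed * reaction_time + speed ** 2 / (2.0 * max_deceleration)


def emergency_brake_needed(distance_ahead: float, speed: float,
                           max_deceleration: float = 6.0,  # m/s^2, assumed value
                           reaction_time: float = 0.5) -> bool:
    """Class-agnostic safety monitor: brake if anything ahead is closer than
    the distance we need to stop, regardless of what the perception stack
    thinks the object is or where it is going."""
    return distance_ahead <= stopping_distance(speed, max_deceleration, reaction_time)
```

The appeal of such a monitor is that its safety property involves only a handful of variables and a simple kinematic model, so it can be stated and checked independently of the machine-learning components it supervises.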

Failing gracefully in general, also in non-safety-critical domains, would make machine-learning-based systems (and not only those) much more useful, even if their predictions are not perfect. A related read: Why ML interfaces will be more like pets than machines.
