Researchers from Microsoft and Massachusetts Institute of Technology (MIT) in the US have developed a new model that can find artificial intelligence (AI) blind spots, ultimately helping engineers improve the safety of AI systems, such as driverless vehicles and autonomous robots.
According to MIT, the model can identify instances when autonomous systems have learned from examples that don’t match what is happening in reality and can cause dangerous mistakes as a result.
The AI systems that power driverless cars, for example, are trained in virtual simulations to prepare for almost any event on the real road.
A driverless car that has not been trained to differentiate between large white cars and ambulances, for example, could cause an accident: it would not know that it should slow down and pull over when an ambulance approaches.
The new model uses human input to uncover these training blind spots. As the AI system goes through its training, a human monitors its actions, providing feedback whenever it makes, or is about to make, a mistake. Using machine learning techniques, the model then pinpoints the situations in which the system is likely to need more information about how to behave.
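The idea can be illustrated with a minimal sketch. This is not the researchers' actual method, just a simplified, hypothetical version of the aggregation step it describes: human feedback is collected per perceived state (i.e. the situation as the agent's simulation-trained perception groups it), and states whose observed mistake rate is high are flagged as likely blind spots. The state names and threshold below are illustrative assumptions.

```python
from collections import defaultdict

def find_blind_spots(feedback, threshold=0.5):
    """Flag perceived states where human feedback suggests a blind spot.

    feedback: list of (perceived_state, was_mistake) pairs, where
    perceived_state is how the agent groups the situation and
    was_mistake is the human monitor's judgement of its action.
    Returns a dict of states whose mistake rate exceeds the threshold.
    """
    counts = defaultdict(lambda: [0, 0])  # state -> [mistakes, total]
    for state, was_mistake in feedback:
        counts[state][1] += 1
        if was_mistake:
            counts[state][0] += 1
    return {s: m / n for s, (m, n) in counts.items() if m / n > threshold}

# Hypothetical demo: the agent perceives both ambulances and ordinary
# large white cars as the same state, "large_white_vehicle".
feedback = [
    ("large_white_vehicle", True),   # ambulance: agent failed to pull over
    ("large_white_vehicle", False),  # ordinary car: behaviour was fine
    ("large_white_vehicle", True),   # another ambulance encounter
    ("small_red_car", False),
]
print(find_blind_spots(feedback))  # flags "large_white_vehicle"
```

A flagged state tells the engineers that the agent's perception lumps together situations that humans judge differently, i.e. that it needs either more training data or finer-grained features for that situation.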
“The model helps autonomous systems better know what they don’t know,” said Ramya Ramakrishnan, a PhD student in the Computer Science and Artificial Intelligence Laboratory at MIT.
“Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”