Getting artificial intelligence to explain its reasoning
Artificial Intelligence often advises people rather than making decisions for them. It may recommend whether to give someone a loan, whether it's safe to switch lanes in your vehicle, or whether a person is a threat. But what confidence can we have in those recommendations?
AI systems can be hard to make sense of, and their outputs are often not easily explainable. This is particularly challenging in systems that combine many data sources to make decisions.
We've devised an interactive system demo that illustrates one possible approach. It's a simple conversational interface that combines learning and reasoning services to provide hierarchical explanations to the people using the AI system. When it provides a recommendation, you can drill down through progressively lower levels of explanation for why the system made its decision.
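To make the idea of hierarchical explanations concrete, here's a minimal sketch (a hypothetical illustration, not the actual demo code) in which a recommendation carries a tree of explanations that a user can drill into level by level. All the names and the example data here are invented for illustration.

```python
# Hypothetical sketch of hierarchical explanations: each recommendation
# holds an explanation tree; drilling down reveals the next level of detail.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Explanation:
    summary: str
    details: List["Explanation"] = field(default_factory=list)

def drill_down(node: Explanation, depth: int = 0) -> List[str]:
    """Flatten the explanation tree into indented lines, one per level."""
    lines = ["  " * depth + node.summary]
    for child in node.details:
        lines.extend(drill_down(child, depth + 1))
    return lines

# Illustrative example: a loan recommendation with two levels of explanation.
recommendation = Explanation(
    "Recommend: approve the loan",
    [
        Explanation(
            "Income is stable",
            [Explanation("12 consecutive months of salary deposits")],
        ),
        Explanation("Existing debt is below 20% of income"),
    ],
)

for line in drill_down(recommendation):
    print(line)
```

In a real system each node might be produced by a different learning or reasoning service, with the conversational interface deciding how far down the tree to go in response to the user's follow-up questions.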
This work is part of our decade-long commitment to DAIS-ITA - the Distributed Analytics and Information Sciences International Technology Alliance. Along with defence, national security, government, commercial, and academic partners in the UK and the US, we're conducting fundamental research in distributed analytics and information science affecting military coalition operations.
Speaker Dan Cunnington is an IBM Senior Inventor and part of our Emerging Technology team in Hursley. Most recently, he's been working with IBM Research to develop services running on IBM Cloud that allow clients to take advantage of Bayesian Optimisation. This leading-edge optimisation technique enables tuning of complex experiments, such as neural networks, drug discovery, and file system optimisation.
He is also working to develop techniques to allow intelligent vehicles to understand and reason about their environment. He is a familiar face running or contributing to hackathons inside and outside IBM for retail, transport, media, healthcare, and logistics. When he's not running hackathons he's running half-marathons or rock-climbing.