AI systems with explainable results earn users' trust
Explainability is a key component in building trust in a novel AI system, especially when the decisions it makes have real consequences. Decisions that determine whether someone receives a bank loan or is flagged by anti-crime agencies demand a higher degree of transparency. With the advent of GDPR, black-box decision making is no longer acceptable.
Developing an interpretable model means you can offer customers concrete interventions to change an outcome. For example, the Customer Onboarding (KYC) solution we built for RBS can explain which parts of an application it pays the most attention to when making judgements. We also provide an audit trail of the decisions made, and can run hypothetical "what-if" experiments to see what impact changes in the data have on the algorithmic decision.
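To make the idea concrete, here is a minimal sketch of such a "what-if" experiment. The model, weights, feature names, and threshold below are all illustrative assumptions for a toy loan-scoring example, not the actual KYC system:

```python
# Hypothetical, simplified loan-scoring model. A linear model is used here
# because every feature's contribution to the decision is directly inspectable.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_at_address": 0.2}
THRESHOLD = 1.0  # assumed approval cut-off

def score(applicant):
    """Overall score: weighted sum of the applicant's features."""
    return sum(WEIGHTS[f] * v for f, v in applicant.items())

def explain(applicant):
    """Per-feature contributions: which inputs the decision 'attends' to."""
    return {f: WEIGHTS[f] * v for f, v in applicant.items()}

def what_if(applicant, feature, new_value):
    """Re-run the decision with one input changed (a counterfactual)."""
    changed = dict(applicant, **{feature: new_value})
    return score(changed) >= THRESHOLD

applicant = {"income": 3.0, "debt": 1.0, "years_at_address": 1.0}
print(score(applicant))                  # below threshold: declined
print(explain(applicant))                # shows debt is the biggest negative
print(what_if(applicant, "debt", 0.5))   # reducing debt flips the outcome
```

The per-feature breakdown doubles as an audit record, and the counterfactual call is exactly the kind of intervention a customer could be offered ("reduce your debt and the application would be approved").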
What does this mean for our clients? It builds trust in the system and provides assurance from a regulatory and governance standpoint.