Introducing Explainable AI
Artificial intelligence (AI) is playing an increasingly important role in our lives every day. As this capability continues to surround us, it's important to have a foundational understanding of how it influences our decisions and impacts our lives. In some instances, we do not need to understand exactly how an AI system works behind the scenes or why it reaches a particular decision. However, when a wrong decision could have serious negative consequences, it is critical that we can explain the decisions an AI system generates. For example, transparency is necessary when AI systems are making decisions regarding medical treatment, the judicial system, military defense action, approval of loan applications, etc.
Consider this:
Do you want to know why the virtual assistant on your phone gives a particular answer every time you ask it a question? Probably not - that would be too much information.
Does a doctor want to know why the model that helps detect cancerous cells in medical imagery gave a false positive result? Probably - there's a reason that incorrect result occurred, and there's an opportunity to improve (debug) the model if its reasoning is understood.
If you are new to AI altogether, it will help to review the Definition of AI, ML, and DL before proceeding so you have a basic understanding of the concepts.
What is explainable AI?
Explainable AI (XAI) describes AI systems in which the reasoning behind the AI's actions can be understood and explained by humans.
XAI is not a new concept. Rule-based AI systems - meaning those programmed with exact instructions on how to handle every input - have explainability built in. They are deterministic systems with predefined outcomes. We can explain why a rule-based AI system produces a particular result; this is "glass box" AI because we know what is going on inside.
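To make this concrete, here is a minimal sketch of a hypothetical rule-based decision (the rules and thresholds are invented for illustration, not drawn from any real system). Because every outcome maps directly to an explicit, human-written rule, the explanation is simply the rule that fired:

```python
# A hypothetical rule-based loan decision. The thresholds are illustrative only.
# Every outcome traces directly to an explicit rule, so the system can always
# report exactly why it decided what it did.
def loan_decision(credit_score: int, debt_to_income: float) -> str:
    if credit_score < 600:
        return "denied: credit score below 600"
    if debt_to_income > 0.40:
        return "denied: debt-to-income ratio above 40%"
    return "approved: passes credit score and debt-to-income rules"

print(loan_decision(credit_score=720, debt_to_income=0.25))
# approved: passes credit score and debt-to-income rules
```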
The greatest need for explainability is around machine learning (ML) systems, which do not have explicitly defined rules to follow. They are probabilistic systems that derive their own rules for producing results based on the input. We typically lack a strong understanding of why an ML system generates a particular result because it is rare for a deployed ML model to have explainability built into it.
Deep learning (DL) is the most powerful (and also the most difficult to understand and explain) category of ML. Technologies powered by DL include autonomous vehicles, facial recognition, virtual assistants, etc. These AI systems built on deep neural networks are often referred to as "black box" AI because their results cannot be easily rationalized by humans.
XAI applications help provide clarity into the inner workings of the model that led to a particular outcome, turning black box AI into glass box AI.
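For ML models, this clarity usually comes from post-hoc explanation techniques applied to a trained model. Below is a minimal sketch of one such technique, permutation feature importance, using scikit-learn on a toy dataset; the model and data are stand-ins chosen for illustration, not a recommendation of any particular tool:

```python
# A minimal sketch of permutation feature importance: shuffle each feature in
# turn and measure how much the model's accuracy drops. A large drop means the
# model relies heavily on that feature. Assumes scikit-learn is installed.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five features the model leans on most.
for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda pair: pair[1], reverse=True)[:5]:
    print(f"{name}: {importance:.3f}")
```

Features whose shuffling causes the largest drop in accuracy are the ones the model depends on most, which is a first step toward explaining its predictions.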
An example of widely available XAI is Facebook's "Why am I seeing this ad?" feature, which helps users understand which factors are considered in ad targeting.
The ideal level of detail and format of the explanation depend on factors such as the problem the AI is solving, the needs of the AI stakeholder (the user of the explanation), and the regulatory environment for that industry or application. The greater the risk and impact of the AI being wrong, the greater the need for adopting explainable AI.
Why is Explainable AI Important?
Explainability is necessary for creating transparent, ethical, unbiased, responsible AI. People are less likely to trust an AI system that they do not understand, and the world is becoming increasingly aware of the negative effects of biased AI systems. Depending on the industry and application of AI, there may be regulations which require a certain degree of explainability. As the regulatory world catches up with AI development (or tries to catch up), these requirements will only grow in number and complexity.
Building Trust
Consumers want transparent, ethical, trustworthy AI. Organizations benefit from recognizing this and putting effort into building trusted AI. Trust is a new competitive advantage.
In a recent report published by the Capgemini Research Institute, Why addressing ethical questions in AI will benefit organizations, over half of survey respondents said that when an organization offers ethical AI, they would have greater trust in and loyalty toward the organization and would share their experiences with friends, family, and on social media.
Trust is especially important in industries with high-stakes AI applications such as healthcare, finance, insurance, military defense, autonomous vehicles, etc.
Reducing Bias
Algorithmic bias is a major concern as ML becomes more widespread, specifically biases around race and gender. The consequences of biased AI can range from emotionally disturbing to life-altering.
With XAI, the reasoning behind an AI's decision can be examined and justified. This helps ensure fair, ethical AI by keeping the end user properly informed and the organization accountable. XAI can be adopted to prevent and detect algorithmic bias in deployed ML systems, as sketched below.
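As a simple illustration of where such a check might start, the sketch below (with entirely hypothetical data) compares a model's approval rate across groups; a large gap is a signal to dig into the model's explanations for those decisions:

```python
# A minimal sketch of a first-pass bias check: compare the positive-prediction
# rate across groups of a sensitive attribute. The data and group names are
# hypothetical, for illustration only.
import pandas as pd

predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1],
})

# A large gap in approval rates between groups is a prompt to inspect the
# model's explanations (e.g., feature attributions) for those decisions.
rates = predictions.groupby("group")["approved"].mean()
print(rates)
print(f"Approval-rate gap: {abs(rates['A'] - rates['B']):.2f}")
```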
Ensuring Compliance
New regulations around explainable AI are continually being created to help shape a fair, responsible, accountable future for AI. These include the European Union's 2016 General Data Protection Regulation (GDPR), the proposed 2019 US Algorithmic Accountability Act, the 2020 US Department of Defense Ethical Principles for Artificial Intelligence, and the 2020 National Institute of Standards and Technology's Four Principles of Explainable Artificial Intelligence.
Proactively adopting XAI is advantageous to organizations across all industries as regulations and standards are established.
There are tradeoffs for organizations choosing to invest the time and resources necessary to create XAI. It may be more profitable in the short term to push forward with AI development without concern for explainability. A similar conversation is happening around companies that invest time and resources in offering environmentally friendly products versus companies that make more money by taking the easier, less responsible path.