AI Explainability
Understanding how AI models make decisions
Why things happen is a question that keeps people busy. We want to know why our partner broke up with us, or why our neighbor blasts music as loud as a football stadium every Sunday morning at 7 am.
Some questions we will never be able to answer, because most of them involve complex people who have too many layers...
Same goes for AI models. More layers → more complexity → less of a clue what is going on.
Still, we try to understand it. This effort is called explainability (also known as XAI, for explainable AI).
Explainability answers the question "Why did the AI decide this?".
Questions we ask (ourselves after a breakup):
- Why this prediction? (or the breakup)
- Which data/features/inputs mattered most? (or what was wrong with us or the partner)
- Can we trust this? (or why this won't happen again with our next partner)
Why It Matters And Is An Art
Many AI models are black boxes: they work but we don’t know why.
Understanding why helps us to trust AI results, detect errors, improve models or processes, and use AI responsibly.
Explainability adds complexity. Many developers have little experience with XAI, it is not always clear how to use it effectively, and it is extra effort. And effort needs to be justified: most use cases out there with ROI numbers are based on plain AI, not XAI. It will take some time until we have benchmarks showing how XAI made the world better.
This is mostly because multiple methods must be combined and chained, so it is a very indirect process that needs iteration and, again, layering. Especially so when extra logic is needed to turn explanations into decisions.
It is like weight: knowing that a healthy weight is good is easy. Knowing how to reach it in a specific situation is hard.
Examples
Explainability helps in many different fields:
- Blood Sugar Prediction - Reducing sensor costs for diabetes devices
- Therapy Optimization - Creating personalized medical treatments
- Sales Optimization - Improving marketing campaigns
Explanation Methods
- Feature Importance: Which inputs had the biggest impact?
- Attention Mechanisms: What did the model focus on?
- Examples: Which training data is similar?
- Simplified Models: Approximate the complex model.
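The first method above, feature importance, can be sketched with a tiny permutation test: shuffle one input column, re-score the model, and see how much the error grows. Everything here (the toy model, the data, the function names) is a made-up illustration for the idea, not a real library API.

```python
import random

# Toy "model": the prediction depends heavily on x[0], barely on x[1].
def model(x):
    return 3.0 * x[0] + 0.1 * x[1]

def mse(X, y):
    """Mean squared error of the model on data X with targets y."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, seed=0):
    """How much the error increases after shuffling one feature column."""
    rng = random.Random(seed)
    column = [row[feature] for row in X]
    rng.shuffle(column)  # break the link between this feature and the target
    X_shuffled = [list(row) for row in X]
    for row, value in zip(X_shuffled, column):
        row[feature] = value
    return mse(X_shuffled, y) - mse(X, y)

# Synthetic data generated by the model itself, so the baseline error is 0.
X = [[i, 10 - i] for i in range(10)]
y = [model(x) for x in X]

print(permutation_importance(X, y, feature=0))  # large: x[0] drives predictions
print(permutation_importance(X, y, feature=1))  # small: x[1] barely matters
```

A model-agnostic check like this is why permutation-style importance is popular: it only needs predictions and a score, never the model internals.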
Explainability builds trust in AI.