Recently, there has been a lot of interest in the explainability of machine learning models. In many situations we have to choose between a higher-performing but hard-to-interpret model and sacrificing performance for an easier-to-explain one.
There are some standard tricks for coaxing interpretability out of black-box models, and recent advances such as LIME and SHAP values help us navigate this trade-off between performance and interpretability.
In this overview we'll introduce the concept of explainability, discuss what it means, and survey techniques for better understanding what our models are doing.
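One of those standard tricks is permutation importance: shuffle a single feature's values across rows and measure how much the model's error grows; features the model leans on heavily produce a large increase. A minimal sketch, using a hypothetical linear function standing in for the black-box model (all names here are illustrative, not from the talk):

```python
import random

# Hypothetical black-box model: we only see predictions, not internals.
# It depends strongly on feature 0, weakly on feature 1, not at all on feature 2.
def model(x):
    return 3.0 * x[0] + 0.5 * x[1]

def mse(data, targets):
    return sum((model(x) - y) ** 2 for x, y in zip(data, targets)) / len(data)

def permutation_importance(data, targets, feature, trials=10):
    """Average increase in error when one feature's column is shuffled."""
    baseline = mse(data, targets)
    rng = random.Random(0)
    increases = []
    for _ in range(trials):
        column = [row[feature] for row in data]
        rng.shuffle(column)
        permuted = [row[:feature] + [v] + row[feature + 1:]
                    for row, v in zip(data, column)]
        increases.append(mse(permuted, targets) - baseline)
    return sum(increases) / trials

# Synthetic dataset with three features.
rng = random.Random(42)
data = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
targets = [model(x) for x in data]

scores = [permutation_importance(data, targets, f) for f in range(3)]
# Expect feature 0 to dominate and feature 2 to score near zero.
```

Libraries such as scikit-learn ship this as `sklearn.inspection.permutation_importance`; the appeal of the technique is that it treats the model purely as a prediction function, so it works on any black box.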
Julio Barros is a machine learning consultant in Portland, Oregon.
He has been developing software for over 20 years and loves all things related to data, AI/ML, technology, teaching and mentoring.
Julio holds Bachelor's and Master's degrees in Computer Science from GMU and UVA, respectively, is active in the community, and runs the PDX Clojure, Deep Learning and Probabilistic Programming meetups.