
SHAP demystified: understand what Shapley values are and how they work

Blog post from Openlayer

Post Details
Company: Openlayer
Date Published: -
Author: Gustavo Cid
Word Count: 2,302
Language: English
Hacker News Points: -
Summary

Machine learning models, often treated as black boxes, can significantly affect many aspects of human life, which motivates a deeper understanding of their inner workings. Tools like LIME and SHAP aim to illuminate these models by explaining their predictions. SHAP, short for SHapley Additive exPlanations, is grounded in Shapley values from cooperative game theory and offers a principled way to attribute a model's prediction fairly across its individual features.

The approach parallels the profit-distribution problem in cooperative games, where each player's contribution to the total profit must be assessed fairly. SHAP values are additive: they sum to the model's output (relative to a baseline), just as individual bonuses sum to the total profit in the game-theoretic setting.

The main obstacle to using SHAP in machine learning is computational: calculating exact Shapley values requires evaluating the model on all possible feature subsets, which grows exponentially with the number of features. Practical implementations mitigate this with conditional expectations and sampling. Despite their different methodologies, SHAP and LIME solve the same underlying optimization problem and are both additive feature attribution methods, as shown in the paper "A Unified Approach to Interpreting Model Predictions" by Scott Lundberg and Su-In Lee.
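The profit-distribution analogy above can be sketched in code. This is a minimal, illustrative implementation of the exact Shapley formula for a hypothetical three-player cooperative game (the coalition payoffs are made up for illustration, and the `shapley_values` helper is not part of any library): each player's value is a weighted average of their marginal contribution to every possible coalition, and by the efficiency property the values sum to the grand coalition's payoff — the same additivity SHAP relies on.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a cooperative game.

    players: list of player identifiers.
    value: function mapping a frozenset of players to that coalition's payoff.
    """
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                S = frozenset(subset)
                # Weight of a coalition of size k in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of p when joining coalition S.
                total += weight * (value(S | {p}) - value(S))
        phi[p] = total
    return phi

# Hypothetical payoff for every coalition of three "players" (features).
payoffs = {
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
    frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60,
    frozenset("ABC"): 90,
}
phi = shapley_values(["A", "B", "C"], lambda S: payoffs[S])
# Efficiency (additivity): attributions sum to the grand coalition's payoff.
assert abs(sum(phi.values()) - payoffs[frozenset("ABC")]) < 1e-9
```

The nested loop over all subsets is exactly why exact Shapley computation is exponential in the number of players, and why SHAP implementations fall back on sampling and conditional expectations for real models.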