The text provides an in-depth exploration of global model-agnostic methods for model interpretability in machine learning, focusing on Partial Dependence Plots (PDPs), Friedman's H-statistic, and global surrogates. Global interpretability aims to explain why a model makes certain decisions by examining its average behavior across the dataset, which is crucial for debugging the model and for understanding the data and concepts it has learned. Using the example of predicting cervical cancer risk, the text shows how PDPs reveal the influence of features such as age and hormonal contraceptive use by plotting their average effect on the model's predictions.

To address a key limitation of PDPs, which average away interaction effects between features, the text introduces the concept of feature interaction and applies Friedman's H-statistic to quantify pairwise interactions, highlighting a pronounced interaction effect involving hormonal contraceptives.

Finally, the text discusses the global surrogate method, in which an interpretable model, such as a decision tree, is trained to approximate the predictions of a complex black-box model; how faithfully the surrogate replicates the black box is measured by the R-squared between their predictions. The section concludes by weighing the advantages and disadvantages of these methods and previews local model-agnostic methods for the next installment.
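To make the PDP idea concrete, here is a minimal sketch using scikit-learn's PartialDependenceDisplay on a synthetic binary classification task; the random forest, the generated data, and the chosen feature indices are placeholders rather than the cervical cancer model discussed in the text.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay

# Stand-in black box on synthetic data (not the cervical cancer model).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# For each chosen feature, sweep it over a grid of values, average the
# model's predicted probability over all rows at each grid point, and
# plot that average curve: the partial dependence of the prediction.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1])
plt.show()
```

A flat curve suggests the feature has little average effect on predictions; a steep or non-monotonic curve, as the text describes for age, indicates a strong one.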
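Friedman's H-statistic for a pair of features compares the joint partial dependence with the sum of the two individual partial dependences: a value near 0 means the features do not interact, while values approaching 1 mean the pair's joint effect is driven almost entirely by interaction. Below is a from-scratch sketch under the assumption of a fitted binary classifier exposing predict_proba; it needs O(n^2) model evaluations in the number of rows, so in practice one would subsample X.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def centered_pd(model, X, cols):
    """Partial dependence of `cols`, evaluated at each observed value of
    those columns, averaged over the other features, then mean-centered."""
    pd_vals = np.empty(len(X))
    for i in range(len(X)):
        X_mod = X.copy()
        X_mod[:, cols] = X[i, cols]  # pin `cols` to row i's values everywhere
        pd_vals[i] = model.predict_proba(X_mod)[:, 1].mean()
    return pd_vals - pd_vals.mean()

def h_statistic(model, X, j, k):
    """Friedman's H^2 for the feature pair (j, k): the share of the joint
    partial dependence's variance not explained by the two main effects."""
    pd_j, pd_k = centered_pd(model, X, [j]), centered_pd(model, X, [k])
    pd_jk = centered_pd(model, X, [j, k])
    return np.sum((pd_jk - pd_j - pd_k) ** 2) / np.sum(pd_jk ** 2)

print(h_statistic(model, X, 0, 1))
```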
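The global surrogate procedure itself is simple: train an interpretable model on the black box's predictions rather than on the true labels, then check fidelity with R-squared. A minimal sketch, again using a synthetic stand-in for the black box rather than the model from the text:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor

# Stand-in black box: a random forest on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The surrogate is trained on the black box's predictions, not on y.
preds = black_box.predict_proba(X)[:, 1]
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, preds)

# R-squared between the two models' predictions measures how faithfully
# the tree mimics the black box (1.0 would be perfect replication).
print("Surrogate fidelity (R^2):", r2_score(preds, surrogate.predict(X)))
```

A shallow tree keeps the surrogate readable; if its R-squared against the black box is low, its explanations should not be trusted, which is the trade-off the text flags among the method's disadvantages.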