Machine learning algorithms are increasingly integral to decision-making in many fields, which makes it essential to understand how interpretable they are and which factors drive their decisions. This is particularly important for compliance with data protection regulations such as the GDPR, which require transparency in automated decision-making. Elastic has developed explanatory features in its anomaly detection products to help users identify the factors behind anomalies detected in datasets, such as network data from corporate applications. The approach centers on "influencers": field values that contributed to an anomaly. To score them, it draws on strategies such as counterfactual causation (asking whether the anomaly would still have occurred without a given entity's data) and regularization. These strategies have limitations; in particular, influencer analysis is ineffective on pre-aggregated data, where the per-entity detail needed for counterfactual reasoning has already been lost, so domain knowledge remains essential when configuring anomaly detection systems. Understanding these influencers is crucial for trusting machine learning systems and integrating them effectively into existing decision-making frameworks.
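
To make the counterfactual idea concrete, here is a minimal, hypothetical sketch in Python (not Elastic's actual implementation): for each candidate influencer value, remove that value's records from an anomalous time bucket, recompute the statistic the detector monitors, and score the value by how much of the anomaly it accounts for. All names here (`influencer_scores`, `bucket_stat`, `threshold`) are illustrative assumptions.

```python
def influencer_scores(records, influencer_field, bucket_stat, threshold):
    """Toy counterfactual influencer scoring (illustrative only).

    records: list of dicts for one anomalous time bucket.
    influencer_field: field to test, e.g. "client_ip".
    bucket_stat: maps a list of records to the statistic the
        detector monitors (e.g. len, for a count detector).
    threshold: value above which the bucket counts as anomalous.
    """
    baseline = bucket_stat(records)
    scores = {}
    for value in {r[influencer_field] for r in records}:
        # Counterfactual: recompute the statistic with this value's
        # records removed. If the bucket stops looking anomalous,
        # the value is a strong influencer.
        remaining = [r for r in records if r[influencer_field] != value]
        counterfactual = bucket_stat(remaining)
        # Influence = the share of the excess over the threshold that
        # this value accounts for, clamped to [0, 1].
        excess = max(baseline - threshold, 1e-9)
        reduction = baseline - counterfactual
        scores[value] = max(0.0, min(1.0, reduction / excess))
    return scores

# Example: a count detector saw 120 events in a bucket where ~50 is normal.
bucket = [{"client_ip": "10.0.0.5"}] * 80 + [{"client_ip": "10.0.0.9"}] * 40
print(influencer_scores(bucket, "client_ip", bucket_stat=len, threshold=50))
# 10.0.0.5 removes 80 of the 70-event excess -> score capped at 1.0;
# 10.0.0.9 removes 40 of 70 -> roughly 0.57.
```

A production system would temper such raw counterfactual scores, for instance with the regularization mentioned above, so that every minor contributor is not credited as a cause. The sketch also shows why pre-aggregated data defeats this analysis: once records are summed before ingestion, there are no per-value records left to remove, and the counterfactual question cannot be asked.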