Monitoring model performance with New Relic and Algorithmia
Blog post from New Relic
Deploying a new machine learning (ML) model into production requires effective monitoring to track model drift, data drift, and model bias, so that the model's performance continues to improve business outcomes. This tutorial explains how data scientists and DevOps engineers can use New Relic's integration with Algorithmia to monitor ML model performance metrics. With the integration, teams can stream model metrics into an observability platform, build dashboards, and set up alerts to analyze model performance in real time, supporting efficient collaboration between data science and DevOps teams.

New Relic Alerts and Applied Intelligence provide a centralized notification system that detects anomalies and reduces alert noise, while Algorithmia Insights supplies the metrics pipeline for monitoring ML models. The integration process involves setting up an MLOps-focused dashboard, configuring the flow of metrics from Algorithmia to New Relic, and managing alert conditions based on ML metrics, ultimately giving teams comprehensive observability into their ML-powered applications and a path to better model performance.
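To make the metric flow concrete, here is a minimal sketch of the final hop: pushing an ML performance metric into New Relic's public Metric API, where it can feed dashboards and alert conditions. The metric name, attributes, and environment variable are illustrative assumptions, not part of the integration described in the post, and the actual Algorithmia Insights pipeline handles this forwarding for you.

```python
import json
import os
import time

import requests  # assumed available; used to call New Relic's Metric API

# New Relic Metric API endpoint; authentication uses a license key passed
# in the Api-Key header. The key is assumed to be provided via an
# environment variable here.
METRIC_API_URL = "https://metric-api.newrelic.com/metric/v1"
NEW_RELIC_LICENSE_KEY = os.environ["NEW_RELIC_LICENSE_KEY"]


def send_model_metric(name: str, value: float, attributes: dict) -> None:
    """Send a single gauge metric (e.g., an ML performance value) to New Relic."""
    payload = [{
        "metrics": [{
            "name": name,
            "type": "gauge",
            "value": value,
            "timestamp": int(time.time()),  # Unix epoch seconds
            "attributes": attributes,
        }]
    }]
    response = requests.post(
        METRIC_API_URL,
        headers={
            "Api-Key": NEW_RELIC_LICENSE_KEY,
            "Content-Type": "application/json",
        },
        data=json.dumps(payload),
        timeout=10,
    )
    response.raise_for_status()


# Hypothetical usage: report a drift score for a deployed model.
send_model_metric(
    name="ml.model.drift_score",  # illustrative metric name
    value=0.07,
    attributes={"model.name": "fraud_detector", "model.version": "1.2.0"},
)
```

Once metrics like these are flowing into New Relic, an alert condition can be built on a NRQL query over the `Metric` data type, for example `SELECT latest(ml.model.drift_score) FROM Metric WHERE model.name = 'fraud_detector'` (names again illustrative), which is the kind of ML-metric-based alert condition the tutorial walks through.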