Company
Logz.io
Date Published
Author
Charlie Klein
Word count
1320
Language
English
Hacker News points
None

Summary

DevOps teams typically monitor their environments with a combination of logs, metrics, and traces, but not all of that telemetry is needed for day-to-day operations, and storing all of it drives up costs. Managing those costs starts with deciding which signals are genuinely critical; these vary by team, but common examples are application latency, infrastructure resource usage, and error rates. Once non-essential data is identified, filtering techniques can be applied, such as dropping unneeded metrics or lowering trace sampling rates.

Popular shippers expose configuration options for exactly this: Metricbeat can filter metrics at the source, and Fluentd or Filebeat can do the same for logs; hedged configuration sketches follow below. Some platforms, such as Logz.io, additionally provide features that keep redundant data from being indexed at all. Done well, filtering not only reduces storage costs but also keeps important data easy to reach when it is needed most, especially during critical incidents. Data that is filtered out of the indexing pipeline can be archived temporarily in low-cost object storage such as AWS S3 or Azure Blob Storage, balancing accessibility against cost.
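As a first illustration of source-side metric filtering, the sketch below trims a Metricbeat configuration to the metricsets a team actually watches and drops filesystem events for virtual mount points. The module choice, collection period, and mount-point pattern are assumptions for illustration, not settings taken from the article.

```yaml
# metricbeat.yml -- a minimal sketch; module, period, and the mount-point
# pattern are illustrative assumptions, not settings from the article.
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "filesystem"]  # collect only what the team monitors
    period: 30s                                  # a longer period means fewer data points shipped

processors:
  - drop_event:            # discard filesystem events for virtual mounts nobody alerts on
      when:
        regexp:
          system.filesystem.mount_point: "^/(sys|proc|dev)($|/)"
```

Raising the collection period and dropping whole event categories this way cuts volume before anything reaches the storage backend, which is where the cost is incurred.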
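For logs, Filebeat and Fluentd can both discard data before it leaves the host. The Filebeat sketch below skips debug-level lines and sheds metadata fields that are never queried; the paths, patterns, and field names are hypothetical.

```yaml
# filebeat.yml -- a minimal sketch; paths, patterns, and field names are hypothetical.
filebeat.inputs:
  - type: filestream
    id: app-logs
    paths:
      - /var/log/app/*.log
    exclude_lines: ['DEBUG', 'TRACE']  # drop non-essential log levels at the source

processors:
  - drop_fields:                       # shed bulky metadata fields that are never queried
      fields: ["agent.ephemeral_id", "ecs.version"]
      ignore_missing: true
```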
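The summary also mentions adjusting trace sampling rates without naming a tracing tool. Assuming an OpenTelemetry Collector pipeline (an assumption, not something the article specifies), a probabilistic sampler could look like the sketch below; the 10% rate and exporter endpoint are illustrative.

```yaml
# otel-collector.yaml -- a sketch assuming the OpenTelemetry Collector;
# the sampling rate and exporter endpoint are illustrative.
receivers:
  otlp:
    protocols:
      grpc:

processors:
  probabilistic_sampler:
    sampling_percentage: 10   # keep roughly 10% of traces; raise for critical services

exporters:
  otlp:
    endpoint: "backend.example.com:4317"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler]
      exporters: [otlp]
```

Sampling at a modest rate preserves enough traces to see latency trends while cutting the bulk of trace storage; critical services can be sampled at a higher rate.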