Sentry's event ingestion system needs to stay responsive, fast, and scalable under all types of load while also providing near real-time access to error data. To achieve this, Sentry uses an asynchronous processing pipeline: events are inserted into a Kafka topic, and multiple consumers read from that topic and write them to ClickHouse in batches.

This design, however, raises concerns about event persistence and consistency, since an event accepted into Kafka has not necessarily reached ClickHouse yet. To mitigate this, Sentry developed a system called Synchronized Consumer, which allows a Kafka consumer to pause itself and wait for another consumer to commit an offset before consuming that same message. Downstream processing of an event therefore begins only after the event has been stored in ClickHouse.

ClickHouse itself is a distributed database with multi-master, eventually consistent asynchronous replication. To make sure reads come from up-to-date replicas, Sentry uses the in_order load-balancing setting and inserts data synchronously. The result is a consistency model that resembles sequential consistency at the cost of some availability, a trade-off that has proven sufficient for Sentry's needs.
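To make the batched-write step concrete, here is a minimal sketch of a consumer that accumulates events and flushes them to ClickHouse in a single INSERT, using the confluent-kafka and clickhouse-driver Python clients. The topic, table, column, and group names are hypothetical, and a real writer would also flush on a timer rather than on batch size alone.

```python
import json

from clickhouse_driver import Client
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "clickhouse-writer",      # hypothetical group id
    "enable.auto.commit": False,          # commit only after a durable write
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])            # hypothetical topic name

clickhouse = Client("localhost")
BATCH_SIZE = 1000

batch = []
while True:
    msg = consumer.poll(timeout=1.0)
    if msg is not None and msg.error() is None:
        event = json.loads(msg.value())
        batch.append((event["event_id"], event["project_id"], event["message"]))

    if len(batch) >= BATCH_SIZE:
        # One large INSERT per batch: ClickHouse ingests most efficiently
        # when rows arrive in big blocks rather than one at a time.
        clickhouse.execute(
            "INSERT INTO events (event_id, project_id, message) VALUES",
            batch,
        )
        # Advance the Kafka offsets only after ClickHouse accepted the rows,
        # so a crash before this point replays the batch instead of losing it.
        consumer.commit(asynchronous=False)
        batch.clear()
```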
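The pause-and-wait behavior of the Synchronized Consumer can be sketched as follows, under the assumption that the lead consumer (the ClickHouse writer above) publishes each offset it commits to a separate commit-log topic. The topic names, group ids, and "partition:offset" payload format here are all assumptions for illustration, not Sentry's actual implementation, and rebalance handling is omitted.

```python
from confluent_kafka import Consumer, TopicPartition

EVENTS_TOPIC = "events"                  # hypothetical topic names
COMMIT_LOG_TOPIC = "events-commit-log"

def make_consumer(group_id):
    return Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": group_id,
        "enable.auto.commit": False,
        "auto.offset.reset": "earliest",
    })

follower = make_consumer("post-processor")       # waits for the writer
commit_log = make_consumer("commit-log-reader")  # follows the writer's commits
follower.subscribe([EVENTS_TOPIC])
commit_log.subscribe([COMMIT_LOG_TOPIC])

committed_by_writer = {}  # partition -> next offset the writer will read
paused = set()

def handle(msg):
    # Post-process an event that is already stored in ClickHouse.
    print("processing offset", msg.offset())

while True:
    # Track the writer's progress. The "partition:offset" payload format
    # is an assumption of this sketch.
    record = commit_log.poll(timeout=0.1)
    if record is not None and record.error() is None:
        partition, offset = map(int, record.value().decode().split(":"))
        committed_by_writer[partition] = offset
        if partition in paused:
            follower.resume([TopicPartition(EVENTS_TOPIC, partition)])
            paused.discard(partition)

    msg = follower.poll(timeout=0.1)
    if msg is None or msg.error() is not None:
        continue

    if msg.offset() >= committed_by_writer.get(msg.partition(), -1):
        # The writer has not committed past this offset yet: pause the
        # partition and rewind so the message is re-read after resuming.
        tp = TopicPartition(EVENTS_TOPIC, msg.partition(), msg.offset())
        follower.pause([tp])
        follower.seek(tp)
        paused.add(msg.partition())
        continue

    handle(msg)
    follower.commit(message=msg, asynchronous=False)
```

A committed offset in Kafka is the next offset the committer will read, so the follower only processes messages strictly below the writer's committed offset, i.e. events the writer has already flushed to ClickHouse.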
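Finally, a sketch of the read/write settings the last paragraph describes, again with clickhouse-driver. The in_order load-balancing mode is named above; insert_distributed_sync is one plausible way to make inserts through a Distributed table synchronous and is an assumption here, as are the host and table names.

```python
from clickhouse_driver import Client

client = Client(
    "clickhouse-host",  # hypothetical host
    settings={
        # Try replicas in the order they appear in the cluster config,
        # so reads prefer the same replica that writes just went to.
        "load_balancing": "in_order",
        # Make inserts through a Distributed table wait until the data
        # is written on the shards instead of being queued asynchronously.
        "insert_distributed_sync": 1,
    },
)

# Reads issued with these settings land on an up-to-date replica.
print(client.execute("SELECT count() FROM events"))  # hypothetical table
```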