In Apache Flink, watermarks determine when a time-based aggregation can be finalized. A watermark bounds how long a job waits for out-of-order messages, and it is typically computed as the maximum event timestamp observed so far minus an allowed out-of-orderness. If the watermark strategy is misconfigured, a job can fail in either direction: an out-of-orderness allowance that is too large delays results (and a watermark that never advances can stall windows indefinitely), while an allowance that is too small causes late records to be discarded, which means data loss.

Idle partitions complicate watermark calculation further. Because the downstream watermark is the minimum across input partitions, Flink waits for an inactive partition to become active again before advancing, which can cause significant latency and, if records eventually arrive behind a stalled-then-jumping watermark, data loss.

To address these issues, the Data Streaming Platform in Confluent Cloud provides a default watermark strategy that builds a histogram of observed timestamps to determine an appropriate watermark, targeting a drop rate of less than 5% of messages due to out-of-orderness. Users may still need to override this strategy if their use case requires finer control over which messages to drop, or if it operates outside the default's minimum and maximum out-of-orderness thresholds.
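As a rough illustration of the "maximum observed timestamp minus allowed out-of-orderness" rule described above, here is a minimal sketch in Python. It is not Flink's actual API (Flink's `WatermarkStrategy` lives in the Java/Scala DataStream API); the class and field names are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class BoundedOutOfOrdernessWatermarks:
    """Sketch of bounded out-of-orderness watermarking (hypothetical, not
    Flink's real API). The watermark trails the maximum observed event
    timestamp by a fixed allowance; events at or below the current
    watermark are treated as late and may be dropped."""
    max_out_of_orderness_ms: int
    max_timestamp_ms: int = 0

    def on_event(self, timestamp_ms: int) -> None:
        # Track the event-time frontier: the largest timestamp seen so far.
        self.max_timestamp_ms = max(self.max_timestamp_ms, timestamp_ms)

    def current_watermark(self) -> int:
        # Watermark = max observed timestamp - allowed out-of-orderness.
        return self.max_timestamp_ms - self.max_out_of_orderness_ms

    def is_late(self, timestamp_ms: int) -> bool:
        return timestamp_ms <= self.current_watermark()
```

With a 5-second allowance, an event at t=7000 ms advances the watermark to 2000 ms, so a straggler stamped 1500 ms would be considered late, while one stamped 6500 ms would still be accepted.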
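The idle-partition problem can also be sketched briefly. The operator-level watermark is the minimum over its input partitions, so one partition that stops receiving data pins the minimum and stalls all downstream windows; idleness detection excludes such partitions from the calculation. The function below is a simplified illustration, not Flink's implementation.

```python
def combined_watermark(partition_watermarks, idle_flags):
    """Sketch of multi-partition watermark combination (hypothetical helper).
    The combined watermark is the minimum over partitions not marked idle;
    without idleness handling, one stalled partition holds everything back.
    Returns None if every partition is idle."""
    active = [wm for wm, idle in zip(partition_watermarks, idle_flags)
              if not idle]
    return min(active) if active else None
```

If one partition's watermark is stuck at 200 ms while another has advanced to 10,000 ms, the combined watermark stays at 200 ms until the stuck partition is flagged idle, at which point it jumps to 10,000 ms.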
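The intuition behind a histogram-based default strategy can be sketched as a percentile calculation over observed arrival delays: choose the allowance below which roughly 95% of messages fall, so that at most about 5% are dropped as late. This is only a conceptual sketch under that assumption; Confluent's actual implementation details are not described here.

```python
def allowed_out_of_orderness(delays_ms, keep_fraction=0.95):
    """Hypothetical percentile-based allowance. delays_ms holds, per message,
    how far behind the event-time frontier it arrived. Picking the
    keep_fraction percentile of those delays as the out-of-orderness
    allowance means roughly (1 - keep_fraction) of messages arrive later
    than the watermark tolerates and are dropped."""
    ordered = sorted(delays_ms)
    idx = min(len(ordered) - 1, int(keep_fraction * len(ordered)))
    return ordered[idx]
```

For delays uniformly spread over 0-99 ms this picks a 95 ms allowance; for a workload where 95% of messages arrive in order and 5% arrive 1000 ms behind, it picks 1000 ms, keeping effectively everything at the cost of latency.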