Performance and Kafka compression
Blog post from Tinybird
After migrating a client to the newest version of the Kafka connector, the support team shared guidance on optimizing performance, focusing on compression settings and connector configuration. Even with those optimizations in place, read throughput from the client's Kafka cluster remains the limiting factor. The largest gains came from zstd compression at higher levels, which pushed throughput to as much as 19 million records per minute.

Higher compression levels do increase CPU load, but zstd remains notably fast: in a test on a laptop, a single producer compressed and produced millions of records per minute.

The support team also flagged three specific partitions that lag under heavy load and suggested investigating them further, and recommended Confluent's optimization guide for additional tuning.
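As a concrete sketch of the settings involved, a producer configuration enabling zstd might look like the following. `compression.type` is a standard Kafka producer setting; the per-codec level property comes from KIP-390 and requires Kafka 3.8 or later, and the batch values shown are illustrative assumptions, not tuned recommendations:

```properties
# Producer-side compression: zstd generally offers the best
# ratio/speed trade-off among Kafka's built-in codecs.
compression.type=zstd

# Per-codec compression level (Kafka 3.8+, KIP-390). Higher
# levels trade producer CPU for smaller batches on the wire.
compression.zstd.level=3

# Larger, slightly delayed batches compress better. These are
# illustrative starting points, not tuned recommendations.
batch.size=131072
linger.ms=20
```

Because compression happens per batch, `batch.size` and `linger.ms` interact directly with the codec choice: bigger batches give zstd more redundancy to exploit, which is one reason throughput improved at higher levels despite the extra CPU cost.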