Apache Kafka integration with Logstash 1.5 introduces both input and output plugins, enhancing Logstash's ability to process and handle messages flowing through Kafka's distributed system. Kafka's scalable, persistent storage lets it act as a message broker, buffering events between Logstash agents, which is useful in complex, CPU-intensive Logstash processing pipelines. In the simplest setup, one Logstash instance writes events to a Kafka topic through the kafka output plugin (fed, for example, by the stdin input), while another reads them back through the kafka input plugin and indexes them with the elasticsearch output; consumer groups allow read throughput to scale across multiple consumers. Because Kafka maintains a healthy buffer of events, more brokers can be added as data volumes grow. The current plugins are built on the older Kafka producer API; an updated version planned for Logstash 1.6 will bring significant API changes that are not backwards compatible. Future posts will explore serialization with Kafka, including the use of Apache Avro.
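As a rough sketch of the writer/reader split described above, the two pipelines below show how this might look in 1.5-era Logstash configuration. The option names (broker_list, topic_id, zk_connect, group_id, host) and the topic and group values are illustrative of the older-producer plugins; check the plugin documentation for the exact settings in your version.

```
# Writer pipeline: read lines from stdin and publish them to a Kafka topic.
input {
  stdin { }
}
output {
  kafka {
    broker_list => "localhost:9092"   # Kafka broker(s) to produce to
    topic_id    => "logstash_logs"    # topic the events are written to
  }
}
```

```
# Reader pipeline: consume the same topic and index events into Elasticsearch.
# Multiple Logstash instances sharing the same group_id form a consumer group,
# so partitions (and therefore read throughput) are divided between them.
input {
  kafka {
    zk_connect => "localhost:2181"    # ZooKeeper, used by the older consumer
    topic_id   => "logstash_logs"
    group_id   => "logstash_indexers" # illustrative shared group for scalable reads
  }
}
output {
  elasticsearch {
    host => "localhost"               # 1.5-era setting; later versions use hosts
  }
}
```

Adding more Logstash readers with the same group_id is how read throughput scales: Kafka rebalances the topic's partitions across the members of the consumer group rather than delivering every event to every reader.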