In this demo, we showcased how the official MongoDB Connector for Apache Kafka can be used as both a source and a sink to collect data from heterogeneous systems, process it in real time with Apache Kafka, and store it in MongoDB for long-term analysis and reporting. We leveraged Kafka's stream processing capabilities to connect MySQL and MongoDB, creating a robust, reactive data pipeline between the two datastores. Finally, we used RStudio to calculate moving averages and visualize the stock data, demonstrating the power of integrating heterogeneous data sources with Kafka for real-time analytics and reporting.
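
For reference, the sink side of a pipeline like this comes down to a short Kafka Connect configuration. The sketch below is illustrative only, assuming a standalone Connect worker and a local MongoDB instance; the topic, database, and collection names are placeholders rather than the exact values used in the demo.

```properties
# Minimal MongoDB sink connector config (standalone Kafka Connect worker).
# Topic, URI, database, and collection names below are illustrative
# placeholders, not the exact values from the demo.
name=mongo-stock-sink
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
tasks.max=1

# Kafka topic carrying the stock records sourced from MySQL
topics=stockdata

# Destination for the records in MongoDB
connection.uri=mongodb://localhost:27017
database=Stocks
collection=StockData

# Records are plain JSON without embedded schemas
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
```

With a configuration along these lines loaded into the Connect worker, every record published to the stock topic is written to the target collection, where it can then be queried from RStudio for the moving-average calculations and visualizations described above.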