Many applications generate and consume time-series data, which calls for a robust data platform to manage and query that data effectively. MongoDB can pre-aggregate data using its query language and window functions, store large volumes of time-series data efficiently in time-series collections, and archive aging data to cost-effective storage with MongoDB Atlas Online Archive.

Apache Kafka is often used as the ingestion point for this data because of its scalability, and the MongoDB Connector for Apache Kafka makes it easy to move data between Kafka topics and MongoDB clusters. The sink connector can create a time-series collection automatically if one does not already exist, configured through parameters such as timeseries.timefield, timeseries.expire.after.seconds, and timeseries.timefield.auto.convert.

When data lands in a time-series collection, MongoDB optimizes storage and bucketization of the data behind the scenes, saving storage space compared to a regular collection. The new $setWindowFields pipeline stage, introduced in MongoDB 5.0, defines a window of documents over which to perform an operation, enabling rolling averages and many other analytics on complex time-series data.

Existing collections can also be converted to time-series collections using the MongoDB Connector for Apache Kafka, with sample configurations for the source and sink connectors. Note that in the initial release of time-series collections, only insert operations are supported: updating or deleting documents on the source will not propagate to the destination.
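To make the time-series parameters concrete, here is a sketch of a sink connector configuration that would write a Kafka topic into a time-series collection. The connection URI, topic, database, collection, and field names are illustrative assumptions, not values from the original text; the timeseries.* properties are the ones named above.

```json
{
  "name": "mongo-sink-stockdata",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "connection.uri": "mongodb://localhost:27017",
    "topics": "stockdata",
    "database": "Stocks",
    "collection": "StockData",
    "timeseries.timefield": "tx_time",
    "timeseries.timefield.auto.convert": "true",
    "timeseries.expire.after.seconds": "86400"
  }
}
```

With timeseries.timefield set, the connector creates the named collection as a time-series collection if it does not already exist; timeseries.timefield.auto.convert tells it to convert string timestamps in incoming messages into BSON dates, and timeseries.expire.after.seconds controls automatic expiry of old data.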
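As an example of the window functions mentioned above, the following mongosh snippet sketches a $setWindowFields stage computing a rolling average. The collection and field names (stockData, symbol, price, tx_time) are illustrative assumptions; the stage syntax is that of MongoDB 5.0+.

```javascript
db.stockData.aggregate([
  {
    $setWindowFields: {
      partitionBy: "$symbol",            // one window per stock symbol
      sortBy: { tx_time: 1 },            // order documents by time within each partition
      output: {
        averagePrice: {
          $avg: "$price",
          window: { documents: [-4, 0] } // rolling average over the current and 4 preceding documents
        }
      }
    }
  }
])
```

Each output document gains an averagePrice field holding the five-document rolling average, without any client-side post-processing.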
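For converting an existing collection, a source connector can stream the legacy collection into a Kafka topic, from which a sink connector configured with the time-series parameters writes into a new time-series collection. The following source configuration is a sketch with assumed names; copy.existing replays the collection's current contents and publish.full.document.only emits just the document body rather than the full change-stream event.

```json
{
  "name": "mongo-source-stockdata",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://localhost:27017",
    "database": "Stocks",
    "collection": "StockDataLegacy",
    "copy.existing": "true",
    "publish.full.document.only": "true"
  }
}
```

Because the initial release of time-series collections accepts only inserts, this pattern is a one-way migration: updates or deletes made to the source collection after the copy will not be reflected in the destination.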