Content Deep Dive

Data Pipelining with InfluxDB: Part 1

Blog post from InfluxData

Post Details
Company: InfluxData
Date Published: -
Author: Anais Dotis-Georgiou
Word Count: 1,641
Language: English
Hacker News Points: -
Summary

This blog post discusses how to build a data pipeline using Kafka, Faust, and InfluxDB. It provides an overview of Kafka and Faust, demonstrates how to use the Telegraf Kafka Consumer Input Plugin to read data from a Kafka topic and write it to InfluxDB, and shares an example pipeline that ties everything together. The post also touches on using InfluxDB for real-time monitoring in the Industrial Internet of Things (IIoT) and highlights how traditional modeling techniques can be sufficient without resorting to machine learning or deep learning solutions.
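The full configuration and code live in the original post; purely as an illustration of the kind of pipeline it describes, the sketch below shows a minimal Faust agent that consumes records from one Kafka topic and republishes cleaned records to a second topic, which Telegraf's Kafka Consumer Input Plugin could then read and write into InfluxDB. The broker address, topic names, and the SensorReading fields are placeholder assumptions, not details taken from the post.

```python
import faust

# Placeholder Faust app pointed at a local Kafka broker (assumption).
app = faust.App("sensor-pipeline", broker="kafka://localhost:9092")

# Hypothetical record schema for incoming sensor data.
class SensorReading(faust.Record):
    machine_id: str
    temperature: float
    timestamp: float

# Source topic with raw readings and destination topic that Telegraf would consume.
raw_topic = app.topic("raw-sensors", value_type=SensorReading)
clean_topic = app.topic("clean-sensors", value_type=SensorReading)

@app.agent(raw_topic)
async def clean(readings):
    # Drop incomplete readings and forward the rest downstream.
    async for reading in readings:
        if reading.temperature is not None:
            await clean_topic.send(value=reading)

if __name__ == "__main__":
    app.main()
```

In this arrangement Faust handles the stream processing between Kafka topics, while Telegraf, configured with the kafka_consumer input and an InfluxDB output, takes care of persisting the cleaned stream for monitoring and querying.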