My main use case for InfluxDB was a LEO satellite KPI monitoring application, where I gathered latency, throughput, packet loss, jitter, and other network data from several probes. We had around a lakh (100,000) probes, and I needed to collect measurements from all of them and store them in a database. I chose InfluxDB because it is a highly reliable, purpose-built database for storing and analyzing real-time network and performance metrics. It served as the core data store for the latency, jitter, packet loss, and throughput KPIs I collected using tools such as iperf3, MTR, and custom Python scripts. Its strongest advantage was its ability to ingest high-frequency metric data: the JSON-based payloads generated by my automation scripts were written efficiently using the Influx line protocol, enabling near real-time visibility into performance bottlenecks.
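As a rough illustration of that ingestion path, here is a minimal Python sketch that converts a JSON-style probe payload into Influx line protocol and posts it to the InfluxDB 1.x HTTP write endpoint. The measurement name, tag keys, field names, payload shape, and endpoint details are all assumptions for the example, not details from my actual setup:

```python
import time
import requests

# Hypothetical endpoint and database name; adjust to your deployment.
INFLUX_WRITE_URL = "http://localhost:8086/write"
INFLUX_DB = "network_kpis"

def to_line_protocol(payload: dict) -> str:
    """Convert a JSON-style probe payload into one line-protocol point.

    Assumed payload shape:
      {"probe_id": "probe-00042", "region": "us-east",
       "latency_ms": 38.2, "jitter_ms": 1.7,
       "packet_loss_pct": 0.3, "throughput_mbps": 94.5}
    """
    tags = f"probe_id={payload['probe_id']},region={payload['region']}"
    fields = ",".join(
        f"{k}={payload[k]}"
        for k in ("latency_ms", "jitter_ms", "packet_loss_pct", "throughput_mbps")
    )
    ts = time.time_ns()  # nanosecond precision, the line-protocol default
    return f"network_kpi,{tags} {fields} {ts}"

def write_metric(payload: dict) -> None:
    resp = requests.post(
        INFLUX_WRITE_URL,
        params={"db": INFLUX_DB, "precision": "ns"},
        data=to_line_protocol(payload).encode(),
        timeout=5,
    )
    resp.raise_for_status()  # InfluxDB returns 204 No Content on success

if __name__ == "__main__":
    write_metric({
        "probe_id": "probe-00042", "region": "us-east",
        "latency_ms": 38.2, "jitter_ms": 1.7,
        "packet_loss_pct": 0.3, "throughput_mbps": 94.5,
    })
```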
For further integration with my tools and scripts, I used Telegraf and Chronograf alongside InfluxDB, since InfluxDB was the database into which I ingested all the data, including throughput, latency, packet loss, and jitter. Although I don't remember every network data type involved, the main problem was the sheer volume of data. InfluxDB is a highly scalable database, but the main challenge, as with any database, was handling a very high-throughput message flow. To solve this, I added Kafka in front of InfluxDB: probe metrics were published to Kafka topics, which buffered the message flow and resolved the high-throughput problem, as the sketch below illustrates.
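A minimal sketch of that buffering pattern, assuming the kafka-python client; the broker address, topic name, payload fields, and batch size are illustrative assumptions, not values from the original deployment:

```python
import json
import requests
from kafka import KafkaConsumer, KafkaProducer

BROKERS = ["localhost:9092"]                     # hypothetical broker address
TOPIC = "probe-metrics"                          # hypothetical topic name
INFLUX_WRITE_URL = "http://localhost:8086/write"
INFLUX_DB = "network_kpis"

def publish(payload: dict) -> None:
    """Producer side: probes publish JSON payloads to a Kafka topic instead
    of writing to InfluxDB directly, so ingest bursts land on the topic."""
    producer = KafkaProducer(
        bootstrap_servers=BROKERS,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send(TOPIC, payload)
    producer.flush()

def drain_to_influx(batch_size: int = 500) -> None:
    """Consumer side: a worker drains the topic and writes to InfluxDB in
    batches, smoothing the write rate the database actually sees."""
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        group_id="influx-writers",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    batch = []
    for message in consumer:
        p = message.value
        batch.append(
            f"network_kpi,probe_id={p['probe_id']} latency_ms={p['latency_ms']}"
        )
        if len(batch) >= batch_size:
            # One HTTP write per batch: the line protocol accepts
            # newline-separated points in a single request body.
            requests.post(
                INFLUX_WRITE_URL,
                params={"db": INFLUX_DB},
                data="\n".join(batch).encode(),
                timeout=10,
            ).raise_for_status()
            batch.clear()
```

In practice, a Telegraf Kafka consumer input could replace the hand-rolled consumer above; the point of the sketch is only the decoupling, with Kafka absorbing bursts from roughly 100,000 probes while InfluxDB receives a steady batched stream.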