MQTT and Redis Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Note: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider the MQTT and InfluxDB integration.

- 5B+ Telegraf downloads
- #1 time series database (source: DB-Engines)
- 1B+ downloads of InfluxDB
- 2,800+ contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The MQTT plugin reads from the specified topics and creates metrics using the supported input data formats.

The RedisTimeSeries output plugin publishes metrics efficiently to a Redis server running the RedisTimeSeries module.

Integration details

MQTT

This plugin allows Telegraf to consume metrics from specified MQTT topics. It supports a variety of configuration options to connect to MQTT brokers and manage message subscriptions, including features for handling startup errors and using TLS for secure connections.

Redis

The RedisTimeSeries output plugin writes metrics to a Redis server running the RedisTimeSeries module, so they can be stored and queried as native Redis time series.

Configuration

MQTT


[[inputs.mqtt_consumer]]
  ## Broker URLs for the MQTT server or cluster
  servers = ["tcp://127.0.0.1:1883"]

  ## Topics that will be subscribed to
  topics = [
    "telegraf/host01/cpu",
    "telegraf/+/mem",
    "sensors/#",
  ]

  ## Tag in which to store the message topic; an empty string disables the tag
  # topic_tag = "topic"

  ## QoS policy for messages (0 = at most once, 1 = at least once, 2 = exactly once)
  # qos = 0

  ## Connection and keepalive settings
  # connection_timeout = "30s"
  # keepalive = "60s"
  # ping_timeout = "10s"

  ## Maximum messages read from the broker that have not yet been written by an output
  # max_undelivered_messages = 1000

  ## Persistent session (requires a unique client_id)
  # persistent_session = false
  # client_id = ""

  ## Credentials for the MQTT broker
  # username = "telegraf"
  # password = "metricsmetricsmetricsmetrics"

  ## Optional TLS configuration
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain and host verification
  # insecure_skip_verify = false

  ## Enable extra client tracing in the logs
  # client_trace = false

  ## Data format of incoming messages
  data_format = "influx"

  ## Extract measurement, tags, and fields from the topic path
  # [[inputs.mqtt_consumer.topic_parsing]]
  #   topic = ""
  #   measurement = ""
  #   tags = ""
  #   fields = ""
  #   [inputs.mqtt_consumer.topic_parsing.types]
  #      key = type

Redis

[[outputs.redistimeseries]]
  ## The address of the RedisTimeSeries server.
  address = "127.0.0.1:6379"

  ## Redis ACL credentials
  # username = ""
  # password = ""
  # database = 0

  ## Timeout for operations such as ping or sending metrics
  # timeout = "10s"

  ## Enable attempt to convert string fields to numeric values
  ## If "false" or in case the string value cannot be converted the string
  ## field will be dropped.
  # convert_string_fields = true

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  # insecure_skip_verify = false

Input and output integration examples

MQTT

  1. Basic Configuration: Connect to a local MQTT broker, subscribe to specific topics for CPU and memory metrics, and parse the payloads using the Influx line protocol data format, as shown in the Configuration section above.

  2. Topic Parsing: Extract measurement, tag, and field values from the MQTT topic path so metrics are categorized by where they were published (see the topic-parsing sketch after this list).

  3. Field Pivoting: Pivot single-value metrics into a multi-field metric, which is useful for consolidating readings from multiple sensors (see the pivoting sketch after this list).
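
A minimal sketch of topic parsing (example 2). The telegraf/<host>/cpu/<core> topic layout and the host and core names are illustrative assumptions, not plugin defaults; segments named in measurement, tags, or fields are taken from the matching position in the topic, and "_" skips a segment.

[[inputs.mqtt_consumer]]
  servers = ["tcp://127.0.0.1:1883"]
  ## Assumed topic layout: telegraf/<host>/cpu/<core>
  topics = ["telegraf/+/cpu/+"]
  data_format = "value"
  data_type = "float"

  [[inputs.mqtt_consumer.topic_parsing]]
    topic = "telegraf/+/cpu/+"
    ## Third segment ("cpu") becomes the measurement name
    measurement = "_/_/measurement/_"
    ## Second segment becomes the "host" tag
    tags = "_/host/_/_"
    ## Fourth segment becomes the "core" field
    fields = "_/_/_/core"
    [inputs.mqtt_consumer.topic_parsing.types]
      core = "int"

A message published to telegraf/host01/cpu/0 with payload 42.5 would then produce something like cpu,host=host01 core=0i,value=42.5.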
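
And a sketch of field pivoting (example 3), assuming a sensors/<location>/<sensor> topic layout and numeric payloads: the value parser emits a generic value field, the sensor name is parsed from the topic as a tag, and the pivot processor turns that tag into the field name. The layout and tag names are assumptions for illustration.

[[inputs.mqtt_consumer]]
  servers = ["tcp://127.0.0.1:1883"]
  ## Assumed topic layout: sensors/<location>/<sensor>, e.g. sensors/room1/temperature
  topics = ["sensors/+/+"]
  data_format = "value"
  data_type = "float"

  [[inputs.mqtt_consumer.topic_parsing]]
    topic = "sensors/+/+"
    measurement = "measurement/_/_"
    tags = "_/location/sensor"

## Replace the generic "value" field with a field named after the "sensor" tag,
## e.g. temperature=21.5 instead of value=21.5
[[processors.pivot]]
  tag_key = "sensor"
  value_key = "value"

Metrics that share tags and timestamps could then be combined into a single multi-field metric with the merge aggregator (aggregators.merge).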

Redis

  1. Metrics Storage: Use the RedisTimeSeries output plugin to store time series metrics collected from various sources directly in a Redis database for quick retrieval and analysis.
  2. Dynamic Configuration: Adjust the address, credentials, and database settings per deployment environment to publish metrics to different Redis instances.
  3. String Field Conversion: Enable the convert_string_fields option to convert string fields to numeric values where possible, so data is stored in a type RedisTimeSeries can ingest (see the sketch after this list).
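
As a combined sketch of items 1 and 3, using only options shown in the Configuration section above (the broker address, topics, and Redis address are assumptions for a local setup): MQTT messages in line protocol are consumed and written to RedisTimeSeries with string-to-number conversion enabled.

[[inputs.mqtt_consumer]]
  servers = ["tcp://127.0.0.1:1883"]
  topics = ["sensors/#"]
  data_format = "influx"

[[outputs.redistimeseries]]
  address = "127.0.0.1:6379"
  ## Convert string fields to numeric values where possible; fields that
  ## cannot be converted are dropped (see the configuration notes above)
  convert_string_fields = true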

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.

Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
