NATS and Loki Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Note: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider NATS and InfluxDB.

5B+ Telegraf downloads
#1 time series database (source: DB-Engines)
1B+ downloads of InfluxDB
2,800+ contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data, and InfluxDB is the #1 time series platform, built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The NATS Consumer Input Plugin enables real-time data consumption from NATS messaging subjects, integrating seamlessly into the Telegraf data pipeline for monitoring and metrics gathering.

The Loki plugin allows users to send logs to Loki for aggregation and querying, leveraging Loki’s efficient storage capabilities.

Integration details

NATS

The NATS Consumer Plugin allows Telegraf to read metrics from specified NATS subjects and create metrics based on supported input data formats. Utilizing a Queue Group allows multiple instances of Telegraf to read from a NATS cluster in parallel, enhancing throughput and reliability. This plugin also supports various authentication methods, including username/password, NATS credentials files, and nkey seed files, ensuring secure communication with the NATS servers. It is particularly useful in environments where data persistence and message reliability are critical, thanks to features such as JetStream that facilitate the consumption of historical messages. Additionally, the ability to configure various operational parameters makes this plugin suitable for high-throughput scenarios while maintaining performance integrity.
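
For example, with data_format = "influx" (as in the configuration below), each NATS message payload is expected to be InfluxDB line protocol and is parsed into an ordinary Telegraf metric. The subject name and payload here are illustrative:

  # Hypothetical payload published to the "telegraf" subject
  cpu,host=web01,region=us-east usage_idle=92.5,usage_user=4.1 1700000000000000000

  # Telegraf parses this into a metric named "cpu" with tags
  # host and region and fields usage_idle and usage_user.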

Loki

This Loki plugin integrates with Grafana Loki, a powerful log aggregation system. By sending logs in a format compatible with Loki, this plugin allows for efficient storage and querying of logs. Each log entry is structured in a key-value format where keys represent the field names and values represent the corresponding log information. The sorting of logs by timestamp ensures that the log streams maintain chronological order when queried through Loki. This plugin’s support for secrets makes it easier to manage authentication parameters securely, while options for HTTP headers, gzip encoding, and TLS configuration enhance the adaptability and security of log transmission, fitting various deployment needs.
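
As an illustration of that key-value structure (the metric below is hypothetical), the plugin turns a metric's tags into Loki stream labels and serializes its fields into the log line as key="value" pairs; with the default metric_name_label setting, the metric name travels in the __name label:

  # Hypothetical Telegraf metric (line protocol):
  #   syslog,host=web01,severity=err message="disk full",facility_code=3i 1700000000000000000
  #
  # Sent to Loki roughly as:
  #   labels: {__name="syslog", host="web01", severity="err"}
  #   line:   facility_code="3" message="disk full"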

Configuration

NATS

[[inputs.nats_consumer]]
  ## URLs of NATS servers
  servers = ["nats://localhost:4222"]

  ## subject(s) to consume
  ## If you use JetStream, you need to set the subjects
  ## in jetstream_subjects instead
  subjects = ["telegraf"]

  ## jetstream subjects
  ## JetStream is a streaming technology inside of NATS.
  ## With JetStream the nats-server persists messages, so
  ## a consumer can consume historical messages and Telegraf
  ## does not miss messages when it restarts. You need to
  ## configure the nats-server:
  ## https://docs.nats.io/nats-concepts/jetstream.
  jetstream_subjects = ["js_telegraf"]

  ## name a queue group
  queue_group = "telegraf_consumers"

  ## Optional authentication with username and password credentials
  # username = ""
  # password = ""

  ## Optional authentication with NATS credentials file (NATS 2.0)
  # credentials = "/etc/telegraf/nats.creds"

  ## Optional authentication with nkey seed file (NATS 2.0)
  # nkey_seed = "/etc/telegraf/seed.txt"

  ## Use Transport Layer Security
  # secure = false

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Sets the limits for pending msgs and bytes for each subscription
  ## These shouldn't need to be adjusted except in very high throughput scenarios
  # pending_message_limit = 65536
  # pending_bytes_limit = 67108864

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are delivered
  ## to outputs before acknowledging them to the original broker so that
  ## data is not lost. This option sets the maximum number of messages
  ## to read from the broker that have not yet been written by an output.
  ##
  ## This value should be chosen with the agent's metric_batch_size
  ## in mind. Setting max undelivered messages too high can result in
  ## a constant stream of data batches to the output, while setting it
  ## too low may prevent the broker's messages from ever being flushed.
  # max_undelivered_messages = 1000

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
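
Because max_undelivered_messages interacts with the agent's batch size, a common starting point is to keep the two values aligned. A minimal sketch with illustrative values; the agent settings shown are standard Telegraf options, not specific to this plugin:

[agent]
  ## Flush once 1000 metrics have accumulated; keeping this equal to
  ## max_undelivered_messages below avoids batches that never fill or
  ## a backlog that never drains.
  metric_batch_size = 1000
  flush_interval = "10s"

[[inputs.nats_consumer]]
  servers = ["nats://localhost:4222"]
  subjects = ["telegraf"]
  data_format = "influx"
  max_undelivered_messages = 1000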

Loki

[[outputs.loki]]
  ## The domain of Loki
  domain = "https://loki.domain.tld"

  ## Endpoint of the write API
  # endpoint = "/loki/api/v1/push"

  ## Connection timeout, defaults to "5s" if not set.
  # timeout = "5s"

  ## Basic auth credential
  # username = "loki"
  # password = "pass"

  ## Additional HTTP headers
  # http_headers = {"X-Scope-OrgID" = "1"}

  ## If the request must be gzip encoded
  # gzip_request = false

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"

  ## Sanitize Tag Names
  ## If true, all characters in tag names that do not match the regex
  ## ^[a-zA-Z_:][a-zA-Z0-9_:]* will be replaced with underscores.
  # sanitize_label_names = false

  ## Metric Name Label
  ## Label to use for the metric name when sending metrics. If set to an
  ## empty string, the label is not added; this is not recommended, as
  ## there is then no way to differentiate between multiple metrics.
  # metric_name_label = "__name"
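
Putting the two halves together, a single Telegraf instance can bridge NATS to Loki. A minimal end-to-end sketch; the URLs and subject name are placeholders:

[[inputs.nats_consumer]]
  servers = ["nats://localhost:4222"]
  subjects = ["logs"]
  queue_group = "telegraf_consumers"
  data_format = "influx"

[[outputs.loki]]
  ## Tags on the consumed metrics become Loki labels;
  ## fields become the log line.
  domain = "http://localhost:3100"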

Input and output integration examples

NATS

  1. Real-Time Analytics Dashboard: Utilize the NATS plugin to gather metrics from various NATS subjects in real time and feed them into a centralized analytics dashboard. This setup allows for immediate visibility into live application performance, enabling teams to react swiftly to operational issues or performance degradation.

  2. Distributed System Monitoring: Deploy multiple instances of Telegraf configured with the NATS plugin across a distributed architecture. This approach allows teams to aggregate metrics from various microservices efficiently, providing a holistic view of system health and performance while reducing the risk of dropped messages in transit.

  3. Historical Message Recovery: Leverage the capabilities of NATS JetStream along with this plugin to recover and process historical messages after Telegraf has been restarted. This feature is particularly beneficial for applications that require high reliability, ensuring that no critical metrics are lost even in case of service disruptions.

  4. Dynamic Load Balancing: Implement a dynamic load balancing scenario where Telegraf instances consume messages from a NATS cluster based on load. Adjust the queue group settings to control the number of active consumers, allowing for better resource utilization and performance scaling as demand fluctuates (see the configuration sketch after this list).
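
For the load-balancing scenario in example 4, the key is that every Telegraf instance subscribes with the same queue_group: NATS then delivers each message to exactly one member of the group, so adding or removing instances rescales throughput without duplicating data. A sketch of the configuration shared by all instances; the server URLs are placeholders:

[[inputs.nats_consumer]]
  servers = ["nats://nats-1:4222", "nats://nats-2:4222"]
  subjects = ["telegraf"]
  ## Every instance joins the same queue group, so NATS balances
  ## messages across them instead of broadcasting to each one.
  queue_group = "telegraf_consumers"
  data_format = "influx"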

Loki

  1. Centralized Logging for Microservices: Utilize the Loki plugin to gather logs from multiple microservices running in a Kubernetes cluster. By directing logs to a centralized Loki instance, developers can monitor, search, and analyze logs from all services in one place, facilitating easier troubleshooting and performance monitoring. This setup streamlines operations and supports rapid response to issues across distributed applications.

  2. Real-Time Log Anomaly Detection: Combine Loki with monitoring tools to analyze log outputs in real-time for unusual patterns that could indicate system errors or security threats. Implementing anomaly detection on log streams enables teams to proactively identify and respond to incidents, thereby improving system reliability and enhancing security postures.

  3. Enhanced Log Processing with Gzip Compression: Configure the Loki plugin to utilize gzip compression for log transmission. This approach can reduce bandwidth usage and improve transmission speeds, especially beneficial in environments where network bandwidth may be a constraint. It’s particularly useful for high-volume logging applications where every byte counts and performance is critical.

  4. Multi-Tenancy Support with Custom Headers: Leverage the ability to add custom HTTP headers to segregate logs from different tenants in a multi-tenant application environment. By using the Loki plugin to send different headers for each tenant, operators can ensure proper log management and compliance with data isolation requirements, making it a versatile solution for SaaS applications (see the configuration sketch after this list).
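
For the multi-tenancy pattern in example 4, Loki identifies tenants by the X-Scope-OrgID header. A sketch of one per-tenant output section; the tenant ID is a placeholder, and running one such section per tenant keeps each tenant's logs isolated:

[[outputs.loki]]
  domain = "https://loki.domain.tld"
  ## Route this output's logs to a single Loki tenant.
  http_headers = {"X-Scope-OrgID" = "tenant-a"}
  ## Optionally compress the payload, as in example 3.
  gzip_request = true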

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.

Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration