NSQ and PostgreSQL Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Note: This is not the recommended configuration for real-time querying at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider NSQ and InfluxDB.

5B+ Telegraf downloads
#1 Time series database (Source: DB Engines)
1B+ Downloads of InfluxDB
2,800+ Contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The NSQ Telegraf plugin reads metrics from NSQD, the daemon at the core of the NSQ messaging system, allowing for real-time data processing and monitoring.

The Telegraf PostgreSQL plugin allows you to efficiently write metrics to a PostgreSQL database while automatically managing the database schema.

Integration details

NSQ

The NSQ plugin interfaces with NSQ, a real-time messaging platform, enabling the reading of messages from NSQD. This plugin is categorized as a service plugin, meaning it actively listens for metrics and events rather than polling them at regular intervals. With an emphasis on reliability, it prevents data loss by tracking undelivered messages until they are acknowledged by outputs. The plugin allows for configurations such as specifying NSQLookupd endpoints, topics, and channels, and it supports multiple data formats for flexibility in data handling.
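
As a sketch of that format flexibility, the consumer below reads JSON payloads instead of line protocol. The topic name and the json_string_fields value are hypothetical; the full set of parser options is described in the data formats documentation linked from the sample configuration below.

[[inputs.nsq_consumer]]
  ## Discover nsqd nodes through nsqlookupd rather than connecting directly
  nsqlookupd = ["localhost:4161"]
  ## Hypothetical topic carrying JSON payloads
  topic = "app-events"
  channel = "telegraf"
  ## Parse each message body as JSON, keeping "status" as a string field
  data_format = "json"
  json_string_fields = ["status"]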

PostgreSQL

The PostgreSQL plugin enables users to write metrics to a PostgreSQL database or a compatible database, providing robust support for schema management by automatically updating missing columns. The plugin is designed to facilitate integration with monitoring solutions, allowing users to efficiently store and manage time series data. It offers configurable options for connection settings, concurrency, and error handling, and supports advanced features such as JSONB storage for tags and fields, foreign key tagging, templated schema modifications, and support for unsigned integer data types through the pguint extension.
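
As an illustration of the JSONB mode mentioned above, a minimal output block might look like the following sketch; the connection values are placeholders to adapt to your environment.

[[outputs.postgresql]]
  ## Placeholder credentials; adjust for your environment
  connection = "host=localhost user=telegraf password=telegraf dbname=metrics"
  ## Collapse all tags and all fields into single JSONB columns
  ## instead of one column per tag/field
  tags_as_jsonb = true
  fields_as_jsonb = true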

Configuration

NSQ

# Read metrics from NSQD topic(s)
[[inputs.nsq_consumer]]
  ## The server option still works but is deprecated; it is prepended to the nsqd array.
  # server = "localhost:4150"

  ## An array representing the NSQD TCP Endpoints
  nsqd = ["localhost:4150"]

  ## An array representing the NSQLookupd HTTP Endpoints
  nsqlookupd = ["localhost:4161"]
  topic = "telegraf"
  channel = "consumer"
  max_in_flight = 100

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are read to
  ## outputs before acknowledging them to the original broker to ensure data
  ## is not lost. This option sets the maximum messages to read from the
  ## broker that have not been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too high
  ## can result in a constant stream of data batches to the output, while
  ## setting it too low may mean the broker's messages are never flushed.
  # max_undelivered_messages = 1000

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
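
To make the batching guidance in the comments above concrete, one reasonable pairing (the values are illustrative, not prescriptive) keeps the agent's metric_batch_size at or below max_undelivered_messages, so a full batch can always be flushed and acknowledged back to NSQ:

[agent]
  ## Flush batches of up to 1000 metrics every 10 seconds
  metric_batch_size = 1000
  flush_interval = "10s"

[[inputs.nsq_consumer]]
  nsqd = ["localhost:4150"]
  topic = "telegraf"
  channel = "consumer"
  data_format = "influx"
  ## Allow exactly one full batch of unacknowledged messages in flight
  max_undelivered_messages = 1000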

PostgreSQL

# Publishes metrics to a postgresql database
[[outputs.postgresql]]
  ## Specify connection address via the standard libpq connection string:
  ##   host=... user=... password=... sslmode=... dbname=...
  ## Or a URL:
  ##   postgres://[user[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
  ## See https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
  ##
  ## All connection parameters are optional. Environment vars are also supported.
  ## e.g. PGPASSWORD, PGHOST, PGUSER, PGDATABASE
  ## All supported vars can be found here:
  ##  https://www.postgresql.org/docs/current/libpq-envars.html
  ##
  ## Non-standard parameters:
  ##   pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
  ##   pool_min_conns (default: 0) - Minimum size of connection pool.
  ##   pool_max_conn_lifetime (default: 0s) - Maximum age of a connection before closing.
  ##   pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
  ##   pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
  # connection = ""

  ## Postgres schema to use.
  # schema = "public"

  ## Store tags as foreign keys in the metrics table. Default is false.
  # tags_as_foreign_keys = false

  ## Suffix to append to table name (measurement name) for the foreign tag table.
  # tag_table_suffix = "_tag"

  ## Deny inserting metrics if the foreign tag can't be inserted.
  # foreign_tag_constraint = false

  ## Store all tags as a JSONB object in a single 'tags' column.
  # tags_as_jsonb = false

  ## Store all fields as a JSONB object in a single 'fields' column.
  # fields_as_jsonb = false

  ## Name of the timestamp column
  ## NOTE: Some tools (e.g. Grafana) require the default name so be careful!
  # timestamp_column_name = "time"

  ## Type of the timestamp column
  ## Currently, "timestamp without time zone" and "timestamp with time zone"
  ## are supported
  # timestamp_column_type = "timestamp without time zone"

  ## Templated statements to execute when creating a new table.
  # create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }})''',
  # ]

  ## Templated statements to execute when adding columns to a table.
  ## Set to an empty list to disable. Points containing tags for which there is no column will be skipped. Points
  ## containing fields for which there is no column will have the field omitted.
  # add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## Templated statements to execute when creating a new tag table.
  # tag_table_create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }}, PRIMARY KEY (tag_id))''',
  # ]

  ## Templated statements to execute when adding columns to a tag table.
  ## Set to an empty list to disable. Points containing tags for which there is no column will be skipped.
  # tag_table_add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## The postgres data type to use for storing unsigned 64-bit integer values (Postgres does not have a native
  ## unsigned 64-bit integer type).
  ## The value can be one of:
  ##   numeric - Uses the PostgreSQL "numeric" data type.
  ##   uint8 - Requires pguint extension (https://github.com/petere/pguint)
  # uint64_type = "numeric"

  ## When pool_max_conns > 1 and a temporary error occurs, the query is
  ## retried with an incremental backoff. This controls the maximum backoff
  ## duration.
  # retry_max_backoff = "15s"

  ## Approximate number of tag IDs to store in the in-memory cache (when using tags_as_foreign_keys).
  ## This is an optimization to skip inserting known tag IDs.
  ## Each entry consumes approximately 34 bytes of memory.
  # tag_cache_size = 100000

  ## Enable & set the log level for the Postgres driver.
  # log_level = "warn" # trace, debug, info, warn, error, none
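
Putting the two halves together, a minimal telegraf.conf that wires the NSQ consumer to the PostgreSQL output could look like the sketch below; the host names, credentials, and database name are assumptions to adapt.

# Sketch: consume line protocol from NSQ and write it to PostgreSQL
[agent]
  flush_interval = "10s"

[[inputs.nsq_consumer]]
  nsqlookupd = ["localhost:4161"]
  topic = "telegraf"
  channel = "consumer"
  data_format = "influx"

[[outputs.postgresql]]
  ## Placeholder connection string
  connection = "host=localhost user=telegraf password=telegraf dbname=metrics"
  ## One table per measurement, with tags stored in a companion "_tag" table
  tags_as_foreign_keys = true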

Input and output integration examples

NSQ

  1. Real-Time Analytics Dashboard: Integrate this plugin with a visualization tool to create a dashboard that displays real-time metrics from various topics in NSQ. By subscribing to specific topics, users can monitor system health and application performance dynamically, allowing for immediate insights and timely responses to any anomalies.

  2. Event-Driven Automation: Combine NSQ with a serverless architecture to trigger automated workflows based on incoming messages. This use case could involve processing data for machine learning models or responding to user actions in applications, thus streamlining operations and enhancing user experience through rapid processing.

  3. Multi-Service Communication Hub: Use the NSQ plugin to act as a centralized messaging hub among different microservices in a distributed architecture. By enabling services to communicate through NSQ, developers can ensure reliable message delivery while maintaining decoupled service interactions, significantly improving scalability and resilience.

  4. Metrics Aggregation for Enhanced Monitoring: Implement the NSQ plugin to aggregate metrics from multiple sources before sending them to an analytics tool. This setup enables businesses to consolidate data from various applications and services, creating a unified view for better decision-making and strategic planning. A configuration sketch for this pattern appears after this list.
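
A rough sketch of the aggregation pattern in example 4: Telegraf merges all inputs into a single metric stream, so aggregating several NSQ topics only requires one consumer block per topic. The topic names here are hypothetical.

[[inputs.nsq_consumer]]
  nsqd = ["localhost:4150"]
  ## Hypothetical topic published by web services
  topic = "web-metrics"
  channel = "telegraf"
  data_format = "influx"

[[inputs.nsq_consumer]]
  nsqd = ["localhost:4150"]
  ## Hypothetical topic published by background workers
  topic = "worker-metrics"
  channel = "telegraf"
  data_format = "influx"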

PostgreSQL

  1. Real-Time Analytics with Complex Queries: Leverage the PostgreSQL plugin to store metrics from various sources in a PostgreSQL database, enabling real-time analytics using complex queries. This setup can help data scientists and analysts uncover patterns and trends as they manipulate relational data across multiple tables while utilizing PostgreSQL’s robust query optimization features. Specifically, users can create sophisticated reports with JOIN operations across different metric tables, revealing insights that would otherwise remain hidden.

  2. Integrating with TimescaleDB for Time-Series Data: Utilize the PostgreSQL plugin within a TimescaleDB instance to efficiently handle and analyze time-series data. By implementing hypertables, users can partition data over the time dimension and achieve greater query performance. This integration allows users to run analytical queries over large amounts of time-series data while retaining the full power of PostgreSQL’s SQL queries, ensuring reliability and efficiency in metrics analysis. A template sketch for this setup appears after this list.

  3. Data Versioning and Historical Analysis: Implement a strategy using the PostgreSQL plugin to maintain different versions of metrics over time. Users can set up an immutable data table structure where older versions of tables are retained, enabling easy historical analysis. This approach not only provides insights into data evolution but also aids compliance with data retention policies, ensuring that the historical integrity of the datasets remains intact.

  4. Dynamic Schema Management for Evolving Metrics: Use the plugin’s templating capabilities to create a dynamically changing schema that responds to metric variations. This use case allows organizations to adapt their data structure as metrics evolve, adding necessary fields and ensuring adherence to data integrity policies. By leveraging templated SQL commands, users can extend their database without manual intervention, facilitating agile data management practices.
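
As a sketch of the TimescaleDB setup in example 2, the create_templates option can run a create_hypertable call after each table is created. The one-week chunk interval is an illustrative choice, not a recommendation.

[[outputs.postgresql]]
  ## Placeholder connection string
  connection = "host=localhost user=telegraf dbname=metrics"
  ## Create each measurement table, then convert it into a hypertable
  ## partitioned on the "time" column
  create_templates = [
    '''CREATE TABLE {{ .table }} ({{ .columns }})''',
    '''SELECT create_hypertable({{ .table|quoteLiteral }}, 'time', chunk_time_interval => INTERVAL '1 week')''',
  ]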

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.

Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration