Syslog and PostgreSQL Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Note: This is not the recommended configuration for real-time queries at scale. For query and compression optimization, high-speed ingest, and high availability, consider Syslog and InfluxDB.

5B+ Telegraf downloads · #1 time series database (source: DB-Engines) · 1B+ downloads of InfluxDB · 2,800+ contributors


Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data, and InfluxDB is the #1 time series platform, built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The Syslog plugin collects syslog messages from various sources over standard network protocols. This capability is critical in environments where systems must be monitored and logged efficiently.

The Telegraf PostgreSQL plugin allows you to efficiently write metrics to a PostgreSQL database while automatically managing the database schema.

Integration details

Syslog

The Syslog plugin for Telegraf captures syslog messages transmitted over protocols such as TCP, UDP, and TLS. It supports both RFC 5424 (the newer syslog protocol) and the older RFC 3164 (BSD syslog protocol). The plugin operates as a service input: it starts a listener for incoming syslog messages rather than polling on an interval, so standard interval settings and CLI options such as --once do not apply. Configuration options cover network settings, socket permissions, message handling, and connection handling. The plugin also integrates with Rsyslog, which can forward log messages to it, making it a practical way to collect and relay system logs in real time and feed them into monitoring and logging systems.
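
For the Rsyslog integration mentioned above, a minimal forwarding rule might look like the sketch below. The file path, target address, and port are assumptions and must match the plugin's server setting; RSYSLOG_SyslogProtocol23Format emits RFC 5424-style messages, and octet-counted framing matches the plugin's default.

# /etc/rsyslog.d/50-telegraf.conf (hypothetical path)
# Forward all messages to the Telegraf syslog listener over TCP
action(type="omfwd"
       target="127.0.0.1"
       port="6514"
       protocol="tcp"
       TCP_Framing="octet-counted"
       template="RSYSLOG_SyslogProtocol23Format")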

PostgreSQL

The PostgreSQL plugin enables users to write metrics to a PostgreSQL or compatible database, with robust schema management that automatically adds missing columns. The plugin is designed for integration with monitoring solutions, letting users store and manage time series data efficiently. It offers configurable options for connection settings, concurrency, and error handling, and supports advanced features such as JSONB storage for tags and fields, foreign-key tag tables, templated schema modifications, and unsigned 64-bit integer support through the pguint extension.
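
To make the automatic schema management concrete, the sketch below shows roughly what the plugin would create by default for a hypothetical syslog measurement with a hostname tag and a message field, plus the companion table used when tags_as_foreign_keys is enabled. Column types follow the plugin's default mappings; the exact DDL may differ.

-- Default layout: one table per measurement, tags and fields as columns
CREATE TABLE "syslog" (
  "time" timestamp without time zone,
  "hostname" text,   -- tag column
  "message" text     -- field column
);

-- With tags_as_foreign_keys = true, tags move to a companion
-- "syslog_tag" table and the metrics table stores a tag_id instead
CREATE TABLE "syslog_tag" (
  "tag_id" bigint,
  "hostname" text,
  PRIMARY KEY ("tag_id")
);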

Configuration

Syslog

[[inputs.syslog]]
  ## Protocol, address and port to host the syslog receiver.
  ## If no host is specified, then localhost is used.
  ## If no port is specified, 6514 is used (RFC5425#section-4.1).
  ##   ex: server = "tcp://localhost:6514"
  ##       server = "udp://:6514"
  ##       server = "unix:///var/run/telegraf-syslog.sock"
  ## When using tcp, consider using 'tcp4' or 'tcp6' to force the usage of IPv4
  ## or IPv6 respectively. There are cases where, when not specified, a system
  ## may force an IPv4-mapped IPv6 address.
  server = "tcp://127.0.0.1:6514"

  ## Permission for unix sockets (only available on unix sockets)
  ## This setting may not be respected by some platforms. To safely restrict
  ## permissions it is recommended to place the socket into a previously
  ## created directory with the desired permissions.
  ##   ex: socket_mode = "777"
  # socket_mode = ""

  ## Maximum number of concurrent connections (only available on stream sockets like TCP)
  ## Zero means unlimited.
  # max_connections = 0

  ## Read timeout (only available on stream sockets like TCP)
  ## Zero means unlimited.
  # read_timeout = "0s"

  ## Optional TLS configuration (only available on stream sockets like TCP)
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key  = "/etc/telegraf/key.pem"
  ## Enables client authentication if set.
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Maximum socket buffer size (in bytes when no unit specified)
  ## For stream sockets, once the buffer fills up, the sender will start
  ## backing up. For datagram sockets, once the buffer fills up, metrics will
  ## start dropping. Defaults to the OS default.
  # read_buffer_size = "64KiB"

  ## Period between keep alive probes (only applies to TCP sockets)
  ## Zero disables keep alive probes. Defaults to the OS configuration.
  # keep_alive_period = "5m"

  ## Content encoding for message payloads
  ## Can be set to "gzip" for compressed payloads or "identity" for no encoding.
  # content_encoding = "identity"

  ## Maximum size of decoded packet (in bytes when no unit specified)
  # max_decompression_size = "500MB"

  ## Framing technique used for messages transport
  ## Available settings are:
  ##   octet-counting  -- see RFC5425#section-4.3.1 and RFC6587#section-3.4.1
  ##   non-transparent -- see RFC6587#section-3.4.2
  # framing = "octet-counting"

  ## The trailer to be expected in case of non-transparent framing (default = "LF").
  ## Must be one of "LF", or "NUL".
  # trailer = "LF"

  ## Whether to parse in best effort mode or not (default = false).
  ## By default best effort parsing is off.
  # best_effort = false

  ## The RFC standard to use for message parsing
  ## By default RFC5424 is used. RFC3164 only supports UDP transport (no streaming support)
  ## Must be one of "RFC5424", or "RFC3164".
  # syslog_standard = "RFC5424"

  ## Character to prepend to SD-PARAMs (default = "_").
  ## A syslog message can contain multiple parameters and multiple identifiers within structured data section.
  ## Eg., [id1 name1="val1" name2="val2"][id2 name1="val1" nameA="valA"]
  ## For each combination a field is created.
  ## Its name is created concatenating identifier, sdparam_separator, and parameter name.
  # sdparam_separator = "_"
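
As a quick check of the listener above, a test message can be sent with the util-linux logger utility, and the resulting metric looks roughly like the line protocol sketch below. Tag and field names follow the plugin's documented schema; the exact set depends on the message contents, and the timestamp shown is illustrative.

# Send an RFC 5424 test message to the configured listener
logger --server 127.0.0.1 --tcp --port 6514 --rfc5424 "telegraf syslog test"

# A parsed message, rendered as line protocol (illustrative values):
syslog,appname=root,facility=user,hostname=myhost,severity=notice facility_code=1i,severity_code=5i,version=1i,message="telegraf syslog test" 1712345678000000000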

PostgreSQL

# Publishes metrics to a PostgreSQL database
[[outputs.postgresql]]
  ## Specify connection address via the standard libpq connection string:
  ##   host=... user=... password=... sslmode=... dbname=...
  ## Or a URL:
  ##   postgres://[user[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
  ## See https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
  ##
  ## All connection parameters are optional. Environment vars are also supported.
  ## e.g. PGPASSWORD, PGHOST, PGUSER, PGDATABASE
  ## All supported vars can be found here:
  ##  https://www.postgresql.org/docs/current/libpq-envars.html
  ##
  ## Non-standard parameters:
  ##   pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
  ##   pool_min_conns (default: 0) - Minimum size of connection pool.
  ##   pool_max_conn_lifetime (default: 0s) - Maximum age of a connection before closing.
  ##   pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
  ##   pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
  # connection = ""

  ## Postgres schema to use.
  # schema = "public"

  ## Store tags as foreign keys in the metrics table. Default is false.
  # tags_as_foreign_keys = false

  ## Suffix to append to table name (measurement name) for the foreign tag table.
  # tag_table_suffix = "_tag"

  ## Deny inserting metrics if the foreign tag can't be inserted.
  # foreign_tag_constraint = false

  ## Store all tags as a JSONB object in a single 'tags' column.
  # tags_as_jsonb = false

  ## Store all fields as a JSONB object in a single 'fields' column.
  # fields_as_jsonb = false

  ## Name of the timestamp column
  ## NOTE: Some tools (e.g. Grafana) require the default name so be careful!
  # timestamp_column_name = "time"

  ## Type of the timestamp column
  ## Currently, "timestamp without time zone" and "timestamp with time zone"
  ## are supported
  # timestamp_column_type = "timestamp without time zone"

  ## Templated statements to execute when creating a new table.
  # create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }})''',
  # ]

  ## Templated statements to execute when adding columns to a table.
  ## Set to an empty list to disable. Points containing tags for which there is no column will be skipped. Points
  ## containing fields for which there is no column will have the field omitted.
  # add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## Templated statements to execute when creating a new tag table.
  # tag_table_create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }}, PRIMARY KEY (tag_id))''',
  # ]

  ## Templated statements to execute when adding columns to a tag table.
  ## Set to an empty list to disable. Points containing tags for which there is no column will be skipped.
  # tag_table_add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## The postgres data type to use for storing unsigned 64-bit integer values (Postgres does not have a native
  ## unsigned 64-bit integer type).
  ## The value can be one of:
  ##   numeric - Uses the PostgreSQL "numeric" data type.
  ##   uint8 - Requires pguint extension (https://github.com/petere/pguint)
  # uint64_type = "numeric"

  ## When using pool_max_conns>1, and a temporary error occurs, the query is retried with an incremental backoff. This
  ## controls the maximum backoff duration.
  # retry_max_backoff = "15s"

  ## Approximate number of tag IDs to store in the in-memory cache (when using tags_as_foreign_keys).
  ## This is an optimization to skip inserting known tag IDs.
  ## Each entry consumes approximately 34 bytes of memory.
  # tag_cache_size = 100000

  ## Enable & set the log level for the Postgres driver.
  # log_level = "warn" # trace, debug, info, warn, error, none
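
Putting the two halves together, a minimal telegraf.conf for this integration might look like the following sketch. The listener address, connection string, and credentials are placeholders; in practice, credentials are better supplied through the PG* environment variables noted above.

# Minimal end-to-end sketch: syslog in, PostgreSQL out (placeholder values)
[agent]
  interval = "10s"
  flush_interval = "10s"

[[inputs.syslog]]
  server = "tcp://0.0.0.0:6514"

[[outputs.postgresql]]
  connection = "host=localhost user=telegraf password=changeme dbname=metrics sslmode=disable"
  tags_as_foreign_keys = true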

Input and output integration examples

Syslog

  1. Centralized Log Management: Use the Syslog plugin to aggregate log messages from multiple servers into a central logging system. This setup helps with monitoring overall system health, troubleshooting issues, and maintaining audit trails by collecting syslog data from different sources; the query sketch after this list shows one way to summarize that data once it is stored in PostgreSQL.

  2. Real-Time Alerting: Integrate the Syslog plugin with alerting tools to trigger real-time notifications when specific log patterns or errors are detected. For example, if a critical system error appears in the logs, an alert can be sent to the operations team, minimizing downtime and enabling proactive maintenance.

  3. Security Monitoring: Leverage the Syslog plugin for security monitoring by capturing logs from firewalls, intrusion detection systems, and other security devices. This logging capability enhances security visibility and helps in investigating potentially malicious activities by analyzing the captured syslog data.

  4. Application Performance Tracking: Utilize the Syslog plugin to monitor application performance by collecting logs from various applications. This integration helps in analyzing the application’s behavior and performance trends, thus aiding in optimizing application processes and ensuring smoother operation.
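
Following up on the centralized log management scenario in item 1, the hedged sketch below summarizes recent syslog volume by host and severity once messages are stored through the PostgreSQL output. It assumes the default table layout (tag and field names as columns); adjust identifiers to your actual schema.

-- Message counts per host and severity over the last hour (assumed schema)
SELECT hostname, severity, count(*) AS messages
FROM syslog
WHERE time > now() - interval '1 hour'
GROUP BY hostname, severity
ORDER BY messages DESC;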

PostgreSQL

  1. Real-Time Analytics with Complex Queries: Leverage the PostgreSQL plugin to store metrics from various sources in a PostgreSQL database, enabling real-time analytics with complex queries. This setup helps data scientists and analysts uncover patterns and trends across multiple relational tables while benefiting from PostgreSQL’s robust query optimization. In particular, JOIN operations across different metric tables can produce sophisticated reports that reveal insights which would otherwise remain hidden.

  2. Integrating with TimescaleDB for Time-Series Data: Utilize the PostgreSQL plugin within a TimescaleDB instance to efficiently handle and analyze time-series data. By implementing hypertables, users gain better performance and automatic partitioning of data over the time dimension. This integration allows analytical queries over large volumes of time-series data while retaining the full power of PostgreSQL’s SQL, ensuring reliability and efficiency in metrics analysis; a configuration sketch follows this list.

  3. Data Versioning and Historical Analysis: Implement a strategy using the PostgreSQL plugin to maintain different versions of metrics over time. Users can set up an immutable data table structure where older versions of tables are retained, enabling easy historical analysis. This approach not only provides insights into data evolution but also aids compliance with data retention policies, ensuring that the historical integrity of the datasets remains intact.

  4. Dynamic Schema Management for Evolving Metrics: Use the plugin’s templating capabilities to create a dynamically changing schema that responds to metric variations. This use case allows organizations to adapt their data structure as metrics evolve, adding necessary fields and ensuring adherence to data integrity policies. By leveraging templated SQL commands, users can extend their database without manual intervention, facilitating agile data management practices.
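
For the TimescaleDB scenario in item 2, the plugin’s create_templates option can turn each newly created metric table into a hypertable. The sketch below is based on the plugin’s templating feature; the connection string is a placeholder, the one-hour chunk interval is arbitrary, and it assumes the timescaledb extension is already installed in the target database.

# Create each new metric table as a TimescaleDB hypertable (illustrative)
[[outputs.postgresql]]
  connection = "host=localhost user=telegraf dbname=metrics"
  create_templates = [
    '''CREATE TABLE {{ .table }} ({{ .columns }})''',
    '''SELECT create_hypertable({{ .table|quoteLiteral }}, 'time', chunk_time_interval => INTERVAL '1h')''',
  ]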

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration