Fluentd and Snowflake Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider Fluentd and InfluxDB.

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data with InfluxDB, the #1 time series platform built to scale with Telegraf. Any data is more valuable when you think of it as time series data.

Input and output integration overview

The Fluentd Input Plugin gathers metrics from the endpoint exposed by Fluentd's monitor_agent plugin. It reports per-plugin metrics such as buffer queue length, buffer total queued size, and retry count, and it can exclude plugin types you do not need, which helps keep series cardinality down.

Telegraf’s SQL output plugin allows seamless metric storage in SQL databases. When configured for Snowflake, it employs a specialized DSN format and dynamic table creation to map metrics to the appropriate schema.

Integration details

Fluentd

This plugin gathers metrics from the monitoring endpoint exposed by Fluentd's monitor_agent plugin. It reads data from the /api/plugins.json resource and allows exclusion of specific plugins based on their type.
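
The endpoint only exists if the Fluentd instance itself loads the monitor_agent plugin. A minimal Fluentd configuration sketch that exposes it on the conventional default port (the bind address and port shown are illustrative):

<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>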

Snowflake

The SQL output plugin enables Telegraf to send metrics to an SQL database using a dynamic schema. For Snowflake, the plugin utilizes the Go snowflake driver and a DSN that includes connection details such as username, password, and account identifiers. Note that this integration is experimental due to limited unit testing for the Go snowflake driver.

Configuration

Fluentd

[[inputs.fluentd]]
  ## This plugin reads information exposed by fluentd (using /api/plugins.json endpoint).
  ##
  ## Endpoint:
  ## - only one URI is allowed
  ## - https is not supported
  endpoint = "http://localhost:24220/api/plugins.json"

  ## Define which plugins to exclude (based on the "type" field - e.g. monitor_agent)
  exclude = [
    "monitor_agent",
    "dummy",
  ]

Snowflake

[[outputs.sql]]
  ## Database driver
  ## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
  ## sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
  driver = "snowflake"

  ## Data source name
  ## For Snowflake, the gosnowflake driver expects a DSN of the form
  ##   username[:password]@accountname/dbname/schemaname[?param1=value&...]
  ## with options such as the warehouse supplied as query parameters.
  ## Example DSN: "username:password@account/db/schema?warehouse=wh"
  data_source_name = "username:password@account/db/schema?warehouse=wh"

  ## Timestamp column name
  timestamp_column = "timestamp"

  ## Table creation template
  ## Available template variables:
  ##  {TABLE}        - table name as a quoted identifier
  ##  {TABLELITERAL} - table name as a quoted string literal
  ##  {COLUMNS}      - column definitions (list of quoted identifiers and types)
  table_template = "CREATE TABLE {TABLE} ({COLUMNS})"

  ## Table existence check template
  ## Available template variables:
  ##  {TABLE} - table name as a quoted identifier
  table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"

  ## Initialization SQL (optional)
  init_sql = ""

  ## Maximum amount of time a connection may be idle. "0s" means connections are never closed due to idle time.
  connection_max_idle_time = "0s"

  ## Maximum amount of time a connection may be reused. "0s" means connections are never closed due to age.
  connection_max_lifetime = "0s"

  ## Maximum number of connections in the idle connection pool. 0 means unlimited.
  connection_max_idle = 2

  ## Maximum number of open connections to the database. 0 means unlimited.
  connection_max_open = 0

  ## Metric type to SQL type conversion
  ## Defaults to ANSI/ISO SQL types unless overridden. Adjust if needed for Snowflake compatibility.
  #[outputs.sql.convert]
  #  integer       = "INT"
  #  real          = "DOUBLE"
  #  text          = "TEXT"
  #  timestamp     = "TIMESTAMP"
  #  defaultvalue  = "TEXT"
  #  unsigned      = "UNSIGNED"
  #  bool          = "BOOL"
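
For illustration, with the default table_template above, the first write of a metric named cpu carrying a host tag and a usage_idle field would produce DDL roughly like the following. This is a sketch: the exact quoting and column types depend on the driver and on any [outputs.sql.convert] overrides.

CREATE TABLE "cpu" ("timestamp" TIMESTAMP, "host" TEXT, "usage_idle" DOUBLE)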

Input and output integration examples

Fluentd

  1. Basic Configuration: Set up the Fluentd Input Plugin to gather metrics from your Fluentd instance’s monitoring endpoint, ensuring you are able to track performance and usage statistics.
  2. Excluding Plugins: Use the exclude option to ignore specific plugins’ metrics that are not necessary for your monitoring needs, streamlining data collection and focusing on what matters.
  3. Custom Plugin ID: Set the @id parameter in your Fluentd configuration to keep the plugin_id tag stable, which avoids high series cardinality when Fluentd restarts frequently (see the sketch after this list).
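
As a sketch of point 3: without an explicit @id, Fluentd generates a fresh plugin ID on every restart, and each generated ID becomes a new plugin_id tag value in Telegraf. The tail source below is purely illustrative; the path, pos_file, and tag values are placeholders.

<source>
  @type tail
  @id in_tail_app_logs             # stable ID reported by monitor_agent
  path /var/log/app/*.log
  pos_file /var/log/fluentd/app.pos
  tag app.logs
  <parse>
    @type none
  </parse>
</source>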

Snowflake

  1. Basic Snowflake Integration: Set the driver to ‘snowflake’ and configure the DSN with your Snowflake account details to start ingesting metrics.
  2. Custom Schema Management: Modify the table creation template to predefine specific column types or indexes that align with your data model in Snowflake.
  3. Initialization Commands: Use the init_sql setting to run Snowflake-specific SQL commands when a connection is first established (see the sketch after this list).
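
As a sketch of point 3, init_sql can carry a session-level statement that runs after each new connection is established; the ALTER SESSION parameter shown here is illustrative:

[[outputs.sql]]
  driver = "snowflake"
  data_source_name = "username:password@account/db/schema?warehouse=wh"
  ## Executed after connecting, before any metrics are written
  init_sql = "ALTER SESSION SET TIMEZONE = 'UTC'"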

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.

Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
