Kinesis and MySQL Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time queries at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider the Kinesis and InfluxDB integration instead.

5B+ Telegraf downloads
#1 time series database (source: DB Engines)
1B+ downloads of InfluxDB
2,800+ contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data with InfluxDB, the #1 time series platform built to scale with Telegraf. Any data is more valuable when you think of it as time series data.

See Ways to Get Started

Input and output integration overview

The Kinesis plugin enables you to read from Kinesis data streams, supporting various data formats and configurations.

The Telegraf SQL output plugin stores metrics directly in a MySQL database, making it easier to analyze and visualize the collected metrics.

Integration details

Kinesis

The Kinesis Telegraf plugin reads from Amazon Kinesis data streams, enabling users to gather metrics in real time. As a service input plugin, it listens for incoming data rather than polling at regular intervals. The configuration specifies options such as the AWS region, stream name, authentication credentials, and data format. The plugin tracks undelivered messages to prevent data loss, and it can use DynamoDB to checkpoint the last processed record. This makes it well suited to applications that require reliable, scalable stream processing alongside other monitoring needs.
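
Because the plugin consumes whatever arrives on the stream, any producer that writes records in the configured data format can feed it. The following is a minimal Python sketch that publishes a single record in InfluxDB line protocol using boto3; the region and stream name mirror the sample configuration further down, and the metric itself is a made-up example.

import boto3  # pip install boto3

# Region and stream name mirror the sample Telegraf configuration below.
kinesis = boto3.client("kinesis", region_name="ap-southeast-2")

# One metric in InfluxDB line protocol, matching data_format = "influx".
record = "cpu,host=server01 usage_idle=87.5"

kinesis.put_record(
    StreamName="StreamName",
    Data=record.encode("utf-8"),
    PartitionKey="server01",  # determines which shard receives the record
)

With Telegraf running against the configuration below, this record surfaces as a cpu metric on the output side.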

MySQL

Telegraf’s SQL output plugin writes metric data to a SQL database, dynamically creating tables and columns based on the incoming metrics. When configured for MySQL, the plugin uses the go-sql-driver/mysql driver, which requires the ANSI_QUOTES SQL mode to be enabled so that quoted identifiers are handled correctly. With this dynamic schema approach, each metric is stored in its own table whose structure is derived from the metric's fields and tags, providing a detailed, timestamped record of system performance. The plugin handles high-throughput environments well, making it a good fit for scenarios that demand robust, granular metric logging and historical data analysis.
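
One way to see the dynamic schema creation in action is to inspect the table the plugin generates for a metric. The sketch below uses mysql-connector-python and assumes a hypothetical cpu metric with a host tag and a usage_idle field; the connection parameters mirror the data_source_name in the sample configuration below.

import mysql.connector  # pip install mysql-connector-python

# Connection parameters mirror data_source_name in the sample config below.
conn = mysql.connector.connect(
    host="host", port=3306, user="username", password="password", database="dbname"
)
cur = conn.cursor()

# The plugin creates one table per metric; "cpu" is a hypothetical metric name.
cur.execute("SHOW CREATE TABLE `cpu`")
print(cur.fetchone()[1])  # the CREATE TABLE statement Telegraf generated

# Tags and fields become columns; "host" and "usage_idle" are assumptions here.
cur.execute(
    "SELECT `timestamp`, `host`, `usage_idle` FROM `cpu` "
    "ORDER BY `timestamp` DESC LIMIT 5"
)
for row in cur.fetchall():
    print(row)

conn.close()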

Configuration

Kinesis


# Configuration for the AWS Kinesis input.
[[inputs.kinesis_consumer]]
  ## Amazon REGION of kinesis endpoint.
  region = "ap-southeast-2"

  ## Amazon Credentials
  ## Credentials are loaded in the following order
  ## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
  ## 2) Assumed credentials via STS if role_arn is specified
  ## 3) explicit credentials from 'access_key' and 'secret_key'
  ## 4) shared profile from 'profile'
  ## 5) environment variables
  ## 6) shared credentials file
  ## 7) EC2 Instance Profile
  # access_key = ""
  # secret_key = ""
  # token = ""
  # role_arn = ""
  # web_identity_token_file = ""
  # role_session_name = ""
  # profile = ""
  # shared_credential_file = ""

  ## Endpoint to make request against, the correct endpoint is automatically
  ## determined and this option should only be set if you wish to override the
  ## default.
  ##   ex: endpoint_url = "http://localhost:8000"
  # endpoint_url = ""

  ## Kinesis StreamName must exist prior to starting telegraf.
  streamname = "StreamName"

  ## Shard iterator type (only 'TRIM_HORIZON' and 'LATEST' currently supported)
  # shard_iterator_type = "TRIM_HORIZON"

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are read to
  ## outputs before acknowledging them to the original broker to ensure data
  ## is not lost. This option sets the maximum messages to read from the
  ## broker that have not been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too high
  ## can result in a constant stream of data batches to the output. While
  ## setting it too low may never flush the broker's messages.
  # max_undelivered_messages = 1000

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"

  ##
  ## The content encoding of the data from kinesis
  ## If you are processing a cloudwatch logs kinesis stream then set this to "gzip"
  ## as AWS compresses cloudwatch log data before it is sent to kinesis (aws
  ## also base64 encodes the zip byte data before pushing to the stream.  The base64 decoding
  ## is done automatically by the golang sdk, as data is read from kinesis)
  ##
  # content_encoding = "identity"

  ## Optional
  ## Configuration for a dynamodb checkpoint
  [inputs.kinesis_consumer.checkpoint_dynamodb]
    ## unique name for this consumer
    app_name = "default"
    table_name = "default"
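
Note that the DynamoDB checkpoint table must exist before Telegraf starts. The plugin documentation describes a table with two string keys, namespace as the partition key and shard_id as the sort key. Below is a minimal boto3 sketch that provisions such a table; the table name and region are taken from the sample above, and the on-demand billing mode is an assumption.

import boto3  # pip install boto3

dynamodb = boto3.client("dynamodb", region_name="ap-southeast-2")

# Key schema per the plugin docs: "namespace" (partition) and "shard_id" (sort).
dynamodb.create_table(
    TableName="default",  # matches table_name in the checkpoint config above
    AttributeDefinitions=[
        {"AttributeName": "namespace", "AttributeType": "S"},
        {"AttributeName": "shard_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "namespace", "KeyType": "HASH"},
        {"AttributeName": "shard_id", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",  # provisioned capacity works as well
)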

MySQL

[[outputs.sql]]
  ## Database driver
  ## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
  ##  sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
  driver = "mysql"

  ## Data source name
  ## The format of the data source name is different for each database driver.
  ## See the plugin readme for details.
  data_source_name = "username:password@tcp(host:port)/dbname"

  ## Timestamp column name
  timestamp_column = "timestamp"

  ## Table creation template
  ## Available template variables:
  ##  {TABLE} - table name as a quoted identifier
  ##  {TABLELITERAL} - table name as a quoted string literal
  ##  {COLUMNS} - column definitions (list of quoted identifiers and types)
  table_template = "CREATE TABLE {TABLE}({COLUMNS})"

  ## Table existence check template
  ## Available template variables:
  ##  {TABLE} - table name as a quoted identifier
  table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"

  ## Initialization SQL
  init_sql = "SET sql_mode='ANSI_QUOTES';"

  ## Maximum amount of time a connection may be idle. "0s" means connections are
  ## never closed due to idle time.
  connection_max_idle_time = "0s"

  ## Maximum amount of time a connection may be reused. "0s" means connections
  ## are never closed due to age.
  connection_max_lifetime = "0s"

  ## Maximum number of connections in the idle connection pool. 0 means unlimited.
  connection_max_idle = 2

  ## Maximum number of open connections to the database. 0 means unlimited.
  connection_max_open = 0

  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
  ## plugin definition, otherwise additional config options are read as part of the
  ## table

  ## Metric type to SQL type conversion
  ## The values on the left are the data types Telegraf has and the values on
  ## the right are the data types Telegraf will use when sending to a database.
  ##
  ## The database values used must be data types the destination database
  ## understands. It is up to the user to ensure that the selected data type is
  ## available in the database they are using. Refer to your database
  ## documentation for what data types are available and supported.
  #[outputs.sql.convert]
  #  integer              = "INT"
  #  real                 = "DOUBLE"
  #  text                 = "TEXT"
  #  timestamp            = "TIMESTAMP"
  #  defaultvalue         = "TEXT"
  #  unsigned             = "UNSIGNED"
  #  bool                 = "BOOL"
  #  ## This setting controls the behavior of the unsigned value. By default the
  #  ## setting will take the integer value and append the unsigned value to it. The other
  #  ## option is "literal", which will use the actual value the user provides to
  #  ## the unsigned option. This is useful for a database like ClickHouse where
  #  ## the unsigned value should use a value like "uint64".
  #  # conversion_style = "unsigned_suffix"

Input and output integration examples

Kinesis

  1. Real-Time Data Processing with Kinesis: This use case involves integrating the Kinesis plugin with a monitoring dashboard to analyze incoming metrics in real time. For instance, an application could consume logs from multiple services and present them visually, allowing operations teams to quickly identify trends and react to anomalies as they occur.

  2. Serverless Log Aggregation: Utilize this plugin in a serverless architecture where Kinesis streams aggregate logs from various microservices. The plugin can create metrics that help detect issues in the system, automating alerting processes through third-party integrations, enabling teams to minimize downtime and improve reliability.

  3. Dynamic Scaling Based on Stream Metrics: Implement a solution where stream metrics consumed by the Kinesis plugin could be used to adjust resources dynamically. For example, if the number of records processed spikes, corresponding scale-up actions could be triggered to handle the increased load, ensuring optimal resource allocation and performance.

  4. Data Pipeline to S3 with Checkpointing: Create a robust data pipeline where Kinesis stream data is processed through the Telegraf Kinesis plugin, with checkpoints stored in DynamoDB. This approach can ensure data consistency and reliability, as it manages the state of processed data, enabling seamless integration with downstream data lakes or storage solutions.

MySQL

  1. Real-Time Web Analytics Storage: Leverage the plugin to capture website performance metrics and store them in MySQL. This setup enables teams to monitor user interactions, analyze traffic patterns, and dynamically adjust site features based on real-time data insights.

  2. IoT Device Monitoring: Utilize the plugin to collect metrics from a network of IoT sensors and log them into a MySQL database. This use case supports continuous monitoring of device health and performance, allowing for predictive maintenance and immediate response to anomalies.

  3. Financial Transaction Logging: Record high-frequency financial transaction data with precise timestamps. This approach supports robust audit trails, real-time fraud detection, and comprehensive historical analysis for compliance and reporting purposes.

  4. Application Performance Benchmarking: Integrate the plugin with application performance monitoring systems to log metrics into MySQL. This facilitates detailed benchmarking and trend analysis over time, enabling organizations to identify performance bottlenecks and optimize resource allocation effectively.

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration