OpenTelemetry and MySQL Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
- 5B+ Telegraf downloads
- #1 time series database (Source: DB Engines)
- 1B+ downloads of InfluxDB
- 2,800+ contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data with InfluxDB, the #1 time series platform built to scale with Telegraf. Any data is more valuable when you think of it as time series data.
See Ways to Get Started
Input and output integration overview
The OpenTelemetry Input Plugin collects traces, metrics, and logs from OpenTelemetry clients and agents for analysis and monitoring.
The Telegraf SQL output plugin stores Telegraf metrics directly in a MySQL database, making the collected metrics easier to analyze and visualize.
Integration details
OpenTelemetry
This plugin receives traces, metrics, and logs from OpenTelemetry clients and agents via gRPC. It supports configuration options for service address, connection timeout, message size, and attributes to be included as tags.
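To show how a client connects, here is a minimal Go sketch that exports one trace span over OTLP/gRPC to the plugin's default listener. The endpoint, instrumentation name, and span name are illustrative assumptions, not part of the plugin itself:

package main

import (
	"context"
	"log"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Point the OTLP/gRPC exporter at Telegraf's opentelemetry input
	// (default service_address is 0.0.0.0:4317).
	exporter, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("localhost:4317"),
		otlptracegrpc.WithInsecure(), // assumes no TLS is configured on the input
	)
	if err != nil {
		log.Fatal(err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer func() { _ = tp.Shutdown(ctx) }()
	otel.SetTracerProvider(tp)

	// Emit one example span; Telegraf tags it with the trace ID, span ID,
	// and any attributes listed in span_dimensions.
	_, span := otel.Tracer("example-service").Start(ctx, "example-operation")
	time.Sleep(10 * time.Millisecond)
	span.End()
}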
MySQL
This plugin saves Telegraf metric data to a MySQL database using a hard-coded schema that creates a table for each metric. It uses Go's generic database/sql interface to interact with the supported database drivers, including MySQL.
Configuration
OpenTelemetry
[[inputs.opentelemetry]]
## Override the default (0.0.0.0:4317) destination OpenTelemetry gRPC service
## address:port
# service_address = "0.0.0.0:4317"
## Override the default (5s) new connection timeout
# timeout = "5s"
## gRPC Maximum Message Size
# max_msg_size = "4MB"
## Override the default span attributes to be used as line protocol tags.
## These are always included as tags:
## - trace ID
## - span ID
## Common attributes can be found here:
## - https://github.com/open-telemetry/opentelemetry-collector/tree/main/semconv
# span_dimensions = ["service.name", "span.name"]
## Override the default log record attributes to be used as line protocol tags.
## These are always included as tags, if available:
## - trace ID
## - span ID
## Common attributes can be found here:
## - https://github.com/open-telemetry/opentelemetry-collector/tree/main/semconv
## When using InfluxDB for both logs and traces, be certain that log_record_dimensions
## matches the span_dimensions value.
# log_record_dimensions = ["service.name"]
## Override the default profile attributes to be used as line protocol tags.
## These are always included as tags, if available:
## - profile_id
## - address
## - sample
## - sample_name
## - sample_unit
## - sample_type
## - sample_type_unit
## Common attributes can be found here:
## - https://github.com/open-telemetry/opentelemetry-collector/tree/main/semconv
# profile_dimensions = []
## Override the default (prometheus-v1) metrics schema.
## Supports: "prometheus-v1", "prometheus-v2"
## For more information about the alternatives, read the Prometheus input
## plugin notes.
# metrics_schema = "prometheus-v1"
## Optional TLS Config.
## For advanced options: https://github.com/influxdata/telegraf/blob/v1.18.3/docs/TLS.md
##
## Set one or more allowed client CA certificate file names to
## enable mutually authenticated TLS connections.
# tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
## Add service certificate and key.
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
MySQL
[[outputs.sql]]
## Database driver
## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
## sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
driver = "mysql"
## Data source name
## The format of the data source name is different for each database driver.
## See the plugin readme for details.
data_source_name = "username:password@tcp(host:port)/dbname"
## Timestamp column name
timestamp_column = "timestamp"
## Table creation template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## {TABLELITERAL} - table name as a quoted string literal
## {COLUMNS} - column definitions (list of quoted identifiers and types)
table_template = "CREATE TABLE {TABLE}({COLUMNS})"
## Table existence check template
## Available template variables:
## {TABLE} - table name as a quoted identifier
table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
## Initialization SQL
init_sql = "SET sql_mode='ANSI_QUOTES';"
## Maximum amount of time a connection may be idle. "0s" means connections are
## never closed due to idle time.
connection_max_idle_time = "0s"
## Maximum amount of time a connection may be reused. "0s" means connections
## are never closed due to age.
connection_max_lifetime = "0s"
## Maximum number of connections in the idle connection pool. 0 means unlimited.
connection_max_idle = 2
## Maximum number of open connections to the database. 0 means unlimited.
connection_max_open = 0
## NOTE: Due to the way TOML is parsed, tables must be at the END of the
## plugin definition, otherwise additional config options are read as part of the
## table
## Metric type to SQL type conversion
## The values on the left are the data types Telegraf has and the values on
## the right are the data types Telegraf will use when sending to a database.
##
## The database values used must be data types the destination database
## understands. It is up to the user to ensure that the selected data type is
## available in the database they are using. Refer to your database
## documentation for what data types are available and supported.
#[outputs.sql.convert]
# integer = "INT"
# real = "DOUBLE"
# text = "TEXT"
# timestamp = "TIMESTAMP"
# defaultvalue = "TEXT"
# unsigned = "UNSIGNED"
# bool = "BOOL"
# ## This setting controls how the unsigned type is built. The default,
# ## "unsigned_suffix", appends the unsigned value to the integer value
# ## (for example, "INT" plus "UNSIGNED" becomes "INT UNSIGNED"). The other
# ## option, "literal", uses the unsigned value exactly as provided, which is
# ## useful for a database like ClickHouse where the unsigned type should be
# ## a value like "uint64".
# # conversion_style = "unsigned_suffix"
Input and output integration examples
OpenTelemetry
- Basic Setup: Use this plugin to gather metrics from your OpenTelemetry-enabled applications running in a microservices architecture (see the Go sketch after this list).
- Comprehensive Monitoring: Combine logs and traces to provide full visibility of your application performance and detect issues quickly.
- Data Enrichment: Enhance your metrics by including additional span dimensions and attributes, which can provide valuable context for analysis.
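To make the Basic Setup case concrete, the following Go sketch pushes application metrics to the same Telegraf gRPC listener used for traces. The endpoint, meter name, and counter name are illustrative assumptions:

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	ctx := context.Background()

	// Send OTLP metrics over gRPC to Telegraf's opentelemetry input.
	exporter, err := otlpmetricgrpc.New(ctx,
		otlpmetricgrpc.WithEndpoint("localhost:4317"),
		otlpmetricgrpc.WithInsecure(), // assumes no TLS on the input
	)
	if err != nil {
		log.Fatal(err)
	}

	// A periodic reader batches measurements and exports them on an interval.
	provider := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exporter)),
	)
	defer func() { _ = provider.Shutdown(ctx) }()
	otel.SetMeterProvider(provider)

	// Record an illustrative counter increment.
	counter, err := otel.Meter("example-service").Int64Counter("requests_total")
	if err != nil {
		log.Fatal(err)
	}
	counter.Add(ctx, 1)
}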
MySQL
- Basic Configuration: Set up the MySQL output plugin by specifying the driver as mysql and the data source name to connect to your MySQL database.
- Data Storage: Use this plugin to regularly save temperature and humidity metrics from a sensor into a MySQL database for later retrieval and analysis (a read-back sketch follows this list).
- Custom Schema: Modify the table creation template to include additional metadata columns for more comprehensive data logging.
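As a companion to the Data Storage case, this Go sketch reads stored sensor metrics back out of MySQL with the standard database/sql package. The DSN, the "sensors" table, and its "location", "temperature", and "humidity" columns are assumptions about what a measurement of that shape would produce, since the plugin creates one table per metric with tag and field columns alongside the configured timestamp column:

package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

func main() {
	// Same DSN shape as the plugin's data_source_name; parseTime lets the
	// timestamp column scan into a time.Time value.
	db, err := sql.Open("mysql", "username:password@tcp(host:3306)/dbname?parseTime=true")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Assumes a "sensors" metric with a "location" tag and
	// "temperature"/"humidity" fields, stored one row per point.
	rows, err := db.Query(`SELECT timestamp, location, temperature, humidity
	                       FROM sensors ORDER BY timestamp DESC LIMIT 10`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var ts time.Time
		var location string
		var temperature, humidity float64
		if err := rows.Scan(&ts, &location, &temperature, &humidity); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s temp=%.1f hum=%.1f\n",
			ts.Format(time.RFC3339), location, temperature, humidity)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}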
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration