Hashicorp Vault and ClickHouse Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads
#1 Time series database (Source: DB-Engines)
1B+ downloads of InfluxDB
2,800+ contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data, and InfluxDB is the #1 time series platform, built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
The Hashicorp Vault plugin for Telegraf allows for the collection of metrics from Hashicorp Vault services, facilitating monitoring and operational insights.
Telegraf’s SQL plugin sends collected metrics to an SQL database using a straightforward table schema and dynamic column generation. When configured for ClickHouse, it adjusts DSN formatting and type conversion settings to ensure seamless data integration.
Integration details
Hashicorp Vault
The Hashicorp Vault plugin is designed to collect metrics from Vault agents running within a cluster. It enables Telegraf, an agent for collecting and reporting metrics, to interface with Vault services, which typically listen on a local address such as http://127.0.0.1:8200. The plugin requires a valid token for authorization, ensuring secure access to the Vault API; users configure either a token directly or a path to a token file, giving flexibility in how Telegraf authenticates. Properly configuring the response timeout and the optional TLS settings also affects the security and responsiveness of metrics collection. Because Vault is a critical tool for managing secrets and protecting sensitive data, monitoring its performance and health through this plugin is essential for maintaining operational security and efficiency.
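As a minimal sketch of the token-file authentication path described above, the snippet below assumes the Vault agent listens on the default local address and that the token is read from a file; the token file path and CA path are placeholders to replace with your own.
[[inputs.vault]]
## Default local agent address (assumed; change if your Vault listens elsewhere)
url = "http://127.0.0.1:8200"
## Read the token from a file instead of embedding it in the configuration
## (placeholder path; point it at your actual token file)
token_file = "/etc/telegraf/vault_token"
## Optional TLS settings for a TLS-enabled listener (placeholder path)
# tls_ca = "/etc/telegraf/ca.pem"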
ClickHouse
Telegraf’s SQL plugin is engineered to write metric data into an SQL database by dynamically creating tables and columns based on incoming metrics. When configured for ClickHouse, it utilizes the clickhouse-go v1.5.4 driver, which employs a unique DSN format and a set of specialized type conversion rules to map Telegraf’s data types directly to ClickHouse’s native types. This approach ensures optimal storage and retrieval performance in high-throughput environments, making it well-suited for real-time analytics and large-scale data warehousing. The dynamic schema creation and precise type mapping enable detailed time-series data logging, crucial for monitoring modern, distributed systems.
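As an illustration of the clickhouse-go v1.5.4 DSN format mentioned above, the sketch below passes credentials and a target database as query parameters; the host, user, password, and database names are assumptions to replace with your own values.
[[outputs.sql]]
driver = "clickhouse"
## Placeholder host, credentials, and target database
data_source_name = "tcp://clickhouse.example.com:9000?username=telegraf&password=secret&database=metrics"
## Column used to store each metric's timestamp
timestamp_column = "timestamp"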
Configuration
Hashicorp Vault
[[inputs.vault]]
## URL for the Vault agent
# url = "http://127.0.0.1:8200"
## Use Vault token for authorization.
## Vault token configuration is mandatory.
## If both are empty or both are set, an error is thrown.
# token_file = "/path/to/auth/token"
## OR
token = "s.CDDrgg5zPv5ssI0Z2P4qxJj2"
## Set response_timeout (default 5 seconds)
# response_timeout = "5s"
## Optional TLS Config
# tls_ca = "/path/to/cafile"
# tls_cert = "/path/to/certfile"
# tls_key = "/path/to/keyfile"
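For a quick local check of the Vault input before wiring it to a database, one option is to pair it with Telegraf's file output and print the gathered metrics to stdout; this pairing is a suggested test setup, not part of the sample configuration above.
[[inputs.vault]]
url = "http://127.0.0.1:8200"
token_file = "/path/to/auth/token"

[[outputs.file]]
## Print gathered metrics to stdout in line protocol for inspection
files = ["stdout"]
data_format = "influx"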
ClickHouse
[[outputs.sql]]
## Database driver
## Valid options include mssql, mysql, pgx, sqlite, snowflake, clickhouse
driver = "clickhouse"
## Data source name
## For ClickHouse, the DSN follows the clickhouse-go v1.5.4 format.
## Example DSN: "tcp://localhost:9000?debug=true"
data_source_name = "tcp://localhost:9000?debug=true"
## Timestamp column name
timestamp_column = "timestamp"
## Table creation template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## {TABLELITERAL} - table name as a quoted string literal
## {COLUMNS} - column definitions (list of quoted identifiers and types)
table_template = "CREATE TABLE {TABLE} ({COLUMNS})"
## Table existence check template
## Available template variables:
## {TABLE} - table name as a quoted identifier
table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
## Initialization SQL (optional)
init_sql = ""
## Maximum amount of time a connection may be idle. "0s" means connections are never closed due to idle time.
connection_max_idle_time = "0s"
## Maximum amount of time a connection may be reused. "0s" means connections are never closed due to age.
connection_max_lifetime = "0s"
## Maximum number of connections in the idle connection pool. 0 means unlimited.
connection_max_idle = 2
## Maximum number of open connections to the database. 0 means unlimited.
connection_max_open = 0
## Metric type to SQL type conversion for ClickHouse.
## The conversion maps Telegraf metric types to ClickHouse native data types.
[outputs.sql.convert]
conversion_style = "literal"
integer = "Int64"
text = "String"
timestamp = "DateTime"
defaultvalue = "String"
unsigned = "UInt64"
bool = "UInt8"
real = "Float64"
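One practical caveat, stated as an assumption to verify against your ClickHouse version: CREATE TABLE in ClickHouse generally requires an explicit ENGINE clause unless the server defines a default table engine, so the table template may need to be extended. A possible variant, with the engine and sort key chosen here purely for illustration:
[[outputs.sql]]
driver = "clickhouse"
data_source_name = "tcp://localhost:9000?debug=true"
timestamp_column = "timestamp"
## ClickHouse-specific template adding an engine and a sort key
## (illustrative choice; the ORDER BY column matches timestamp_column above)
table_template = "CREATE TABLE {TABLE} ({COLUMNS}) ENGINE = MergeTree() ORDER BY (timestamp)"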
Input and output integration examples
Hashicorp Vault
- Centralized Secret Management Monitoring: Utilize the Vault plugin to monitor multiple Vault instances across a distributed system, allowing for a unified view of secret access patterns and system health. This setup can help DevOps teams quickly identify any anomalies in secret access, providing essential insights into security postures across different environments.
- Audit Logging Integration: Configure this plugin to feed monitoring metrics into an audit logging system, enabling organizations to have a comprehensive view of their Vault interactions. By correlating audit logs with metrics, teams can investigate issues, optimize performance, and ensure compliance with security policies more effectively.
- Performance Benchmarking During Deployments: During application deployments that interact with Vault, use the plugin to monitor the effects of those deployments on Vault performance. This allows engineering teams to understand how changes impact secret management workflows and to proactively address performance bottlenecks, ensuring smooth deployment processes.
- Alerting for Threshold Exceedance: Integrate this plugin with alerting mechanisms to notify administrators when metrics exceed predefined thresholds. This proactive monitoring can help teams respond swiftly to potential issues, maintaining system reliability and uptime by allowing them to take action before any serious incidents arise.
ClickHouse
- Real-Time Analytics for High-Volume Data: Use the plugin to feed streaming metrics from large-scale systems into ClickHouse. This setup supports ultra-fast query performance and near real-time analytics, ideal for monitoring high-traffic applications.
- Time-Series Data Warehousing: Integrate the plugin with ClickHouse to create a robust time-series data warehouse. This use case allows organizations to store detailed historical metrics and perform complex queries for trend analysis and capacity planning.
- Scalable Monitoring in Distributed Environments: Leverage the plugin to dynamically create tables per metric type in ClickHouse, making it easier to manage and query data from a multitude of distributed systems without prior schema definitions.
- Optimized Storage for IoT Deployments: Deploy the plugin to ingest data from IoT sensors into ClickHouse. Its efficient schema creation and native type mapping facilitate the handling of massive volumes of data, enabling real-time monitoring and predictive maintenance.
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration