VMware vSphere and Snowflake Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
The VMware vSphere Telegraf plugin provides a means to collect metrics from VMware vCenter servers, allowing for comprehensive monitoring and management of virtual resources in a vSphere environment.
Telegraf’s SQL plugin allows seamless metric storage in SQL databases. When configured for Snowflake, it employs a specialized DSN format and dynamic table creation to map metrics to the appropriate schema.
Integration details
VMware vSphere
This plugin connects to VMware vSphere servers to gather a variety of metrics from virtual environments, enabling efficient monitoring and management of virtual resources. It interfaces with the vSphere API to collect statistics regarding clusters, hosts, resource pools, VMs, datastores, and vSAN entities, presenting them in a format suitable for analysis and visualization. The plugin is particularly valuable for administrators who manage VMware-based infrastructures, as it helps to track system performance, resource usage, and operational issues in real-time. By aggregating data from multiple sources, the plugin empowers users with insights that facilitate informed decision-making regarding resource allocation, troubleshooting, and ensuring optimal system performance. Additionally, the support for secret-store integration allows secure handling of sensitive credentials, promoting best practices in security and compliance assessments.
Snowflake
Telegraf’s SQL plugin is engineered to dynamically write metrics into an SQL database by creating tables and columns based on the incoming data. When configured for Snowflake, it employs the gosnowflake driver, which uses a DSN that encapsulates credentials, account details, and database configuration in a compact format. This setup allows for the automatic generation of tables where each metric is recorded with precise timestamps, thereby ensuring detailed historical tracking. Although the integration is considered experimental, it leverages Snowflake’s powerful data warehousing capabilities, making it suitable for scalable, cloud-based analytics and reporting solutions.
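To make the dynamic table creation concrete, the sketch below shows roughly the kind of statement the default table template would produce when a hypothetical vSphere CPU metric first arrives. The table name, tag columns, and field column are illustrative examples, and the SQL types follow the conversion defaults shown in the configuration further down:
CREATE TABLE "vsphere_vm_cpu" (
  "timestamp" TIMESTAMP,      -- metric timestamp written by Telegraf
  "vcenter" TEXT,             -- tag columns derived from the metric's tags
  "vmname" TEXT,
  "usagemhz_average" DOUBLE   -- field column derived from the metric's fields
)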
Configuration
VMware vSphere
[[inputs.vsphere]]
vcenters = [ "https://vcenter.local/sdk" ]
username = "[email protected]"
password = "secret"
vm_metric_include = [
"cpu.demand.average",
"cpu.idle.summation",
"cpu.latency.average",
"cpu.readiness.average",
"cpu.ready.summation",
"cpu.run.summation",
"cpu.usagemhz.average",
"cpu.used.summation",
"cpu.wait.summation",
"mem.active.average",
"mem.granted.average",
"mem.latency.average",
"mem.swapin.average",
"mem.swapinRate.average",
"mem.swapout.average",
"mem.swapoutRate.average",
"mem.usage.average",
"mem.vmmemctl.average",
"net.bytesRx.average",
"net.bytesTx.average",
"net.droppedRx.summation",
"net.droppedTx.summation",
"net.usage.average",
"power.power.average",
"virtualDisk.numberReadAveraged.average",
"virtualDisk.numberWriteAveraged.average",
"virtualDisk.read.average",
"virtualDisk.readOIO.latest",
"virtualDisk.throughput.usage.average",
"virtualDisk.totalReadLatency.average",
"virtualDisk.totalWriteLatency.average",
"virtualDisk.write.average",
"virtualDisk.writeOIO.latest",
"sys.uptime.latest",
]
host_metric_include = [
"cpu.coreUtilization.average",
"cpu.costop.summation",
"cpu.demand.average",
"cpu.idle.summation",
"cpu.latency.average",
"cpu.readiness.average",
"cpu.ready.summation",
"cpu.swapwait.summation",
"cpu.usage.average",
"cpu.usagemhz.average",
"cpu.used.summation",
"cpu.utilization.average",
"cpu.wait.summation",
"disk.deviceReadLatency.average",
"disk.deviceWriteLatency.average",
"disk.kernelReadLatency.average",
"disk.kernelWriteLatency.average",
"disk.numberReadAveraged.average",
"disk.numberWriteAveraged.average",
"disk.read.average",
"disk.totalReadLatency.average",
"disk.totalWriteLatency.average",
"disk.write.average",
"mem.active.average",
"mem.latency.average",
"mem.state.latest",
"mem.swapin.average",
"mem.swapinRate.average",
"mem.swapout.average",
"mem.swapoutRate.average",
"mem.totalCapacity.average",
"mem.usage.average",
"mem.vmmemctl.average",
"net.bytesRx.average",
"net.bytesTx.average",
"net.droppedRx.summation",
"net.droppedTx.summation",
"net.errorsRx.summation",
"net.errorsTx.summation",
"net.usage.average",
"power.power.average",
"storageAdapter.numberReadAveraged.average",
"storageAdapter.numberWriteAveraged.average",
"storageAdapter.read.average",
"storageAdapter.write.average",
"sys.uptime.latest",
]
datacenter_metric_include = [] ## if omitted or empty, all metrics are collected
datacenter_metric_exclude = [ "*" ] ## Datacenters are not collected by default.
vsan_metric_include = [] ## if omitted or empty, all metrics are collected
vsan_metric_exclude = [ "*" ] ## vSAN metrics are not collected by default.
separator = "_"  ## separator character used when building measurement and field names
max_query_objects = 256  ## maximum number of objects to retrieve per query for realtime resources (VMs and hosts)
max_query_metrics = 256  ## maximum number of metrics to retrieve per query for non-realtime resources (clusters and datastores)
collect_concurrency = 1  ## number of goroutines used for metric collection
discover_concurrency = 1  ## number of goroutines used for object discovery
object_discovery_interval = "300s"  ## interval between (re)discovery of objects subject to collection
timeout = "60s"  ## timeout applied to vCenter API requests
use_int_samples = true  ## send all samples as integers (matches the output of Telegraf 1.9 and earlier)
custom_attribute_include = []  ## vCenter custom attributes to expose as tags
custom_attribute_exclude = ["*"]  ## custom attributes are excluded by default
metric_lookback = 3  ## number of past samples to look back at when gathering metrics
ssl_ca = "/path/to/cafile"  ## optional TLS configuration
ssl_cert = "/path/to/certfile"
ssl_key = "/path/to/keyfile"
insecure_skip_verify = false  ## use TLS but skip chain and host verification
historical_interval = "5m"  ## should match the statistics "Interval Duration" configured on the vCenter server
disconnected_servers_behavior = "error"  ## behavior when a vCenter is unreachable, e.g. "error" or "ignore"
use_system_proxy = true  ## use the system HTTP proxy, or set http_proxy_url explicitly
http_proxy_url = ""
Snowflake
[[outputs.sql]]
## Database driver
## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
## sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
driver = "snowflake"
## Data source name
## For Snowflake, the gosnowflake DSN bundles the username, password, account identifier, database, and schema;
## the warehouse is typically supplied as a query parameter.
## Example DSN: "username:password@account/database/schema?warehouse=warehouse_name"
data_source_name = "username:password@account/database/schema?warehouse=warehouse_name"
## Timestamp column name
timestamp_column = "timestamp"
## Table creation template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## {TABLELITERAL} - table name as a quoted string literal
## {COLUMNS} - column definitions (list of quoted identifiers and types)
table_template = "CREATE TABLE {TABLE} ({COLUMNS})"
## Table existence check template
## Available template variables:
## {TABLE} - table name as a quoted identifier
table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
## Initialization SQL (optional)
init_sql = ""
## Maximum amount of time a connection may be idle. "0s" means connections are never closed due to idle time.
connection_max_idle_time = "0s"
## Maximum amount of time a connection may be reused. "0s" means connections are never closed due to age.
connection_max_lifetime = "0s"
## Maximum number of connections in the idle connection pool. 0 means unlimited.
connection_max_idle = 2
## Maximum number of open connections to the database. 0 means unlimited.
connection_max_open = 0
## Metric type to SQL type conversion
## Defaults to ANSI/ISO SQL types unless overridden. Adjust if needed for Snowflake compatibility.
#[outputs.sql.convert]
# integer = "INT"
# real = "DOUBLE"
# text = "TEXT"
# timestamp = "TIMESTAMP"
# defaultvalue = "TEXT"
# unsigned = "UNSIGNED"
# bool = "BOOL"
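Once metrics are flowing, the auto-created tables can be queried directly in Snowflake. The following is a minimal sketch assuming a table named vsphere_vm_cpu with the illustrative columns described earlier; adjust the names to match what your configuration actually produces:
-- average CPU demand per VM over the last hour (illustrative names)
SELECT "vmname",
       AVG("usagemhz_average") AS avg_mhz
FROM "vsphere_vm_cpu"
WHERE "timestamp" > DATEADD(hour, -1, CURRENT_TIMESTAMP())
GROUP BY "vmname"
ORDER BY avg_mhz DESC;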
Input and output integration examples
VMware vSphere
- Dynamic Resource Allocation: Utilize this plugin to monitor resource usage across a fleet of VMs and automatically adjust resource allocations based on performance metrics. This scenario could involve triggering scaling actions in real time based on CPU and memory usage metrics collected from the vSphere API, ensuring optimal performance and cost-efficiency.
- Capacity Planning and Forecasting: Leverage the historical metrics gathered from vSphere to conduct capacity planning. Analyzing the trends of CPU, memory, and storage usage over time helps administrators anticipate when additional resources will be needed, avoiding outages and ensuring that the virtual infrastructure can handle growth.
- Automated Alerting and Incident Response: Integrate this plugin with alerting tools to set up automated notifications based on the metrics gathered. For example, if the CPU usage on a host exceeds a specified threshold, it could trigger alerts and automatically initiate predefined remediation steps, such as migrating VMs to less utilized hosts (see the query sketch after this list).
- Performance Benchmarking Across Clusters: Use the metrics collected to compare the performance of clusters across different vCenters. This benchmarking provides insights into which cluster configurations yield the best resource efficiency and can guide future infrastructure enhancements.
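As a starting point for the automated alerting scenario above, a scheduled query run against Snowflake (by an alerting tool of your choice) could flag hosts whose recent CPU usage exceeds a threshold. Table and column names are illustrative and assume host CPU metrics are being collected:
-- hosts averaging more than 90% CPU usage over the last 15 minutes (illustrative)
SELECT "esxhostname",
       AVG("usage_average") AS avg_cpu_pct
FROM "vsphere_host_cpu"
WHERE "timestamp" > DATEADD(minute, -15, CURRENT_TIMESTAMP())
GROUP BY "esxhostname"
HAVING AVG("usage_average") > 90;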
Snowflake
- Cloud-Based Data Lake Integration: Utilize the plugin to stream real-time metrics from various sources into Snowflake, enabling the creation of a centralized data lake. This integration supports complex analytics and machine learning workflows on cloud data.
- Dynamic Business Intelligence Dashboards: Leverage the plugin to automatically generate tables from incoming metrics and feed them into BI tools. This allows businesses to create dynamic dashboards that visualize performance trends and operational insights without manual schema management.
- Scalable IoT Analytics: Deploy the plugin to capture high-frequency data from IoT devices into Snowflake. This use case facilitates the aggregation and analysis of sensor data, enabling predictive maintenance and real-time monitoring at scale.
- Historical Trend Analysis for Compliance: Use the plugin to log and archive detailed metric data in Snowflake, which can then be queried for long-term trend analysis and compliance reporting. This setup ensures that organizations can maintain a robust audit trail and perform forensic analysis if needed (see the query sketch below).
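For the compliance and trend-analysis scenario, long-range rollups are straightforward once the raw metrics are archived in Snowflake. A sketch, again using the illustrative vsphere_vm_cpu table:
-- monthly average and peak CPU demand per VM for audit or capacity reports (illustrative)
SELECT DATE_TRUNC('month', "timestamp") AS month,
       "vmname",
       AVG("usagemhz_average") AS avg_mhz,
       MAX("usagemhz_average") AS peak_mhz
FROM "vsphere_vm_cpu"
GROUP BY 1, 2
ORDER BY 1, 2;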
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration