VMware vSphere and OpenSearch Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, consider VMware vSphere and InfluxDB.

- 5B+ Telegraf downloads
- #1 time series database (source: DB-Engines)
- 1B+ downloads of InfluxDB
- 2,800+ contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The VMware vSphere Telegraf plugin provides a means to collect metrics from VMware vCenter servers, allowing for comprehensive monitoring and management of virtual resources in a vSphere environment.

The OpenSearch Output Plugin allows users to send metrics directly to an OpenSearch instance using HTTP, thus facilitating effective data management and analytics within the OpenSearch ecosystem.

Integration details

VMware vSphere

This plugin connects to VMware vCenter servers to gather metrics from virtual environments. It interfaces with the vSphere API to collect statistics on clusters, hosts, resource pools, VMs, datastores, and vSAN entities, and presents them in a format suitable for analysis and visualization. The plugin is particularly valuable for administrators of VMware-based infrastructures, helping them track system performance, resource usage, and operational issues in real time. By aggregating data from multiple sources, it gives users the insight needed for informed decisions about resource allocation, troubleshooting, and maintaining optimal system performance. Support for secret-store integration also allows secure handling of sensitive credentials, promoting security and compliance best practices.
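
As a minimal sketch of that secret-store support, credentials can be resolved at runtime instead of written in plain text. The store ID "vault" and the key names below are assumptions for illustration; this example uses Telegraf's OS keyring secret store:

## Hypothetical secret store backed by the OS keyring
[[secretstores.os]]
  id = "vault"

[[inputs.vsphere]]
  vcenters = ["https://vcenter.local/sdk"]
  ## Resolved at runtime from the "vault" secret store
  username = "@{vault:vsphere_user}"
  password = "@{vault:vsphere_pass}"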

OpenSearch

The OpenSearch Telegraf plugin integrates with the OpenSearch database via HTTP, streamlining the collection and storage of metrics. It is designed for OpenSearch 2.x and later; 1.x releases remain supported through the original Elasticsearch plugin. The plugin creates and manages indexes in OpenSearch, automatically maintaining index templates so that data is structured efficiently for analysis. Configuration options such as index names, authentication, health checks, and value handling allow it to be tailored to diverse operational requirements, making it a strong fit for organizations that want to use OpenSearch for metrics storage and querying.

Configuration

VMware vSphere

[[inputs.vsphere]]
  vcenters = [ "https://vcenter.local/sdk" ]
  username = "user@corp.local"
  password = "secret"

  vm_metric_include = [
    "cpu.demand.average",
    "cpu.idle.summation",
    "cpu.latency.average",
    "cpu.readiness.average",
    "cpu.ready.summation",
    "cpu.run.summation",
    "cpu.usagemhz.average",
    "cpu.used.summation",
    "cpu.wait.summation",
    "mem.active.average",
    "mem.granted.average",
    "mem.latency.average",
    "mem.swapin.average",
    "mem.swapinRate.average",
    "mem.swapout.average",
    "mem.swapoutRate.average",
    "mem.usage.average",
    "mem.vmmemctl.average",
    "net.bytesRx.average",
    "net.bytesTx.average",
    "net.droppedRx.summation",
    "net.droppedTx.summation",
    "net.usage.average",
    "power.power.average",
    "virtualDisk.numberReadAveraged.average",
    "virtualDisk.numberWriteAveraged.average",
    "virtualDisk.read.average",
    "virtualDisk.readOIO.latest",
    "virtualDisk.throughput.usage.average",
    "virtualDisk.totalReadLatency.average",
    "virtualDisk.totalWriteLatency.average",
    "virtualDisk.write.average",
    "virtualDisk.writeOIO.latest",
    "sys.uptime.latest",
  ]

  host_metric_include = [
    "cpu.coreUtilization.average",
    "cpu.costop.summation",
    "cpu.demand.average",
    "cpu.idle.summation",
    "cpu.latency.average",
    "cpu.readiness.average",
    "cpu.ready.summation",
    "cpu.swapwait.summation",
    "cpu.usage.average",
    "cpu.usagemhz.average",
    "cpu.used.summation",
    "cpu.utilization.average",
    "cpu.wait.summation",
    "disk.deviceReadLatency.average",
    "disk.deviceWriteLatency.average",
    "disk.kernelReadLatency.average",
    "disk.kernelWriteLatency.average",
    "disk.numberReadAveraged.average",
    "disk.numberWriteAveraged.average",
    "disk.read.average",
    "disk.totalReadLatency.average",
    "disk.totalWriteLatency.average",
    "disk.write.average",
    "mem.active.average",
    "mem.latency.average",
    "mem.state.latest",
    "mem.swapin.average",
    "mem.swapinRate.average",
    "mem.swapout.average",
    "mem.swapoutRate.average",
    "mem.totalCapacity.average",
    "mem.usage.average",
    "mem.vmmemctl.average",
    "net.bytesRx.average",
    "net.bytesTx.average",
    "net.droppedRx.summation",
    "net.droppedTx.summation",
    "net.errorsRx.summation",
    "net.errorsTx.summation",
    "net.usage.average",
    "power.power.average",
    "storageAdapter.numberReadAveraged.average",
    "storageAdapter.numberWriteAveraged.average",
    "storageAdapter.read.average",
    "storageAdapter.write.average",
    "sys.uptime.latest",
  ]

  datacenter_metric_include = [] ## if omitted or empty, all metrics are collected
  datacenter_metric_exclude = [ "*" ] ## Datacenters are not collected by default.

  vsan_metric_include = [] ## if omitted or empty, all metrics are collected
  vsan_metric_exclude = [ "*" ] ## vSAN metrics are not collected by default.

  separator = "_"
  max_query_objects = 256
  max_query_metrics = 256
  collect_concurrency = 1
  discover_concurrency = 1
  object_discovery_interval = "300s"
  timeout = "60s"
  use_int_samples = true
  custom_attribute_include = []
  custom_attribute_exclude = ["*"]
  metric_lookback = 3
  ssl_ca = "/path/to/cafile"
  ssl_cert = "/path/to/certfile"
  ssl_key = "/path/to/keyfile"
  insecure_skip_verify = false
  historical_interval = "5m"
  disconnected_servers_behavior = "error"
  use_system_proxy = true
  http_proxy_url = ""

OpenSearch

[[outputs.opensearch]]
  ## URLs
  ## The full HTTP endpoint URL for your OpenSearch instance. Multiple URLs can
  ## be specified as part of the same cluster, but only one URL is used to
  ## write during each interval.
  urls = ["http://node1.os.example.com:9200"]

  ## Index Name
  ## Target index name for metrics (OpenSearch will create it if it does not exist).
  ## This is a Golang template (see https://pkg.go.dev/text/template).
  ## You can reference the metric name (`{{.Name}}`), a tag value
  ## (`{{.Tag "tag_name"}}`), a field value (`{{.Field "field_name"}}`), or the
  ## timestamp (`{{.Time.Format "xxxxxxxxx"}}`).
  ## If the tag does not exist, the default tag value will be the empty string "".
  ## For example: "telegraf-{{.Time.Format \"2006-01-02\"}}-{{.Tag \"host\"}}" would set it to telegraf-2023-07-27-HostName
  index_name = ""

  ## Timeout
  ## OpenSearch client timeout
  # timeout = "5s"

  ## Sniffer
  ## Set to true to ask OpenSearch for a list of all cluster nodes,
  ## making it unnecessary to list every node in the urls config option
  # enable_sniffer = false

  ## GZIP Compression
  ## Set to true to enable gzip compression
  # enable_gzip = false

  ## Health Check Interval
  ## Set the interval to check if the OpenSearch nodes are available
  ## Setting to "0s" will disable the health check (not recommended in production)
  # health_check_interval = "10s"

  ## Set the timeout for periodic health checks.
  # health_check_timeout = "1s"
  ## HTTP basic authentication details.
  # username = ""
  # password = ""
  ## HTTP bearer token authentication details
  # auth_bearer_token = ""

  ## Optional TLS Config
  ## Set to true/false to enforce TLS being enabled/disabled. If not set,
  ## enable TLS only if any of the other options are specified.
  # tls_enable =
  ## Trusted root certificates for server
  # tls_ca = "/path/to/cafile"
  ## Used for TLS client certificate authentication
  # tls_cert = "/path/to/certfile"
  ## Used for TLS client certificate authentication
  # tls_key = "/path/to/keyfile"
  ## Send the specified TLS server name via SNI
  # tls_server_name = "kubernetes.example.com"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Template Config
  ## Manage templates
  ## Set to true if you want telegraf to manage its index template.
  ## If enabled it will create a recommended index template for telegraf indexes
  # manage_template = true

  ## Template Name
  ## The template name used for telegraf indexes
  # template_name = "telegraf"

  ## Overwrite Templates
  ## Set to true if you want telegraf to overwrite an existing template
  # overwrite_template = false

  ## Document ID
  ## If set to true a unique ID hash will be sent as
  ## sha256(concat(timestamp,measurement,series-hash)) string. This enables
  ## resending and updating metric points without creating duplicate metrics
  ## under different IDs.
  # force_document_id = false

  ## Value Handling
  ## Specifies the handling of NaN and Inf values.
  ## This option can have the following values:
  ##    none    -- do not modify field-values (default); will produce an error
  ##               if NaNs or infs are encountered
  ##    drop    -- drop fields containing NaNs or infs
  ##    replace -- replace with the value in "float_replacement_value" (default: 0.0)
  ##               NaNs and inf will be replaced with the given number, -inf with the negative of that number
  # float_handling = "none"
  # float_replacement_value = 0.0

  ## Pipeline Config
  ## To use an ingest pipeline, set this to the name of the pipeline you want to use.
  # use_pipeline = "my_pipeline"

  ## Pipeline Name
  ## Additionally, you can specify a tag name using the notation (`{{.Tag "tag_name"}}`)
  ## which will be used as the pipeline name (e.g. "{{.Tag \"os_pipeline\"}}").
  ## If the tag does not exist, the default pipeline will be used as the pipeline.
  ## If no default pipeline is set, no pipeline is used for the metric.
  # default_pipeline = ""
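
Putting the two halves together, a minimal telegraf.conf sketch pairs the vSphere input with the OpenSearch output; the endpoints and credentials below are placeholders:

## Collect vSphere metrics and write them to a daily OpenSearch index.
[[inputs.vsphere]]
  vcenters = ["https://vcenter.local/sdk"]     ## placeholder vCenter endpoint
  username = "user@corp.local"
  password = "secret"

[[outputs.opensearch]]
  urls = ["http://node1.os.example.com:9200"]  ## placeholder OpenSearch node
  index_name = 'telegraf-{{.Time.Format "2006-01-02"}}'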

Input and output integration examples

VMware vSphere

  1. Dynamic Resource Allocation: Utilize this plugin to monitor resource usage across a fleet of VMs and automatically adjust resource allocations based on performance metrics. This scenario could involve triggering scaling actions in real time based on CPU and memory usage metrics collected from the vSphere API, ensuring optimal performance and cost-efficiency.

  2. Capacity Planning and Forecasting: Leverage the historical metrics gathered from vSphere to conduct capacity planning. Analyzing the trends of CPU, memory, and storage usage over time helps administrators anticipate when additional resources will be needed, avoiding outages and ensuring that the virtual infrastructure can handle growth.

  3. Automated Alerting and Incident Response: Integrate this plugin with alerting tools to set up automated notifications based on the metrics gathered. For example, if the CPU usage on a host exceeds a specified threshold, it could trigger alerts and automatically initiate predefined remediation steps, such as migrating VMs to less utilized hosts.

  4. Performance Benchmarking Across Clusters: Use the collected metrics to compare the performance of clusters in different vCenters. This benchmarking shows which cluster configurations yield the best resource efficiency and can guide future infrastructure improvements (a multi-vCenter configuration sketch follows this list).
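
A minimal sketch of such a multi-vCenter setup, assuming two hypothetical vCenter endpoints; every server in the list is polled, and the resulting metrics can be grouped by their source vCenter downstream:

[[inputs.vsphere]]
  ## Hypothetical endpoints; each vCenter in the list is polled
  vcenters = [
    "https://vcenter-east.example.com/sdk",
    "https://vcenter-west.example.com/sdk",
  ]
  username = "user@corp.local"
  password = "secret"
  ## One worker per vCenter (assumption; tune to your environment)
  collect_concurrency = 2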

OpenSearch

  1. Dynamic Indexing for Time-Series Data: Use the OpenSearch Telegraf plugin to dynamically create indexes for time-series metrics, keeping data organized for time-based queries. By defining index patterns with Go templates, users can create daily or monthly indexes, which greatly simplifies data management and retrieval over time and improves analytical performance (see the first sketch after this list).

  2. Centralized Logging for Multi-Tenant Applications: Implement the OpenSearch plugin in a multi-tenant application where each tenant's logs are sent to separate indexes. This enables targeted analysis and monitoring per tenant while maintaining data isolation. With the index-name templating feature, tenant-specific indexes are created automatically, which streamlines the process and strengthens security and access control for tenant data (see the second sketch after this list).

  3. Integration with Machine Learning for Anomaly Detection: Leverage the OpenSearch plugin alongside machine learning tools to automatically detect anomalies in metrics data. By configuring the plugin to send real-time metrics to OpenSearch, users can apply machine learning models on the incoming data streams to identify outliers or unusual patterns, facilitating proactive monitoring and swift remedial actions.

  4. Enhanced Monitoring Dashboards with OpenSearch: Use the metrics collected from OpenSearch to create real-time dashboards that provide insights into system performance. By feeding metrics into OpenSearch, organizations can utilize OpenSearch Dashboards to visualize key performance indicators, allowing operations teams to quickly assess health and performance, and making data-driven decisions.
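
As configuration sketches for the first two scenarios above (the "telegraf-" and "logs-" index prefixes and the "tenant" tag are illustrative assumptions, not fixed names):

## Scenario 1: daily time-based indexes, e.g. telegraf-2023-07-27
[[outputs.opensearch]]
  urls = ["http://node1.os.example.com:9200"]
  index_name = 'telegraf-{{.Time.Format "2006-01-02"}}'

## Scenario 2: per-tenant monthly indexes keyed off a "tenant" tag;
## metrics missing the tag fall back to an empty tag value.
[[outputs.opensearch]]
  urls = ["http://node1.os.example.com:9200"]
  index_name = 'logs-{{.Tag "tenant"}}-{{.Time.Format "2006-01"}}'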

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration