HTTP and OpenSearch Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
Input and output integration overview
The HTTP plugin allows for the collection of metrics from specified HTTP endpoints, handling various data formats and authentication methods.
The OpenSearch Output Plugin sends metrics directly to an OpenSearch instance over HTTP, enabling data management and analytics within the OpenSearch ecosystem.
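As a quick orientation, a minimal pairing of the two plugins might look like the following sketch; the cluster address and index name are placeholders, not values prescribed by either plugin.
[[inputs.http]]
## Hypothetical endpoint serving metrics in InfluxDB line protocol
urls = ["http://localhost/metrics"]
data_format = "influx"

[[outputs.opensearch]]
## Hypothetical single-node cluster; adjust to your deployment
urls = ["http://localhost:9200"]
## One index per day, e.g. telegraf-2023-07-27
index_name = "telegraf-{{.Time.Format \"2006-01-02\"}}"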
Integration details
HTTP
The HTTP plugin collects metrics from one or more HTTP(S) endpoints, which should have metrics formatted in one of the supported input data formats. It also supports secrets from secret-stores for various authentication options and includes globally supported configuration settings.
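For example, a bearer token can be pulled from a secret store instead of being written into the config file. The sketch below assumes a secret store registered with the id "mystore" holding a secret named "http_token"; both names are illustrative.
[[secretstores.os]]
id = "mystore"

[[inputs.http]]
urls = ["https://example.com/metrics"]
## Reference a secret from the store rather than a plaintext token
token = "@{mystore:http_token}"
data_format = "influx"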
OpenSearch
The OpenSearch Telegraf Plugin integrates with the OpenSearch database via HTTP, allowing for streamlined collection and storage of metrics. The plugin is designed for OpenSearch 2.x releases; 1.x clusters remain compatible through the original Elasticsearch plugin. It creates and manages indexes in OpenSearch, automatically handling index templates so that data is structured efficiently for analysis. Configuration options such as index names, authentication, health checks, and value handling let it be tailored to diverse operational requirements, making it a strong choice for organizations that want to use OpenSearch for metrics storage and querying.
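As one example of that tailoring, the value-handling options can replace NaN and Inf field values before indexing, since such values would otherwise produce errors. A minimal sketch, with a placeholder cluster URL:
[[outputs.opensearch]]
urls = ["http://localhost:9200"]
index_name = "telegraf"
## Replace NaN/Inf field values with 0.0 instead of erroring on them
float_handling = "replace"
float_replacement_value = 0.0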
Configuration
HTTP
[[inputs.http]]
## One or more URLs from which to read formatted metrics.
urls = [
"http://localhost/metrics",
"http+unix:///run/user/420/podman/podman.sock:/d/v4.0.0/libpod/pods/json"
]
## HTTP method
# method = "GET"
## Optional HTTP headers
# headers = {"X-Special-Header" = "Special-Value"}
## HTTP entity-body to send with POST/PUT requests.
# body = ""
## HTTP Content-Encoding for write request body, can be set to "gzip" to
## compress body or "identity" to apply no encoding.
# content_encoding = "identity"
## Optional Bearer token settings to use for the API calls.
## Use either the token itself or the token file if you need a token.
# token = "eyJhbGc...Qssw5c"
# token_file = "/path/to/file"
## Optional HTTP Basic Auth Credentials
# username = "username"
# password = "pa$$word"
## OAuth2 Client Credentials. The options 'client_id', 'client_secret', and 'token_url' are required to use OAuth2.
# client_id = "clientid"
# client_secret = "secret"
# token_url = "https://identityprovider/oauth2/v1/token"
# scopes = ["urn:opc:idm:__myscopes__"]
## HTTP Proxy support
# use_system_proxy = false
# http_proxy_url = ""
## Optional TLS Config
## Set to true/false to enforce TLS being enabled/disabled. If not set,
## enable TLS only if any of the other options are specified.
# tls_enable =
## Trusted root certificates for server
# tls_ca = "/path/to/cafile"
## Used for TLS client certificate authentication
# tls_cert = "/path/to/certfile"
## Used for TLS client certificate authentication
# tls_key = "/path/to/keyfile"
## Password for the key file if it is encrypted
# tls_key_pwd = ""
## Send the specified TLS server name via SNI
# tls_server_name = "kubernetes.example.com"
## Minimal TLS version to accept by the client
# tls_min_version = "TLS12"
## List of ciphers to accept, by default all secure ciphers will be accepted
## See https://pkg.go.dev/crypto/tls#pkg-constants for supported values.
## Use "all", "secure" and "insecure" to add all support ciphers, secure
## suites or insecure suites respectively.
# tls_cipher_suites = ["secure"]
## Renegotiation method, "never", "once" or "freely"
# tls_renegotiation_method = "never"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Optional Cookie authentication
# cookie_auth_url = "https://localhost/authMe"
# cookie_auth_method = "POST"
# cookie_auth_username = "username"
# cookie_auth_password = "pa$$word"
# cookie_auth_headers = { Content-Type = "application/json", X-MY-HEADER = "hello" }
# cookie_auth_body = '{"username": "user", "password": "pa$$word", "authenticate": "me"}'
## cookie_auth_renewal not set or set to "0" will auth once and never renew the cookie
# cookie_auth_renewal = "5m"
## Amount of time allowed to complete the HTTP request
# timeout = "5s"
## List of success status codes
# success_status_codes = [200]
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx"
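Putting a few of these options together, a sketch for polling a token-protected JSON endpoint might look like this; the endpoint URL and token file path are hypothetical:
[[inputs.http]]
urls = ["https://api.example.com/v1/metrics"]
## Bearer token read from a file instead of being inlined in the config
token_file = "/etc/telegraf/api_token"
timeout = "10s"
## Also accept 202 responses as successful
success_status_codes = [200, 202]
data_format = "json"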
OpenSearch
[[outputs.opensearch]]
## URLs
## The full HTTP endpoint URL for your OpenSearch instance. Multiple URLs can
## be specified as part of the same cluster, but only one URL is used to
## write during each interval.
urls = ["http://node1.os.example.com:9200"]
## Index Name
## Target index name for metrics (OpenSearch will create it if it does not exist).
## This is a Golang template (see https://pkg.go.dev/text/template)
## You can also specify the
## metric name (`{{.Name}}`), a tag value (`{{.Tag "tag_name"}}`), a field value (`{{.Field "field_name"}}`),
## or the timestamp (`{{.Time.Format "xxxxxxxxx"}}`).
## If the tag does not exist, the default tag value will be the empty string "".
## For example: "telegraf-{{.Time.Format \"2006-01-02\"}}-{{.Tag \"host\"}}" would set it to telegraf-2023-07-27-HostName
index_name = ""
## Timeout
## OpenSearch client timeout
# timeout = "5s"
## Sniffer
## Set to true to ask OpenSearch for a list of all cluster nodes,
## making it unnecessary to list every node in the urls config option
# enable_sniffer = false
## GZIP Compression
## Set to true to enable gzip compression
# enable_gzip = false
## Health Check Interval
## Set the interval to check if the OpenSearch nodes are available
## Setting to "0s" will disable the health check (not recommended in production)
# health_check_interval = "10s"
## Set the timeout for periodic health checks.
# health_check_timeout = "1s"
## HTTP basic authentication details.
# username = ""
# password = ""
## HTTP bearer token authentication details
# auth_bearer_token = ""
## Optional TLS Config
## Set to true/false to enforce TLS being enabled/disabled. If not set,
## enable TLS only if any of the other options are specified.
# tls_enable =
## Trusted root certificates for server
# tls_ca = "/path/to/cafile"
## Used for TLS client certificate authentication
# tls_cert = "/path/to/certfile"
## Used for TLS client certificate authentication
# tls_key = "/path/to/keyfile"
## Send the specified TLS server name via SNI
# tls_server_name = "kubernetes.example.com"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Template Config
## Manage templates
## Set to true if you want telegraf to manage its index template.
## If enabled it will create a recommended index template for telegraf indexes
# manage_template = true
## Template Name
## The template name used for telegraf indexes
# template_name = "telegraf"
## Overwrite Templates
## Set to true if you want telegraf to overwrite an existing template
# overwrite_template = false
## Document ID
## If set to true a unique ID hash will be sent as
## sha256(concat(timestamp,measurement,series-hash)) string. This enables
## resending and updating metric points while avoiding duplicated metrics
## with different IDs
# force_document_id = false
## Value Handling
## Specifies the handling of NaN and Inf values.
## This option can have the following values:
## none -- do not modify field-values (default); will produce an error
## if NaNs or infs are encountered
## drop -- drop fields containing NaNs or infs
## replace -- replace with the value in "float_replacement_value" (default: 0.0)
## NaNs and inf will be replaced with the given number, -inf with the negative of that number
# float_handling = "none"
# float_replacement_value = 0.0
## Pipeline Config
## To use an ingest pipeline, set this to the name of the pipeline you want to use.
# use_pipeline = "my_pipeline"
## Pipeline Name
## Additionally, you can specify a tag name using the notation (`{{.Tag "tag_name"}}`)
## which will be used as the pipeline name (e.g. "{{.Tag \"os_pipeline\"}}").
## If the tag does not exist, the default pipeline will be used.
## If no default pipeline is set, no pipeline is used for the metric.
# default_pipeline = ""
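A compact example pulling several of these options together; the cluster URL and credentials below are placeholders:
[[outputs.opensearch]]
urls = ["https://opensearch.example.com:9200"]
## Daily index, e.g. telegraf-2023-07-27
index_name = "telegraf-{{.Time.Format \"2006-01-02\"}}"
enable_gzip = true
health_check_interval = "10s"
## Hypothetical basic-auth credentials
username = "telegraf"
password = "example-password"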
Input and output integration examples
HTTP
- Collecting Metrics from Localhost: The plugin can fetch metrics from an HTTP endpoint like http://localhost/metrics, allowing for easy local monitoring.
- Using Unix Domain Sockets: You can collect metrics from services over Unix domain sockets by using the http+unix scheme, for example, http+unix:///path/to/service.sock:/api/endpoint (see the sketch below).
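A sketch of the Unix-socket case, assuming a hypothetical service socket at /var/run/myservice.sock exposing a /api/metrics path that returns JSON:
[[inputs.http]]
## The path after the socket is the request path sent over it
urls = ["http+unix:///var/run/myservice.sock:/api/metrics"]
data_format = "json"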
OpenSearch
- Dynamic Indexing for Time-Series Data: Utilize the OpenSearch Telegraf plugin to dynamically create indexes for time-series metrics, ensuring that data is stored in a manner conducive to time-based queries. By defining index patterns with Go templates, users can create daily or monthly indexes, which simplifies data management and retrieval over time (see the sketch after this list).
- Centralized Logging for Multi-Tenant Applications: Implement the OpenSearch plugin in a multi-tenant application where each tenant's logs are sent to separate indexes. This enables targeted analysis and monitoring for each tenant while maintaining data isolation. The index name templating feature can automatically create tenant-specific indexes, which streamlines the process and strengthens security and accessibility for tenant data.
- Integration with Machine Learning for Anomaly Detection: Leverage the OpenSearch plugin alongside machine learning tools to automatically detect anomalies in metrics data. By configuring the plugin to send real-time metrics to OpenSearch, users can apply machine learning models to the incoming data streams to identify outliers or unusual patterns, enabling proactive monitoring and swift remedial action.
- Enhanced Monitoring Dashboards with OpenSearch: Feed the collected metrics into OpenSearch and use OpenSearch Dashboards to build real-time views of system performance, allowing operations teams to visualize key performance indicators, quickly assess health, and make data-driven decisions.
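A minimal index-name sketch covering the dynamic-indexing and multi-tenant cases above; the "tenant" tag is hypothetical and would come from your own pipeline:
[[outputs.opensearch]]
urls = ["http://node1.os.example.com:9200"]
## Monthly, per-tenant index, e.g. telegraf-acme-2023-07.
## If the tenant tag is missing, the tag value defaults to "" and
## the index becomes telegraf--2023-07.
index_name = "telegraf-{{.Tag \"tenant\"}}-{{.Time.Format \"2006-01\"}}"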
Feedback
Thank you for being part of our community! If you have any general feedback or find any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.