Kubernetes and Datadog Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+ Telegraf downloads
#1 Time series database (Source: DB-Engines)
1B+ Downloads of InfluxDB
2,800+ Contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
This plugin captures metrics for Kubernetes pods and containers by communicating with the Kubelet API.
The Datadog Telegraf Plugin enables the submission of metrics to the Datadog Metrics API, facilitating efficient monitoring and data analysis through a reliable metric ingestion process.
Integration details
Kubernetes
The Kubernetes input plugin interfaces with the Kubelet API to gather metrics for running pods and containers on a single host, ideally as part of a daemonset in a Kubernetes installation. By operating on each node within the cluster, it collects metrics from the locally running kubelet, ensuring that the data reflects the real-time state of the environment. Being a rapidly evolving project, Kubernetes sees frequent updates, and this plugin adheres to the major cloud providers’ supported versions, maintaining compatibility across multiple releases within a limited time span. Significant consideration is given to the potential high series cardinality, which can burden the database; thus, users are advised to implement filtering techniques and retention policies to manage this load effectively. Configuration options provide flexible customization of the plugin’s behavior to integrate seamlessly into different setups, enhancing its utility in monitoring Kubernetes environments.
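For example, a minimal sketch of how such filtering might look, using the plugin's label filters together with Telegraf's standard namepass selector (the label name and measurement names here are illustrative and should be checked against your own data):
[[inputs.kubernetes]]
url = "http://127.0.0.1:10255"
## Only turn the pod labels you actually query on into tags; "app" is illustrative.
label_include = ["app"]
label_exclude = []
## Keep only the measurements you need, to limit series cardinality.
namepass = ["kubernetes_pod_container", "kubernetes_node"]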
Datadog
This plugin writes to the Datadog Metrics API, enabling users to send metrics for monitoring and performance analysis. By utilizing the Datadog API key, users can configure the plugin to establish a connection with Datadog’s v1 API. The plugin supports various configuration options including connection timeouts, HTTP proxy settings, and data compression methods, ensuring adaptability to different deployment environments. The ability to transform count metrics into rates enhances the integration of Telegraf with Datadog agents, particularly beneficial for applications that rely on real-time performance metrics.
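As a concrete illustration, a minimal sketch that aligns the plugin's rate conversion with the Datadog agent's default 10-second interval (the API key is a placeholder supplied through Telegraf's environment-variable expansion):
[[outputs.datadog]]
## Placeholder; Telegraf expands ${DATADOG_API_KEY} from the environment.
apikey = "${DATADOG_API_KEY}"
## Convert statsd count metrics into rates over the same 10s window the
## Datadog agent uses by default.
rate_interval = "10s"
## Optional: compress request bodies to reduce egress.
compression = "zlib"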
Configuration
Kubernetes
[[inputs.kubernetes]]
## URL for the kubelet, if empty read metrics from all nodes in the cluster
url = "http://127.0.0.1:10255"
## Use bearer token for authorization. ('bearer_token' takes priority)
## If both of these are empty, we'll use the default serviceaccount:
## at: /var/run/secrets/kubernetes.io/serviceaccount/token
##
## To re-read the token at each interval, please use a file with the
## bearer_token option. If given a string, Telegraf will always use that
## token.
# bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
## OR
# bearer_token_string = "abc_123"
## Kubernetes Node Metric Name
## The default Kubernetes node metric name (i.e. kubernetes_node) is the same
## for the kubernetes and kube_inventory plugins. To avoid conflicts, set this
## option to a different value.
# node_metric_name = "kubernetes_node"
## Pod labels to be added as tags. An empty array for both include and
## exclude will include all labels.
# label_include = []
# label_exclude = ["*"]
## Set response_timeout (default 5 seconds)
# response_timeout = "5s"
## Optional TLS Config
# tls_ca = "/path/to/cafile"
# tls_cert = "/path/to/certfile"
# tls_key = "/path/to/keyfile"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
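When Telegraf runs as a daemonset, a common pattern is to point url at the local node's kubelet instead of a hard-coded address. A minimal sketch, assuming the node IP is injected into a HOST_IP environment variable through the downward API (the variable name is an assumption) and the kubelet's secured port (10250) is used:
[[inputs.kubernetes]]
## HOST_IP is assumed to be set on the pod via the downward API (status.hostIP);
## Telegraf expands ${HOST_IP} when it loads the configuration.
url = "https://${HOST_IP}:10250"
## Authenticate with the mounted service account token, re-read each interval.
bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
## The kubelet's serving certificate is often self-signed.
insecure_skip_verify = true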
Datadog
[[outputs.datadog]]
## Datadog API key
apikey = "my-secret-key"
## Connection timeout.
# timeout = "5s"
## Write URL override; useful for debugging.
## This plugin only supports the v1 API currently due to the authentication
## method used.
# url = "https://app.datadoghq.com/api/v1/series"
## Set http_proxy
# use_system_proxy = false
# http_proxy_url = "http://localhost:8888"
## Override the default (none) compression used to send data.
## Supports: "zlib", "none"
# compression = "none"
## When non-zero, converts count metrics submitted by inputs.statsd
## into rate, while dividing the metric value by this number.
## Note that in order for metrics to be submitted simultaneously alongside
## a Datadog agent, rate_interval has to match the interval used by the
## agent - which defaults to 10s.
# rate_interval = "0s"
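Putting the two halves together, a minimal end-to-end telegraf.conf sketch that reads from the local kubelet and forwards the metrics to Datadog (the kubelet address and API key are placeholders):
[agent]
interval = "10s"

[[inputs.kubernetes]]
url = "http://127.0.0.1:10255"

[[outputs.datadog]]
## Placeholder; supplied via environment-variable expansion.
apikey = "${DATADOG_API_KEY}"
compression = "zlib"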
Input and output integration examples
Kubernetes
- Dynamic Resource Allocation Monitoring: By utilizing the Kubernetes plugin, teams can set up alerts for resource usage patterns across various pods and containers. This proactive monitoring approach enables automatic scaling of resources in response to specific thresholds, helping to optimize performance while minimizing costs during peak usage.
- Multi-tenancy Resource Isolation Analysis: Organizations using Kubernetes can leverage this plugin to track resource consumption per namespace. In a multi-tenant scenario, understanding resource allocations and usage across different teams is critical for ensuring fair access and performance guarantees, leading to better resource management strategies (see the sketch after this list).
- Real-time Health Dashboards: Integrate the data captured by the Kubernetes plugin into visualization tools like Grafana to create real-time dashboards. These dashboards provide insights into the overall health and performance of the Kubernetes environment, allowing teams to quickly identify and rectify issues across clusters, pods, and containers.
- Automated Incident Response Workflows: By combining the Kubernetes plugin with alert management systems, teams can automate incident response procedures based on real-time metrics. If a pod's resource usage exceeds predefined limits, an automated workflow can trigger remediation actions, such as restarting the pod or reallocating resources, all of which help improve system resilience.
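For the multi-tenancy scenario above, a minimal sketch that forwards only the namespaces belonging to specific teams, using Telegraf's standard tagpass selector (the namespace values are hypothetical):
[[inputs.kubernetes]]
url = "http://127.0.0.1:10255"
## Pass only metrics whose namespace tag matches one of these values; metrics
## without a namespace tag (e.g. node-level metrics) are dropped by this filter.
[inputs.kubernetes.tagpass]
namespace = ["team-a", "team-b"]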
Datadog
- Real-Time Infrastructure Monitoring: Use the Datadog plugin to monitor server metrics in real time by sending CPU usage and memory statistics directly to Datadog. This integration allows IT teams to visualize and analyze system performance metrics in a centralized dashboard, enabling proactive response to emerging issues such as resource bottlenecks or server overloads (see the sketch after this list).
- Application Performance Tracking: Leverage this plugin to submit application-specific metrics, such as request counts and error rates, to Datadog. By integrating with application monitoring tools, teams can correlate infrastructure metrics with application performance, providing insights that enable them to optimize code performance and improve user experience.
- Anomaly Detection in Metrics: Configure the Datadog plugin to send metrics that can trigger alerts and notifications based on unusual patterns detected by Datadog's machine learning features. This proactive monitoring helps teams react swiftly to potential outages or performance degradation before customers are impacted.
- Integrating with Cloud Services: By utilizing the Datadog plugin to send metrics from cloud resources, IT teams can gain visibility into cloud application performance. Monitoring metrics like latency and error rates helps ensure service-level agreements (SLAs) are met and assists in optimizing resource allocation across cloud environments.
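For the infrastructure monitoring use case above, a minimal sketch that ships host CPU and memory statistics to Datadog alongside the Kubernetes metrics (the API key is a placeholder):
[[inputs.cpu]]
## Report aggregate CPU usage rather than one series per core.
percpu = false
totalcpu = true

[[inputs.mem]]

[[outputs.datadog]]
## Placeholder; supplied via environment-variable expansion.
apikey = "${DATADOG_API_KEY}"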
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration