AWS Data Firehose and Sumo Logic Integration
Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.
5B+
Telegraf downloads
#1
Time series database
Source: DB Engines
1B+
Downloads of InfluxDB
2,800+
Contributors
Powerful Performance, Limitless Scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data, with InfluxDB, the #1 time series platform built to scale with Telegraf.
See Ways to Get Started
Input and output integration overview
This plugin listens for metrics sent via HTTP from AWS Data Firehose in supported data formats, providing real-time data ingestion capabilities.
The Sumo Logic plugin is designed to facilitate the sending of metrics from Telegraf to Sumo Logic’s HTTP Source. By utilizing this plugin, users can analyze their metric data in the Sumo Logic platform, leveraging various output data formats.
Integration details
AWS Data Firehose
The AWS Data Firehose Telegraf plugin is designed to receive metrics from AWS Data Firehose via HTTP. This plugin listens for incoming data in various formats and processes it according to the request-response schema outlined in the official AWS documentation. Unlike standard input plugins that operate on a fixed interval, this service plugin initializes a listener that remains active, waiting for incoming metrics. This allows for real-time data ingestion from AWS Data Firehose, making it suitable for scenarios where immediate data processing is required. Key features include the ability to specify service addresses, paths, and support for TLS connections for secure data transmission. Additionally, the plugin accommodates optional authentication keys and custom tags, enhancing its flexibility in various use cases involving data streaming and processing.
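To make the request-response flow concrete, here is a minimal Python sketch that posts one Firehose-style HTTP delivery request to a locally running listener. The endpoint address, access key, and influx line-protocol record are assumptions chosen to match the sample configuration below, not values taken from AWS.
import base64
import json
import time
import uuid

import requests

# Assumed local listener matching service_address = ":8080" and the default
# "/telegraf" path; data_format = "influx" is assumed for the record payload.
TELEGRAF_URL = "http://localhost:8080/telegraf"
ACCESS_KEY = "foobar"  # hypothetical; must match the plugin's access_key setting

# Firehose delivery requests carry base64-encoded records.
line = "cpu,host=web01 usage_idle=92.5"
body = {
    "requestId": str(uuid.uuid4()),
    "timestamp": int(time.time() * 1000),
    "records": [{"data": base64.b64encode(line.encode("utf-8")).decode("ascii")}],
}

headers = {
    "Content-Type": "application/json",
    "x-amz-firehose-request-id": body["requestId"],
    "x-amz-firehose-access-key": ACCESS_KEY,
}

resp = requests.post(TELEGRAF_URL, data=json.dumps(body), headers=headers, timeout=5)
# A successful response echoes the requestId along with a timestamp, per the AWS schema.
print(resp.status_code, resp.text)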
Sumo Logic
This plugin facilitates the transmission of metrics to Sumo Logic’s HTTP Source, employing specified data formats for HTTP messages. Telegraf, which must be version 1.16.0 or higher, can send metrics encoded in several formats, including graphite, carbon2, and prometheus. These formats correspond to different content types recognized by Sumo Logic, ensuring that the metrics are correctly interpreted for analysis. Integration with Sumo Logic allows users to leverage a comprehensive analytics platform, enabling rich visualizations and insights from their metric data. The plugin provides configuration options such as setting URLs for the HTTP Metrics Source, choosing the data format, and specifying additional parameters like timeout and request size, which enhance flexibility and control in data monitoring workflows.
Configuration
AWS Data Firehose
[[inputs.firehose]]
## Address and port to host HTTP listener on
service_address = ":8080"
## Paths to listen to.
# paths = ["/telegraf"]
## maximum duration before timing out read of the request
# read_timeout = "5s"
## maximum duration before timing out write of the response
# write_timeout = "5s"
## Set one or more allowed client CA certificate file names to
## enable mutually authenticated TLS connections
# tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
## Add service certificate and key
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Minimal TLS version accepted by the server
# tls_min_version = "TLS12"
## Optional access key to accept for authentication.
## AWS Data Firehose uses "x-amz-firehose-access-key" header to set the access key.
## If no access_key is provided (default), authentication is completely disabled and
## this plugin will accept all requests, ignoring any access key provided in the request!
# access_key = "foobar"
## Optional setting to add parameters as tags
## If the http header "x-amz-firehose-common-attributes" is not present on the
## request, no corresponding tag will be added. The header value should be a
## JSON object and should follow the schema described in the official documentation:
## https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html#requestformat
# parameter_tags = ["env"]
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx"
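As a small illustration of the parameter_tags option above, a Firehose request can carry common attributes in the x-amz-firehose-common-attributes header; the attribute name and value below are hypothetical, and with parameter_tags = ["env"] the env key would be added as a tag on the resulting metrics.
import json

# Hypothetical common attributes; the header value is a JSON object following the
# schema linked in the configuration comments above.
common_attributes = {"commonAttributes": {"env": "prod"}}
extra_headers = {"x-amz-firehose-common-attributes": json.dumps(common_attributes)}
print(extra_headers)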
Sumo Logic
[[outputs.sumologic]]
## Unique URL generated for your HTTP Metrics Source.
## This is the address to send metrics to.
# url = "https://events.sumologic.net/receiver/v1/http/"
## Data format to be used for sending metrics.
## This will set the "Content-Type" header accordingly.
## Currently supported formats:
## * graphite - for Content-Type of application/vnd.sumologic.graphite
## * carbon2 - for Content-Type of application/vnd.sumologic.carbon2
## * prometheus - for Content-Type of application/vnd.sumologic.prometheus
##
## More information can be found at:
## https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source/Upload-Metrics-to-an-HTTP-Source#content-type-headers-for-metrics
##
## NOTE:
## When unset, Telegraf will default to the influx serializer, which is currently
## unsupported by the HTTP Source.
data_format = "carbon2"
## Timeout used for HTTP request
# timeout = "5s"
## Max HTTP request body size in bytes before compression (if applied).
## By default 1MB is recommended.
## NOTE:
## Bear in mind that with some serializers a metric, even though serialized to
## multiple lines, cannot be split any further, so setting this very low might
## not work as expected.
# max_request_body_size = 1000000
## Additional, Sumo specific options.
## Full list can be found here:
## https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source/Upload-Metrics-to-an-HTTP-Source#supported-http-headers
## Desired source name.
## Useful if you want to override the source name configured for the source.
# source_name = ""
## Desired host name.
## Useful if you want to override the source host configured for the source.
# source_host = ""
## Desired source category.
## Useful if you want to override the source category configured for the source.
# source_category = ""
## Comma-separated key=value list of dimensions to apply to every metric.
## Custom dimensions will allow you to query your metrics at a more granular level.
# dimensions = ""
Input and output integration examples
AWS Data Firehose
- Real-Time Data Analytics: Using the AWS Data Firehose plugin, organizations can stream data in real-time from various sources, such as application logs or IoT devices, directly into analytics platforms. This allows data teams to analyze incoming data as it is generated, enabling rapid insights and operational adjustments based on fresh metrics.
- Profile Access Patterns for Optimization: By collecting data about how clients interact with applications through AWS Data Firehose, businesses can gain valuable insights into user behavior. This can drive content personalization strategies or optimize server architecture for better performance based on traffic patterns.
- Automated Alerting Mechanism: Integrating AWS Data Firehose with alerting systems via this plugin allows teams to set up automated alerts based on specific metrics collected. For example, if a particular threshold is reached in the input data, alerts can trigger operations teams to investigate potential issues before they escalate.
Sumo Logic
- Real-Time System Monitoring Dashboard: Utilize the Sumo Logic plugin to continuously feed performance metrics from your servers into a Sumo Logic dashboard. This setup allows tech teams to visualize system health and load in real-time, enabling quicker identification of any performance bottlenecks or system failures through detailed graphs and metrics.
- Automated Alerting System: Configure the plugin to send metrics that trigger alerts in Sumo Logic for specific thresholds such as CPU usage or memory consumption. By setting up automated alerts, teams can proactively address issues before they escalate into critical failures, significantly improving response times and overall system reliability.
- Cross-System Metrics Aggregation: Integrate multiple Telegraf instances across different environments (development, testing, production) and funnel all metrics to a central Sumo Logic instance using this plugin. This aggregation enables comprehensive analysis across environments, facilitating better monitoring and informed decision-making across the software development lifecycle.
- Custom Metrics with Dimensions Tracking: Use the Sumo Logic plugin to send customized metrics that include dimensions identifying various aspects of your infrastructure (e.g., environment, service type). This granular tracking allows for more tailored analytics, enabling your team to dissect performance across different application layers or business functions.
Feedback
Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.
Related Integrations
HTTP and InfluxDB Integration
The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.
View Integration
Kafka and InfluxDB Integration
This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.
View Integration
Kinesis and InfluxDB Integration
The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.
View Integration