AWS Data Firehose and IoTDB Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Note

This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider AWS Data Firehose and InfluxDB.

5B+ Telegraf downloads | #1 time series database (source: DB Engines) | 1B+ downloads of InfluxDB | 2,800+ contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

This plugin listens for metrics sent via HTTP from AWS Data Firehose in supported data formats, providing real-time data ingestion capabilities.

This plugin saves Telegraf metrics to an Apache IoTDB backend, supporting session connection and data insertion.

Integration details

AWS Data Firehose

The AWS Data Firehose Telegraf plugin is designed to receive metrics from AWS Data Firehose via HTTP. This plugin listens for incoming data in various formats and processes it according to the request-response schema outlined in the official AWS documentation. Unlike standard input plugins that operate on a fixed interval, this service plugin initializes a listener that remains active, waiting for incoming metrics. This allows for real-time data ingestion from AWS Data Firehose, making it suitable for scenarios where immediate data processing is required. Key features include configurable service addresses and paths, and TLS support for secure data transmission. Additionally, the plugin accommodates optional authentication keys and custom tags, enhancing its flexibility in use cases involving data streaming and processing.
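For illustration, a listener secured with TLS and an access key might be configured as sketched below; the address, certificate paths, and key value are placeholders, and the full set of options appears in the Configuration section further down.

[[inputs.firehose]]
  ## Listen on port 8443 and only accept requests to /telegraf (placeholder values)
  service_address = ":8443"
  paths = ["/telegraf"]

  ## Serve TLS so Firehose can deliver over HTTPS (placeholder certificate paths)
  tls_cert = "/etc/telegraf/cert.pem"
  tls_key = "/etc/telegraf/key.pem"

  ## Reject requests whose "x-amz-firehose-access-key" header does not match (placeholder key)
  access_key = "my-secret-access-key"

  ## Promote the "env" common attribute to a tag, if present on the request
  parameter_tags = ["env"]

  data_format = "influx"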

IoTDB

Apache IoTDB (Database for Internet of Things) is an IoT-native database with high performance for data management and analysis, deployable on the edge and in the cloud. Its lightweight architecture, high performance, and rich feature set make it a perfect fit for massive data storage, high-speed data ingestion, and complex analytics in industrial IoT. IoTDB deeply integrates with Apache Hadoop, Spark, and Flink, which further enhances its capabilities for handling large-scale data and sophisticated processing tasks.

Configuration

AWS Data Firehose

[[inputs.firehose]]
  ## Address and port to host HTTP listener on
  service_address = ":8080"

  ## Paths to listen to.
  # paths = ["/telegraf"]

  ## maximum duration before timing out read of the request
  # read_timeout = "5s"
  ## maximum duration before timing out write of the response
  # write_timeout = "5s"

  ## Set one or more allowed client CA certificate file names to
  ## enable mutually authenticated TLS connections
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Add service certificate and key
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"

  ## Minimal TLS version accepted by the server
  # tls_min_version = "TLS12"

  ## Optional access key to accept for authentication.
  ## AWS Data Firehose uses "x-amz-firehose-access-key" header to set the access key.
  ## If no access_key is provided (default), authentication is completely disabled and
  ## this plugin will accept all requests, ignoring the access key provided in the request!
  # access_key = "foobar"

  ## Optional setting to add parameters as tags
  ## If the http header "x-amz-firehose-common-attributes" is not present on the
  ## request, no corresponding tag will be added. The header value should be
  ## JSON and should follow the schema described in the official documentation:
  ## https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html#requestformat
  # parameter_tags = ["env"]

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  # data_format = "influx"
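As a sketch of what the data_format option can look like in practice, the listener below is set up to parse incoming records with Telegraf's JSON parser; the tag and timestamp field names are hypothetical and would depend on the payload your Firehose stream actually delivers.

[[inputs.firehose]]
  service_address = ":8080"

  ## Parse each delivered record as JSON (parser options are described in DATA_FORMATS_INPUT.md)
  data_format = "json"

  ## Hypothetical payload layout: keep "device" as a tag and read the
  ## timestamp from a "ts" field holding a Unix epoch in seconds.
  tag_keys = ["device"]
  json_time_key = "ts"
  json_time_format = "unix"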

IoTDB

[[outputs.iotdb]]
  ## Configuration of IoTDB server connection
  host = "127.0.0.1"
  # port = "6667"

  ## Configuration of authentication
  # user = "root"
  # password = "root"

  ## Timeout to open a new session.
  ## A value of zero means no timeout.
  # timeout = "5s"

  ## Configuration of type conversion for 64-bit unsigned int
  ## IoTDB currently DOES NOT support unsigned integers (version 0.13.x).
  ## 32-bit unsigned integers are safely converted into 64-bit signed integers by the plugin;
  ## however, this is not true for 64-bit values in general as overflows may occur.
  ## The following setting specifies the handling of 64-bit unsigned integers.
  ## Available values are:
  ##   - "int64"       --  convert to 64-bit signed integers and accept overflows
  ##   - "int64_clip"  --  convert to 64-bit signed integers and clip the values on overflow to 9,223,372,036,854,775,807
  ##   - "text"        --  convert to the string representation of the value
  # uint64_conversion = "int64_clip"

  ## Configuration of TimeStamp
  ## Timestamps are always saved as 64-bit integers. timestamp_precision specifies the unit of the timestamp.
  ## Available values:
  ## "second", "millisecond", "microsecond", "nanosecond" (default)
  # timestamp_precision = "nanosecond"

  ## Handling of tags
  ## Tags are not fully supported by IoTDB.
  ## A guide with suggestions on how to handle tags can be found here:
  ##     https://iotdb.apache.org/UserGuide/Master/API/InfluxDB-Protocol.html
  ##
  ## Available values are:
  ##   - "fields"     --  convert tags to fields in the measurement
  ##   - "device_id"  --  attach tags to the device ID
  ##
  ## For example, a metric named "root.sg.device" with the tags `tag1: "private"` and `tag2: "working"` and
  ##  fields `s1: 100`  and `s2: "hello"` will result in the following representations in IoTDB
  ##   - "fields"     --  root.sg.device, s1=100, s2="hello", tag1="private", tag2="working"
  ##   - "device_id"  --  root.sg.device.private.working, s1=100, s2="hello"
  # convert_tags_to = "device_id"

  ## Handling of unsupported characters
  ## Some characters are not supported in path names in different versions of IoTDB.
  ## A guide with suggestions on valid paths can be found here:
  ## for iotdb 0.13.x           -> https://iotdb.apache.org/UserGuide/V0.13.x/Reference/Syntax-Conventions.html#identifiers
  ## for iotdb 1.x.x and above  -> https://iotdb.apache.org/UserGuide/V1.3.x/User-Manual/Syntax-Rule.html#identifier
  ##
  ## Available values are:
  ##   - "1.0", "1.1", "1.2", "1.3"  -- enclose in `` the world having forbidden character 
  ##                                    such as @ $ # : [ ] { } ( ) space
  ##   - "0.13"                      -- enclose in `` the world having forbidden character 
  ##                                    such as space
  ##
  ## Keep this section commented if you don't want to sanitize the path
  # sanitize_tag = "1.3"
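Putting the two halves together, a minimal end-to-end pipeline that accepts records from AWS Data Firehose and writes them into IoTDB could be sketched as follows; the listener address, access key, IoTDB host, and credentials are placeholders to adjust for your environment.

[[inputs.firehose]]
  ## Accept Firehose deliveries on port 8080 (placeholder address and key)
  service_address = ":8080"
  access_key = "my-secret-access-key"
  data_format = "influx"

[[outputs.iotdb]]
  ## Write the resulting metrics to a local IoTDB instance (placeholder credentials)
  host = "127.0.0.1"
  port = "6667"
  user = "root"
  password = "root"

  ## Encode tags into the device ID and sanitize paths for IoTDB 1.3
  convert_tags_to = "device_id"
  sanitize_tag = "1.3"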

Input and output integration examples

AWS Data Firehose

  1. Real-Time Data Analytics: Using the AWS Data Firehose plugin, organizations can stream data in real-time from various sources, such as application logs or IoT devices, directly into analytics platforms. This allows data teams to analyze incoming data as it is generated, enabling rapid insights and operational adjustments based on fresh metrics.

  2. Profile Access Patterns for Optimization: By collecting data about how clients interact with applications through AWS Data Firehose, businesses can gain valuable insights into user behavior. This can drive content personalization strategies or optimize server architecture for better performance based on traffic patterns.

  3. Automated Alerting Mechanism: Integrating AWS Data Firehose with alerting systems via this plugin allows teams to set up automated alerts based on specific metrics collected. For example, if a particular threshold is reached in the input data, alerts can trigger operations teams to investigate potential issues before they escalate.

IoTDB

  1. Real-Time IoT Monitoring: Utilize the IoTDB plugin to gather sensor data from various IoT devices and save it in an Apache IoTDB backend, facilitating real-time monitoring of environmental conditions such as temperature and humidity. This use case enables organizations to analyze trends over time and make informed decisions based on historical data, while also utilizing IoTDB’s efficient storage and querying capabilities.

  2. Smart Agriculture Data Collection: Use the IoTDB plugin to collect metrics from smart agriculture sensors deployed in fields. By transmitting moisture levels, nutrient content, and atmospheric conditions to IoTDB, farmers can access detailed insights into optimal planting and watering schedules, thus improving crop yields and resource management.

  3. Energy Consumption Analytics: Leverage the IoTDB plugin to track energy consumption metrics from smart meters across a utility network. This integration enables analytics to identify peaks in usage and predict future consumption patterns, ultimately supporting energy conservation initiatives and improved utility management.

  4. Automated Industrial Equipment Monitoring: Use this plugin to gather operational metrics from machinery in a manufacturing plant and store them in IoTDB for analysis. This setup can help identify inefficiencies, predictive maintenance needs, and operational anomalies, ensuring optimal performance and minimizing unexpected downtimes.

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration