Azure Event Hubs and Snowflake Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time query at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider Azure Event Hubs and InfluxDB.

5B+ Telegraf downloads
#1 time series database (source: DB-Engines)
1B+ downloads of InfluxDB
2,800+ contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

The Azure Event Hubs Input Plugin allows Telegraf to consume data from Azure Event Hubs and Azure IoT Hub, enabling efficient data processing and monitoring of event streams from these cloud services.

Telegraf’s SQL plugin allows seamless metric storage in SQL databases. When configured for Snowflake, it employs a specialized DSN format and dynamic table creation to map metrics to the appropriate schema.

Integration details

Azure Event Hubs

This plugin serves as a consumer for Azure Event Hubs and Azure IoT Hub, allowing users to ingest data streams from these platforms efficiently. Azure Event Hubs is a highly scalable data streaming platform and event ingestion service capable of receiving and processing millions of events per second, while Azure IoT Hub enables secure device-to-cloud and cloud-to-device communication in IoT applications. The Event Hub Input Plugin interacts seamlessly with these services, providing reliable message consumption and stream processing capabilities. Key features include dynamic management of consumer groups, message tracking to prevent data loss, and customizable settings for prefetch counts, user agents, and metadata handling. This plugin is designed to support a range of use cases, including real-time telemetry data collection, IoT data processing, and integration with various data analysis and monitoring tools within the broader Azure ecosystem.
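
As a minimal sketch of how this could look in practice (assuming the EVENTHUB_CONNECTION_STRING environment variable already holds a listen-policy connection string that includes the EntityPath), the following configuration consumes JSON telemetry from an IoT Hub-backed Event Hub and uses the broker-side enqueued time as the metric timestamp. The full option reference appears in the Configuration section below.

# Minimal sketch only; names and values are illustrative, not prescriptive.
# Assumes EVENTHUB_CONNECTION_STRING is exported in Telegraf's environment.
[[inputs.eventhub_consumer]]
  ## Read through the default consumer group of the IoT Hub-compatible endpoint
  consumer_group = "$Default"

  ## Device payloads carry no timestamp, so use the IoT Hub enqueued time
  iot_hub_enqueued_time_as_ts = true

  ## Tag each metric with the originating device connection for per-device queries
  iot_hub_device_connection_id_tag = "IoTHubDeviceConnectionID"

  ## Parse device payloads as JSON
  data_format = "json"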

Snowflake

Telegraf’s SQL plugin is engineered to dynamically write metrics into an SQL database by creating tables and columns based on the incoming data. When configured for Snowflake, it employs the gosnowflake driver, which uses a DSN that encapsulates credentials, account details, and database configuration in a compact format. This setup allows for the automatic generation of tables where each metric is recorded with precise timestamps, thereby ensuring detailed historical tracking. Although the integration is considered experimental, it leverages Snowflake’s powerful data warehousing capabilities, making it suitable for scalable, cloud-based analytics and reporting solutions.
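
To make the Snowflake-specific pieces concrete, here is a hedged sketch of a DSN and a column-type mapping tuned to Snowflake-native types; the account, database, schema, and warehouse names are placeholders, and the type mapping is a suggestion rather than an official default. The full option reference appears in the Configuration section below.

# Illustrative sketch only; all identifiers are placeholders.
[[outputs.sql]]
  driver = "snowflake"

  ## gosnowflake DSN: user, password, and account identifier, followed by the
  ## database and schema; the warehouse is passed as a query parameter
  data_source_name = "TELEGRAF_USER:secret@myaccount/TELEGRAF_DB/PUBLIC?warehouse=COMPUTE_WH"
  timestamp_column = "timestamp"

  ## Suggested mapping of Telegraf field types onto Snowflake-native column types
  [outputs.sql.convert]
    integer      = "NUMBER"
    real         = "FLOAT"
    text         = "STRING"
    timestamp    = "TIMESTAMP_NTZ"
    defaultvalue = "STRING"
    bool         = "BOOLEAN"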

Configuration

Azure Event Hubs

[[inputs.eventhub_consumer]]
  ## The default behavior is to create a new Event Hub client from environment variables.
  ## This requires one of the following sets of environment variables to be set:
  ##
  ## 1) Expected Environment Variables:
  ##    - "EVENTHUB_CONNECTION_STRING"
  ##
  ## 2) Expected Environment Variables:
  ##    - "EVENTHUB_NAMESPACE"
  ##    - "EVENTHUB_NAME"
  ##    - "EVENTHUB_KEY_NAME"
  ##    - "EVENTHUB_KEY_VALUE"

  ## 3) Expected Environment Variables:
  ##    - "EVENTHUB_NAMESPACE"
  ##    - "EVENTHUB_NAME"
  ##    - "AZURE_TENANT_ID"
  ##    - "AZURE_CLIENT_ID"
  ##    - "AZURE_CLIENT_SECRET"

  ## Uncommenting the option below will create an Event Hub client based solely on the connection string.
  ## This can either be the associated environment variable or hard coded directly.
  ## If this option is uncommented, environment variables will be ignored.
  ## Connection string should contain EventHubName (EntityPath)
  # connection_string = ""

  ## Set persistence directory to a valid folder to use a file persister instead of an in-memory persister
  # persistence_dir = ""

  ## Change the default consumer group
  # consumer_group = ""

  ## By default the event hub receives all messages present on the broker, alternative modes can be set below.
  ## The timestamp should be in https://github.com/toml-lang/toml#offset-date-time format (RFC 3339).
  ## The options below only apply if no valid persister is read from memory or file (e.g. first run).
  # from_timestamp =
  # latest = true

  ## Set a custom prefetch count for the receiver(s)
  # prefetch_count = 1000

  ## Add an epoch to the receiver(s)
  # epoch = 0

  ## Change to set a custom user agent, "telegraf" is used by default
  # user_agent = "telegraf"

  ## To consume from a specific partition, set the partition_ids option.
  ## An empty array will result in receiving from all partitions.
  # partition_ids = ["0","1"]

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are read to
  ## outputs before acknowledging them to the original broker to ensure data
  ## is not lost. This option sets the maximum messages to read from the
  ## broker that have not been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too high
  ## can result in a constant stream of data batches to the output. While
  ## setting it too low may never flush the broker's messages.
  # max_undelivered_messages = 1000

  ## Set either option below to true to use a system property as timestamp.
  ## You have the choice between EnqueuedTime and IoTHubEnqueuedTime.
  ## It is recommended to use this setting when the data itself has no timestamp.
  # enqueued_time_as_ts = true
  # iot_hub_enqueued_time_as_ts = true

  ## Tags or fields to create from keys present in the application property bag.
  ## These could for example be set by message enrichments in Azure IoT Hub.
  # application_property_tags = []
  # application_property_fields = []

  ## Tag or field name to use for metadata
  ## By default all metadata is disabled
  # sequence_number_field = "SequenceNumber"
  # enqueued_time_field = "EnqueuedTime"
  # offset_field = "Offset"
  # partition_id_tag = "PartitionID"
  # partition_key_tag = "PartitionKey"
  # iot_hub_device_connection_id_tag = "IoTHubDeviceConnectionID"
  # iot_hub_auth_generation_id_tag = "IoTHubAuthGenerationID"
  # iot_hub_connection_auth_method_tag = "IoTHubConnectionAuthMethod"
  # iot_hub_connection_module_id_tag = "IoTHubConnectionModuleID"
  # iot_hub_enqueued_time_field = "IoTHubEnqueuedTime"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"

Snowflake

[[outputs.sql]]
  ## Database driver
  ## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
  ## sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
  driver = "snowflake"

  ## Data source name
  ## For Snowflake, the gosnowflake DSN combines the username, password, and account
  ## identifier with the database and schema; the warehouse is typically passed as a
  ## query parameter.
  ## Example DSN: "username:password@account/database/schema?warehouse=warehouse"
  data_source_name = "username:password@account/database/schema?warehouse=warehouse"

  ## Timestamp column name
  timestamp_column = "timestamp"

  ## Table creation template
  ## Available template variables:
  ##  {TABLE}        - table name as a quoted identifier
  ##  {TABLELITERAL} - table name as a quoted string literal
  ##  {COLUMNS}      - column definitions (list of quoted identifiers and types)
  table_template = "CREATE TABLE {TABLE} ({COLUMNS})"

  ## Table existence check template
  ## Available template variables:
  ##  {TABLE} - table name as a quoted identifier
  table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"

  ## Initialization SQL (optional)
  init_sql = ""

  ## Maximum amount of time a connection may be idle. "0s" means connections are never closed due to idle time.
  connection_max_idle_time = "0s"

  ## Maximum amount of time a connection may be reused. "0s" means connections are never closed due to age.
  connection_max_lifetime = "0s"

  ## Maximum number of connections in the idle connection pool. 0 means unlimited.
  connection_max_idle = 2

  ## Maximum number of open connections to the database. 0 means unlimited.
  connection_max_open = 0

  ## Metric type to SQL type conversion
  ## Defaults to ANSI/ISO SQL types unless overridden. Adjust if needed for Snowflake compatibility.
  #[outputs.sql.convert]
  #  integer       = "INT"
  #  real          = "DOUBLE"
  #  text          = "TEXT"
  #  timestamp     = "TIMESTAMP"
  #  defaultvalue  = "TEXT"
  #  unsigned      = "UNSIGNED"
  #  bool          = "BOOL"

Input and output integration examples

Azure Event Hubs

  1. Real-Time IoT Device Monitoring: Use the Azure Event Hubs Plugin to monitor telemetry data from IoT devices like sensors and actuators. By streaming device data into monitoring dashboards, organizations can gain insights into system performance, track usage patterns, and quickly respond to irregularities. This setup allows for proactive management of devices, improving operational efficiency and reducing downtime.

  2. Event-Driven Data Processing Workflows: Leverage this plugin to trigger data processing workflows in response to events received from Azure Event Hubs. For instance, when a new event arrives, it can initiate data transformation, aggregation, or storage processes, allowing businesses to automate their workflows more effectively. This integration enhances responsiveness and streamlines operations across systems.

  3. Integration with Analytics Platforms: Implement the plugin to funnel event data into analytics platforms like Azure Synapse or Power BI. By integrating real-time streaming data into analytics tools, organizations can perform comprehensive data analysis, drive business intelligence efforts, and create interactive visualizations that inform decision-making.

  4. Cross-Platform Data Sync: Utilize the Azure Event Hubs Plugin to synchronize data streams across diverse systems or platforms. By consuming data from Azure Event Hubs and forwarding it to other systems like databases or cloud storage, organizations can maintain consistent and up-to-date information across their entire architecture, enabling cohesive data strategies (a minimal sketch of this pattern follows this list).
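
As a hedged, minimal sketch of the cross-platform sync pattern above, a single Telegraf agent can pair the Event Hubs consumer with the Snowflake output; the connection string and DSN below are placeholders, and the pairing is one possible arrangement rather than a reference deployment.

# End-to-end sketch only; replace the placeholder credentials before use.
[[inputs.eventhub_consumer]]
  ## Listen-policy connection string including EntityPath (placeholder); an
  ## environment variable works equally well
  connection_string = "Endpoint=sb://example.servicebus.windows.net/;SharedAccessKeyName=listen;SharedAccessKey=REDACTED;EntityPath=telemetry"

  ## Use the broker enqueued time when payloads carry no timestamp
  enqueued_time_as_ts = true
  data_format = "json"

[[outputs.sql]]
  driver = "snowflake"
  ## Placeholder gosnowflake DSN; see the Configuration section above for the
  ## full set of connection and type-mapping options
  data_source_name = "TELEGRAF_USER:secret@myaccount/TELEGRAF_DB/PUBLIC?warehouse=COMPUTE_WH"
  timestamp_column = "timestamp"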

Snowflake

  1. Cloud-Based Data Lake Integration: Utilize the plugin to stream real-time metrics from various sources into Snowflake, enabling the creation of a centralized data lake. This integration supports complex analytics and machine learning workflows on cloud data.

  2. Dynamic Business Intelligence Dashboards: Leverage the plugin to automatically generate tables from incoming metrics and feed them into BI tools. This allows businesses to create dynamic dashboards that visualize performance trends and operational insights without manual schema management.

  3. Scalable IoT Analytics: Deploy the plugin to capture high-frequency data from IoT devices into Snowflake. This use case facilitates the aggregation and analysis of sensor data, enabling predictive maintenance and real-time monitoring at scale.

  4. Historical Trend Analysis for Compliance: Use the plugin to log and archive detailed metric data in Snowflake, which can then be queried for long-term trend analysis and compliance reporting. This setup ensures that organizations can maintain a robust audit trail and perform forensic analysis if needed (a short configuration sketch for this scenario follows this list).
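
For the compliance scenario above, one small, hedged refinement is to use init_sql to pin each session to UTC so archived timestamps stay consistent for audits; the statement shown is just one example of what init_sql can carry, and the DSN is a placeholder.

# Sketch only; placeholder DSN, illustrative session statement.
[[outputs.sql]]
  driver = "snowflake"
  data_source_name = "TELEGRAF_USER:secret@myaccount/AUDIT_DB/PUBLIC?warehouse=COMPUTE_WH"
  timestamp_column = "timestamp"

  ## Executed after connecting; keeps all written timestamps in UTC for the audit trail
  init_sql = "ALTER SESSION SET TIMEZONE = 'UTC'"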

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports various authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows for reading metrics from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing features with DynamoDB for reliable message processing.

View Integration