Google Cloud PubSub and TimescaleDB Integration

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time queries at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider the Google Cloud PubSub and InfluxDB integration.


Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Input and output integration overview

This input plugin ingests metrics from Google Cloud PubSub, enabling real-time data processing and integration into monitoring setups.

This output plugin delivers a reliable and efficient mechanism for routing Telegraf-collected metrics directly into TimescaleDB. By leveraging PostgreSQL’s robust ecosystem combined with TimescaleDB’s time series optimizations, it supports high-performance data ingestion and advanced querying capabilities.
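
Together, the two plugins form a complete pipeline: Telegraf pulls messages from a PubSub subscription, parses them, and writes the resulting metrics to TimescaleDB. The following minimal configuration is an illustrative sketch; the project, subscription, and connection values are placeholders to replace with your own.

[[inputs.cloud_pubsub]]
  project = "my-project"
  subscription = "my-subscription"
  data_format = "influx"

[[outputs.postgresql]]
  ## Placeholder connection string; see the full configuration below.
  connection = "host=localhost user=postgres dbname=metrics"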

Integration details

Google Cloud PubSub

The Google Cloud PubSub input plugin is designed to ingest metrics from Google Cloud PubSub, a messaging service that facilitates real-time communication between systems. It collects and processes metrics by pulling messages from a specified subscription in a Google Cloud project. A key feature of this plugin is its ability to operate as a service input, actively listening for incoming messages rather than polling for metrics at set intervals. Various configuration options let users customize message ingestion, such as handling credentials, managing message sizes, and tuning acknowledgment settings so that messages are only acknowledged after successful processing. By leveraging the strengths of Google PubSub, the plugin integrates seamlessly with cloud-native architectures, enabling users to build robust, scalable applications that react to events in real time.
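
As a concrete example, suppose a publisher gzip-compresses and then base64-encodes its line protocol payloads. The sketch below shows the ingestion side, using options from the configuration reference that follows; the subscription name and size cap are illustrative.

[[inputs.cloud_pubsub]]
  project = "my-project"
  subscription = "compressed-metrics"  # illustrative subscription name
  data_format = "influx"
  ## Decode and decompress payloads before parsing.
  base64_data = true
  content_encoding = "gzip"
  ## Safety cap on the decompressed payload size (illustrative value).
  max_decompression_size = "100MB"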

TimescaleDB

TimescaleDB is an open source time series database built as an extension to PostgreSQL, designed to handle large-scale, time-oriented data efficiently. Launched in 2017, TimescaleDB emerged in response to the growing need for a robust, scalable solution that could manage vast volumes of data with high insert rates and complex queries. By leveraging PostgreSQL’s familiar SQL interface and enhancing it with specialized time series capabilities, TimescaleDB quickly gained popularity among developers looking to integrate time series functionality into existing relational databases. Its hybrid approach allows users to benefit from PostgreSQL’s flexibility, reliability, and ecosystem while providing optimized performance for time series data.

The database is particularly effective in environments that demand fast ingestion of data points combined with sophisticated analytical queries over historical periods. TimescaleDB offers innovative features such as hypertables, which transparently partition data into manageable chunks, and built-in continuous aggregates, which together deliver significantly better query speed and resource efficiency.
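
When pairing Telegraf's PostgreSQL output with TimescaleDB, the table-creation templates shown in the configuration below can be extended so that each new measurement table is created as a hypertable. The following is a sketch, assuming the default "time" timestamp column; the one-week chunk interval is an illustrative choice, and quoteLiteral passes the table name to create_hypertable as a string literal.

[[outputs.postgresql]]
  connection = "host=localhost user=postgres dbname=metrics"
  ## Create each measurement table, then convert it into a hypertable
  ## partitioned on the "time" column.
  create_templates = [
    '''CREATE TABLE {{ .table }} ({{ .columns }})''',
    '''SELECT create_hypertable({{ .table|quoteLiteral }}, 'time', chunk_time_interval => INTERVAL '1 week')''',
  ]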

Configuration

Google Cloud PubSub

[[inputs.cloud_pubsub]]
  ## Required. GCP project and PubSub subscription to pull messages from.
  project = "my-project"
  subscription = "my-subscription"

  ## Data format of the message payloads; "influx" is line protocol.
  data_format = "influx"

  ## GCP service account credentials; if unset, Application Default
  ## Credentials are used.
  # credentials_file = "path/to/my/creds.json"

  ## Seconds to wait before restarting the subscription receiver after an
  ## unexpected termination.
  # retry_delay_seconds = 5

  ## Maximum byte length of a message to consume; longer messages are dropped.
  # max_message_len = 1000000

  ## Maximum messages read from PubSub but not yet written to an output.
  # max_undelivered_messages = 1000

  ## PubSub receiver tuning; zero values keep the client library defaults.
  # max_extension = 0
  # max_outstanding_messages = 0
  # max_outstanding_bytes = 0
  # max_receiver_go_routines = 0

  ## Set these if payloads are base64-encoded and/or compressed.
  # base64_data = false
  # content_encoding = "identity"
  # max_decompression_size = "500MB"

TimescaleDB

# Publishes metrics to a TimescaleDB database
[[outputs.postgresql]]
  ## Specify connection address via the standard libpq connection string:
  ##   host=... user=... password=... sslmode=... dbname=...
  ## Or a URL:
  ##   postgres://[user[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
  ## See https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
  ##
  ## All connection parameters are optional. Environment vars are also supported.
  ## e.g. PGPASSWORD, PGHOST, PGUSER, PGDATABASE
  ## All supported vars can be found here:
  ##  https://www.postgresql.org/docs/current/libpq-envars.html
  ##
  ## Non-standard parameters:
  ##   pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
  ##   pool_min_conns (default: 0) - Minimum size of connection pool.
  ##   pool_max_conn_lifetime (default: 0s) - Maximum connection age before closing.
  ##   pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
  ##   pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
  # connection = ""

  ## Postgres schema to use.
  # schema = "public"

  ## Store tags as foreign keys in the metrics table. Default is false.
  # tags_as_foreign_keys = false

  ## Suffix to append to table name (measurement name) for the foreign tag table.
  # tag_table_suffix = "_tag"

  ## Deny inserting metrics if the foreign tag can't be inserted.
  # foreign_tag_constraint = false

  ## Store all tags as a JSONB object in a single 'tags' column.
  # tags_as_jsonb = false

  ## Store all fields as a JSONB object in a single 'fields' column.
  # fields_as_jsonb = false

  ## Name of the timestamp column
  ## NOTE: Some tools (e.g. Grafana) require the default name so be careful!
  # timestamp_column_name = "time"

  ## Type of the timestamp column
  ## Currently, "timestamp without time zone" and "timestamp with time zone"
  ## are supported
  # timestamp_column_type = "timestamp without time zone"

  ## Templated statements to execute when creating a new table.
  # create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }})''',
  # ]

  ## Templated statements to execute when adding columns to a table.
  ## Set to an empty list to disable. Points containing tags for which there is
  ## no column will be skipped. Points containing fields for which there is no
  ## column will have the field omitted.
  # add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## Templated statements to execute when creating a new tag table.
  # tag_table_create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }}, PRIMARY KEY (tag_id))''',
  # ]

  ## Templated statements to execute when adding columns to a tag table.
  ## Set to an empty list to disable. Points containing tags for which there is
  ## no column will be skipped.
  # tag_table_add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## The postgres data type to use for storing unsigned 64-bit integer values
  ## (Postgres does not have a native unsigned 64-bit integer type).
  ## The value can be one of:
  ##   numeric - Uses the PostgreSQL "numeric" data type.
  ##   uint8 - Requires pguint extension (https://github.com/petere/pguint)
  # uint64_type = "numeric"

  ## When using pool_max_conns > 1, and a temporary error occurs, the query is
  ## retried with an incremental backoff. This controls the maximum duration.
  # retry_max_backoff = "15s"

  ## Approximate number of tag IDs to store in the in-memory cache (when using
  ## tags_as_foreign_keys). This is an optimization to skip inserting known
  ## tag IDs. Each entry consumes approximately 34 bytes of memory.
  # tag_cache_size = 100000

  ## Cut column names at the given length to not exceed PostgreSQL's
  ## 'identifier length' limit (default: no limit)
  ## (see https://www.postgresql.org/docs/current/limits.html)
  ## Be careful to not create duplicate column names!
  # column_name_length_limit = 0

  ## Enable & set the log level for the Postgres driver.
  # log_level = "warn" # trace, debug, info, warn, error, none

Input and output integration examples

Google Cloud PubSub

  1. Real-Time Analytics for IoT Devices: Utilize the Google Cloud PubSub plugin to aggregate metrics from IoT devices scattered across various locations. By streaming data from devices to Google PubSub and using this plugin to ingest metrics, organizations can create a centralized dashboard for real-time monitoring and alerting. This setup allows for immediate insights into device performance, facilitating proactive maintenance and operational efficiency.

  2. Dynamic Log Processing and Monitoring: Ingest logs from numerous sources via Google Cloud PubSub into a Telegraf pipeline, utilizing the plugin to parse and analyze log messages. This can help teams quickly identify anomalies or patterns in logs and streamline the process of troubleshooting issues across distributed systems. By consolidating log data, organizations can enhance their observability and response capabilities.

  3. Event-Driven Workflow Integrations: Use the Google Cloud PubSub plugin to connect various cloud functions or services. Each time a new message is pushed to a subscription, actions can be triggered in other parts of the cloud architecture, such as starting data processing jobs, notifications, or even updates to reports. This event-driven approach allows for a more reactive system architecture that can adapt to changing business needs.

TimescaleDB

  1. Real-Time IoT Data Ingestion: Use the plugin to collect and store sensor data from thousands of IoT devices in real time. This setup facilitates immediate analysis, helping organizations monitor operational efficiency and respond quickly to changing conditions.

  2. Cloud Application Performance Monitoring: Leverage the plugin to feed detailed performance metrics from distributed cloud applications into TimescaleDB. This integration supports real-time dashboards and alerts, enabling teams to swiftly identify and mitigate performance bottlenecks.

  3. Historical Data Analysis and Reporting: Implement a system where long-term metrics are stored in TimescaleDB for comprehensive historical analysis. This approach allows businesses to perform trend analysis, generate detailed reports, and make data-driven decisions based on archived time series data.

  4. Adaptive Alerting and Anomaly Detection: Integrate the plugin with automated anomaly detection workflows. By continuously streaming metrics to TimescaleDB, machine learning models can analyze data patterns and trigger alerts when anomalies occur, enhancing system reliability and proactive maintenance.

Feedback

Thank you for being part of our community! If you have any general feedback or found any bugs on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.

