Beat Telegraf Plugin

Powerful performance with an easy integration, powered by Telegraf, the open source data connector built by InfluxData.

5B+

Telegraf downloads

#1

Time series database
Source: DB Engines

1B+

Downloads of InfluxDB

2,800+

Contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data with InfluxDB, the #1 time series platform built to scale with Telegraf.

See Ways to Get Started

Beats are lightweight data shippers that send data to Logstash, Elasticsearch, or another configured endpoint. They can run on servers, in containers, on IoT devices, or as functions. Filebeat is designed to work with log data from cloud applications, operational technology, IoT devices, and more. Kafkabeat collects events from Kafka topics.

Why use the Beat Telegraf Input Plugin?

This Telegraf plugin gathers metrics from Beat instances, including Filebeat and Kafkabeat. It lets you collect information on CPU and memory usage within the Beat instance, renamed files, event failures, and more. Collecting this information with Telegraf lets you send it to InfluxDB or another endpoint of your choice so you can monitor the performance of your Beat instances. This helps you identify where errors are coming from and make changes to optimize your setup.
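As an illustration, a minimal Telegraf configuration might pair the Beat input with an InfluxDB v2 output. This is only a sketch: the InfluxDB URL, token, organization, and bucket values are placeholders you would replace with your own.

# Minimal sketch: read Beat stats and forward them to InfluxDB v2.
[[inputs.beat]]
  # Beat instance exposing its HTTP stats endpoint (default shown)
  url = "http://127.0.0.1:5066"

[[outputs.influxdb_v2]]
  # Placeholder connection details; substitute your own InfluxDB instance
  urls = ["http://localhost:8086"]
  token = "$INFLUX_TOKEN"
  organization = "my-org"
  bucket = "beat-metrics"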

How to monitor Beat using the Telegraf plugin

To use this plugin, you need a URL from which Telegraf can read Beat-formatted JSON. The default URL is

url = "http://127.0.0.1:5066"

You can then create a list of the statistics you want to collect, for example

include = ["beat", "libbeat", "filebeat"]

The available statistics are currently "beat," "libbeat," "filebeat," and "system." You can also set "include" to an empty list to collect all available statistics.
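For example, to gather everything the Beat instance exposes, you could leave the list empty:

include = []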

You then need to set the HTTP method with

method = "GET"

and optionally you can set HTTP headers with

headers = {"X-Special-Header" = "Special-Value"}

To override the HTTP "Host" header you can use

host_header = "logstash.example.com"

And you can set the timeout for HTTP requests with

timeout = "5s"

You can optionally set HTTP authorization credentials with

username = "username"
password = "pa$$word"

And you can optionally configure transport layer security with

tls_ca = "/etc/telegraf/ca.pem"
tls_cert = "/etc/telegraf/cert.pem"
tls_key = "/etc/telegraf/key.pem"
# Use TLS but skip chain & host verification
insecure_skip_verify = false
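
Putting these settings together, a complete [[inputs.beat]] section might look like the following sketch. The hostname, credentials, and certificate paths are the illustrative values used above, not ones you should use in production.

[[inputs.beat]]
  url = "http://127.0.0.1:5066"
  include = ["beat", "libbeat", "filebeat"]
  method = "GET"
  headers = {"X-Special-Header" = "Special-Value"}
  host_header = "logstash.example.com"
  timeout = "5s"
  # Optional HTTP basic auth credentials
  username = "username"
  password = "pa$$word"
  # Optional TLS configuration
  tls_ca = "/etc/telegraf/ca.pem"
  tls_cert = "/etc/telegraf/cert.pem"
  tls_key = "/etc/telegraf/key.pem"
  insecure_skip_verify = false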

Key Beat metrics to use for monitoring

Some of the important Beat metrics that you should proactively monitor include:

  • beat
    • Fields:
      • cpu_system_ticks
      • cpu_system_time_ms
      • cpu_total_ticks
      • cpu_total_time_ms
      • cpu_total_value
      • cpu_user_ticks
      • cpu_user_time_ms
      • info_uptime_ms
      • memstats_gc_next
      • memstats_memory_alloc
      • memstats_memory_total
      • memstats_rss
    • Tags:
      • beat_beat
      • beat_host
      • beat_id
      • beat_name
      • beat_version
  • beat_filebeat
    • Fields:
      • events_active
      • events_added
      • events_done
      • harvester_closed
      • harvester_open_files
      • harvester_running
      • harvester_skipped
      • harvester_started
      • input_log_files_renamed
      • input_log_files_truncated
    • Tags:
      • beat_beat
      • beat_host
      • beat_id
      • beat_name
      • beat_version
  • beat_libbeat
    • Fields:
      • config_module_running
      • config_module_starts
      • config_module_stops
      • config_reloads
      • output_events_acked
      • output_events_active
      • output_events_batches
      • output_events_dropped
      • output_events_duplicates
      • output_events_failed
      • output_events_total
      • output_type
      • output_read_bytes
      • output_read_errors
      • output_write_bytes
      • output_write_errors
      • outputs_kafka_bytes_read
      • outputs_kafka_bytes_write
      • pipeline_clients
      • pipeline_events_active
      • pipeline_events_dropped
      • pipeline_events_failed
      • pipeline_events_filtered
      • pipeline_events_published
      • pipeline_events_retry
      • pipeline_events_total
      • pipeline_queue_acked
    • Tags:
      • beat_beat
      • beat_host
      • beat_id
      • beat_name
      • beat_version
  • beat_system
    • Fields:
      • cpu_cores
      • load_1
      • load_15
      • load_5
      • load_norm_1
      • load_norm_15
      • load_norm_5
    • Tags:
      • beat_beat
      • beat_host
      • beat_id
      • beat_name
      • beat_version
For more information, please check out the documentation.

Project URL   Documentation
