Deadman Alerts with the Python Processing Engine
By Anais Dotis-Georgiou / Developer
Apr 09, 2025
Sometimes silence isn’t golden; it’s a red flag. Whether you’re monitoring IoT sensors, system logs, or application metrics, missing data can be just as critical as abnormal data. Without visibility into these gaps, you risk overlooking potential failures, security threats, or operational inefficiencies. In time series workflows, detecting silence is often the first sign of trouble—whether it’s a network issue, device failure, sensor failure, or stalled process.
In a previous blog post, we learned how to use the Python Processing Engine with InfluxDB 3 Core and Enterprise to build threshold alerts and send notifications to Slack, Discord, or an HTTP endpoint using a WAL flush trigger. In this post, we’ll build a deadman check and alert by leveraging a schedule trigger, commonly called a deadman trigger when used this way. Deadman triggers are a powerful alerting strategy: they notify you immediately when expected data stops arriving.
The deadman check plugin can be found here. The plugin monitors a target table for recent writes and sends a Slack alert if no new data is received within a configurable time threshold.
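To make the plugin’s logic concrete, here’s a minimal sketch of what a scheduled deadman check can look like (not the plugin from the repo itself). It assumes the process_scheduled_call entry point that the Processing Engine invokes for schedule triggers, that influxdb3_local.query returns rows as dictionaries, and that the info/warn logging helpers are available; the table, threshold_minutes, and slack_webhook arguments match the ones we’ll pass when creating the trigger below.

# Simplified sketch of a scheduled deadman check plugin.
import json
import urllib.request

def process_scheduled_call(influxdb3_local, call_time, args):
    table = args.get("table", "sensor_data")
    threshold = int(args.get("threshold_minutes", 1))
    webhook = args.get("slack_webhook")

    # Check whether any rows landed in the table within the threshold window
    query = (
        f"SELECT COUNT(*) AS cnt FROM {table} "
        f"WHERE time > now() - INTERVAL '{threshold} minutes'"
    )
    results = influxdb3_local.query(query)
    count = results[0]["cnt"] if results else 0

    if count > 0:
        influxdb3_local.info(
            f"Data exists in '{table}' in the last {threshold} minutes."
        )
        return

    # No recent writes: post a deadman alert to the Slack webhook
    payload = json.dumps(
        {"text": f"Deadman alert: no data in '{table}' for {threshold} minutes."}
    ).encode("utf-8")
    req = urllib.request.Request(
        webhook, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
    influxdb3_local.warn(f"No data in '{table}' in the last {threshold} minutes.")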
This blog post will cover:
- Requirements and Setup for InfluxDB 3 Core and Enterprise (this post works with both Core and Enterprise)
- Getting a Slack Webhook URL
- Creating InfluxDB 3 Core and Enterprise resources
- Testing the plugin and sending a deadman alert
Requirements and setup
Download InfluxDB 3 Core or Enterprise and follow the appropriate Core or Enterprise installation guide to work alongside this tutorial. You can run this example locally or within a Docker container, but I recommend Docker for ease of setup, isolation, and cleanup. The rest of this post assumes you’re running InfluxDB 3 in a Docker container.
Before you can run this in Docker, make sure Docker is installed on your system and pull the latest InfluxDB 3 image for your chosen edition (Core or Enterprise). I’m going to use InfluxDB 3 Core, as it’s the OSS version. You can use Enterprise at no cost by specifying “at-home use.”
Once you’ve pulled the plugin from the repo, save the file as deadman_alert.py in your configured plugin directory (e.g., /path/to/plugins/). Then run the following command:
docker run -it --rm --name test_influx \
  -v ~/influxdb3/data:/var/lib/influxdb3 \
  -v /path/to/plugins/:/plugins \
  -p 8181:8181 \
  quay.io/influxdb/influxdb3-core:latest serve \
  --node-id my_host \
  --object-store file \
  --data-dir /var/lib/influxdb3 \
  --plugin-dir /plugins
This command runs a temporary InfluxDB 3 Core container named test_influx using the latest image. It mounts your local data directory to persist database files and mounts the plugin directory containing the deadman check plugin. It also exposes port 8181 so you can access the database locally, and it starts the server with the serve command using file-based object storage (you could also use an S3 bucket instead), a custom node ID, and the mounted plugin directory.
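Before wiring up any plugins, it’s worth confirming that the server inside the container is reachable. The snippet below is a quick connectivity check, assuming the health endpoint is exposed on the mapped port 8181.

# Quick connectivity check against the container before adding any plugins.
import urllib.request

with urllib.request.urlopen("http://localhost:8181/health") as resp:
    print(resp.status)  # 200 means the server is up and accepting requests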
Follow this documentation on how to create a Slack webhook URL. You’ll need to include the webhook as an argument during the trigger creation process. Alternatively, you can use a public webhook that offers users an opportunity to test out InfluxDB-related notifications. It’s pinned to the #notifications-testing channel in the InfluxDB Slack.
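Before handing the webhook to the trigger, you can send a test message to confirm the URL is valid. Slack incoming webhooks accept a JSON POST with a text field; the URL below is a placeholder, so swap in your own.

# Post a test message to a Slack incoming webhook.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # replace with your webhook

payload = json.dumps({"text": "Deadman alert test from InfluxDB 3"}).encode("utf-8")
req = urllib.request.Request(
    WEBHOOK_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # Slack returns 200 and the body "ok" on success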
Creating InfluxDB 3 resources and generating a deadman alert
Create a database to monitor for a heartbeat:
influxdb3 create database my_database
Now, write some data to it:
influxdb3 write --database my_database "sensor_data temp=20"
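To exercise the deadman check later, it helps to have a steady heartbeat of writes that you can stop on demand. Here’s one way to do that, sketched with Python’s standard library and assuming the /api/v3/write_lp line-protocol endpoint with no auth token (add an Authorization header if your instance requires one); stop the script and, once the threshold elapses, the alert should fire.

# Simulate a sensor heartbeat: write one point every 10 seconds.
import time
import urllib.request

URL = "http://localhost:8181/api/v3/write_lp?db=my_database"

while True:
    req = urllib.request.Request(URL, data=b"sensor_data temp=20", method="POST")
    with urllib.request.urlopen(req) as resp:
        print(f"wrote heartbeat, status {resp.status}")
    time.sleep(10)  # stop the script (Ctrl+C) and the alert fires after a minute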
Next, create and enable the trigger:
influxdb3 create trigger \
--trigger-spec "every:10s" \
--plugin-filename "deadman_alert.py" \
--trigger-arguments table=sensor_data,threshold_minutes=1,slack_webhook=https://hooks.slack.com/services/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 \
--database my_database \
sensor_deadman
The deadman check plugin runs every 10 seconds. It checks for data written to the sensor_data table in my_database within the last minute. If data was written, you’ll see the following output in the InfluxDB logs:
INFO influxdb3_py_api::system_py: processing engine: Data exists in 'sensor_data' in the last 1 minutes.
If no data has been written in the last minute, you’ll receive a deadman alert notification in Slack.
The trigger will continue firing until you disable it with:
influxdb3 disable trigger --database my_database sensor_deadman
Trigger sensor_deadman disabled successfully
Final thoughts and considerations
This deadman check and alert plugin for InfluxDB 3 Core and Enterprise provides a powerful and flexible way to monitor data pipeline health in real time. I hope this tutorial helps you start alerting with Python plugins and enabling triggers in InfluxDB 3 Core and Enterprise with Docker. I encourage you to keep an eye on the InfluxData/influxdb3_plugins repository as we add more examples and plugins there. Also, please contribute your own! To learn more about building your own plugin, check out these resources:
- Transform Data with the New Python Processing Engine in InfluxDB 3
- Get started: Python Plugins and the Processing Engine
- Processing engine and Python plugins
Additionally, check out these resources for more information and examples of other alert plugins:
- Preventing Alert Storms with InfluxDB 3’s Processing Engine Cache
- How to Set Up Real-Time SMS/WhatsApp Alerts with InfluxDB 3 Processing Engine
- Alerting with InfluxDB 3 Core and Enterprise
I invite you to contribute any plugin that you create there. Check out our Getting Started Guide for Core and Enterprise, and share your feedback with our development team on Discord in the #influxdb3_core channel, Slack in the #influxdb3_core channel, or our Community Forums.