Build a Time Series Forecasting Pipeline in InfluxDB 3 Without Writing Code
By Anais Dotis-Georgiou / Developer / Apr 17, 2025
Curious how time series forecasting fits into your InfluxDB 3 workflows? Let’s build a complete forecasting pipeline together using InfluxDB 3 Core’s Python Processing Engine and Facebook’s Prophet library.
InfluxDB 3 Core’s Python Processing Engine dramatically lowers the barrier to entry—not just for experienced developers but for anyone with a basic understanding of time series data and Python. It turns what used to be a complex, multi-day task into something you can prototype in a few hours, making advanced forecasting and data processing far more accessible and accelerating the path from idea to insight.
One of the most exciting aspects of this project is how quickly it came together using a large language model (LLM). Given nothing more than the documentation for InfluxDB 3 Core’s Python Processing Engine and the FB Prophet quick start guide, the LLM generated working plugin code, wired up the full pipeline, and even suggested improvements.
Now, let’s build a pipeline that predicts daily pageviews for the Wikipedia article on Peyton Manning over an entire year, starting with historical data and ending with an interactive Plotly visualization. To build alongside this project, download InfluxDB 3 Core or Enterprise. This project will cover:
- Details on what I provided to my LLM (ChatGPT-4o) to execute this project
- The requirements and setup needed to run this end-to-end forecasting pipeline example
- Creating necessary InfluxDB 3 Core and Enterprise resources
- Writing data, making a forecast, and visualizing the results
The forecasting pipeline includes three purpose-built plugins: one to load historical data, another to generate daily forecasts, and one to visualize results on demand:
- load_peyton (load_peyton_data.py): An HTTP-triggered plugin that loads sample Wikipedia pageview data from a CSV and writes it to the peyton_views table.
- peyton_forecast (forecast_peyton.py): A scheduled plugin (e.g., every:1d) that queries the peyton_views table, fits a Prophet model and writes a full 365-day forecast to the prophet_forecast table.
- forecast_plot (plot_forecast_http.py): An HTTP-triggered plugin that queries both peyton_views and prophet_forecast, merges them, and returns an interactive Plotly graph as HTML.
A visualization of the historical pageview data (blue) and forecasted pageview data (green) that you can generate by following this tutorial, accessible at http://localhost:8181/api/v3/engine/plot_forecast.
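All three plugins are ordinary Python files that implement one of the Processing Engine’s trigger entry points. As a minimal skeleton, here are the two entry points this tutorial relies on (the signatures follow the Processing Engine documentation; the bodies are placeholders):

def process_request(influxdb3_local, query_parameters, request_headers, request_body, args=None):
    # Called when the trigger's endpoint under /api/v3/engine/ receives a request
    ...

def process_scheduled_call(influxdb3_local, call_time, args=None):
    # Called on the trigger's schedule (e.g., every:1d)
    ...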
From coding to AI validation
I generated the code for this project by providing the following resources to ChatGPT-4o: The Processing engine and Python plugins documentation and the Prophet Quickstart example. From there, I gave it a handful of natural-language prompts to build the components I needed iteratively. Here are the core prompts I used:
- “Can you write a plugin for InfluxDB 3 that uses Facebook Prophet to forecast time series data?” This got the conversation started and helped ChatGPT understand the target use case and library.
- “Create a plugin that loads historical data from a public CSV (like the Peyton Manning Wikipedia views) and writes it to InfluxDB.” This became the HTTP-triggered loader plugin that populates the peyton_views table.
- “Now, write a scheduled plugin that queries that table, fits a Prophet model, and writes the forecast to another table. Make sure the logic writes all forecast rows.” This became the daily forecasting plugin, which outputs yhat, yhat_lower, and yhat_upper to prophet_forecast.
- “Can you build a third plugin that reads both the historical and forecasted data, plots it using Plotly, and returns the result over HTTP as an HTML page?” This plugin allows me to visualize the entire pipeline in my browser without setting up an external dashboard.
By layering the prompts above, I built a fully functioning, interactive forecasting pipeline inside InfluxDB without writing any code manually. What began as traditional coding quickly became a process of prompt engineering and AI validation: I described intent, reviewed generated code, and iterated until everything worked end-to-end.
That said, testing plugins as I went was incredibly helpful, not just for this project but for building and validating any plugin: quickly trigger a plugin with cURL, then verify the results with SQL queries. It made the feedback loop tight and efficient, especially when paired with AI-generated code.
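In practice, that loop looked roughly like this for each plugin (the endpoint and table names are the ones used later in this tutorial; influxdb3 query runs SQL against a named database):

curl http://localhost:8181/api/v3/engine/load_peyton
influxdb3 query --database prophet "SELECT COUNT(*) FROM peyton_views"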
Requirements and setup
You can either run this example locally or within a Docker container. Follow the InfluxDB 3 Core or Enterprise installation guides, as this post applies to both products. I recommend using Docker, so this post will assume you’re running InfluxDB 3 in a containerized environment for ease of setup, isolation, and cleanup. Make sure Docker is installed on your system and you’ve pulled the latest InfluxDB 3 image for your chosen edition. I’ll use InfluxDB 3 Core as it’s the OSS version. Enterprise is also freely available for local development and testing if you select at-home use.
Once you clone this repo, save the plugin files in your configured plugin directory (e.g., /path/to/plugins/). Then run the following command:
docker run -it --rm --name test_influx \
  -v ~/influxdb3/data:/var/lib/influxdb3 \
  -v /path/to/plugins/:/plugins \
  -p 8181:8181 \
  quay.io/influxdb/influxdb3-core:latest serve \
  --node-id my_host \
  --object-store file \
  --data-dir /var/lib/influxdb3 \
  --plugin-dir /plugins
This command runs a temporary InfluxDB 3 Core container named test_influx using the latest image. It mounts your local data directory to persist database files, along with the plugin directory containing the forecasting plugins. It also exposes port 8181 so you can access the database locally, and it starts the server using the serve command with file-based object storage (you could instead point the object store at an S3 bucket), a custom node ID, and the mounted plugin directory.
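To confirm the server is up before creating any resources, you can hit the health endpoint (assuming the default port mapping above):

curl http://localhost:8181/health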
Writing data, making a forecast, and visualizing our data
Before we create and enable any triggers that use the plugins mentioned above, we need to install all dependencies. This project depends on Plotly and Prophet. Install them using:
influxdb3 install package plotly
influxdb3 install package prophet
Next, create a database to write page view data to:
influxdb3 create database prophet
Now, we can create our triggers. First, we’ll create Plugin 1 and load data via HTTP:
influxdb3 create trigger \
--trigger-spec "request:load_peyton" \
--plugin-filename "load_peyton_data.py" \
--database prophet \
load_peyton
The load_peyton plugin, sketched below:
- Is an HTTP-triggered plugin
- Downloads a public CSV of daily Wikipedia views
- Writes rows to the peyton_views table in InfluxDB 3 Core
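Here is a minimal sketch of what load_peyton_data.py can look like, using the process_request entry point from the skeleton above. LineBuilder is provided by the plugin runtime rather than imported, the URL is the public Prophet example dataset, and the actual code in the repo may differ:

import pandas as pd

def process_request(influxdb3_local, query_parameters, request_headers, request_body, args=None):
    # Public Prophet example dataset: daily (log-scaled) Wikipedia pageviews, columns ds and y
    url = "https://raw.githubusercontent.com/facebook/prophet/main/examples/example_wp_log_peyton_manning.csv"
    df = pd.read_csv(url)
    for _, row in df.iterrows():
        line = LineBuilder("peyton_views")  # LineBuilder is injected by the Processing Engine
        line.float64_field("y", float(row["y"]))
        line.time_ns(int(pd.Timestamp(row["ds"]).value))  # nanosecond-precision timestamp
        influxdb3_local.write(line)
    return {"status": "success", "rows_written": len(df)}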
We can then execute the following cURL command to trigger the execution of the load_peyton plugin:
curl http://localhost:8181/api/v3/engine/load_peyton
You should see the following output, which confirms a successful write of the data:
{"status": "success", "rows_written": 2905}
Next, we can create forecasts on schedule:
influxdb3 create trigger \
--trigger-spec "every:1m" \
--plugin-filename "forecast_peyton.py" \
--database prophet \
peyton_forecast
The peyton_forecast plugin, sketched below:
- Is a scheduled plugin (runs daily or on whatever schedule you choose)
- Reads data from peyton_views table
- Fits a Prophet model
- Forecasts 365 days into the future
- Writes summary forecast results to prophet_forecast table
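A condensed sketch of forecast_peyton.py under the same assumptions; here I also assume influxdb3_local.query() returns rows that pandas can consume directly, so adjust if the return shape differs:

import pandas as pd
from prophet import Prophet

def process_scheduled_call(influxdb3_local, call_time, args=None):
    influxdb3_local.info("Running Prophet forecast on 'peyton_views'")
    rows = influxdb3_local.query("SELECT time, y FROM peyton_views ORDER BY time")
    history = pd.DataFrame(rows)
    train = pd.DataFrame({"ds": pd.to_datetime(history["time"]), "y": history["y"]})
    model = Prophet()
    model.fit(train)
    future = model.make_future_dataframe(periods=365)  # forecast a full year ahead
    forecast = model.predict(future)
    for _, row in forecast.iterrows():
        line = LineBuilder("prophet_forecast")
        line.float64_field("yhat", float(row["yhat"]))
        line.float64_field("yhat_lower", float(row["yhat_lower"]))
        line.float64_field("yhat_upper", float(row["yhat_upper"]))
        line.time_ns(int(row["ds"].value))
        influxdb3_local.write(line)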
I’ve set the peyton_forecast plugin to run every minute to get a forecast quickly. However, since we’re looking at daily data, you’d more likely run this type of pipeline on a daily interval. After you’ve successfully generated and written a forecast, you should see the following output:
processing engine: Running Prophet forecast on 'peyton_views'
INFO - Chain [1] start processing
INFO - Chain [1] done processing
To prevent the plugin from running indefinitely and recomputing the same forecast every minute, disable it with:
influxdb3 disable trigger --database prophet peyton_forecast
Finally, we can visualize our data by enabling the final plugin with:
influxdb3 create trigger \
--trigger-spec "request:plot_forecast" \
--plugin-filename "plot_forecast_http.py" \
--database prophet \
forecast_plot
The forecast_plot plugin, sketched below:
- Is an HTTP-triggered plugin
- Reads data from both peyton_views and prophet_forecast
- Creates an interactive Plotly chart combining historical data and forecasts
- Returns the chart as HTML for browser viewing
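A minimal sketch of plot_forecast_http.py, assuming (as above) that the string returned from process_request is served as the HTTP response body:

import pandas as pd
import plotly.graph_objects as go

def process_request(influxdb3_local, query_parameters, request_headers, request_body, args=None):
    hist = pd.DataFrame(influxdb3_local.query("SELECT time, y FROM peyton_views ORDER BY time"))
    fc = pd.DataFrame(influxdb3_local.query("SELECT time, yhat FROM prophet_forecast ORDER BY time"))
    fig = go.Figure()
    fig.add_trace(go.Scatter(x=hist["time"], y=hist["y"], name="historical", line=dict(color="blue")))
    fig.add_trace(go.Scatter(x=fc["time"], y=fc["yhat"], name="forecast", line=dict(color="green")))
    fig.update_layout(title="Peyton Manning Wikipedia pageviews: history and 365-day forecast")
    return fig.to_html(full_html=True)  # rendered directly in the browser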
Visit http://localhost:8181/api/v3/engine/plot_forecast to view the historical data and forecast shared in the first section of this tutorial.
Final thoughts and considerations
This collection of plugins for InfluxDB 3 Core and Enterprise provides a basic example of how to build an end-to-end data collection, forecasting, and visualization pipeline using the InfluxDB 3 Python Processing Engine. While this project focuses on a simple FB Prophet-based forecast, it can easily be extended into a more robust and production-ready system. For example, you could load a pre-trained forecasting model from Hugging Face for faster inference, monitor forecast accuracy over time to detect model drift, and schedule automated retraining when performance degrades.
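To make the drift-monitoring idea concrete, a hypothetical scheduled plugin could compare forecasts against actuals and log the error; the join and threshold below are purely illustrative, not part of the repo:

def process_scheduled_call(influxdb3_local, call_time, args=None):
    # Compare forecasts to actuals over the window where the two tables overlap
    rows = influxdb3_local.query(
        "SELECT h.y AS y, f.yhat AS yhat "
        "FROM peyton_views h JOIN prophet_forecast f ON h.time = f.time"
    )
    if not rows:
        return
    mae = sum(abs(r["y"] - r["yhat"]) for r in rows) / len(rows)
    influxdb3_local.info(f"Forecast MAE over overlap: {mae:.3f}")
    if mae > 0.5:  # hypothetical drift threshold; tune for your data
        influxdb3_local.warn("Possible model drift: consider retraining")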
Pairing this pipeline with InfluxDB 3’s Processing Engine alerting capabilities allows you to proactively respond to anomalies or drift events by sending notifications or triggering remediation workflows. With these building blocks, you can create intelligent, self-healing time series pipelines tailored to your use case.
I encourage you to keep an eye on the InfluxData/influxdb3_plugins repository as we add more examples and plugins. Also, please contribute your own! To learn more about building your own plugin, check out these resources:
- Transform Data with the New Python Processing Engine in InfluxDB 3
- Get started: Python Plugins and the Processing Engine
- Processing engine and Python plugins
Additionally, check out these resources for more information and examples of other alert plugins:
- Preventing Alert Storms with InfluxDB 3’s Processing Engine Cache
- How to Set Up Real-Time SMS/WhatsApp Alerts with InfluxDB 3 Processing Engine
- Alerting with InfluxDB 3 Core and Enterprise
I invite you to contribute any plugin that you create. Check out our Getting Started Guide for Core and Enterprise, and share your feedback with our development team on Discord in the #influxdb3_core channel, Slack in the #influxdb3_core channel, or our Community Forums.