<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>InfluxData Blog</title>
    <description>The place for technical guides, customer observability &amp; IoT use cases, product info, and news on leading time series platform InfluxDB, Telegraf, SQL, &amp; more.</description>
    <link>https://www.influxdata.com/blog/</link>
    <language>en-us</language>
    <lastBuildDate>Wed, 22 Apr 2026 08:00:00 +0000</lastBuildDate>
    <pubDate>Wed, 22 Apr 2026 08:00:00 +0000</pubDate>
    <ttl>1800</ttl>
    <item>
      <title>How to Use Time Series Autoregression (With Examples)</title>
      <description>&lt;p&gt;Time series autoregression is a powerful statistical technique that uses past values of a variable to predict its future values. This approach is particularly valuable for forecasting applications where historical patterns can inform future trends.&lt;/p&gt;

&lt;p&gt;In this hands-on tutorial, you’ll learn how to implement autoregressive (AR) models using Python and see how InfluxDB can enhance your time series analysis workflow.&lt;/p&gt;

&lt;h2 id="understanding-time-series-autoregression"&gt;Understanding time series autoregression&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.ibm.com/think/topics/autoregressive-model"&gt;Autoregression models&lt;/a&gt; represent one of the fundamental approaches to time series forecasting, based on the principle that past behavior can predict future outcomes. The “auto” in &lt;a href="https://www.influxdata.com/blog/guide-regression-analysis-time-series-data/"&gt;autoregression&lt;/a&gt; means the variable is regressed on itself—essentially, we’re using the variable’s own historical values as predictors.&lt;/p&gt;

&lt;p&gt;This concept is intuitive: yesterday’s temperature influences today’s temperature, and last month’s sales figures can indicate this month’s performance.&lt;/p&gt;

&lt;p&gt;An autoregressive model of order p, denoted as AR(p), uses the previous p observations to predict the next value:
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/50y9E1BxjOVQKkCJINlRHt/7988c5c42a7e5913447a4dab7253c9a3/Screenshot_2026-04-09_at_12.36.02â__PM.png" alt="AR SS 1" /&gt;
X(t) = c + φ₁X(t-1) + φ₂X(t-2) + … + φₚX(t-p) + ε(t)&lt;/p&gt;

&lt;p&gt;Where:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;X(t) is the value at time t&lt;/li&gt;
  &lt;li&gt;c is a constant term representing the baseline level&lt;/li&gt;
  &lt;li&gt;φ₁, φ₂, …, φₚ are the autoregressive coefficients indicating the influence of each lag&lt;/li&gt;
  &lt;li&gt;ε(t) is white noise representing random, unpredictable fluctuations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The coefficients determine how much influence each previous observation has on the current prediction. Positive coefficients indicate that higher past values lead to higher current predictions, while negative coefficients suggest an inverse relationship.&lt;/p&gt;
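&lt;p&gt;To make the formula concrete, here’s a minimal sketch of a single AR(2) prediction step; the constant and coefficients are illustrative values, not estimates from real data:&lt;/p&gt;

```python
# One AR(2) point forecast: a constant plus weighted copies of the two
# most recent observations (the noise term has expectation zero).
def ar2_predict(c, phi1, phi2, x_prev1, x_prev2):
    return c + phi1 * x_prev1 + phi2 * x_prev2

# With c = 2.0, coefficients 0.6 and -0.2, and last two values 10 and 8:
forecast = ar2_predict(2.0, 0.6, -0.2, 10.0, 8.0)
print(forecast)  # 2.0 + 6.0 - 1.6 = 6.4
```

Note the sign of each coefficient: the positive φ₁ pulls the forecast toward the previous value, while the negative φ₂ pushes it away from the value two steps back.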

&lt;h2 id="types-of-autoregressive-models-and-their-applications"&gt;Types of autoregressive models and their applications&lt;/h2&gt;

&lt;h4 id="ar1-first-order-autoregression"&gt;AR(1) First-Order Autoregression&lt;/h4&gt;

&lt;p&gt;The simplest autoregressive model uses only the immediately previous value:
X(t) = c + φ₁X(t-1) + ε(t)&lt;/p&gt;

&lt;p&gt;AR(1) models are particularly effective for data with strong short-term dependencies, such as daily stock returns or temperature variations. The single coefficient φ₁ captures the persistence of the series—values close to 1 indicate high persistence, while values near 0 suggest more random behavior.&lt;/p&gt;
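&lt;p&gt;A quick simulation makes the persistence interpretation visible. This sketch (seed and sample size are arbitrary choices) compares the lag-1 autocorrelation of two AR(1) series, one with φ₁ near 1 and one near 0:&lt;/p&gt;

```python
import numpy as np

# Simulate AR(1) series to see how the coefficient controls persistence:
# phi near 1 gives a slowly wandering series, phi near 0 looks like noise.
rng = np.random.default_rng(0)

def simulate_ar1(phi, n=2000):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def lag1_autocorr(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(lag1_autocorr(simulate_ar1(0.9)))  # close to 0.9
print(lag1_autocorr(simulate_ar1(0.1)))  # close to 0.1
```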

&lt;h4 id="arp-higher-order-models"&gt;AR(p) Higher-Order Models&lt;/h4&gt;

&lt;p&gt;More complex temporal patterns often require multiple lags:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;AR(2) models: Capture oscillating patterns where the current value depends on both the previous value and the value two periods ago.&lt;/li&gt;
  &lt;li&gt;AR(3) and beyond: Useful for data with complex patterns that extend beyond immediate past values.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="seasonal-autoregressive-models"&gt;Seasonal Autoregressive Models&lt;/h4&gt;

&lt;p&gt;Real-world time series often exhibit seasonal patterns that repeat at regular intervals. Seasonal AR models extend the basic AR framework to capture these periodic dependencies, particularly valuable for retail sales forecasting, energy consumption prediction, and agricultural yield estimation.&lt;/p&gt;

&lt;h4 id="model-selection-and-diagnostic-considerations"&gt;Model Selection and Diagnostic Considerations&lt;/h4&gt;

&lt;p&gt;Selecting the appropriate AR model order requires careful analysis of the data’s autocorrelation structure. The &lt;a href="https://www.influxdata.com/blog/autocorrelation-in-time-series-data/"&gt;autocorrelation&lt;/a&gt; function (ACF) shows how correlated the series is with its own lagged values, while the partial autocorrelation function (PACF) reveals the direct relationship between observations at different lags.&lt;/p&gt;

&lt;p&gt;For AR models, the PACF is particularly informative because it cuts off sharply after the true model order. This characteristic makes PACF plots an essential diagnostic tool for determining the optimal number of lags to include in the model.&lt;/p&gt;

&lt;h2 id="setting-up-your-environment"&gt;Setting up your environment&lt;/h2&gt;

&lt;p&gt;Before implementing our AR model, let’s set up the necessary tools and data infrastructure to analyze time series data with InfluxDB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/products/influxdb-core/?utm_source=website&amp;amp;utm_medium=time_series_autoregression&amp;amp;utm_content=blog"&gt;InfluxDB Core&lt;/a&gt; is designed to handle time-series data with an optimized storage engine and powerful query capabilities. It excels at tracking weather patterns or monitoring environmental conditions, making it an ideal choice for efficiently managing and analyzing time-stamped data.&lt;/p&gt;

&lt;h4 id="installing-required-libraries"&gt;Installing Required Libraries&lt;/h4&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;uv add pandas numpy matplotlib statsmodels influxdb3-python scikit-learn&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Or set up a Python virtual environment and install the packages with the following:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;python -m venv .venv&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For macOS or Linux, activate your virtual environment with the following:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;source .venv/bin/activate&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For Windows, run one of the following, depending on your shell:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;.venv\Scripts\activate.bat&lt;/code&gt; (Command Prompt) or &lt;code class="language-markup"&gt;.venv\Scripts\Activate.ps1&lt;/code&gt; (PowerShell)&lt;/p&gt;

&lt;p&gt;And finally, install the required libraries:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;pip install pandas numpy matplotlib statsmodels influxdb3-python scikit-learn&lt;/code&gt;&lt;/p&gt;

&lt;h4 id="connecting-to-influxdb"&gt;Connecting to InfluxDB&lt;/h4&gt;

&lt;p&gt;First, let’s establish a connection to your local InfluxDB instance:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from influxdb_client_3 import InfluxDBClient3, Point
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from sklearn.metrics import mean_squared_error, mean_absolute_error

# InfluxDB connection parameters
INFLUXDB_HOST = "localhost:8181"
INFLUXDB_TOKEN = "your_token_here"  # Replace with your actual token
INFLUXDB_DATABASE = "weather"       # Database name for InfluxDB 3

# Initialize client
client = InfluxDBClient3(
    host=INFLUXDB_HOST,
    database=INFLUXDB_DATABASE,
    token=INFLUXDB_TOKEN
)&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="implementing-ar-models-for-predicting-temperature"&gt;Implementing AR models for predicting temperature&lt;/h2&gt;

&lt;p&gt;Let’s walk through a practical example using temperature data to demonstrate autoregressive modeling.&lt;/p&gt;

&lt;h4 id="loading-and-preprocessing-the-data"&gt;Loading and Preprocessing the Data&lt;/h4&gt;

&lt;p&gt;First, we’ll generate sample temperature data and store it in InfluxDB, then retrieve it for analysis:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;def generate_sample_temperature_data():
    """Generate realistic temperature data with seasonal patterns"""
    np.random.seed(42)
    dates = pd.date_range(start='2023-01-01', end='2024-01-01', freq='D')

    # Create temperature data with trend and seasonality
    trend = np.linspace(15, 18, len(dates))
    seasonal = 10 * np.sin(2 * np.pi * np.arange(len(dates)) / 365.25)
    noise = np.random.normal(0, 2, len(dates))
    temperature = trend + seasonal + noise

    return pd.DataFrame({
        'timestamp': dates,
        'temperature': temperature
    })

def store_data_in_influxdb(df):
    """Store temperature data in InfluxDB"""
    records = [
        Point("temperature")
            .field("value", row['temperature'])
            .time(row['timestamp'])
        for _, row in df.iterrows()
    ]
    client.write(record=records)
    print(f"Stored {len(df)} temperature readings in InfluxDB")

def load_data_from_influxdb():
    """Retrieve temperature data from InfluxDB"""
    query = """
        SELECT time, value
        FROM temperature
        WHERE time &amp;gt;= now() - INTERVAL '1 year'
        ORDER BY time
    """
    table = client.query(query=query, mode="pandas")
    table['time'] = pd.to_datetime(table['time'])
    table = table.set_index('time').sort_index()
    return table['value']

# Generate and store sample data
sample_data = generate_sample_temperature_data()
store_data_in_influxdb(sample_data)

# Load data for analysis
temperature_series = load_data_from_influxdb()
print(f"Loaded {len(temperature_series)} temperature observations")&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="exploring-autocorrelation-and-determining-model-order"&gt;Exploring Autocorrelation and Determining Model Order&lt;/h4&gt;

&lt;p&gt;Before fitting an AR model, we need to understand the autocorrelation structure:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1if3YOBZ3cdnk2Mm0jSqkl/76ce3e78181ab2336a0d9635037d39b2/Screenshot_2026-04-09_at_12.44.09â__PM.png" alt="autocorrelation SS" /&gt;&lt;/p&gt;

&lt;p&gt;The Partial Autocorrelation Function (PACF) helps determine the optimal AR order by showing the correlation between observations at different lags, controlling for shorter lags.&lt;/p&gt;

&lt;h4 id="building-and-training-the-ar-model"&gt;Building and Training the AR Model&lt;/h4&gt;

&lt;p&gt;Now let’s implement the autoregressive model:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3G2y0GY250RZSOEL7zJgTj/e43ca0040107d949fe7e760a3824654c/Screenshot_2026-04-09_at_12.45.52â__PM.png" alt="AR Model SS" /&gt;&lt;/p&gt;

&lt;p&gt;Visualization is crucial for understanding model performance:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3GXiWDP36MjuLhMHHHs3HI/f1cd3397f608d8ad02ed6ff1b493ce95/Screenshot_2026-04-09_at_12.47.57â__PM.png" alt="Visualization SS 1" /&gt;
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4P3vmJqDvTMx1ny8DSwuxF/c9916f312c2c9c1fe05c401195023a9b/Screenshot_2026-04-09_at_12.48.12â__PM.png" alt="Visulization SS 2" /&gt;&lt;/p&gt;

&lt;h2 id="benefits-and-limitations-of-autoregressive-models"&gt;Benefits and limitations of autoregressive models&lt;/h2&gt;

&lt;h4 id="advantages-of-ar-models"&gt;Advantages of AR Models&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Computational Efficiency&lt;/strong&gt;: AR models are computationally lightweight compared to complex machine learning approaches. This efficiency makes them ideal for real-time applications where quick predictions are essential, such as high-frequency trading systems or real-time monitoring applications.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Interpretability&lt;/strong&gt;: Unlike black-box machine learning models, AR models provide clear, interpretable coefficients that reveal the influence of each lagged value. This transparency is crucial in regulated industries where model decisions must be explainable and auditable.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Strong Theoretical Foundation&lt;/strong&gt;: AR models rest on well-established statistical theory with known properties and assumptions. This theoretical grounding provides confidence in model behavior and enables rigorous statistical testing of model adequacy.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Excellent Baseline Performance&lt;/strong&gt;: AR models often serve as effective baseline models against which more complex approaches are compared. Their simplicity makes them robust to overfitting, and they frequently provide competitive performance for many forecasting tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="limitations-and-challenges"&gt;Limitations and Challenges&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Linear Relationship Assumptions&lt;/strong&gt;: AR models assume linear relationships between past and future values, which may not capture complex nonlinear patterns present in many real-world time series.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Stationarity Requirements&lt;/strong&gt;: The assumption of stationarity can be restrictive for many practical applications. Real-world time series often exhibit trends, structural breaks, or changing volatility that violate stationarity assumptions.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Limited Complexity Handling&lt;/strong&gt;: AR models struggle with complex seasonal patterns, multiple interacting factors, or regime changes. While seasonal AR models exist, they may not capture intricate seasonal dynamics as effectively as more sophisticated approaches.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="practical-implementation-considerations"&gt;Practical Implementation Considerations&lt;/h4&gt;

&lt;p&gt;When implementing AR models in practice, several key considerations ensure successful deployment. Data preprocessing often requires careful attention to stationarity testing and transformation.&lt;/p&gt;

&lt;p&gt;Model validation requires time-aware cross-validation techniques that respect the temporal structure of the data. Traditional random sampling approaches can introduce data leakage, where future information inadvertently influences past predictions.&lt;/p&gt;
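&lt;p&gt;A common time-aware scheme is rolling-origin (walk-forward) validation, where each fold trains only on the past and tests on the block immediately after it. A minimal sketch (the fold sizes are arbitrary):&lt;/p&gt;

```python
import numpy as np

# Rolling-origin (walk-forward) validation: each fold trains only on the
# past and tests on the block that immediately follows, so no future
# information leaks into training.
def walk_forward_splits(n, initial_train, horizon):
    """Yield (train_indices, test_indices) pairs in time order."""
    start = initial_train
    while start + horizon <= n:
        yield np.arange(0, start), np.arange(start, start + horizon)
        start += horizon

splits = list(walk_forward_splits(n=100, initial_train=60, horizon=10))
for train_idx, test_idx in splits:
    assert train_idx.max() < test_idx.min()  # strictly past before future
print(len(splits))  # 4
```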

&lt;p&gt;Parameter selection involves balancing model complexity with predictive accuracy. Information criteria like AIC and BIC provide systematic approaches to order selection, while out-of-sample testing validates the chosen specification.&lt;/p&gt;

&lt;h2 id="time-series-analysis-with-influxdb"&gt;Time series analysis with InfluxDB&lt;/h2&gt;

&lt;p&gt;InfluxDB provides several critical advantages for time series autoregression workflows that extend beyond simple data storage. As a purpose-built time series database, InfluxDB addresses many challenges associated with managing and analyzing temporal data at scale.&lt;/p&gt;

&lt;h4 id="optimized-storage-and-performance"&gt;Optimized Storage and Performance&lt;/h4&gt;

&lt;p&gt;InfluxDB’s columnar storage format and specialized compression algorithms reduce storage requirements for time series data. This efficiency becomes crucial when working with high-frequency data or maintaining long historical records necessary for robust AR model training.&lt;/p&gt;

&lt;h4 id="real-time-data-processing"&gt;Real-time Data Processing&lt;/h4&gt;

&lt;p&gt;Modern forecasting applications often require real-time model updates as new data arrives. InfluxDB’s streaming capabilities enable continuous data ingestion, allowing AR models to incorporate the latest observations immediately.&lt;/p&gt;

&lt;h4 id="scalable-query-operations"&gt;Scalable Query Operations&lt;/h4&gt;

&lt;p&gt;As time series datasets grow, query performance becomes a limiting factor. InfluxDB’s indexing strategies and query optimization target temporal queries, enabling fast aggregations and data retrieval operations common in AR model preprocessing.&lt;/p&gt;

&lt;h4 id="native-time-series-functions"&gt;Native Time Series Functions&lt;/h4&gt;

&lt;p&gt;InfluxDB includes built-in functions for common time series operations like moving averages and lag calculations. These functions can preprocess data directly within the database.&lt;/p&gt;

&lt;h2 id="production-deployment-and-best-practices"&gt;Production deployment and best practices&lt;/h2&gt;

&lt;p&gt;Deploying AR models in production environments requires attention to several operational aspects. Model monitoring becomes crucial as data patterns evolve over time, potentially degrading model performance. InfluxDB’s ability to store both input data and model predictions simplifies the creation of monitoring dashboards.&lt;/p&gt;

&lt;p&gt;Performance considerations include monitoring prediction accuracy over time and detecting concept drift.&lt;/p&gt;

&lt;h2 id="capping-it-off"&gt;Capping it off&lt;/h2&gt;

&lt;p&gt;Time series autoregression provides a powerful and interpretable foundation for forecasting applications across diverse domains. The combination of statistical rigor, computational efficiency, and clear interpretability makes AR models an essential tool in the time series analyst’s toolkit.&lt;/p&gt;

&lt;p&gt;While AR models have limitations in handling complex nonlinear patterns, their strengths in capturing temporal dependencies make them invaluable for both standalone applications and as components in more complex forecasting systems.&lt;/p&gt;

&lt;p&gt;The integration of AR modeling with modern time series infrastructure like &lt;a href="https://www.influxdata.com/?utm_source=website&amp;amp;utm_medium=time_series_autoregression&amp;amp;utm_content=blog"&gt;InfluxDB&lt;/a&gt; creates opportunities for robust, scalable forecasting solutions. By leveraging InfluxDB’s specialized capabilities alongside the proven statistical foundations of autoregressive modeling, practitioners can build production-ready forecasting systems that deliver reliable predictions.&lt;/p&gt;
</description>
      <pubDate>Wed, 22 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/time-series-autoregression/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/time-series-autoregression/</guid>
      <category>Developer</category>
      <author>Charles Mahler (InfluxData)</author>
    </item>
    <item>
      <title>From Edge to Enterprise: How Litmus and InfluxDB Are Modernizing the Industrial Data Stack</title>
      <description>&lt;p&gt;Today at Hannover Messe, InfluxData is announcing a strategic partnership with Litmus to address one of the most persistent challenges in industrial data: &lt;strong&gt;getting reliable, contextualized telemetry from the shop floor into production systems&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Litmus bridges the gap between OT systems and modern IT infrastructure, while InfluxDB serves as the industrial data hub, giving organizations both real-time operational visibility and enterprise-scale historical analysis in a unified architecture.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/ZK8Y3Nel8ihgcMLPyAleL/171b1f00ed9918d40f48afdab4c87199/Screenshot_2026-04-17_at_2.00.54â__PM.png" alt="Influx + Litmus logo" /&gt;&lt;/p&gt;

&lt;p&gt;By integrating &lt;a href="https://litmus.io/litmus-edge"&gt;Litmus Edge&lt;/a&gt; with &lt;a href="https://www.influxdata.com/products/influxdb3-enterprise/?utm_source=website&amp;amp;utm_medium=litmus_and_influxdata_partnership&amp;amp;utm_content=blog"&gt;InfluxDB 3 Enterprise&lt;/a&gt;, teams can collect and contextualize data at the source, then write it into a time series engine built for high-resolution data. Litmus handles connectivity and data normalization at the edge. InfluxDB provides high-throughput ingestion, real-time querying, and cost-efficient long-term storage, deployable at the edge, in the enterprise layer, or both.&lt;/p&gt;

&lt;p&gt;The result is a system that captures every signal, retains its context, and makes it immediately usable.&lt;/p&gt;

&lt;h2 id="the-industrial-data-problem"&gt;The industrial data problem&lt;/h2&gt;

&lt;p&gt;Something has shifted in industrial sectors. Modernization is no longer just a roadmap item, and it’s starting to hit real constraints. The pull: industrial AI initiatives, predictive maintenance, cross-site analytics, and digital twins all offer attractive value propositions. The push: legacy data historians are buckling under the demands of modern industrial operations, and the cost of extending them is becoming harder to justify.&lt;/p&gt;

&lt;p&gt;OT environments are notoriously fragmented. PLCs, CNCs, SCADA systems, and sensors operate across different protocols, vendors, and network boundaries. Getting that data into a usable, consistent format still requires heavy integration, time, and cost.&lt;/p&gt;

&lt;p&gt;Traditional data historians made progress on the industrial data problem, but they weren’t built for what comes next. They struggle to preserve context across systems, degrade under high-frequency ingest and query load, and make cross-site analysis slow and expensive. This forces teams into trade-offs between fidelity, scale, and cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s the core issue: the value of industrial data is in its resolution and context. Most systems weren’t designed to retain either at scale.&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id="how-litmus-and-influxdb-work-together"&gt;How Litmus and InfluxDB work together&lt;/h2&gt;

&lt;p&gt;To move forward, teams need an architecture built for how industrial data actually behaves: high-frequency, distributed, and context-dependent. Litmus Edge and InfluxDB 3 Enterprise provide that foundation by collecting and structuring data at the edge, then making it available centrally without losing resolution or context.&lt;/p&gt;

&lt;p&gt;Here’s how that looks in practice:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5OMDcrZFgEbU1ZBcZ8Uy8G/870217aff5fd191fde503594b80db336/Screenshot_2026-04-17_at_2.03.15â__PM.png" alt="Litmus + IDB architecture" /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;250+ prebuilt industrial connectors&lt;/strong&gt;. Out-of-the-box connectivity to industrial data sources, including legacy systems and proprietary protocols. No custom integration required.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Collect and contextualize at scale&lt;/strong&gt;. Normalize and contextualize telemetry from the source, with unlimited cardinality that preserves full context without compromising query performance.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Centralized data, not silos&lt;/strong&gt;. Bring telemetry from tools, teams, and sites into a single architecture, from single-site monitoring to cross-plant analytics, without a data consolidation project.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Buffered, store-and-forward data transfer&lt;/strong&gt;. Buffer and transmit data from remote sites with intermittent connectivity, with no loss or manual recovery.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Retain more, spend less&lt;/strong&gt;. Keeps high-resolution data accessible long-term with object storage, without driving up storage costs as you scale.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7fPG6jqxIE4VktLXwV8SbR/4520cfd13bd2e3f1b503de0ef732f5ea/Screenshot_2026-04-17_at_2.04.58â__PM.png" alt="Litmus quote 1" /&gt;&lt;/p&gt;

&lt;h2 id="the-edge-collect-contextualize-buffer"&gt;The edge: collect, contextualize, buffer&lt;/h2&gt;

&lt;p&gt;Litmus Edge acts as the intelligence layer between your machines and the rest of your data architecture. With 250+ native connectors spanning OPC-UA, Modbus, MQTT, FANUC, Siemens S7, and more, it connects directly to industrial sources (PLCs, CNCs, DCS, SCADA systems, sensors, and beyond) without custom integration.&lt;/p&gt;

&lt;p&gt;But connectivity alone isn’t enough. Raw signals without context aren’t useful. Litmus Edge tags, enriches, and structures data at the point of collection so a temperature reading is tied to an asset, production line, facility, and product run. By the time it leaves the edge, it’s already queryable.&lt;/p&gt;

&lt;h2 id="the-industrial-data-hub-centralize-scale-retain"&gt;The industrial data hub: Centralize, scale, retain&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 serves as the system of record for industrial time series data, whether deployed at the edge, centralized in the enterprise layer, or both.&lt;/p&gt;

&lt;p&gt;At the site level, InfluxDB runs locally alongside Litmus Edge, ingesting full-resolution telemetry and serving low-latency queries for real-time operations. It operates autonomously, so if connectivity to the central hub is interrupted, data is buffered locally and automatically forwarded when the connection is restored. There’s no data loss or manual intervention.&lt;/p&gt;

&lt;p&gt;At the enterprise level, a centralized InfluxDB cluster aggregates data from every site into a single query layer across assets, plants, and time horizons. This creates a consistent, high-resolution data layer that can be used across operations, analytics, and industrial AI.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/27iTqGpIQNfbNF1D1C9PUU/b6a34c5dc5099af641a34a9f803cf32f/Screenshot_2026-04-17_at_2.05.49â__PM.png" alt="Litmus quote 2" /&gt;&lt;/p&gt;

&lt;h2 id="the-bridge-to-higher-level-analytics"&gt;The bridge to higher-level analytics&lt;/h2&gt;

&lt;p&gt;With high-resolution, contextualized data available across systems, teams can move beyond basic monitoring. Predictive maintenance, anomaly detection, and cross-site analytics all depend on full-fidelity data. Industrial AI at the edge depends on low-latency access to it. Without that foundation, these systems don’t operate reliably. That’s what this architecture enables.&lt;/p&gt;

&lt;h2 id="get-started"&gt;Get started&lt;/h2&gt;

&lt;p&gt;Whether you’re starting a greenfield initiative or hitting the limits of your current industrial data infrastructure, we’d love to talk.&lt;/p&gt;

&lt;p&gt;Reach out to &lt;a href="https://www.influxdata.com/contact-sales/"&gt;connect to an expert&lt;/a&gt; or join the conversation in the &lt;a href="https://community.influxdata.com/"&gt;InfluxData Community Forums&lt;/a&gt; where our team and broader community are active.&lt;/p&gt;

&lt;p&gt;If you’re attending Hannover Messe, &lt;a href="https://www.influxdata.com/event/meet-influxdb-at-hannover-messe-2026/?utm_source=website&amp;amp;utm_medium=litmus_and_influxdata_partnership&amp;amp;utm_content=blog"&gt;come find me at the Litmus booth&lt;/a&gt; (Stand A09 in Hall 16) and see the architecture running end-to-end.&lt;/p&gt;
</description>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/litmus-and-influxdata-partnership/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/litmus-and-influxdata-partnership/</guid>
      <category>Company</category>
      <category>Product</category>
      <author>Ben Corbett (InfluxData)</author>
    </item>
    <item>
      <title>Setting Up an MQTT Data Pipeline with InfluxDB</title>
      <description>&lt;p&gt;In this blog, we’re going to take a look at how you can set up a fully-functioning, robust data pipeline to centralize your data into an InfluxDB instance by collecting and sending messages with the MQTT protocol. We’ll start with a brief overview of the technologies and protocols used in the pipeline, then dive into how you can connect, configure, and test them to ensure your data pipeline is fully functional. It’s going to be a long post, so let’s jump right in.&lt;/p&gt;

&lt;h2 id="what-is-mqtt"&gt;What is MQTT?&lt;/h2&gt;

&lt;p&gt;MQTT is an industry-standard, lightweight protocol for moving messages through a network of devices. It functions by having a broker, or multiple brokers, receive messages from individual devices (publishing clients) across the network, and publish those messages to external systems (destination clients) that are connected and listening to the broker. By categorizing messages into “topics,” systems that subscribe to specific topics can opt to receive only messages they’re interested in.&lt;/p&gt;
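&lt;p&gt;Topic filters support two wildcards: &lt;code&gt;+&lt;/code&gt; matches exactly one topic level and &lt;code&gt;#&lt;/code&gt; matches all remaining levels. The simplified sketch below (it ignores edge cases such as &lt;code&gt;$&lt;/code&gt;-prefixed system topics) shows the matching rule subscribers rely on:&lt;/p&gt;

```python
# Simplified MQTT topic matching: '+' matches exactly one topic level,
# '#' matches all remaining levels.
def topic_matches(topic_filter, topic):
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True  # multi-level wildcard swallows the rest
        if i >= len(t_parts) or (part != "+" and part != t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("sensors/+/temperature", "sensors/line1/temperature"))  # True
print(topic_matches("sensors/#", "sensors/line1/humidity"))                 # True
print(topic_matches("sensors/+/temperature", "sensors/line1/humidity"))     # False
```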

&lt;p&gt;As a lightweight protocol with a number of prominent open source implementations, MQTT is an industry standard for a variety of use cases. It’s particularly common in Internet of Things (IoT) and Industrial IoT (IIoT) applications, but can be leveraged anywhere you have a distributed network of devices generating data or messages. This includes fleet management, home automation, real-time telemetry on computer hardware, and practically any use case where sensors generate data points periodically.&lt;/p&gt;

&lt;h2 id="why-use-influxdb-for-mqtt-data"&gt;Why use InfluxDB for MQTT data?&lt;/h2&gt;

&lt;p&gt;If you’ve already concluded that the MQTT protocol is the right way to move your data from various devices into a centralized broker, odds are that you’re working with time series data. Time series data has a couple of key characteristics: it’s a sequence of data collected in chronological order, and all data points contain a timestamp. Most commonly, this also means there’s a large volume of data. Hundreds or thousands of sensors generating new data points every second can quickly turn into millions or billions of records per day. As the scale of data increases, the need for a specialized, purpose-built solution to handle this volume grows, too.&lt;/p&gt;

&lt;p&gt;That’s where InfluxDB, the industry-leading time series database, comes in. InfluxDB is purpose-built for the time series data common in MQTT use case scenarios, delivering unparalleled performance and a number of dedicated features to make managing and working with your time series data as easy as possible.&lt;/p&gt;

&lt;p&gt;Performance is critical because ingesting millions or billions of data points per day can strain most databases. Because time series databases like InfluxDB are optimized to handle that firehose of continuous data, they can scale to ingest it with greater efficiency and lower costs. A custom-built storage engine eliminates snags that most other types of databases encounter, such as index maintenance and contention locks. Last-value caches and engine optimizations for timestamp-based filtering make retrieving recent data extremely efficient, so fresh data being written into InfluxDB can be queried in less than 10 milliseconds, minimizing time to insight (or as we like to call it, “time to awesome”). This ensures a real-time view of the data generated across your network of devices.&lt;/p&gt;

&lt;p&gt;Time series functionality also makes managing and working with this data much easier, regardless of whether performance at scale is a concern. DataFusion, the SQL query engine embedded in InfluxDB 3, makes it easy to query with a language most data professionals and AI agents already know. With dedicated time-based functions, queries that look like this in a general purpose database:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;WITH hours AS (
  SELECT generate_series(
    date_trunc('hour', now() - interval '24 hours'),
    date_trunc('hour', now()),
    interval '1 hour'
  ) AS hour_bucket
),
sensors AS (
  SELECT DISTINCT sensor_id FROM sensor_data
),
hour_sensor AS (
  SELECT h.hour_bucket, s.sensor_id
  FROM hours h
  CROSS JOIN sensors s
),
agg AS (
  SELECT
    sensor_id,
    date_trunc('hour', time) AS hour_bucket,
    percentile_cont(0.95) WITHIN GROUP (ORDER BY temperature) AS p95
  FROM sensor_data
  WHERE time &amp;gt;= now() - interval '24 hours'
  GROUP BY sensor_id, hour_bucket
)
SELECT
  hs.hour_bucket,
  hs.sensor_id,
  COALESCE(a.p95, 0) AS p95
FROM hour_sensor hs
LEFT JOIN agg a USING (hour_bucket, sensor_id)
ORDER BY hs.sensor_id, hs.hour_bucket;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Can be shortened to this in InfluxDB:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT
  date_bin_gapfill(INTERVAL '1 hour', time) AS hour,
  sensor_id,
  interpolate(percentile(temperature, 95)) AS p95
FROM sensor_data
WHERE time &amp;gt;= NOW() - INTERVAL '24 hours'
GROUP BY hour, sensor_id;&lt;/code&gt;&lt;/pre&gt;
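To see what those two functions are doing, here is a minimal pure-Python sketch of the concept behind gap-filling with interpolation: hour buckets with no rows are generated anyway and filled by linear interpolation between the buckets that do have data. The function name and data layout are illustrative, not part of InfluxDB's API.

```python
def gapfill_interpolate(rows):
    """Conceptual sketch (not InfluxDB's implementation): rows maps an
    hour-bucket index to its aggregated value, e.g. a p95. Returns one
    (hour, value) pair for every hour between the first and last bucket,
    linearly interpolating the hours that had no data."""
    known = sorted(rows)
    out = []
    for a, b in zip(known, known[1:]):
        span = b - a
        for step in range(span):
            frac = step / span
            out.append((a + step, rows[a] + frac * (rows[b] - rows[a])))
    out.append((known[-1], rows[known[-1]]))
    return out

# hours 1 through 3 had no rows at all; they are generated and interpolated
print(gapfill_interpolate({0: 10.0, 4: 50.0}))
# [(0, 10.0), (1, 20.0), (2, 30.0), (3, 40.0), (4, 50.0)]
```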

&lt;p&gt;Admittedly, this is a cherry-picked example built around a complicated function most users won’t need every day, but plenty of InfluxDB’s conveniences aren’t so niche. The InfluxDB 3 processing engine comes with a host of built-in plugins for processing and transforming data as it’s written, monitoring and anomaly detection, forecasting, and alerting. Retention policies can be set at the database or table level, ensuring you keep data as long as it’s useful, and the downsampling plugin for the processing engine can keep your data at a lower resolution once it’s past the end of that policy. InfluxDB also connects to a wide ecosystem of data visualization tools and client libraries and, critically for this tutorial, integrates seamlessly with Telegraf, the data collection agent we’ll be using to move data from our MQTT broker into InfluxDB.&lt;/p&gt;

&lt;h2 id="the-mqtt---influxdb-pipeline"&gt;The MQTT -&amp;gt; InfluxDB pipeline&lt;/h2&gt;

&lt;p&gt;The architecture of this data pipeline is relatively straightforward, with data flowing in one direction throughout:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Devices, sensors, and anything else generating raw data are set up as MQTT publishing clients connected to the broker.&lt;/li&gt;
  &lt;li&gt;The MQTT broker receives the raw data from the various publishers and forwards it.&lt;/li&gt;
  &lt;li&gt;Telegraf subscribes to the published topics and then writes data into InfluxDB.&lt;/li&gt;
  &lt;li&gt;The InfluxDB processing engine handles all necessary transformations and makes the data immediately available for querying and visualization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So let’s jump into specifics.&lt;/p&gt;

&lt;h4 id="setting-up-the-mqtt-broker-and-clients"&gt;Setting Up the MQTT Broker and Clients&lt;/h4&gt;

&lt;p&gt;The first thing you’re going to need to do is install the MQTT technology of your choice on every device that will act as a publishing client, as well as on the server you want to act as your broker. Eclipse Mosquitto is a common open source option that we’ll use in this guide, but other MQTT tools, such as HiveMQ, Paho, MQTTX, MQTT Explorer, or EasyMQTT, will also work for this tutorial. The exact commands will differ depending on what you’re using, but because MQTT is a standardized protocol, the concepts remain the same.&lt;/p&gt;

&lt;p&gt;To install Eclipse Mosquitto:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;On Linux, run: &lt;code class="language-markup"&gt;snap install mosquitto&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;On Mac: Install &lt;a href="https://brew.sh/"&gt;Homebrew&lt;/a&gt;, then run &lt;code class="language-markup"&gt;brew install mosquitto&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;On Windows: Go to the &lt;a href="https://mosquitto.org/download/"&gt;mosquitto download page&lt;/a&gt; and install from there&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you install Mosquitto, the installer will tell you the exact path to the configuration file. You’ll want to configure your broker first, and you should set up authentication unless you’re comfortable allowing unauthenticated connections. Skipping authentication can be fine if you’re running everything on a local network without any port forwarding, but it’s not recommended if your devices communicate over the internet.&lt;/p&gt;

&lt;p&gt;There are &lt;em&gt;many&lt;/em&gt; different ways to set up authentication with Mosquitto. One of the simplest is &lt;a href="https://mosquitto.org/man/mosquitto_passwd-1.html"&gt;creating a password file with the &lt;code class="language-markup"&gt;mosquitto_passwd&lt;/code&gt; command&lt;/a&gt;, but you can read a full list of options on &lt;a href="https://mosquitto.org/documentation/authentication-methods/"&gt;their documentation page for authentication methods&lt;/a&gt;. Whatever you settle on, if you decide to use some form of authentication, you’ll need to add the following line to your Mosquitto configuration file:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;allow_anonymous false&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;There are &lt;a href="https://mosquitto.org/man/mosquitto-conf-5.html"&gt;many other configuration options in the documentation&lt;/a&gt;, and what you set and configure will depend on your use case, but some you may want to consider are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;persistence false&lt;/code&gt; - Because we’re writing to InfluxDB, we don’t need to persist messages to disk.&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;log_dest stdout&lt;/code&gt; - For setting up, testing, and debugging, outputting logs directly to the terminal makes things easier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And of course, make sure your listener is configured on the same port for all devices. The default is 1883, but you can change this if desired.&lt;/p&gt;
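Putting the options above together, a minimal broker configuration might look like the following sketch (the password file path is illustrative; use wherever you created yours):

```
# minimal mosquitto.conf sketch using the options discussed above
listener 1883
allow_anonymous false
password_file /etc/mosquitto/passwd
persistence false
log_dest stdout
```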

&lt;p&gt;Once your broker is configured, you can set up your publishing clients. Whatever data they’re measuring, they can publish messages to the broker with the command:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;mosquitto_pub -h "host" -t "topic" -m "value"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you’re running this all on a local network, your host will be &lt;code class="language-markup"&gt;localhost&lt;/code&gt;; otherwise, it’ll be the address where your broker is running. The value should be whatever you’re measuring and publishing at that moment.&lt;/p&gt;
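If you’re scripting your publishers, it can help to assemble that command programmatically. Here’s a small sketch in Python; the helper name, topic, and defaults are our own, and you may need extra flags such as -p for a non-default port:

```python
import shlex

def pub_command(host, topic, value, username=None, password=None):
    """Assemble a mosquitto_pub command line for one reading.
    Illustrative helper, not part of any library."""
    cmd = ["mosquitto_pub", "-h", host, "-t", topic, "-m", str(value)]
    if username is not None:
        # -u / -P are mosquitto_pub's username and password flags
        cmd.extend(["-u", username, "-P", password])
    return shlex.join(cmd)

print(pub_command("localhost", "sensors/device1/temp", 72.4))
# mosquitto_pub -h localhost -t sensors/device1/temp -m 72.4
```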

&lt;p&gt;Your topic can be whatever is appropriate to label that value. If you have different devices and different types of measurements for each device, it’s recommended to nest your topics and organize them in a way that makes logical sense. For example, if you have many different devices measuring, say, temperature and velocity, your topic arrangement may look like:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;/sensors/vehicles/v1/device1/temp&lt;/li&gt;
  &lt;li&gt;/sensors/vehicles/v1/device1/velocity&lt;/li&gt;
  &lt;li&gt;/sensors/vehicles/v1/device2/temp&lt;/li&gt;
  &lt;li&gt;/sensors/vehicles/v1/device2/velocity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As long as you have a unique topic structure for each type of value being sent, we can parse and sort this into tags and fields with InfluxDB. For further information on setting up MQTT topics, there are plenty of great &lt;a href="https://www.cedalo.com/blog/mqtt-topics-and-mqtt-wildcards-explained"&gt;guides on the matter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With your clients and broker configured, your clients publishing messages, and your broker receiving and forwarding those messages, you should be all set up for the MQTT portion of this data pipeline.&lt;/p&gt;

&lt;h2 id="installing-influxdb"&gt;Installing InfluxDB&lt;/h2&gt;

&lt;p&gt;The next step is to move your MQTT data into InfluxDB, which means installing InfluxDB first. You can &lt;a href="https://docs.influxdata.com/influxdb3/core/install/"&gt;check out our docs on installing it here&lt;/a&gt;, but the simplest way to get started is to run the install scripts provided by InfluxData with:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;curl -O https://www.influxdata.com/d/install_influxdb3.sh \
&amp;amp;&amp;amp; sh install_influxdb3.sh&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;These should work on every operating system and provide you with some simple options to get started with InfluxDB 3 Core or Enterprise. The installation script should also give you an admin token, which you’ll want to store somewhere safe so you can use it for authentication. If you’d like to further configure your InfluxDB 3 instance, the installation script should tell you where all files and configuration files were installed for further adjusting, though it should run fine out of the box.&lt;/p&gt;

&lt;p&gt;If you have Docker installed, you can also install the InfluxDB Explorer UI as part of this process, giving you an easy way to view, manage, and query your InfluxDB 3 instance. You can reach it by navigating to &lt;code class="language-markup"&gt;localhost:8888&lt;/code&gt; in your browser, entering &lt;code class="language-markup"&gt;host.docker.internal:8181&lt;/code&gt; for the server address, and providing the admin token.&lt;/p&gt;

&lt;h4 id="installing-and-configuring-telegraf"&gt;Installing and Configuring Telegraf&lt;/h4&gt;

&lt;p&gt;With InfluxDB 3 installed and running, the last step to get the data pipeline operational is to install and configure Telegraf to connect our MQTT broker to InfluxDB. Telegraf installation varies by operating system and Linux distribution, so check out the &lt;a href="https://docs.influxdata.com/telegraf/v1/install/#download-and-install-telegraf"&gt;Telegraf documentation on installation to find the right files or command to run&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you’re on Mac or Linux, installation will generate a default configuration file for you at:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;On Mac (installed via Homebrew): &lt;code class="language-markup"&gt;/usr/local/etc/telegraf.conf&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;On Linux: &lt;code class="language-markup"&gt;/etc/telegraf/telegraf.conf&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Otherwise, you’ll need to create an empty configuration file or generate one with &lt;code class="language-markup"&gt;telegraf config &amp;gt; telegraf.conf&lt;/code&gt;. Once you have located or created your configuration file, all that’s left to do is connect Telegraf to your MQTT broker and InfluxDB.&lt;/p&gt;

&lt;p&gt;Configuring the connection to InfluxDB is easy; add these lines to the config file:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;[[outputs.influxdb_v2]]
  urls = ["InfluxDB address &amp;amp; port"]
  token = "admin token"
  organization = "org name"
  bucket = "destination database"&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
  &lt;li&gt;The InfluxDB address and port should be wherever you have InfluxDB installed. If you’re running on a local network, this will be &lt;code class="language-markup"&gt;http://127.0.0.1:8181&lt;/code&gt;; otherwise, it’ll be the IP and port.&lt;/li&gt;
  &lt;li&gt;Token is the admin token you copied from installation.&lt;/li&gt;
  &lt;li&gt;Organization can be whatever you’d like to name it.&lt;/li&gt;
  &lt;li&gt;Bucket should be the name of the database you’re writing all your MQTT data to. You don’t have to create the database first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Setting up a connection to your MQTT broker is also straightforward:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;[[inputs.mqtt_consumer]]
  servers = ["broker address"]
  topics = ["list of topics"]
  data_format = "value"
  data_type = "data_type"

  ## if you have username and password authentication for MQTT
  username = "username"
  password = "password"&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
  &lt;li&gt;The broker address is once again the address and port where your MQTT broker is running. For a local network, this will be &lt;code class="language-markup"&gt;tcp://127.0.0.1:1883&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;Topics is a comma-separated list of the topics Telegraf should subscribe to.&lt;/li&gt;
  &lt;li&gt;Data type is the primitive data type being written: integer, float, long, string, or boolean.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is all you need in your configuration file to have the full pipeline running! If you run telegraf with &lt;code class="language-markup"&gt;telegraf --config telegraf.conf&lt;/code&gt;, you should be able to send a message from an MQTT publisher and view that data in InfluxDB.&lt;/p&gt;

&lt;p&gt;However, you can make some improvements to Telegraf’s configuration to parse and organize your data by topic. By default, everything is written to the same table, with the full topic stored in a single tag column and all values lumped into a monolithic “value” column, which isn’t a very good data model. With topic parsing and pivot processing added to the configuration, we can specify which part of the topic defines the table the data is written into, turn every level of the topic into a tag, and pivot on the last level of the topic so that each raw value becomes its own field:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;[[inputs.mqtt_consumer]]
  servers = ["broker address"]
  topics = ["/sensors/#"]
  data_format = "value"
  data_type = "data_type"

  ## if you have username and password authentication for MQTT
  username = "username"
  password = "password"

  [[inputs.mqtt_consumer.topic_parsing]]
    measurement = "/measurement/_/_/_/_"
    tags = "/_/device_type/version/device_name/field"
  [[processors.pivot]]
    tag_key = "field"
    value_key = "value"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This takes a value from the /sensors/vehicles/v1/device1/temp topic and writes it to the sensors table. The tag columns populate with &lt;code class="language-markup"&gt;device_type = vehicles&lt;/code&gt;, &lt;code class="language-markup"&gt;version = v1&lt;/code&gt;, and &lt;code class="language-markup"&gt;device_name = device1&lt;/code&gt;, and temp is written as a field whose value is whatever your MQTT publisher sent. You can modify this configuration as appropriate for your topics, and &lt;a href="https://docs.influxdata.com/influxdb3/input-plugins/mqtt_consumer/"&gt;the documentation provides full information on everything that can be done&lt;/a&gt;.&lt;/p&gt;
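To make the template behavior concrete, here is a rough Python sketch of the mapping the measurement and tags templates describe. The helper is illustrative, not how Telegraf implements topic parsing internally; the pivot processor then turns the captured field tag into a real field column.

```python
def parse_topic(topic, measurement_tmpl, tags_tmpl):
    """Illustrative sketch of topic-parsing rules: '_' skips a segment,
    'measurement' picks the table name, and any other name captures
    that segment as a tag."""
    parts = topic.split("/")
    measurement = None
    tags = {}
    for part, slot in zip(parts, measurement_tmpl.split("/")):
        if slot == "measurement":
            measurement = part
    for part, slot in zip(parts, tags_tmpl.split("/")):
        if slot not in ("", "_"):
            tags[slot] = part
    return measurement, tags

table, tags = parse_topic(
    "/sensors/vehicles/v1/device1/temp",
    "/measurement/_/_/_/_",
    "/_/device_type/version/device_name/field",
)
print(table, tags)
# sensors {'device_type': 'vehicles', 'version': 'v1', 'device_name': 'device1', 'field': 'temp'}
```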

&lt;h2 id="further-improvements"&gt;Further improvements&lt;/h2&gt;

&lt;p&gt;With MQTT data being published, parsed, and written into InfluxDB, you’ve fully set up an MQTT data pipeline! However, there’s a lot more you can do:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;View and query your data with the InfluxDB Explorer UI, as discussed earlier.&lt;/li&gt;
  &lt;li&gt;Connect any one of the many &lt;a href="https://docs.influxdata.com/influxdb3/core/tags/client-libraries/"&gt;client libraries&lt;/a&gt; to access your data and use it for downstream applications, or to a data visualization tool for dashboarding and insight into what’s being written.&lt;/li&gt;
  &lt;li&gt;Use the &lt;a href="https://docs.influxdata.com/influxdb3/core/plugins/"&gt;InfluxDB 3 processing engine&lt;/a&gt; for further transformations and processing of your data as it’s written.&lt;/li&gt;
  &lt;li&gt;Set up alerts, monitoring, forecasting, and more with the processing engine, too.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="the-final-product"&gt;The final product&lt;/h2&gt;

&lt;p&gt;By integrating MQTT, Telegraf, and InfluxDB, you’ve constructed a robust, fully-functioning data pipeline capable of efficiently centralizing real-time telemetry. The lightweight MQTT protocol ensures that messages from your distributed network flow reliably to the broker, while Telegraf acts as the collection agent for seamless ingestion and transformation. Finally, InfluxDB provides the purpose-built storage and specialized features needed to query and visualize your data in minimal time. This architecture establishes a solid foundation for turning raw event streams into meaningful insights, minimizing your time to awesome.&lt;/p&gt;
</description>
      <pubDate>Fri, 17 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/mqtt-data-pipeline-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/mqtt-data-pipeline-influxdb/</guid>
      <category>Developer</category>
      <author>Cole Bowden (InfluxData)</author>
    </item>
    <item>
      <title>From Edge to Cloud: How Litmus Edge and InfluxDB Unlock Industrial Intelligence at Hannover Messe</title>
      <description>
&lt;p&gt;If you’ve spent time in industrial environments, you know the problem isn’t a lack of data. It’s collecting it reliably, contextualizing it, and storing it at scale. Most stacks weren’t built to fight all three battles.&lt;/p&gt;

&lt;h2 id="the-industrial-data-problem"&gt;The industrial data problem&lt;/h2&gt;

&lt;p&gt;Industrial connectivity is no joke. OT environments are notoriously fragmented and siloed, spanning PLCs, CNCs, SCADA systems, and sensors, each speaking a different protocol, running on a different vendor’s stack, and operating in a network zone that was never designed to talk to anything outside the shop floor. Extracting value from that data has traditionally required heavy IT involvement, expensive integrations, and months of professional services work, and the usual answer was a historian. Historians made progress on the access problem, giving individual sites a way to capture and store machine data. But standardizing that data across silos and contextualizing it across systems and plants is where they fall short. And unfortunately, that’s where most of the value lies.&lt;/p&gt;

&lt;p&gt;Once data is collected and contextualized, the next problem is keeping it useful at scale. This is more than a storage problem. Sustaining high-frequency ingest of contextualized telemetry and querying that data fast enough to act on it is where most systems break. Historians were not designed for this. They sacrifice resolution, degrade under query load, and make cross-site, cross-system analysis slow and impractical. The value in industrial data is in the detail, and most platforms are architected to throw this detail away.&lt;/p&gt;

&lt;h2 id="collect-contextualize-and-storeall-at-the-edge"&gt;Collect, contextualize, and store—all at the edge&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://litmus.io/litmus-edge"&gt;Litmus Edge&lt;/a&gt; acts as the intelligence layer between your machines and the rest of your data architecture. It connects natively to hundreds of industrial protocols, including OPC-UA, Modbus, MQTT, FANUC, Siemens S7, and many more, normalizing disparate machine data into a unified, consistent stream.&lt;/p&gt;

&lt;p&gt;But connectivity alone isn’t enough. Raw machine signals mean little without context. Litmus Edge allows operations teams to tag, enrich, and structure data at the point of collection. A temperature reading becomes tied to a specific asset, production line, facility, and product run. By the time data leaves the edge, it is no longer just a number. It is a meaningful, queryable event.&lt;/p&gt;

&lt;h2 id="scale-query-retain-your-industrial-data-hub"&gt;Scale, query, retain: your industrial data hub&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/products/influxdb3-enterprise/?utm_source=website&amp;amp;utm_medium=litmus_edge_influxdb&amp;amp;utm_content=blog"&gt;InfluxDB 3&lt;/a&gt; becomes the system of record for your industrial time series data at the edge, in a centralized environment, or both.&lt;/p&gt;

&lt;p&gt;It ingests high-frequency telemetry at full resolution, serves low-latency queries for real-time operations, and scales to fleet-wide analysis across sites and time horizons without forcing tradeoffs between fidelity and cost. High cardinality isn’t a problem to design around. Long-term retention doesn’t require a cost penalty. The data stays detailed, queryable, and useful.&lt;/p&gt;

&lt;h2 id="scaling-across-lines-sites-and-the-enterprise"&gt;Scaling across lines, sites, and the enterprise&lt;/h2&gt;

&lt;p&gt;Scale changes what’s possible, but only if the data model scales with it. When every site collects and contextualizes data the same way, writing to a consistent schema, cross-site analysis becomes straightforward. Comparing performance across plants, identifying outliers, and correlating signals across a global fleet become simple queries instead of integration projects. That consistency is what the Litmus and InfluxDB architecture is designed to deliver.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;em&gt;Which production lines across all facilities are showing early indicators of equipment degradation?&lt;/em&gt;&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;How does energy consumption per unit compare across sites running similar processes?&lt;/em&gt;&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;Where are the outliers? And what can the top performers teach the rest of the network?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not hypothetical future capabilities. They are available today to any organization willing to invest in getting the data foundation right.&lt;/p&gt;

&lt;h2 id="the-bridge-to-higher-level-analytics"&gt;The bridge to higher-level analytics&lt;/h2&gt;

&lt;p&gt;InfluxDB doesn’t just store data well; it integrates cleanly with the ecosystem: the analytics, visualization, and AI/ML tooling your teams are already investing in. Grafana dashboards, anomaly detection workflows, and digital twin platforms connect through InfluxDB’s SQL-native interface and open APIs without custom pipelines or bespoke integration work.&lt;/p&gt;

&lt;p&gt;For OT teams, that’s the point. The edge handles the hard part—protocol translation, normalization, enrichment. InfluxDB centralizes the results into a single, interoperable data layer that every team can query with the tools they already use.&lt;/p&gt;

&lt;p&gt;The result is a data architecture that is genuinely interoperable; the plant floor and the enterprise layer are finally speaking the same language.&lt;/p&gt;

&lt;h2 id="extending-into-the-cloud-with-aws"&gt;Extending into the cloud with AWS&lt;/h2&gt;

&lt;p&gt;There are several ways to deploy InfluxDB as your industrial data hub: on-premises, at the edge, or in the cloud. For teams who want to go straight to the cloud, AWS is a natural fit. In this reference architecture, Litmus Edge writes contextualized telemetry directly into &lt;a href="https://www.influxdata.com/products/timestream-for-influxdb/?utm_source=website&amp;amp;utm_medium=litmus_edge_influxdb&amp;amp;utm_content=blog"&gt;Amazon Timestream for InfluxDB&lt;/a&gt;, creating a seamless path from the shop floor to cloud-scale analytics. This allows teams to centralize access, scale analytics, and integrate with the broader AWS ecosystem without rebuilding their infrastructure from scratch.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7I05B89zisdmKtUk9EiUt6/e10ba53b117ae6b4c25dcfd791321705/image__6_.png" alt="Litmus Edge diagram" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;Once data is available in AWS, it opens up a broader set of capabilities. For example, as new data arrives, you can trigger serverless workflows with AWS Lambda, stream high-velocity data through Kinesis for downstream processing, or connect directly to SageMaker to train models on high-fidelity data, without reshaping or downsampling it first.&lt;/p&gt;

&lt;h2 id="what-were-showing-at-hannover-messe"&gt;What we’re showing at Hannover Messe&lt;/h2&gt;

&lt;p&gt;At Hannover Messe, you’ll be able to see this architecture running end-to-end:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;&lt;a href="https://litmus.io/hannover-messe-2026"&gt;Litmus booth&lt;/a&gt; (Hall 16, Stand A09)&lt;/strong&gt;: The full Digital Factory demo, showing how data flows from industrial systems into Litmus and into InfluxDB 3 Enterprise in real-time.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;&lt;a href="https://www.influxdata.com/event/meet-influxdb-at-hannover-messe-2026/?utm_source=website&amp;amp;utm_medium=litmus_edge_influxdb&amp;amp;utm_content=blog"&gt;InfluxData kiosk&lt;/a&gt; (within the Litmus booth)&lt;/strong&gt;: A deeper look at how InfluxDB handles high-frequency ingest, real-time querying, and efficient storage at massive scale.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;AWS booth (Litmus kiosk)&lt;/strong&gt;: The cloud extension of the demo, highlighting replication into Amazon Timestream for InfluxDB and integration with AWS services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The InfluxData team (including myself) will be on-site at the Litmus booth throughout the event to walk through the architecture and discuss real-world deployment patterns.&lt;/p&gt;


&lt;p&gt;&lt;em&gt;Post by Ben Corbett, InfluxData; Rajesh Gomatam, Ph.D. Principal Partner Solutions Architect - Manufacturing, AWS; and Benjamin Norman, Partner Solution Architect, Litmus&lt;/em&gt;&lt;/p&gt;
</description>
      <pubDate>Thu, 16 Apr 2026 06:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/litmus-edge-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/litmus-edge-influxdb/</guid>
      <category>Demo</category>
      <category>Product</category>
      <category>Developer</category>
      <author>Ben Corbett (InfluxData)</author>
    </item>
    <item>
      <title>What’s New in InfluxDB 3 Explorer 1.7: Table Management, Data Import, Transforms, and More</title>
      <description>
&lt;p&gt;InfluxDB 3 Explorer 1.7 is a step forward for anyone who wants to manage their time series data without constantly switching between the UI and a terminal. This release adds table-level schema management, the ability to import data from other InfluxDB instances, and a new Transform Data section to reshape your data, all within the Explorer UI.&lt;/p&gt;

&lt;h2 id="table-management"&gt;Table management&lt;/h2&gt;

&lt;p&gt;Previously, if you wanted to see what tables existed inside a database, you had to query system tables or use the API. The new Manage Tables page changes that. You can get there from the sidebar or from the new actions menu on any database in the Manage Databases page. That actions menu gives you quick access to query a database, view its tables, or delete it.&lt;/p&gt;

&lt;p&gt;The Manage Tables page lists every table in the selected database, along with its column count, type, and any configured &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/distinct-value-cache/"&gt;Distinct Value&lt;/a&gt; or &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/last-value-cache/"&gt;Last Value&lt;/a&gt; Caches. Use the toggle filters to show or hide system tables and deleted tables. Deleted tables show up with a “Pending Delete” badge when the Show Deleted Tables toggle is enabled, so you always have visibility into what’s been removed.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6U2nqrukRwDJktsHPjiL91/4a8a861bf96b52061a6def8e23726593/Screenshot_2026-04-14_at_6.13.48â__PM.png" alt="Explorer 1.7 Manage Tables" /&gt;&lt;/p&gt;

&lt;p&gt;You can also &lt;strong&gt;create new tables&lt;/strong&gt; directly from this page. The Create Table dialog lets you define the schema up front: name, fields with data types, optional tags, and a retention period. This is useful when you want to control your schema explicitly rather than relying on &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/write/"&gt;schema-on-write&lt;/a&gt; to infer types from the first arriving data points.&lt;/p&gt;

&lt;p&gt;From any table’s action menu, you can jump straight to the Data Explorer with a pre-built query for that table.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/46bQpfsOyXjWem9M4125o7/73e9dcd0a33e3b11982d806d6d0f0504/Screenshot_2026-04-14_at_6.15.43â__PM.png" alt="1.7 Schema on Write" /&gt;&lt;/p&gt;

&lt;h2 id="import-from-influxdb"&gt;Import from InfluxDB&lt;/h2&gt;

&lt;p&gt;The next few features I’ll discuss are enhancements that make it much easier to work with the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/"&gt;InfluxDB 3 Processing Engine&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Moving data between InfluxDB instances used to mean writing scripts, dealing with export formats, and coordinating tokens across environments. The new &lt;strong&gt;&lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/import"&gt;Import from InfluxDB&lt;/a&gt;&lt;/strong&gt; feature provides a guided workflow for migrating small-to-medium datasets from any existing InfluxDB v1, v2, or v3 instance (assuming v3 Schema compatibility) into your current InfluxDB 3 database.&lt;/p&gt;

&lt;p&gt;You’ll find it under the Write Data section, on both the Dev Data and Production Data pages. The workflow walks you through selecting a target database (or creating a new one), connecting to a source InfluxDB instance, authenticating, and then choosing which databases and tables to import.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2krWp1AKKHN86ICg70mjBL/b22f50fdf84fb8cbe43bb1be4d3f747e/Screenshot_2026-04-14_at_6.17.45â__PM.png" alt="Writing Dev Data" /&gt;&lt;/p&gt;

&lt;p&gt;Before committing to the import, you can perform a &lt;strong&gt;dry run&lt;/strong&gt; that shows you exactly what will be transferred, including the source and destination, the number of tables, the estimated row count, and how long it should take. Advanced options let you tune the batch size and concurrency if you need to balance import speed against resource usage.&lt;/p&gt;

&lt;p&gt;Once you start the import, a live progress view shows you how far along things are, how many rows have been imported, and the current status of each table. When it finishes, a “Query this database” button takes you straight to the Data Explorer so you can verify everything landed correctly.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1Ao5CzW0yXUYPijeK0k2Vu/44b63c64f71ccdd05a5fb3f74b048329/Screenshot_2026-04-14_at_6.19.20â__PM.png" alt="Write Data" /&gt;&lt;/p&gt;

&lt;p&gt;If you’re running an InfluxDB 1.x or 2.x instance and want to try InfluxDB 3 with your real data, this saves you from building a migration pipeline. Just point the import tool at your existing instance, pick the databases and time range you want, and the data flows over. It also works for consolidating data from multiple InfluxDB 3 instances into one place, or pulling production data into a dev environment for testing.&lt;/p&gt;

&lt;h2 id="transform-data"&gt;Transform data&lt;/h2&gt;

&lt;p&gt;The new &lt;strong&gt;Transform Data&lt;/strong&gt; section in the sidebar gives you a visual interface for setting up data transformations that run automatically on ingestion via the Processing Engine. Under the hood, these are powered by the &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/basic_transformation"&gt;Basic Transformation Processing Engine plugin&lt;/a&gt;, but you don’t need to write any plugin configuration by hand. The UI handles that for you.&lt;/p&gt;

&lt;p&gt;The way it works: when data is written to a source table, the transformation runs automatically and writes the results to a target database or table. You can set a short &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/databases/#table-retention-period"&gt;retention period&lt;/a&gt; on the source data (say, one day) so the raw data cleans itself up, and the transformed data lives on in the destination. There are four types of transformations available.&lt;/p&gt;

&lt;h4 id="rename-table"&gt;Rename Table&lt;/h4&gt;

&lt;p&gt;Rename Table lets you route data arriving in one table to another table. This is handy when you’re consuming data from a source you don’t control, and the table names don’t match your naming conventions.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5BiXqB4Q9BDHEFsOv8QtaW/c56cd9fe61d7ca91c1dcc37385bf6656/Screenshot_2026-04-14_at_6.24.41â__PM.png" alt="rename table" /&gt;&lt;/p&gt;

&lt;h4 id="rename-columns"&gt;Rename Columns&lt;/h4&gt;

&lt;p&gt;Rename Columns works similarly, but at the column level. You pick a source table and select which columns to rename. If you’re integrating data from different systems that use different naming conventions (for example, &lt;code class="language-markup"&gt;temp_f&lt;/code&gt; vs &lt;code class="language-markup"&gt;temperature_fahrenheit&lt;/code&gt;), this standardizes everything without touching the source.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3hF8Wa6vbro73j1A2O3f6W/cae32a0cfe6a43949f5b64b09a7338c2/Screenshot_2026-04-14_at_6.27.58â__PM.png" alt="rename columns" /&gt;&lt;/p&gt;

&lt;h4 id="transform-values"&gt;Transform Values&lt;/h4&gt;

&lt;p&gt;Transform Values lets you apply calculations or conversions to field values as they come in. You can do math operations, string transformations, unit conversions, or simple find-and-replace. If your sensors report temperature in Celsius but your dashboards expect Fahrenheit, this handles the conversion at ingestion time so your queries stay clean.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2rTFmTLs7vQ2Z5LPUDHzTx/e10529f9e3eb69f7a8e251956a9acff4/Screenshot_2026-04-14_at_6.29.13â__PM.png" alt="transform values" /&gt;&lt;/p&gt;

&lt;h4 id="filter-data"&gt;Filter Data&lt;/h4&gt;

&lt;p&gt;Filter Data lets you keep only the rows or columns that match specific conditions. You can filter by rows (e.g., only keep data where &lt;code class="language-markup"&gt;crop_type = 'carrots'&lt;/code&gt;) or by columns (drop fields you don’t need). This is useful when you’re receiving more data than you actually want to store. For example, a third-party feed might send 50 fields when you only care about 5.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4mTxJgxUUyEZH7RSbRXRet/c67d429d6e87d4bfdb0b90c29e9cbbbc/Screenshot_2026-04-14_at_6.30.22â__PM.png" alt="create transform" /&gt;&lt;/p&gt;

&lt;p&gt;You can test each transformation before deployment, and once deployed, monitor its status (running, stopped, errors) from the Transform Data dashboard.&lt;/p&gt;

&lt;h4 id="downsample-data"&gt;Downsample Data&lt;/h4&gt;

&lt;p&gt;Downsampling is a classic time series operation: take high-frequency data and roll it up into lower-frequency summaries to save storage and speed up queries over long time ranges. The new &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/library/official/downsampler/"&gt;&lt;strong&gt;Downsample&lt;/strong&gt;&lt;/a&gt; page, also under the Transform Data section, makes this easy to set up.
You create a downsample trigger by specifying a source table, a target table, a schedule (how often the aggregation runs), a time window (how far back to look), an aggregation interval (the bucket size), and an aggregation function (avg, sum, min, max, etc.). You can also choose to include or exclude specific fields.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7yPPBCTavele7EaFCLvIsa/156aa1c09f6bbb88b37ff14f425ce995/Screenshot_2026-04-14_at_6.31.40â__PM.png" alt="downsample" /&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/downsampler/"&gt;Downsample Processing Engine plugin&lt;/a&gt; powers this feature.&lt;/p&gt;

&lt;h2 id="get-started"&gt;Get started&lt;/h2&gt;

&lt;p&gt;All of these features are available now in InfluxDB 3 Explorer 1.7. For more on these Processing Engine capabilities, see &lt;a href="https://www.influxdata.com/blog/influxdb-3-processing-engine-updates/?utm_source=website&amp;amp;utm_medium=influxdb_explorer_1_7&amp;amp;utm_content=blog"&gt;InfluxDB 3 Processing Engine Updates&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you’re running &lt;a href="https://docs.influxdata.com/influxdb3/core/install/?utm_source=website&amp;amp;utm_medium=influxdb_explorer_1_7&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; or &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/install/?utm_source=website&amp;amp;utm_medium=influxdb_explorer_1_7&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt;, update to the latest version to try them out. To learn more, check out the &lt;a href="https://docs.influxdata.com/influxdb3/explorer/?utm_source=website&amp;amp;utm_medium=influxdb_explorer_1_7&amp;amp;utm_content=blog"&gt;InfluxDB 3 Explorer documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To update InfluxDB 3 Explorer, pull the latest Docker image:
&lt;code class="language-markup"&gt;docker pull influxdata/influxdb3-ui&lt;/code&gt;&lt;/p&gt;
</description>
      <pubDate>Wed, 15 Apr 2026 05:30:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-explorer-1-7/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-explorer-1-7/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Daniel Campbell (InfluxData)</author>
    </item>
    <item>
      <title>Less Friction, More Control: Here's What Shipped in Q1</title>
      <description>&lt;p&gt;Our Q1 momentum has been focused on a simple goal: making InfluxDB easier to operate, easier to scale, and faster to put to work.&lt;/p&gt;

&lt;p&gt;Across Telegraf, InfluxDB 3, and our managed offerings, these updates reduce friction in how teams collect, process, and scale time series workloads.&lt;/p&gt;

&lt;h2 id="telegraf-controller-enters-beta"&gt;Telegraf Controller enters beta&lt;/h2&gt;

&lt;p&gt;Telegraf is already a powerful way to collect metrics, logs, and events across environments. At scale, the challenge shifts from collection to control. Telegraf Enterprise is designed to solve that problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;At the center is Telegraf Controller, a control plane that gives teams centralized configuration management and fleet-wide health visibility&lt;/strong&gt;. The beta includes major capabilities such as API authentication, API token management, user account management, multi-user support, role-based access control, global settings management, and expanded plugin support in the visual config builder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feedback from early users is shaping the road to general availability, with enterprise licensing, enforcement, audit logging, and federated identity management next on the roadmap.&lt;/strong&gt; &lt;a href="https://www.influxdata.com/products/telegraf-enterprise/?utm_source=website&amp;amp;utm_medium=q1_product_recap_2026&amp;amp;utm_content=blog"&gt;Sign up to join the beta&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2C5Q22cX3rXamZNOqVDPIF/a46fed22b3ff4f33e7552dddcddc8796/Screenshot_2026-04-07_at_5.41.54â__PM.png" alt="Telegraf Agents SS" /&gt;&lt;/p&gt;

&lt;h2 id="influxdb-39-adds-more-operational-control"&gt;InfluxDB 3.9 adds more operational control&lt;/h2&gt;

&lt;p&gt;Last week’s &lt;a href="https://www.influxdata.com/blog/influxdb-3-9/"&gt;release&lt;/a&gt; of &lt;strong&gt;InfluxDB 3.9 is focused on making the platform easier to run at scale, 
with improvements aimed at predictability, visibility, and day-to-day management&lt;/strong&gt;. The release expands CLI and automation support for headless environments, improves resource and lifecycle management, and adds clearer visibility into access control and product identity across Core and Enterprise deployments. These are the changes that matter in production: fewer rough edges, stronger operational clarity, and better control as workloads grow.&lt;/p&gt;

&lt;p&gt;InfluxDB 3.9 Enterprise also includes a new beta performance preview for non-production environments. &lt;strong&gt;This optional preview includes optimized single-series queries, reduced CPU and memory spikes under load, support for wider and sparser schemas, and early automatic distinct value caches to reduce metadata query latency&lt;/strong&gt;. These features are not yet recommended for production, but they give customers an early look at capabilities planned for future releases and a chance to help shape what comes next.&lt;/p&gt;

&lt;h2 id="processing-engine-updates-make-influxdb-3-easier-to-operationalize"&gt;Processing Engine updates make InfluxDB 3 easier to operationalize&lt;/h2&gt;

&lt;p&gt;The Processing Engine remains one of the most powerful parts of InfluxDB 3 because it allows teams to run logic directly at the database. Users can transform data on ingest, run scheduled jobs, or serve HTTP requests without adding external services or layering on more pipeline complexity.&lt;/p&gt;

&lt;p&gt;This quarter, we continued to expand both the engine itself and the plugin ecosystem around it. 
The latest plugins make it easier to get data into InfluxDB 3 from more sources:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;The Import Plugin&lt;/strong&gt; provides a simpler path for bringing data from InfluxDB v1, v2, or v3 into InfluxDB 3 Core and Enterprise, with support for dry runs, progress tracking, pause and resume, conflict handling, and flexible filtering.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;New MQTT, Kafka, and AMQP subscription plugins&lt;/strong&gt; help users ingest streaming data directly from external message brokers.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The new OPC UA Plugin&lt;/strong&gt; gives industrial teams a more direct path to data from PLCs, SCADA systems, and other OPC UA-enabled equipment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also made important improvements to the Processing Engine itself:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;New synchronous write controls give plugin authors more flexibility over durability and throughput.&lt;/li&gt;
  &lt;li&gt;Batch write support improves efficiency for high-volume workloads.&lt;/li&gt;
  &lt;li&gt;Asynchronous request handling keeps status checks and control operations responsive during long-running jobs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these updates make the Processing Engine a more practical way to build and operate real-time data pipelines directly inside InfluxDB 3. &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/"&gt;Check out our docs to learn more&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="better-visibility-for-cloud-dedicated-customers"&gt;Better visibility for Cloud Dedicated customers&lt;/h2&gt;

&lt;p&gt;As teams run production workloads on Cloud Dedicated, understanding how the system is being used becomes just as important as performance itself.&lt;/p&gt;

&lt;p&gt;This quarter, we introduced:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Query History (GA)&lt;/strong&gt; for troubleshooting, performance analysis, and deeper insight into query activity.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;S3 API dashboards (Tier 1 and Tier 2)&lt;/strong&gt;, including monthly usage visibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These updates give teams better visibility into system behavior and usage patterns, and a faster path to understanding activity across the environment. &lt;a href="https://docs.influxdata.com/influxdb3/cloud-dedicated/query-data/"&gt;Detailed docs here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6NxMXhxR3dvcUzNXa83cwN/5fa53025e47b947a57b55675b37d11c1/Screenshot_2026-04-07_at_5.45.32â__PM.png" alt="Q1 update SS" /&gt;&lt;/p&gt;

&lt;h2 id="influxdb-enterprise-1123-delivers-efficiency-gains-for-v1-environments"&gt;InfluxDB Enterprise 1.12.3 delivers efficiency gains for v1 environments&lt;/h2&gt;

&lt;p&gt;For teams running large-scale v1 Enterprise environments that need more performance, InfluxDB Enterprise 1.12.3 is now available with substantial improvements in efficiency and reliability:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;100x faster retention enforcement for high-cardinality datasets&lt;/li&gt;
  &lt;li&gt;30% lower CPU usage during compaction&lt;/li&gt;
  &lt;li&gt;5x faster backups with configurable compression&lt;/li&gt;
  &lt;li&gt;3x less disk I/O during cold shard compactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These improvements make Enterprise v1 clusters more efficient, more predictable under load, and more cost-effective to operate. &lt;a href="https://docs.influxdata.com/enterprise_influxdb/v1/about_the_project/release-notes/"&gt;Read the release notes&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="amazon-timestream-for-influxdb-adds-a-new-scale-tier-and-simple-upgrade-path"&gt;Amazon Timestream for InfluxDB adds a new scale tier and simple upgrade path&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 on Amazon Timestream for InfluxDB now supports clusters of up to 15 nodes, giving customers a new scale tier for more demanding real-time workloads.&lt;/p&gt;

&lt;p&gt;This expanded tier improves query concurrency, increases ingestion throughput, and provides stronger workload isolation across ingestion, queries, and compaction. For teams running high-velocity, high-resolution data in production, that means more headroom to scale without compromising real-time performance.&lt;/p&gt;

&lt;p&gt;Customers can also seamlessly migrate from InfluxDB 3 Core to InfluxDB 3 Enterprise, making it easier to move into this higher-performance tier without a manual architectural overhaul or data loss. The new 15-node option is available for InfluxDB 3 Enterprise in all AWS regions where Amazon Timestream for InfluxDB is offered. &lt;a href="https://www.influxdata.com/blog/scaling-amazon-timestream-influxdb/"&gt;Read more here&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="looking-ahead"&gt;Looking ahead&lt;/h2&gt;

&lt;p&gt;Taken together, these updates are about helping teams do more with less friction: move data faster, operate with more confidence, and scale time series workloads without losing control.
As operational data becomes more central to modern systems, we are continuing to invest in the infrastructure that turns that data into action across edge, cloud, and distributed environments.&lt;/p&gt;
</description>
      <pubDate>Wed, 08 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/q1-product-recap-2026/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/q1-product-recap-2026/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Ryan Nelson (InfluxData)</author>
    </item>
    <item>
      <title>New Plugins, Faster Writes, and Easier Configuration: What’s New with the InfluxDB 3 Processing Engine</title>
      <description>&lt;p&gt;The Processing Engine is one of the most powerful features in InfluxDB 3. It lets you run Python code at the database—transforming data on ingest, running scheduled jobs, or serving HTTP requests—without spinning up external services or building middleware. You define the logic, attach it to a trigger, and the database handles the rest.&lt;/p&gt;

&lt;p&gt;Since launching the Processing Engine, we’ve been building out both the engine itself and the ecosystem of plugins that run on it. Today, we want to walk you through some exciting recent additions: new plugins for data ingestion, import, and validation; some general improvements to the engine; and a better configuration experience using InfluxDB 3 Explorer.&lt;/p&gt;

&lt;h2 id="a-quick-refresher-processing-engine-plugins"&gt;A quick refresher: Processing Engine plugins&lt;/h2&gt;

&lt;p&gt;If you’re already familiar with the Processing Engine, feel free to skip ahead. For those newer to the concept, here’s the short version.&lt;/p&gt;

&lt;p&gt;A plugin is a Python script that runs inside InfluxDB 3 in response to a trigger. There are three trigger types: data writes (react to incoming data as it’s written), scheduled events (run on a timer or cron expression), and HTTP requests (expose a custom API endpoint). Plugins have direct access to the database: they can query and write without having to move data out to a different machine and back in again. Plugins can also talk to other systems, letting you incorporate data from outside sources.&lt;/p&gt;
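&lt;p&gt;For a sense of what a plugin looks like, here is a minimal data-write skeleton. The &lt;code class="language-markup"&gt;process_writes&lt;/code&gt; entry point and the batch fields follow the examples in the plugin documentation, but treat the exact shapes here as illustrative rather than authoritative:&lt;/p&gt;

```python
# Minimal sketch of a data-write plugin. The engine calls process_writes
# each time a batch of writes arrives for a triggered table; the batch
# fields and logging call mirror the plugin-library examples.

def process_writes(influxdb3_local, table_batches, args=None):
    for batch in table_batches:
        table = batch["table_name"]
        rows = batch["rows"]
        # Your logic goes here: transform rows, raise alerts, write
        # results to another table, call out to an external system, etc.
        influxdb3_local.info(f"{len(rows)} rows written to {table}")
```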

&lt;p&gt;You can write your own plugins from scratch to solve problems specific to your environment. That’s the whole point of embedding Python in the database: your logic, your rules, running right next to your data.&lt;/p&gt;

&lt;p&gt;But we also know that not everyone wants to start from a blank page. That’s why we maintain an &lt;a href="https://github.com/influxdata/influxdb3_plugins"&gt;official plugin library&lt;/a&gt; with production-ready plugins for common time series tasks, such as downsampling, anomaly detection, forecasting, state change monitoring, and sending notifications to Slack, email, or SMS.&lt;/p&gt;

&lt;p&gt;These official plugins are designed to work in two ways. You can install them and use them as-is, configuring them through trigger arguments or TOML files to fit your setup. Or you can treat them as templates: fork one, customize the logic, and build something tailored to your exact workflow. Either way, they’re meant to get you moving faster.&lt;/p&gt;

&lt;p&gt;One more thing worth mentioning: if you’re thinking about building a custom plugin but aren’t sure where to start, AI tools like Claude can be very effective. Point Claude to the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/plugins/"&gt;Processing Engine documentation&lt;/a&gt; and the &lt;a href="https://github.com/influxdata/influxdb3_plugins"&gt;plugin library repo&lt;/a&gt; for examples, describe what you want your plugin to do, and let it generate a first draft. We’ve seen simple plugins created in a single shot, from description to working code, and even more complex plugins come together quickly when the AI has good examples to work from. It’s a great way to get past the blank-page problem and into something you can iterate on.&lt;/p&gt;

&lt;h2 id="new-plugins-data-ingestion-import-and-validation"&gt;New plugins: data ingestion, import, and validation&lt;/h2&gt;

&lt;p&gt;We’ve recently added several new plugins to the library that address some of the most common requests we’ve been hearing from the community. These are available now in beta—they’re fully functional, but we want to see them tested across more environments before we call them production-ready. Give them a try and let us know how they work for you.&lt;/p&gt;

&lt;h4 id="influxdb-import-plugin"&gt;InfluxDB Import Plugin&lt;/h4&gt;

&lt;p&gt;If you’re running an older version of InfluxDB and want to bring your data into InfluxDB 3, the new Import Plugin makes that significantly easier. It supports importing from InfluxDB v1, v2, or v3 instances over HTTP, with features you’d expect from a serious import tool: automatic data sampling for optimal batch sizing, pause/resume for long-running imports, progress tracking, tag/field conflict detection and resolution, configurable time ranges and table filtering, and a dry run mode so you can preview what an import will look like before committing to it.&lt;/p&gt;

&lt;p&gt;The plugin runs as an HTTP trigger, so you control the entire import lifecycle (start, pause, resume, cancel, check status) through simple HTTP requests. That means you can kick off a large import, pause it during peak hours, and pick it up later from exactly where it left off. For small or medium-sized InfluxDB databases, this can even serve as a migration tool for moving to InfluxDB 3.&lt;/p&gt;

&lt;h4 id="data-subscription-plugins-mqtt-kafka-and-amqp"&gt;Data subscription plugins: MQTT, Kafka, and AMQP&lt;/h4&gt;

&lt;p&gt;These three plugins let new users start getting data into InfluxDB 3 quickly, without writing any code. They subscribe to external message brokers and automatically ingest incoming messages into InfluxDB 3.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;MQTT Subscriber Plugin&lt;/strong&gt; connects to an MQTT broker, subscribes to topics you specify, and transforms incoming messages into time series data. It supports JSON, Line Protocol, and custom text formats with regex parsing, and uses persistent sessions to ensure reliable message delivery between executions.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Kafka Subscriber Plugin&lt;/strong&gt; does the same for Kafka topics. It uses consumer groups for reliable delivery, supports configurable offset commit policies (commit on success for data integrity, or commit always for maximum throughput), and handles JSON, Line Protocol, and text formats.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;AMQP Subscriber Plugin&lt;/strong&gt; rounds out the trio with support for RabbitMQ and other AMQP-compatible brokers. Like the others, it supports multiple message formats, flexible acknowledgment policies, and comprehensive error tracking.&lt;/p&gt;
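&lt;p&gt;Under the hood, each subscriber performs the same basic step: parse a broker message and map it into tags and fields. A hedged sketch of that mapping for a JSON payload (the topic, device, and field names are hypothetical, and the real plugins drive this mapping from configuration rather than code):&lt;/p&gt;

```python
# Illustrative sketch: turn one JSON broker message into a row destined
# for InfluxDB 3, deriving the table name from the topic.

import json

def mqtt_json_to_row(topic: str, payload: bytes) -> dict:
    msg = json.loads(payload)
    return {
        "table": topic.replace("/", "_"),
        "tags": {"device": msg["device"]},
        "fields": {"temperature": float(msg["temperature"])},
    }

row = mqtt_json_to_row("factory/line1", b'{"device": "plc-7", "temperature": 21.5}')
# {'table': 'factory_line1', 'tags': {'device': 'plc-7'}, 'fields': {'temperature': 21.5}}
```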

&lt;h4 id="opc-ua-plugin"&gt;OPC UA Plugin&lt;/h4&gt;

&lt;p&gt;For industrial environments, the new OPC UA Plugin connects directly to PLCs, SCADA systems, and other OPC UA-enabled equipment. It polls node values on a schedule and writes them into InfluxDB 3 with automatic data type detection. You can list specific nodes for precise control, or use browse mode to auto-discover devices and variables across large deployments. The plugin maintains a persistent connection between polling intervals and supports quality filtering, namespace URI resolution, and TLS security.&lt;/p&gt;

&lt;p&gt;Now, you might be thinking: “I’m already using Telegraf to interface with my streaming data services or OPC UA, why do I need these?” If Telegraf is working well for you, that’s great; there’s no need to change what isn’t broken. But if you’re newer to InfluxDB and aren’t yet a Telegraf user, these plugins give you another way to quickly get data flowing into InfluxDB 3 without adding another component to your stack.&lt;/p&gt;

&lt;p&gt;These plugins share a consistent configuration model: you can set them up with CLI arguments for simple cases or TOML configuration files for more complex mapping scenarios. They all include built-in error tracking (logging parse failures to dedicated exception tables) and write statistics so you can monitor ingestion health over time.&lt;/p&gt;

&lt;h4 id="schema-validator-plugin"&gt;Schema Validator Plugin&lt;/h4&gt;

&lt;p&gt;One of the benefits of InfluxDB is that you don’t have to pre-define a schema; data gets written as it is received. But in some use cases, our customers want to constrain incoming data to conform to a specific schema.&lt;/p&gt;

&lt;p&gt;The Schema Validator Plugin addresses that challenge, ensuring only clean, well-structured data makes it into your production tables. You define a JSON schema that specifies allowed measurements, required and optional tags and fields, data types, and allowed values. The plugin sits on a WAL flush trigger and validates every incoming row against your schema. Rows that pass get written to your target database or table; rows that fail get rejected (and optionally logged so you can see what’s being filtered out).&lt;/p&gt;

&lt;p&gt;A typical pattern is to write raw data into a single database or table, let the validator check it, and have clean data land in a separate database or table. It’s a straightforward way to build a reliable data pipeline without external tooling.&lt;/p&gt;
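&lt;p&gt;The validation idea can be sketched in a few lines of plain Python. This is not the plugin’s actual JSON schema format, just an illustration of the pass/fail split it performs:&lt;/p&gt;

```python
# Sketch of schema validation: rows that satisfy the schema are accepted,
# everything else is routed to a reject path. Schema keys are illustrative.

SCHEMA = {
    "measurement": "weather",
    "required_tags": {"station"},
    "fields": {"temperature": float, "humidity": float},
}

def validate(row: dict) -> bool:
    if row.get("measurement") != SCHEMA["measurement"]:
        return False
    if not SCHEMA["required_tags"].issubset(row.get("tags", {})):
        return False
    return all(
        isinstance(row.get("fields", {}).get(name), ftype)
        for name, ftype in SCHEMA["fields"].items()
    )

rows = [
    {"measurement": "weather", "tags": {"station": "s1"},
     "fields": {"temperature": 21.5, "humidity": 0.4}},
    {"measurement": "weather", "tags": {},
     "fields": {"temperature": "warm", "humidity": 0.4}},
]
accepted = [r for r in rows if validate(r)]
rejected = [r for r in rows if not validate(r)]
```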

&lt;h4 id="processing-engine-general-improvements"&gt;Processing Engine general improvements&lt;/h4&gt;

&lt;p&gt;Alongside the new plugins, we’ve made several improvements to the Processing Engine itself that give plugin authors more control over write behavior, throughput, and concurrency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Synchronous writes with durability control&lt;/strong&gt;. New synchronous write functions let you choose between two modes: wait for the write to persist to the WAL before returning (for cases where you need to query the data immediately after writing), or return immediately for maximum throughput. This means you can treat bulk telemetry data as a fast path while ensuring that coordination states, such as job checkpoints or configuration flags, are immediately durable and queryable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Batch writes&lt;/strong&gt;. If your plugin writes thousands of points, the overhead isn’t in the data itself; it’s in the repeated write calls. The new batch write capability lets you group many records into a single write operation, which can dramatically improve throughput and make memory usage more predictable.&lt;/p&gt;
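&lt;p&gt;The batching idea itself is simple to illustrate (this is a generic sketch, not the engine’s actual write API):&lt;/p&gt;

```python
# Why batching helps: group many records into fixed-size chunks so each
# write call carries hundreds of points instead of one.

def batched(records, batch_size):
    """Yield successive chunks of at most batch_size records."""
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

points = [{"value": v} for v in range(2500)]
calls = list(batched(points, 1000))
# 3 write calls (1000 + 1000 + 500) instead of 2500 single-point calls
```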

&lt;p&gt;&lt;strong&gt;Asynchronous request handling&lt;/strong&gt;. Request-based triggers now support concurrent execution. Previously, request handlers processed one request at a time, which meant a slow request would block everything behind it. With asynchronous mode enabled, the engine can handle multiple requests concurrently, so status checks, control commands, and other lightweight requests stay responsive even while a heavy operation is running.&lt;/p&gt;

&lt;p&gt;These improvements work together in practice. The Import Plugin, for example, uses batch writes with fast-path durability for bulk data transfer, synchronous durable writes for checkpoints and state, and async request handling to keep its pause/resume/status endpoints responsive during long-running imports.&lt;/p&gt;

&lt;h2 id="easier-plugin-configuration-in-explorer"&gt;Easier plugin configuration in Explorer&lt;/h2&gt;

&lt;p&gt;We’ve also been improving InfluxDB 3 Explorer to make configuring plugins simpler, especially for the plugins in the library.&lt;/p&gt;

&lt;p&gt;Until now, configuring a plugin meant passing all the right parameters as startup arguments to the Python script or specifying them in a TOML file. That works, but it requires you to know exactly which parameters a plugin expects—which means reading the documentation first.&lt;/p&gt;

&lt;p&gt;We’re adding dedicated UI configuration forms for some of the plugins in Explorer. Instead of assembling a string of key-value pairs, you’ll see a form with all the available options laid out, along with descriptions and example values. Required fields are clearly marked, and the form handles the formatting for you. It’s the same configuration under the hood, just a much more approachable way to get there.&lt;/p&gt;

&lt;p&gt;This is especially helpful for plugins with more involved configuration, like the data subscription plugins, where you’re specifying broker connections, authentication, message format mappings, and field type definitions. The form-based approach removes the guesswork and lets you get a plugin running without bouncing back and forth between the docs and your terminal. So far, we have built dedicated configuration forms for the Import, Basic Transformation, and Downsampling plugins.&lt;/p&gt;

&lt;p&gt;This is what it looks like for the Import plugin:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3AOZLptneTTvDTFPs5CNvK/e0e621644c7c402fde86b32595b0715e/Screenshot_2026-04-07_at_9.15.20â__AM.png" alt="Import plugin SS" /&gt;&lt;/p&gt;

&lt;p&gt;This is what the Basic Transformation and Downsample configuration looks like:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3OMYWwTYij5hcV5B1C1Api/f79bd5d69024c0d14ff90e39dd3b0b26/Screenshot_2026-04-07_at_9.16.23â__AM.png" alt="Basic Transformation SS" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2vtmZDWXRcuTyY4odVQWZ6/d33e5aad87c3147e1fa12bf1b41f3150/Screenshot_2026-04-07_at_9.17.13â__AM.png" alt="Downsample SS" /&gt;&lt;/p&gt;

&lt;p&gt;Look for these to become available in Explorer in the next couple of months.&lt;/p&gt;

&lt;h2 id="whats-next"&gt;What’s next&lt;/h2&gt;

&lt;p&gt;We are continuing to improve the Processing Engine and the plugin library, with additional anomaly detection and forecasting plugins nearly ready for you to try. We are also building UI configuration for the data subscription plugins mentioned above to make them even easier to configure.&lt;/p&gt;

&lt;h2 id="try-them-out"&gt;Try them out&lt;/h2&gt;

&lt;p&gt;All new plugins are now available in beta in the &lt;a href="https://www.influxdata.com/products/processing-engine-plugins/?utm_source=website&amp;amp;utm_medium=influxdb_3_processing-engine-updates&amp;amp;utm_content=blog"&gt;InfluxDB 3 Plugin Library&lt;/a&gt;. They require InfluxDB 3 v3.8.2 or later. Install them from the CLI using the &lt;code class="language-markup"&gt;gh:&lt;/code&gt; prefix, or browse and install them directly from InfluxDB 3 Explorer’s Plugin Library.&lt;/p&gt;

&lt;p&gt;We’re releasing these as a beta because we want your feedback. We’ve tested them thoroughly internally, but real-world environments are always more diverse and more demanding than any test suite. If you run into issues, have ideas for improvements, or build something cool on top of these plugins, we’d love to hear from you: drop into the &lt;a href="https://discord.com/invite/influxdata"&gt;InfluxData Discord&lt;/a&gt;, post on the &lt;a href="https://community.influxdata.com/"&gt;Community Forums&lt;/a&gt;, or open an issue on &lt;a href="https://github.com/influxdata/influxdb3_plugins/issues"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Tue, 07 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-3-processing-engine-updates/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-3-processing-engine-updates/</guid>
      <category>Developer</category>
      <category>Product</category>
      <author>Gary Fowler (InfluxData)</author>
    </item>
    <item>
      <title>What’s New in InfluxDB 3.9: More Operational Control and a New Performance Preview</title>
      <description>&lt;p&gt;We’ve spent the last few months listening to how teams are running InfluxDB 3 in the wild. The feedback was clear: as you scale, you need less “guesswork” and more control. Today’s release of InfluxDB 3.9 is our answer to that.&lt;/p&gt;

&lt;p&gt;As more teams move InfluxDB 3 into production, our focus has shifted toward the operational experience: how you manage the database at scale, how you ensure it remains secure, and how you provide a seamless experience for users. This release is packed with a host of quality-of-life improvements and a beta of the key features we have planned for upcoming releases.&lt;/p&gt;

&lt;p&gt;Whether you’re using the open source &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=influxdb_3_9&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; for recent data and local workloads or scaling with &lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/?utm_source=website&amp;amp;utm_medium=influxdb_3_9&amp;amp;utm_content=blog"&gt;InfluxDB 3 Enterprise&lt;/a&gt; for the full clustering and security suite, these 3.9 updates are designed to make your stack more predictable.&lt;/p&gt;

&lt;h2 id="operational-maturity-and-system-transparency"&gt;Operational maturity and system transparency&lt;/h2&gt;

&lt;p&gt;In 3.9, we’ve focused on making the database more predictable and transparent for operators. We have organized these refinements into three key areas:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Advanced CLI &amp;amp; Automation&lt;/strong&gt;: We’ve expanded the CLI to better support complex, headless environments. This includes new flags for non-interactive automation and data validation, alongside support for unique host overrides to target specific node types in a cluster. We’ve also improved how Parquet query outputs are piped, making it easier to integrate InfluxDB into automated data pipelines.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;System Reliability &amp;amp; Resource Management&lt;/strong&gt;: We’ve refined how the database handles resources and large-scale schemas. To better support complex data, we’ve increased the default string field limit to 1MB. We’ve also hardened the database lifecycle; administrative controls are now more rigorous, and we’ve ensured that background resources, such as triggers, are cleanly decommissioned whenever a database is removed.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Visibility &amp;amp; Under-the-Hood Infrastructure&lt;/strong&gt;: We’ve upgraded our core infrastructure to improve both security and operational clarity. This includes upgrading DataFusion and the bundled Python for more efficient query execution and plugin security. Additionally, the system now provides better visibility into access control and product identity, updating metrics, headers, and metadata access to clearly distinguish between Core and Enterprise builds across your stack.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Collectively, these refinements remove the subtle points of friction that can accumulate as a system scales in production. By hardening resource management and streamlining automation, we’re ensuring that InfluxDB 3 remains a predictable, “set-it-and-forget-it” core for your infrastructure.&lt;/p&gt;

&lt;h2 id="now-in-beta-a-new-performance-preview"&gt;Now in beta: A new performance preview&lt;/h2&gt;

&lt;p&gt;Behind the scenes, we’ve been working on performance updates to InfluxDB 3. These improvements support large-scale time series workloads without sacrificing predictability or operational simplicity. This work lays the foundation for what’s coming in 3.10 and 3.11, specifically focusing on smoothing behavior under load and expanding the range of schemas InfluxDB 3 can handle.&lt;/p&gt;

&lt;p&gt;Because performance in time series is highly dependent on specific workloads and cardinality, we are introducing these updates as a beta in InfluxDB 3 Enterprise. The beta is intended for testing in staging or development environments only. It allows you to explore and provide feedback on:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Optimized single-series queries&lt;/strong&gt;: Targeting reduced latency when fetching single-series data over long time windows.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Resource smoothing&lt;/strong&gt;: Testing reduced CPU and memory spikes during heavy compaction or ingestion bursts.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Wide-and-sparse table support&lt;/strong&gt;: For handling schemas with extreme column counts, ultra-sparse data, or any combination of the two.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Automatic distinct value caches&lt;/strong&gt;: Early-stage automatic creation of caches designed to reduce friction and eliminate metadata query latency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These updates are available as an optional, flag-gated preview in InfluxDB 3.9 Enterprise. &lt;strong&gt;They are not recommended for production workloads&lt;/strong&gt;. We encourage Enterprise users to test these capabilities against their specific use cases to help us refine the features for GA. InfluxDB 3 Core will also support many of these new features in the coming releases.&lt;/p&gt;

&lt;p&gt;For instructions on how to enable these preview flags and to view the full technical requirements, visit our &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/?utm_source=website&amp;amp;utm_medium=influxdb_3_9&amp;amp;utm_content=blog"&gt;official Enterprise documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h5 id="get-started-and-share-your-feedback"&gt;Get started and share your feedback:&lt;/h5&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Download InfluxDB 3.9&lt;/strong&gt;: Available now via our &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=influxdb_3_9&amp;amp;utm_content=blog"&gt;downloads page&lt;/a&gt; or latest Docker images.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Join the beta&lt;/strong&gt;: If you are an InfluxDB 3 Enterprise Trial user, reach out to me in our &lt;a href="https://discord.com/invite/9zaNCW2PRT"&gt;Discord&lt;/a&gt; or &lt;a href="https://influxcommunity.slack.com/join/shared_invite/zt-3hevuqtxs-3d1sSfGbbQgMw2Fj66rZsA#/shared-invite/email"&gt;Community Slack&lt;/a&gt; to learn how to enable these beta features.&lt;/li&gt;
&lt;/ul&gt;
</description>
      <pubDate>Thu, 02 Apr 2026 12:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-3-9/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-3-9/</guid>
      <category>Product</category>
      <category>Developer</category>
      <category>news</category>
      <author>Peter Barnett (InfluxData)</author>
    </item>
    <item>
      <title>What is MRO? Maintenance, Repair, and Operations Explained</title>
      <description>&lt;p&gt;MRO stands for &lt;strong&gt;maintenance, repair, and operations&lt;/strong&gt;. It refers to the activities, supplies, and services that keep equipment, facilities, and infrastructure running safely and efficiently. Every industry that relies on physical assets depends on MRO, whether that means replacing a worn bearing on a production line, restocking safety gloves in a warehouse, or servicing an HVAC system in a hospital.&lt;/p&gt;

&lt;p&gt;Despite being one of the largest categories of indirect spending in most organizations, MRO is chronically under-managed. This article explains what MRO covers, why it matters, how maintenance strategies differ, and how it plays out across industries.&lt;/p&gt;

&lt;h2 id="what-is-mro"&gt;What is MRO?&lt;/h2&gt;

&lt;p&gt;MRO is a broad category that encompasses the indirect materials, maintenance activities, and operational support required to keep a business functioning. MRO includes everything from spare parts and lubricants to safety equipment, cleaning supplies, and the labor required to inspect, fix, and service physical assets.&lt;/p&gt;

&lt;p&gt;The scope of MRO varies by organization, but it always sits outside of direct production. A replacement motor for a conveyor belt is an MRO item. The raw steel that travels on that conveyor is not. This distinction matters for accounting, procurement strategy, and inventory management.&lt;/p&gt;

&lt;h4 id="common-mro-supplies-and-activities"&gt;Common MRO Supplies and Activities&lt;/h4&gt;

&lt;p&gt;MRO is easier to understand through concrete examples:&lt;/p&gt;

&lt;div&gt;
  &lt;table&gt;
    &lt;thead&gt;
      &lt;tr&gt;
        &lt;th&gt;Category&lt;/th&gt;
        &lt;th&gt;Description&lt;/th&gt;
        &lt;th&gt;Examples&lt;/th&gt;
      &lt;/tr&gt;
    &lt;/thead&gt;
    &lt;tbody&gt;
      &lt;tr&gt;
        &lt;td&gt;MRO supplies&lt;/td&gt;
        &lt;td&gt;Parts, materials, and consumables used to maintain equipment and facilities.&lt;/td&gt;
        &lt;td&gt;Spare parts (bearings, seals, belts, filters, motors), lubricants and greases, fasteners, hand and power tools, electrical components (fuses, contactors, wiring), safety equipment (gloves, goggles, hard hats, respirators), cleaning and janitorial products, adhesives and tapes, and facility consumables (light bulbs, HVAC filters).&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;MRO activities&lt;/td&gt;
        &lt;td&gt;Hands-on maintenance and repair work performed on assets.&lt;/td&gt;
        &lt;td&gt;Routine inspections, lubrication, electrical testing, equipment alignment, welding repairs, painting and corrosion protection, calibration, and full equipment rebuilds.&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;MRO services&lt;/td&gt;
        &lt;td&gt;Outsourced or contracted maintenance support.&lt;/td&gt;
        &lt;td&gt;Third-party maintenance contracts, on-call repair technicians, specialized inspections (non-destructive testing), and outsourced maintenance for complex assets.&lt;/td&gt;
      &lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;h2 id="why-mro-matters"&gt;Why MRO matters&lt;/h2&gt;

&lt;p&gt;MRO spending often accounts for a significant share of an organization’s operating costs, yet it receives a fraction of the strategic attention that direct materials get. The numbers make a compelling case for changing that.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;The market is massive&lt;/strong&gt;. The global MRO market was valued at roughly $715 billion in 2025 and is projected to grow steadily through the next decade, driven by aging infrastructure, the rise of predictive maintenance, and increasing demand for operational efficiency.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Downtime is extraordinarily expensive&lt;/strong&gt;. &lt;a href="https://www.ismworld.org/supply-management-news-and-reports/news-publications/inside-supply-management-magazine/blog/2024/2024-08/the-monthly-metric-unscheduled-downtime/"&gt;A 2024 Siemens report&lt;/a&gt; found that unplanned downtime costs the world’s 500 largest companies a combined $1.4 trillion per year, roughly 11% of their annual revenues. At a facility level, costs vary by industry, but the averages are sobering: approximately $260,000 per hour in general manufacturing, and over $2 million per hour in automotive production. Even smaller manufacturers typically lose over $100,000 per hour of unexpected downtime.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Equipment failure is the leading cause of downtime&lt;/strong&gt;. The average manufacturer faces an estimated 800 hours of equipment downtime annually. Equipment failure accounts for roughly 42% of unplanned downtime incidents, and base components like bearings, seals, and motors are the most common culprits. These are precisely the kinds of failures that a well-run MRO program is designed to prevent.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Proactive maintenance pays for itself&lt;/strong&gt;. Research from McKinsey and others consistently shows that organizations implementing predictive maintenance programs see &lt;a href="https://www.iiot-world.com/predictive-analytics/predictive-maintenance/predictive-maintenance-cost-savings/"&gt;18–25% reductions&lt;/a&gt; in overall maintenance costs and 30–50% reductions in unplanned downtime. The U.S. Department of Energy has reported a potential &lt;strong&gt;ROI of up to 10x on predictive maintenance investments&lt;/strong&gt;. Reactive repairs, by contrast, cost three to five times more than planned maintenance once you account for emergency labor, expedited parts shipping, and cascading production losses.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Safety and compliance depend on it&lt;/strong&gt;. Regulatory bodies across industries mandate specific maintenance activities and intervals. Falling behind on MRO creates safety hazards for workers, compliance risk for the organization, and potential legal liability.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="maintenance-strategies-preventive-predictive-planned-and-condition-based"&gt;Maintenance strategies: preventive, predictive, planned, and condition-based&lt;/h2&gt;

&lt;p&gt;Organizations typically employ a mix of strategies, and the trend across industries is a steady shift from reactive to proactive, data-driven approaches.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3xBRG5cCTK4CqGAImWHorU/6d8cafbd1630cb9d3bfdddcd1218e482/Diagram_01.png" alt="Reactive to Predictive MRO" /&gt;&lt;/p&gt;

&lt;h4 id="preventive-maintenance"&gt;Preventive Maintenance&lt;/h4&gt;

&lt;p&gt;Preventive maintenance is scheduled work performed at fixed intervals to reduce the likelihood of failure. Oil changes every 500 operating hours, filter replacements every quarter, and belt inspections every month are all preventive activities. The advantage is predictability: you know what work is coming and can plan parts and labor accordingly. The drawback is that you may be replacing components that still have significant useful life remaining, which wastes money and materials.&lt;/p&gt;

&lt;h4 id="planned-maintenance"&gt;Planned Maintenance&lt;/h4&gt;

&lt;p&gt;Planned maintenance is a broader category that includes any maintenance activity scheduled in advance, whether it follows a calendar-based interval, a usage-based trigger, or a condition-based alert. The defining characteristic is that the work is anticipated and resourced before it begins, as opposed to reactive or emergency maintenance. Planned maintenance also encompasses scheduled shutdowns and turnarounds, where equipment is taken offline deliberately for extensive servicing.&lt;/p&gt;

&lt;h4 id="condition-based-maintenance"&gt;Condition-Based Maintenance&lt;/h4&gt;

&lt;p&gt;Condition-based maintenance (CBM) uses real-time monitoring of equipment health indicators like vibration, temperature, oil quality, and electrical signatures to trigger maintenance only when those indicators show that maintenance is actually needed. Rather than replacing a bearing on a fixed schedule, CBM replaces it when vibration analysis shows degradation has reached a threshold. This approach eliminates much of the waste inherent in time-based schedules while still catching problems before failure.&lt;/p&gt;
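&lt;p&gt;As a minimal sketch in Python (the asset name, readings, and the 4.5 mm/s limit are all illustrative, not drawn from any real standard or spec), the core CBM decision reduces to comparing a monitored health indicator against a threshold:&lt;/p&gt;

```python
from dataclasses import dataclass

# Illustrative vibration severity limit in mm/s RMS (hypothetical value)
VIBRATION_LIMIT_MM_S = 4.5

@dataclass
class Reading:
    asset: str
    vibration_rms_mm_s: float

def needs_maintenance(reading: Reading, limit: float = VIBRATION_LIMIT_MM_S) -> bool:
    """Trigger a work order only when the measured condition crosses the limit."""
    return reading.vibration_rms_mm_s >= limit

readings = [
    Reading("pump-07", 2.1),  # healthy: well below the limit
    Reading("pump-07", 4.8),  # degraded: exceeds the limit
]
flagged = [r for r in readings if needs_maintenance(r)]
```

&lt;p&gt;Only the second reading is flagged; the bearing is replaced because its measured condition demands it, not because a calendar date arrived.&lt;/p&gt;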

&lt;h4 id="predictive-maintenance"&gt;Predictive Maintenance&lt;/h4&gt;

&lt;p&gt;Predictive maintenance takes condition-based monitoring a step further by applying machine learning, statistical models, and trend analysis to forecast when a component is likely to fail. Where CBM reacts to current conditions, predictive maintenance anticipates future conditions based on patterns in historical and real-time data. Sensors tracking vibration, temperature, pressure, and acoustic signatures feed data into analytics platforms that can predict failures days or weeks in advance.&lt;/p&gt;

&lt;p&gt;The results are striking: organizations with mature predictive maintenance programs report 35–45% reductions in unplanned downtime and an average ROI of around 250% within the first 18 months.&lt;/p&gt;

&lt;p&gt;The movement from reactive to predictive maintenance is one of the defining trends in MRO. As IIoT sensors become cheaper and more accessible, even smaller manufacturers can begin shifting toward condition-based and predictive approaches.&lt;/p&gt;

&lt;h3 id="mro-in-manufacturing"&gt;MRO in manufacturing&lt;/h3&gt;

&lt;p&gt;In the manufacturing industry, MRO encompasses all indirect materials and maintenance activities required to keep a production facility running. It is everything that supports the production process without becoming part of the finished product.&lt;/p&gt;

&lt;p&gt;Manufacturing MRO spending is often highly fragmented. A single plant might purchase thousands of distinct SKUs, such as motor drives, conveyor belts, lubricants, rags, and safety boots, from dozens of suppliers. The proportion of organizations using more than 250 MRO suppliers has grown from 6% to 15% in recent years. This fragmentation makes it difficult to negotiate volume discounts, track usage, or identify waste.&lt;/p&gt;

&lt;p&gt;Common MRO priorities in manufacturing include reducing unplanned downtime on production lines, maintaining critical spares inventory for high-impact equipment, shifting from reactive to preventive or predictive maintenance, standardizing parts and suppliers to simplify procurement, and ensuring compliance with OSHA and environmental regulations.&lt;/p&gt;

&lt;p&gt;Manufacturers that invest in structured MRO programs typically see improvements in overall equipment effectiveness (OEE), lower maintenance costs per unit of output, and fewer safety incidents.&lt;/p&gt;

&lt;h3 id="mro-in-aviation"&gt;MRO in aviation&lt;/h3&gt;

&lt;p&gt;Aviation has one of the most rigorous and regulated MRO environments of any industry. Aircraft MRO is governed by strict regulatory frameworks like the FAA in the United States and EASA in Europe. Every maintenance activity must be performed by certified repair stations, documented in detail, and traceable.&lt;/p&gt;

&lt;p&gt;The four main categories of aviation MRO are airframe maintenance, engine maintenance, component maintenance, and line maintenance.&lt;/p&gt;

&lt;p&gt;Aviation MRO is also where data-driven maintenance has seen some of its most advanced applications. Airlines use predictive maintenance platforms that analyze sensor data from aircraft systems to forecast component failures before they occur, minimizing aircraft-on-ground events and improving safety.&lt;/p&gt;

&lt;h3 id="mro-in-energy-and-utilities"&gt;MRO in energy and utilities&lt;/h3&gt;

&lt;p&gt;Energy and utilities represent one of the most asset-intensive sectors for MRO. Power plants, refineries, pipelines, water treatment facilities, and electrical grids all require continuous maintenance to remain operational and safe.&lt;/p&gt;

&lt;p&gt;The consequences of downtime in energy are particularly severe. Utilities face additional complexity from regulatory oversight and public safety requirements; a failed transformer or water treatment system affects entire communities.&lt;/p&gt;

&lt;p&gt;This sector has been an early adopter of IIoT and predictive maintenance technologies. Real-time monitoring of turbines, generators, transformers, and pipeline infrastructure allows operators to detect degradation early and schedule maintenance during planned outages rather than responding to emergencies.&lt;/p&gt;

&lt;h2 id="mro-procurement-inventory-and-software"&gt;MRO procurement, inventory, and software&lt;/h2&gt;

&lt;p&gt;Three operational areas determine how well an MRO program actually performs on a day-to-day basis.&lt;/p&gt;

&lt;div&gt;
  &lt;table&gt;
    &lt;thead&gt;
      &lt;tr&gt;
        &lt;th&gt;Area&lt;/th&gt;
        &lt;th&gt;Description and Key Strategies&lt;/th&gt;
      &lt;/tr&gt;
    &lt;/thead&gt;
    &lt;tbody&gt;
      &lt;tr&gt;
        &lt;td&gt;Procurement&lt;/td&gt;
        &lt;td&gt;The process of sourcing and purchasing indirect materials. High transaction volume but low individual dollar value. Improvement strategies include consolidating suppliers, using blanket purchase orders, and implementing e-procurement platforms.&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;Inventory&lt;/td&gt;
        &lt;td&gt;Balancing part availability against carrying costs. Effective management relies on criticality-based stocking, min/max levels, and regular cycle counts. MRO inventory supports production but is not part of the finished product.&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;Software&lt;/td&gt;
        &lt;td&gt;Tools to plan, track, and optimize maintenance. Includes CMMS for work orders, EAM for lifecycle planning, and e-procurement tools to streamline purchasing.&lt;/td&gt;
      &lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
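&lt;p&gt;The min/max stocking policy from the table can be sketched in a few lines of Python (the SKUs and stocking levels here are hypothetical):&lt;/p&gt;

```python
def reorder_quantity(on_hand: int, min_level: int, max_level: int) -> int:
    """Classic min/max policy: when stock falls to or below min, refill to max."""
    if on_hand <= min_level:
        return max_level - on_hand
    return 0

# SKU -> (on_hand, min_level, max_level), all values illustrative
stock = {"bearing-6204": (3, 5, 20), "hvac-filter": (12, 10, 40)}
orders = {sku: reorder_quantity(*levels) for sku, levels in stock.items()}
# bearing-6204 is at/below min (3 <= 5), so order 17; hvac-filter is above min, so order 0
```

&lt;p&gt;In practice, the min and max levels themselves are where the judgment lives: criticality, lead time, and historical failure rates all feed into choosing them.&lt;/p&gt;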


&lt;h2 id="where-time-series-databases-fit-in-an-mro-strategy"&gt;Where time series databases fit in an MRO strategy&lt;/h2&gt;

&lt;p&gt;The shift toward predictive maintenance creates a data infrastructure challenge that traditional systems were never designed to handle. A modern manufacturing facility with thousands of IIoT sensors can generate billions of data points daily. This is time series data, and it requires specialized tools at scale.&lt;/p&gt;

&lt;p&gt;Traditional relational databases and legacy data historians struggle with the volume, velocity, and query patterns of high-frequency sensor data. Time series databases are built for this workload. They are designed to ingest large volumes of timestamped data at high speed, compress it efficiently for long-term storage, and support the kinds of queries that maintenance and operations teams actually need: trend analysis over time windows, anomaly detection, and correlation across multiple sensor streams.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5GIp6lyhNY9PPBrYRlO000/d5336a5398aa3ae4137af83384c737db/Diagram_02.png" alt="Telegraf Agent MRO" /&gt;&lt;/p&gt;

&lt;p&gt;InfluxDB is one of the most widely adopted time series databases in industrial environments. It is built to handle the data patterns that MRO and predictive maintenance generate, and it fits into the maintenance technology stack in several important ways.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Real-time equipment monitoring&lt;/strong&gt;: InfluxDB ingests data from PLCs, SCADA systems, and IIoT sensors via standard industrial protocols like MQTT, OPC UA, and Modbus through its Telegraf agent. This creates a live feed of equipment health data that maintenance teams can use to spot anomalies as they develop.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Historical context for predictive models&lt;/strong&gt;: Effective predictive maintenance depends on having deep historical data to train machine learning models. InfluxDB stores years of sensor data in a compressed columnar format, making it practical and cost-effective to retain the historical depth that ML models need to identify failure patterns.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Bridging OT and IT systems&lt;/strong&gt;: One of the persistent challenges in MRO is that operational technology and information technology often exist in separate silos. InfluxDB integrates with both sides of this divide, connecting industrial data sources at the edge with analytics tools, cloud platforms, and AI/ML pipelines on the IT side.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Edge-to-cloud flexibility&lt;/strong&gt;: Not every facility has the same infrastructure. Some need on-premises data processing for latency or security reasons; others want cloud-based analytics. InfluxDB supports deployment at the edge, in private clouds, or in fully-managed cloud environments, allowing organizations to match their data architecture to their operational reality.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
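&lt;p&gt;Whether data arrives via Telegraf or a client library, writes to InfluxDB ultimately take the form of line protocol: a measurement, comma-separated tags, fields, and a nanosecond timestamp. A minimal sketch of formatting one sensor reading (the measurement, tag, and field names are hypothetical, and escaping of special characters is omitted for brevity):&lt;/p&gt;

```python
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Format one reading as InfluxDB line protocol: measurement,tags fields timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

line = to_line_protocol(
    "machine_health",
    {"site": "plant-a", "asset": "press-12"},
    {"vibration_mm_s": 3.1, "temp_c": 61.4},
    ts_ns=1700000000000000000,
)
```

&lt;p&gt;Each such line becomes one timestamped point; a plant floor emitting thousands of these per second is exactly the ingest pattern time series databases are built for.&lt;/p&gt;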

&lt;p&gt;The practical impact is tangible. &lt;a href="https://www.influxdata.com/resources/how-seadrill-transformed-billions-sensor-data-into-actionable-insights-with-influxdb/"&gt;Seadrill&lt;/a&gt; has reported saving over $1.6 million in a single year by using InfluxDB as its time series database for equipment monitoring. &lt;a href="https://www.influxdata.com/blog/siemens-energy-standardizes-predictive-maintenance-influxdb/"&gt;Siemens Energy uses InfluxDB to monitor 23,000 battery modules across more than 70 sites&lt;/a&gt;, analyzing billions of sensor readings to prevent downtime and ensure quality.&lt;/p&gt;

&lt;p&gt;For operations and maintenance teams evaluating their data infrastructure, the key question is whether their current systems can handle the data volumes that condition-based and predictive maintenance demand. If the answer is no, a time series database is the foundational layer that makes advanced maintenance strategies possible.&lt;/p&gt;

&lt;h2 id="common-mro-challenges"&gt;Common MRO challenges&lt;/h2&gt;

&lt;p&gt;Even well-intentioned MRO programs run into recurring problems.&lt;/p&gt;

&lt;h4 id="fragmented-spending"&gt;Fragmented Spending&lt;/h4&gt;

&lt;p&gt;This is the most widespread issue. When every department or site purchases MRO supplies independently, organizations lose leverage with suppliers and have no visibility into total spend.&lt;/p&gt;

&lt;h4 id="reactive-maintenance-culture"&gt;Reactive Maintenance Culture&lt;/h4&gt;

&lt;p&gt;This culture remains entrenched in many organizations. ABB’s Value of Reliability research found that two-thirds of companies experience unplanned downtime at least once per month, and a full third have not undertaken motor or drive modernization projects in the past two years, even though upgrading obsolete equipment can generate ROI in less than two years.&lt;/p&gt;

&lt;h4 id="poor-data-quality"&gt;Poor Data Quality&lt;/h4&gt;

&lt;p&gt;Poor data quality undermines almost every MRO improvement effort. Incomplete asset records, mislabeled parts, and patchy work-order histories make it difficult to decide what to stock, when to maintain, and where to invest. This problem compounds as organizations try to implement predictive maintenance, which depends entirely on clean, structured, time-stamped data.&lt;/p&gt;

&lt;h4 id="excess-and-obsolete-inventory"&gt;Excess and Obsolete Inventory&lt;/h4&gt;

&lt;p&gt;Excess and obsolete inventory ties up capital and warehouse space. Parts ordered for equipment that has since been retired, or spares stocked based on outdated failure rates, accumulate quietly until someone audits the stockroom.&lt;/p&gt;

&lt;h2 id="how-to-improve-an-mro-strategy"&gt;How to improve an MRO strategy&lt;/h2&gt;

&lt;p&gt;There is no single playbook for MRO improvement, but a few principles apply broadly.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Start with visibility&lt;/strong&gt;. Before you optimize anything, you need a clear picture of what you are spending, where your inventory sits, and how your assets are performing. Consolidating data from procurement, maintenance, and inventory systems is almost always the first step.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Classify assets by criticality&lt;/strong&gt;. Not all equipment deserves the same level of attention. Focus preventive and predictive maintenance resources on the assets whose failure would cause the greatest impact on safety, production, or cost.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Consolidate suppliers and standardize parts&lt;/strong&gt;. Reducing the number of MRO suppliers simplifies procurement, improves negotiating leverage, and makes it easier to manage inventory. Standardizing on common parts across similar equipment reduces the total number of SKUs you need to carry.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Shift from reactive to proactive maintenance&lt;/strong&gt;. This is a long-term cultural change, not a one-time project. Start with the highest-criticality assets, prove the value with condition monitoring and predictive analytics, and then scale. Organizations that make this transition consistently report dramatic reductions in both downtime and total maintenance cost.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Invest in the right data infrastructure&lt;/strong&gt;. Advanced maintenance strategies are only as good as the data infrastructure behind them. This means CMMS/EAM software for work order management, time series databases for high-frequency sensor data, and integration layers that connect these systems so that insights flow from the sensor to the decision-maker without friction.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Measure what matters&lt;/strong&gt;. Track metrics that connect MRO performance to business outcomes: planned vs. unplanned maintenance ratio, spare parts availability, mean time between failures (MTBF), overall equipment effectiveness (OEE), and maintenance cost as a percentage of asset replacement value.&lt;/li&gt;
&lt;/ul&gt;
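&lt;p&gt;Once work-order data is clean, metrics like the planned-vs-unplanned ratio and MTBF fall out of simple arithmetic. A sketch with a hypothetical work-order log and illustrative operating hours:&lt;/p&gt;

```python
# Hypothetical work-order log: (maintenance_type, downtime_hours)
work_orders = [
    ("planned", 4.0), ("planned", 2.0), ("unplanned", 6.0),
    ("planned", 3.0), ("unplanned", 9.0),
]

# Planned vs. unplanned maintenance ratio: 3 of 5 work orders were planned
planned = sum(1 for t, _ in work_orders if t == "planned")
planned_ratio = planned / len(work_orders)

# MTBF: total operating hours divided by the number of failures (unplanned events)
operating_hours = 720.0  # roughly one month of continuous runtime, illustrative
failures = sum(1 for t, _ in work_orders if t == "unplanned")
mtbf_hours = operating_hours / failures
```

&lt;p&gt;Tracking how these numbers move quarter over quarter is what turns MRO from a cost center into a measurable program: the planned ratio should climb and MTBF should lengthen as proactive maintenance takes hold.&lt;/p&gt;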

&lt;h2 id="wrapping-up"&gt;Wrapping up&lt;/h2&gt;

&lt;p&gt;MRO may not be the most glamorous line item in an operating budget, but it is one of the most consequential. The organizations that treat maintenance, repair, and operations as a strategic function consistently outperform those that don’t. As sensor technology gets cheaper, predictive analytics gets smarter, and the data infrastructure to support them becomes more accessible, the gap between reactive and proactive organizations will only widen. The best time to invest in your MRO strategy was five years ago. The second-best time is now.&lt;/p&gt;

&lt;h2 id="mro-faqs"&gt;MRO FAQs&lt;/h2&gt;
&lt;div id="accordion_second"&gt;
    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-1"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What does MRO stand for?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-1" class="message-body is-collapsible is-active" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                MRO most commonly stands for maintenance, repair, and operations—the activities, supplies, and services that keep equipment and facilities running. In aviation and heavy industry, MRO can also stand for maintenance, repair, and overhaul, where "overhaul" refers to the complete teardown, inspection, and rebuild of a component or system to original specifications. Both meanings describe the same core concept: sustaining operational readiness of physical assets.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-2"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What is MRO in business?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-2" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                In a business context, MRO refers to all indirect spending related to keeping operations running. This includes everything from preventive maintenance schedules and spare parts to safety equipment, cleaning supplies, and facility consumables. MRO sits outside of direct production costs but has a significant impact on uptime, safety, and total operating expense.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-3"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What is the difference between MRO inventory and production inventory?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-3" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Production inventory consists of raw materials and components that become part of the finished product. MRO inventory includes spare parts, tools, consumables, and supplies used to maintain equipment and facilities; items that support production but never appear in the final product. Both require management, but they serve different purposes and are often handled by different teams with different procurement strategies.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-4"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What is MRO in manufacturing?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-4" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                In manufacturing, MRO covers the indirect materials (lubricants, filters, PPE, tools, electrical components) and maintenance activities (inspections, repairs, preventive maintenance) required to keep production equipment operational. It is one of the largest categories of indirect spending in most manufacturing organizations.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-5"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What is MRO in aviation?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-5" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                In aviation, MRO stands for maintenance, repair, and overhaul. It is a heavily regulated segment that includes line maintenance, airframe and engine maintenance, component repair, and full overhauls of aircraft systems. Aviation MRO is essential for airworthiness certification and passenger safety, and it is governed by regulatory bodies like the FAA and EASA.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-6"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What are MRO supplies?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-6" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                MRO supplies are the materials purchased to support maintenance and operational activities. Common examples include spare parts, fasteners, lubricants, hand tools, safety gear, cleaning products, electrical components, and facility consumables like light bulbs and HVAC filters. These items are consumed during the maintenance process rather than incorporated into a finished product.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-7"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;Why is MRO important?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-7" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                MRO directly affects equipment uptime, workplace safety, regulatory compliance, and operating costs. Unplanned downtime alone costs U.S. manufacturers an estimated $50 billion per year. Organizations that manage MRO effectively experience fewer breakdowns, lower total maintenance costs, longer asset lifespans, and better safety records. As maintenance strategies evolve from reactive to predictive, the strategic importance of MRO continues to grow.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-8"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What is the difference between preventive and predictive maintenance?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-8" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Preventive maintenance follows a fixed schedule: for example, replacing a filter every 90 days regardless of its condition. Predictive maintenance uses real-time data from sensors to forecast when maintenance is actually needed, based on the condition and performance trends of the equipment. Predictive approaches reduce both unnecessary maintenance and unexpected failures, but they require investment in sensors, data infrastructure, and analytics tools.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-9"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What is a CMMS and how does it relate to MRO?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-9" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                A CMMS (computerized maintenance management system) is software used to schedule, track, and document maintenance activities. It is one of the core tools in an MRO program, helping teams manage work orders, track asset history, plan preventive maintenance schedules, and monitor spare parts inventory. More advanced platforms (often called EAM, or enterprise asset management systems) add lifecycle planning, capital project tracking, and integration with other enterprise systems.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

&lt;/div&gt;
</description>
      <pubDate>Tue, 31 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/mro-explained-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/mro-explained-influxdb/</guid>
      <category>Developer</category>
      <author>Charles Mahler (InfluxData)</author>
    </item>
    <item>
      <title>Telegraf Enterprise Beta is Now Available: Centralized Control for Telegraf at Scale</title>
      <description>&lt;p&gt;Telegraf is incredibly good at what it does: collecting metrics, logs, and events from just about anywhere and sending them wherever you need. But once Telegraf becomes part of your production telemetry pipeline, spread across environments, teams, regions, and edge locations, the hard part isn’t installing agents; it’s operating them.&lt;/p&gt;

&lt;p&gt;Configs drift. “Temporary” overrides linger. Rolling out changes across hundreds (or thousands) of agents becomes a careful, manual process. And when something breaks, the first question is rarely about the data; it’s about the fleet: which configuration is running where, and is every agent healthy?&lt;/p&gt;

&lt;p&gt;That’s the problem Telegraf Enterprise is built to solve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Today, we’re opening the Telegraf Enterprise beta to the broader Telegraf community so you can help us validate the product where it matters most: at scale.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/8J9tj2g9cNGnqtL94tMOn/adf53d91e1e98a76f8c9461186b1cccf/Screenshot_2026-03-25_at_10.59.07â__AM.png" alt="Telegraf Enterprise SS 1" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;h2 id="what-is-telegraf-enterprise"&gt;What is Telegraf Enterprise?&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Telegraf Enterprise&lt;/strong&gt; is a commercial offering for organizations running Telegraf at scale and needing centralized management, governance, and support. It brings together two key components:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Telegraf Controller&lt;/strong&gt;: A control plane (UI + API) that centralizes Telegraf configuration management and fleet health visibility.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Telegraf Enterprise Support&lt;/strong&gt;: Official support for Telegraf Controller and official Telegraf plugins, designed for teams that need dependable response times and expert guidance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s built for real-world, large-scale agent deployments, where Telegraf isn’t a tool you occasionally touch, but a platform you rely on.&lt;/p&gt;

&lt;h2 id="meet-telegraf-controller-your-telegraf-control-plane"&gt;Meet Telegraf Controller: your Telegraf control plane&lt;/h2&gt;

&lt;p&gt;At the heart of Telegraf Enterprise is &lt;strong&gt;Telegraf Controller&lt;/strong&gt;, which centralizes two things teams struggle with most at scale:&lt;/p&gt;

&lt;h4 id="configuration-management-that-doesnt-collapse-under-growth"&gt;Configuration Management That Doesn’t Collapse Under Growth&lt;/h4&gt;

&lt;p&gt;Telegraf Controller helps you create and manage configurations to support consistency across environments while still allowing necessary variation. Core capabilities include:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Centralized configuration creation and editing&lt;/li&gt;
  &lt;li&gt;Templates and parameterization to reuse configs safely&lt;/li&gt;
  &lt;li&gt;Label-based organization (so fleets don’t devolve into a long list of “agent-123”)&lt;/li&gt;
  &lt;li&gt;Bulk operations for fleet-wide changes&lt;/li&gt;
  &lt;li&gt;Environment variable and parameter management&lt;/li&gt;
  &lt;li&gt;Plugin metadata visibility to simplify config authoring and maintenance&lt;/li&gt;
&lt;/ul&gt;
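
&lt;p&gt;To illustrate the parameterization idea (the Controller’s actual template syntax isn’t shown in this post, so treat this as a hypothetical sketch), a single reusable config can be rendered per environment before agents pick it up:&lt;/p&gt;

```python
from string import Template

# Hypothetical parameterized Telegraf config fragment. Telegraf configs
# are TOML; the Controller's real template syntax may differ -- this only
# sketches how one template plus parameters yields many consistent configs.
config_template = Template(
    '[agent]\n'
    '  interval = "$interval"\n'
    '[global_tags]\n'
    '  region = "$region"\n'
)

# Render one concrete config for a specific environment
rendered = config_template.substitute(interval="10s", region="eu-central")
```

One template, many environments: only the parameters change, so a fleet stays consistent without hand-editing each agent’s file.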

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/63My9Gr4T1fkbk4tXziKRL/535ae3a8d927ddfe52e47d3596cd8b79/Screenshot_2026-03-25_at_11.00.14â__AM.png" alt="Telegraf Enterprise SS 2" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h4 id="fleet-wide-health-visibility"&gt;Fleet-Wide Health Visibility&lt;/h4&gt;

&lt;p&gt;Telegraf Controller gives you a single view into the overall status of your agent deployments, so you can understand:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Which agents are reporting as expected&lt;/li&gt;
  &lt;li&gt;Where health issues are clustering&lt;/li&gt;
  &lt;li&gt;What changed recently, and what might be correlated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, you don’t just manage Telegraf. You &lt;strong&gt;operate&lt;/strong&gt; it.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6LcWrqwByO7CtGvf8cDT3C/b2d04ee37b9b14bffec9e77693a716af/Screenshot_2026-03-25_at_11.01.30â__AM.png" alt="Telegraf Enterprise SS 3" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h2 id="designed-to-fit-your-telemetry-stack"&gt;Designed to fit your telemetry stack&lt;/h2&gt;

&lt;p&gt;Telegraf Enterprise is designed to work with the way teams actually deploy Telegraf.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;It does not require InfluxDB&lt;/strong&gt;. You can use the Telegraf Controller regardless of where your telemetry data is going.&lt;/li&gt;
  &lt;li&gt;Configuration delivery follows a &lt;strong&gt;pull-based model&lt;/strong&gt;, where agents fetch configuration over HTTP. This keeps change management predictable and compatible with locked-down environments.&lt;/li&gt;
  &lt;li&gt;It’s built to support &lt;strong&gt;hundreds to thousands of agents&lt;/strong&gt;, with production-grade storage options and a modern UI + API architecture for automation.&lt;/li&gt;
&lt;/ul&gt;
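
&lt;p&gt;As a rough sketch of the pull-based model (the endpoint and reload logic here are hypothetical, not Controller’s actual API), the agent side boils down to fetching the rendered config over HTTP and reloading only when it has changed:&lt;/p&gt;

```python
import hashlib
import urllib.request

def fetch_config(url):
    """Pull the rendered config from a control-plane URL over HTTP."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8")

def should_reload(running_sha, fetched_text):
    """Reload only when the fetched config differs from what is running."""
    new_sha = hashlib.sha256(fetched_text.encode("utf-8")).hexdigest()
    return new_sha != running_sha, new_sha

# Demo of the decision logic alone: an unchanged config triggers no reload
running = '[agent]\n  interval = "10s"\n'
sha = hashlib.sha256(running.encode("utf-8")).hexdigest()
reload_needed, sha = should_reload(sha, running)
```

Because the agent initiates the request, this works behind firewalls and in locked-down environments where the control plane cannot push to agents.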

&lt;h2 id="why-were-running-this-beta"&gt;Why we’re running this beta&lt;/h2&gt;

&lt;p&gt;This beta is open to any Telegraf user who wants to test-drive Telegraf Controller.&lt;/p&gt;

&lt;p&gt;The focus of the beta is simple:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;Test Telegraf Controller at scale&lt;/strong&gt;: We want to validate how well Telegraf Controller holds up when you connect real fleets—hundreds or thousands of agents—with real operational behaviors.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Gather feedback from the community:&lt;/strong&gt; We’re intentionally inviting community input early, while we’re still shaping the GA experience. What workflows are missing? What’s confusing? What would make this tool indispensable in your environment?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At this stage, your feedback directly influences what Telegraf Enterprise becomes.&lt;/p&gt;

&lt;h2 id="enterprise-support-that-matches-production-expectations"&gt;Enterprise support that matches production expectations&lt;/h2&gt;

&lt;p&gt;Operating telemetry pipelines is a production responsibility, and when Telegraf is part of that pipeline, you need support that understands the stakes.&lt;/p&gt;

&lt;p&gt;Telegraf Enterprise includes support designed for teams that need:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Clear expectations for response and escalation&lt;/li&gt;
  &lt;li&gt;Coverage for Telegraf Controller and official Telegraf plugins&lt;/li&gt;
  &lt;li&gt;Help diagnosing issues and reducing operational risk as fleets grow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is especially valuable when Telegraf is deployed across multiple teams, environments, or customer sites, where operational consistency matters as much as collection capability.&lt;/p&gt;

&lt;h2 id="who-is-telegraf-enterprise-for"&gt;Who is Telegraf Enterprise for?&lt;/h2&gt;

&lt;p&gt;Telegraf Enterprise is built for organizations that manage Telegraf fleets at a meaningful scale, including:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Platform engineering and SRE teams&lt;/li&gt;
  &lt;li&gt;DevOps organizations operating across multi-cloud / hybrid / edge&lt;/li&gt;
  &lt;li&gt;Managed service providers delivering telemetry as a service&lt;/li&gt;
  &lt;li&gt;Compliance-sensitive teams that need standardized configurations and governance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re running a small number of agents and are comfortable managing configs manually, you may not need Telegraf Enterprise today. But if Telegraf is everywhere—and your team is responsible for keeping it reliable—centralized control quickly becomes less “nice to have” and more “how did we operate without this?”&lt;/p&gt;

&lt;h2 id="packaging-free-and-enterprise-options"&gt;Packaging: free and enterprise options&lt;/h2&gt;

&lt;h4 id="telegraf-controller"&gt;Telegraf Controller&lt;/h4&gt;

&lt;p&gt;A free tier is available for teams that want centralized configuration management and visibility with pre-defined limits.&lt;/p&gt;

&lt;h4 id="telegraf-enterprise"&gt;Telegraf Enterprise&lt;/h4&gt;

&lt;p&gt;For teams operating Telegraf as critical infrastructure, &lt;strong&gt;Telegraf Enterprise&lt;/strong&gt; includes the Telegraf Controller packaged with enterprise support.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The key difference&lt;/strong&gt;: Telegraf Enterprise is built for scale and operational reliability, with support and capabilities aligned to production fleet management.&lt;/p&gt;

&lt;h2 id="getting-started-with-telegraf-controller"&gt;Getting started with Telegraf Controller&lt;/h2&gt;

&lt;p&gt;Telegraf Enterprise is designed for teams operating Telegraf as a core part of production telemetry pipelines. If Telegraf is already how you collect metrics, logs, and events across your infrastructure, Telegraf Controller is the missing piece that helps you operate that collection layer like a platform—not a pile of configs.&lt;/p&gt;

&lt;p&gt;To join the beta, &lt;a href="https://influxdata.com/products/telegraf-enterprise"&gt;click here&lt;/a&gt; to opt in. Please share your feedback in-app with the feedback button or in our Slack channel, #telegraf-enterprise-beta.&lt;/p&gt;

&lt;p&gt;Join the beta, push it hard, share your use case, and tell us what makes your workflow easier!&lt;/p&gt;
</description>
      <pubDate>Thu, 26 Mar 2026 07:30:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/telegraf-enterprise-beta/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/telegraf-enterprise-beta/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Scott Anderson (InfluxData)</author>
    </item>
    <item>
      <title>Unifying Telemetry in Battery Energy Storage Systems</title>
      <description>&lt;p&gt;&lt;a href="https://www.influxdata.com/solutions/battery-energy-storage-systems/?utm_source=website&amp;amp;utm_medium=unified_telemetry_BESS&amp;amp;utm_content=blog"&gt;Battery energy storage systems (BESS)&lt;/a&gt; play a critical role in modern energy infrastructure. Utilities rely on these systems to balance renewable generation, stabilize grid operations, and respond to changing electricity demand. As deployments scale in size and complexity, operators require continuous insight into battery health, system performance, and grid interaction.
Operators rely on telemetry generated across several operational platforms. Battery management systems monitor cell behavior, power conversion systems, and regulate energy flow, while plant control platforms track facility status. Energy management software and environmental sensors provide additional context about facility conditions.&lt;/p&gt;

&lt;p&gt;In many deployments, however, this information remains scattered across separate monitoring environments. Operators often move between multiple dashboards to understand activity across a single facility. Many BESS operators are now adopting unified telemetry platforms that consolidate operational signals and create a clearer operational view of system behavior.&lt;/p&gt;

&lt;h2 id="the-operational-reality-of-modern-bess-systems"&gt;The operational reality of modern BESS systems&lt;/h2&gt;

&lt;p&gt;A battery energy storage facility is not a single system but a collection of specialized subsystems that manage energy storage, power conversion, and grid interaction. Each subsystem monitors a different aspect of facility performance and generates operational signals that help operators understand how the system behaves.&lt;/p&gt;

&lt;p&gt;Several platforms produce these signals. Battery Management Systems (BMS) track cell-level conditions such as voltage, temperature, and state of charge to protect battery health. Power Conversion Systems (PCS), typically implemented through inverters, regulate how electricity flows between the battery and the grid.&lt;/p&gt;

&lt;p&gt;Plant-level monitoring runs through &lt;a href="https://www.influxdata.com/glossary/SCADA-supervisory-control-and-data-acquisition/"&gt;SCADA platforms&lt;/a&gt;, which provide alarms, system status, and operational controls. Energy Management Systems (EMS) determine when energy should be stored or dispatched based on grid signals and market conditions, while environmental sensors monitor external factors such as ambient temperature.&lt;/p&gt;

&lt;p&gt;Together, these systems create a continuous operational record of facility performance, but the resulting information does not always exist in a shared environment.&lt;/p&gt;

&lt;h2 id="the-fragmented-reality-of-bess-telemetry"&gt;The fragmented reality of BESS telemetry&lt;/h2&gt;

&lt;p&gt;In most battery energy storage deployments, operational data originates from multiple independent platforms, as described above. This fragmentation reflects the modular design and deployment of energy storage facilities. Battery systems, power conversion equipment, and plant control platforms are frequently delivered by different vendors, each with its own software, data models, and monitoring tools.&lt;/p&gt;

&lt;p&gt;Because these platforms monitor individual components rather than the entire facility, data is rarely consolidated automatically. Operators often rely on multiple dashboards to understand activity across a single storage site. Correlating events between subsystems may require switching between tools and manually comparing timestamps or operational signals.&lt;/p&gt;

&lt;p&gt;The result? Operators have access to large volumes of operational information but lack a unified view of the facility as a whole. When events occur across multiple subsystems, understanding how those signals relate to one another requires time and effort.&lt;/p&gt;

&lt;h2 id="operational-cost-of-data-silos"&gt;Operational cost of data silos&lt;/h2&gt;

&lt;p&gt;Even small issues can require significant labor to diagnose. The &lt;a href="https://www.influxdata.com/blog/breaking-data-silos-influxdb-3/#heading0"&gt;data silos&lt;/a&gt; created by à la carte technologies prevent engineers from seeing how signals across the storage system relate. For example, a thermal anomaly—an unexpected rise in battery temperature—may require operators to review battery readings, compare inverter load behavior, and examine environmental conditions. Without a unified view of these signals, determining the cause can take time.&lt;/p&gt;

&lt;p&gt;These delays affect both system reliability and financial performance. If operators cannot quickly determine why system capacity dropped or alarms triggered, dispatch readiness may be affected during critical market windows. Over time, slower investigations and delayed anomaly detection can lead to reduced system availability, higher operational overhead, and missed revenue opportunities.&lt;/p&gt;

&lt;h2 id="what-unified-telemetry-actually-means"&gt;What unified telemetry actually means&lt;/h2&gt;

&lt;p&gt;Unified telemetry consolidates operational signals from across the storage system into a shared data environment. Instead of storing data separately within subsystem platforms, telemetry from across the facility enters a common dataset.&lt;/p&gt;

&lt;p&gt;In this environment, operational signals are stored as time-series data, or measurements organized by timestamp, allowing signals from different subsystems to be synchronized and analyzed together.&lt;/p&gt;

&lt;p&gt;This shared dataset allows engineers to correlate signals that were previously isolated. Battery temperature trends can be examined alongside inverter load behavior, dispatch signals, and environmental conditions to better understand system performance. Instead of switching between monitoring platforms, operators can observe how signals across subsystems evolve together within a unified operational timeline.&lt;/p&gt;
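
&lt;p&gt;As a small illustration (with made-up readings and hypothetical column names), time-aligned correlation is essentially an as-of join on timestamps. Here it is sketched in pandas, though the same join can be expressed in SQL against a time series database:&lt;/p&gt;

```python
import pandas as pd

# Hypothetical readings from two subsystems, sampled on different clocks
bms = pd.DataFrame({
    "time": pd.to_datetime(["2026-03-19 12:00:00", "2026-03-19 12:00:10",
                            "2026-03-19 12:00:20"]),
    "cell_temp_c": [31.2, 33.8, 36.1],
})
pcs = pd.DataFrame({
    "time": pd.to_datetime(["2026-03-19 12:00:02", "2026-03-19 12:00:12",
                            "2026-03-19 12:00:21"]),
    "inverter_load_kw": [410.0, 455.0, 498.0],
})

# Pair each battery reading with the nearest-in-time inverter reading,
# producing one time-aligned row per observation
unified = pd.merge_asof(bms, pcs, on="time", direction="nearest")
```

Once battery temperature and inverter load sit on the same timeline, a rising-temperature event can be read directly against the load that accompanied it, instead of being reconstructed across two dashboards.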

&lt;h2 id="how-unified-telemetry-works"&gt;How unified telemetry works&lt;/h2&gt;

&lt;p&gt;In many deployments, telemetry aggregation begins at the edge of the facility. Edge collectors connect to operational systems such as the BMS, PCS, SCADA platform, EMS and environmental sensors using industrial protocols such as &lt;a href="https://www.influxdata.com/integration/modbus/?utm_source=website&amp;amp;utm_medium=unified_telemetry_BESS&amp;amp;utm_content=blog"&gt;Modbus&lt;/a&gt;, &lt;a href="https://www.influxdata.com/integration/opcua/?utm_source=website&amp;amp;utm_medium=unified_telemetry_BESS&amp;amp;utm_content=blog"&gt;OPC-UA&lt;/a&gt;, or CANbus. These collectors ingest operational signals and convert them into structured telemetry streams.&lt;/p&gt;

&lt;p&gt;From there, the data flows through streaming pipelines into centralized platforms. These pipelines handle ingestion, buffering, and transport of high-frequency signals so information from across the facility can be processed as a continuous operational stream.&lt;/p&gt;

&lt;p&gt;Time series databases store and index this telemetry by timestamp, allowing engineers to query system behavior over time. Organizing operational signals this way enables teams to correlate events across subsystems, analyze performance trends, and investigate anomalies.&lt;/p&gt;

&lt;p&gt;Because signals from different systems exist in the same time-aligned dataset, engineers can examine battery performance, inverter activity, dispatch signals, and environmental conditions together. This enables faster incident investigation and supports advanced analysis such as anomaly detection and &lt;a href="https://www.influxdata.com/glossary/predictive-maintenance/"&gt;predictive maintenance&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="operational-impact"&gt;Operational impact&lt;/h2&gt;

&lt;p&gt;Unified telemetry changes how energy storage facilities are operated and how organizations manage risk, reliability, and revenue. When signals from battery systems, power electronics, and plant controls are analyzed together, operators gain a comprehensive view of facility behavior rather than having to reconstruct events across multiple monitoring platforms.&lt;/p&gt;

&lt;p&gt;This visibility allows teams to detect anomalies earlier and respond to operational issues before they escalate. Faster diagnosis reduces downtime and helps maintain system availability during critical dispatch windows. In energy markets, maintaining dispatch readiness helps protect revenue during high-value trading periods.&lt;/p&gt;

&lt;h4 id="juniz-energy-deployment"&gt;ju:niz Energy Deployment&lt;/h4&gt;

&lt;p&gt;ju:niz Energy operates large-scale battery storage systems that provide grid services and trading flexibility in energy markets. Their systems collect thousands of data points per second on battery health, temperature, climate conditions, and system performance.&lt;/p&gt;

&lt;p&gt;To manage this telemetry, ju:niz built a centralized monitoring architecture using Telegraf, Modbus, MQTT, Grafana, Docker, AWS, and InfluxDB. Operational signals from battery systems stream into a centralized time series platform, giving engineers a unified view of system behavior and eliminating the need for legacy Python monitoring scripts.&lt;/p&gt;

&lt;p&gt;This architecture enables the ju:niz team to analyze battery telemetry in real-time, improve alerting accuracy, and support predictive maintenance strategies across their storage infrastructure. To see how ju:niz implemented unified telemetry for its operations, read the full &lt;a href="https://get.influxdata.com/rs/972-GDU-533/images/Customer_Case_Study_Juniz.pdf?version=0"&gt;case study&lt;/a&gt; or watch the &lt;a href="https://www.influxdata.com/resources/how-to-improve-renewable-energy-storage-with-mqtt-modbus-and-influxdb-cloud/"&gt;webinar&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="the-bottom-line"&gt;The bottom line&lt;/h2&gt;

&lt;p&gt;Battery energy storage systems generate telemetry across multiple operational platforms, but when that data remains fragmented, operators struggle to understand how the system behaves as a whole.&lt;/p&gt;

&lt;p&gt;Unified telemetry solves this by bringing operational signals into a shared, time-aligned dataset. As BESS deployments scale, this capability will become foundational for operating energy storage systems reliably, efficiently, and profitably.&lt;/p&gt;

&lt;p&gt;Ready to build a unified telemetry architecture? Get started with a free download of InfluxDB 3 &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=unified_telemetry_BESS&amp;amp;utm_content=blog"&gt;Core OSS&lt;/a&gt; or a trial of InfluxDB 3 &lt;a href="https://www.influxdata.com/products/influxdb-enterprise/?utm_source=website&amp;amp;utm_medium=unified_telemetry_BESS&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Thu, 19 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/unified-telemetry-BESS/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/unified-telemetry-BESS/</guid>
      <category>Developer</category>
      <author>Allyson Boate (InfluxData)</author>
    </item>
    <item>
      <title>A New Scale Tier for Time Series on Amazon Timestream for InfluxDB</title>
      <description>
&lt;p&gt;When we first announced the &lt;a href="https://www.influxdata.com/blog/influxdb3-on-amazon-timestream/?utm_source=website&amp;amp;utm_medium=scaling_amazon_timestream_influxdb&amp;amp;utm_content=blog"&gt;availability&lt;/a&gt; of &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=scaling_amazon_timestream_influxdb&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; and &lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/?utm_source=website&amp;amp;utm_medium=scaling_amazon_timestream_influxdb&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt; on Amazon Timestream for InfluxDB last year, we set a new standard for managed time series on AWS. We gave developers a simple way to harness high performance at scale while removing the burden of infrastructure management.&lt;/p&gt;

&lt;p&gt;But as our customers have taught us, “at scale” is a moving target. Across Industrial IoT, physical AI, and real-time observability, data is growing in both volume and resolution. When you move from minute-by-minute polling to sub-millisecond, high-fidelity telemetry, the pressure on the underlying database compounds. To stay ahead of that curve, developers need a platform that scales as fast as their workloads.&lt;/p&gt;

&lt;p&gt;Today, we’re delivering that by expanding InfluxDB 3 on Amazon Timestream for InfluxDB to &lt;a href="https://aws.amazon.com/timestream/"&gt;support clusters of up to 15 nodes&lt;/a&gt;. We’re also introducing a seamless migration path from InfluxDB 3 Core to InfluxDB 3 Enterprise, allowing teams to unlock this massive performance tier without friction, manual architectural overhauls, or data loss.&lt;/p&gt;

&lt;h2 id="scaling-for-the-mission-critical"&gt;Scaling for the mission-critical&lt;/h2&gt;

&lt;p&gt;At InfluxData, we’re seeing time series expand from infrastructure monitoring to the foundation for autonomous systems. In high-stakes environments like power grid management or autonomous vehicle navigation, increased latency is a significant operational risk rather than just a performance metric.&lt;/p&gt;

&lt;p&gt;Previously, AWS Timestream’s support of InfluxDB 3 was focused on smaller, highly efficient configurations. By expanding to 15 nodes, we are providing major upgrades across three important areas:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Query concurrency&lt;/strong&gt;: More nodes mean more hands on deck to process complex, concurrent queries. Large teams can now run heavy analytical workloads without impacting real-time dashboards or critical alerts.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Massive throughput&lt;/strong&gt;: With a larger cluster, you can ingest millions of data points per second across hundreds of millions of unique series, maintaining real-time query performance.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Workload isolation and optimization&lt;/strong&gt;: These expanded clusters enable true functional isolation between ingestion, queries, and compaction. This allows granular performance tuning optimized for your most demanding workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="architected-for-enterprise-demand"&gt;Architected for enterprise demand&lt;/h2&gt;

&lt;p&gt;This new 15-node option is available for InfluxDB 3 Enterprise and is designed for organizations that require high availability, enhanced security, and the power to maintain high ingestion and real-time query performance across high-resolution, high-velocity datasets. InfluxDB 3 Core will continue to operate in single-node deployments.&lt;/p&gt;

&lt;p&gt;By leveraging AWS infrastructure, you can spin up these expanded clusters in minutes directly from the AWS Console. With our new seamless migration capabilities, you can transition your existing Core workloads to Enterprise clusters with a single click. This ensures that as your data grows (from a few local sensors to a global fleet of devices), your database never becomes the bottleneck, and your team never has to worry about the downtime typically associated with a migration. These larger clusters are available today in all AWS regions where Amazon Timestream for InfluxDB is available, ensuring you can deploy and optimize mission-critical time series infrastructure wherever your data lives.&lt;/p&gt;

&lt;h2 id="the-foundation-for-physical-ai"&gt;The foundation for physical AI&lt;/h2&gt;

&lt;p&gt;Our partnership with AWS is about meeting developers where they build. By integrating with services like AWS Lambda, SageMaker, and Kinesis, we’ve simplified the path from high-volume streams into Physical AI. This is the frontier where intelligence moves from the digital realm into the physical world.&lt;/p&gt;

&lt;p&gt;Time series is the heartbeat of this transition, fueling a two-part lifecycle:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Training&lt;/strong&gt;: Using massive volumes of historical data to establish baselines and “normal” patterns.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Inference&lt;/strong&gt;: Streaming real-time data against those models to trigger automated, deterministic actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What makes our partnership with AWS unique is that we support both sides of this loop. With up to 15 nodes at your disposal, InfluxDB 3 has the headroom to act as a distributed inference engine, running predictive maintenance and anomaly detection against your data. This eliminates the latency tax of moving massive datasets between layers, ensuring that whether you are managing a robotic fleet or a smart grid, your autonomous systems can perceive and react with real-time precision.&lt;/p&gt;

&lt;h2 id="whats-next"&gt;What’s next?&lt;/h2&gt;

&lt;p&gt;The future of time series is about speed, precision, and scale. With today’s announcement, we’re handing you the keys to all three. By removing the barriers between single-node efficiency and enterprise-grade performance, we’re making it easier than ever to evolve your architecture as fast as your data grows.&lt;/p&gt;

&lt;p&gt;We’re excited to see what the community builds with this new level of power. If you’re ready to scale your real-time workloads, head over to the &lt;a href="https://us-east-1.console.aws.amazon.com/timestream/home"&gt;AWS Console&lt;/a&gt; and start building.&lt;/p&gt;
</description>
      <pubDate>Mon, 16 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/scaling-amazon-timestream-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/scaling-amazon-timestream-influxdb/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Pat Walsh (InfluxData)</author>
    </item>
    <item>
      <title>What is Industry 4.0? Everything You Need to Know in 2026</title>
      <description>&lt;p&gt;Industry 4.0 is the term used to describe the fourth industrial revolution, a name given to the integration of physical and digital systems, which includes the internet of things (IoT) and artificial intelligence that are transforming a huge number of industries.&lt;/p&gt;

&lt;p&gt;At a high level, its goal is to create an efficient, automated process for creating products or services that can be adapted quickly and efficiently to changing customer needs.&lt;/p&gt;

&lt;p&gt;Industry 4.0 also includes concepts such as cloud computing, big &lt;a href="https://www.influxdata.com/solutions/industrial-iot/?utm_source=website&amp;amp;utm_medium=industry_4_0_update_2026&amp;amp;utm_content=blog"&gt;data analytics&lt;/a&gt;, and machine learning to enable smarter production processes.&lt;/p&gt;

&lt;p&gt;By using sensors and automation technology, manufacturers can collect real-time data on their machines and operations, which can be analyzed to make more informed decisions about how best to manage resources, optimize production lines, and reduce costs.&lt;/p&gt;

&lt;p&gt;Industry 4.0 is leading manufacturers away from the traditional linear, push-based approach to production toward a new data-driven, customer-centric model. This “smart” manufacturing can help businesses remain competitive and stay ahead of the curve in terms of production capabilities, while also contributing to a more sustainable future.&lt;/p&gt;

&lt;h2 id="the-path-to-industry-40"&gt;The path to Industry 4.0&lt;/h2&gt;

&lt;p&gt;Let’s look at how we arrived at Industry 4.0. This historical context will help you understand why Industry 4.0 is important and why so many people consider these technologies worth adopting.&lt;/p&gt;

&lt;h4 id="first-industrial-revolution"&gt;First Industrial Revolution&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://www.britannica.com/event/Industrial-Revolution"&gt;First Industrial Revolution&lt;/a&gt;, which took place in the late 18th and early 19th centuries, was characterized by the mechanization of production, the use of steam power, and the development of the factory system.&lt;/p&gt;

&lt;p&gt;This revolution led to significant changes in manufacturing, transportation, and communication, and had a major impact on society and the economy.&lt;/p&gt;

&lt;h4 id="second-industrial-revolution"&gt;Second Industrial Revolution&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://www.history.com/articles/second-industrial-revolution-advances"&gt;Second Industrial Revolution&lt;/a&gt; took place in the late 19th and early 20th centuries. It was characterized by mass production of goods, the use of electricity, and the development of the assembly line.&lt;/p&gt;

&lt;h4 id="third-industrial-revolution"&gt;Third Industrial Revolution&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://www.economist.com/leaders/2012/04/21/the-third-industrial-revolution"&gt;Third Industrial Revolution&lt;/a&gt;, also known as the Digital Revolution, took place in the late 20th and early 21st centuries and was characterized by the adoption of computers and automation in manufacturing and other industries.&lt;/p&gt;

&lt;h4 id="fourth-industrial-revolution"&gt;Fourth Industrial Revolution&lt;/h4&gt;

&lt;p&gt;Industry 4.0, also known as the Fourth Industrial Revolution, is the current trend of automation and data exchange in manufacturing technologies, including developments in artificial intelligence, the &lt;a href="https://www.influxdata.com/glossary/iot-devices/"&gt;internet of things&lt;/a&gt; (IoT), and cyber-physical systems.&lt;/p&gt;

&lt;p&gt;It’s seen as the fourth major revolution in manufacturing, following the mechanization of production in the First Industrial Revolution, the mass production of the Second Industrial Revolution, and the introduction of computers and automation in the Third Industrial Revolution.&lt;/p&gt;

&lt;h2 id="industry-40-key-concepts-and-principles"&gt;Industry 4.0 key concepts and principles&lt;/h2&gt;

&lt;h4 id="interoperability"&gt;Interoperability&lt;/h4&gt;

&lt;p&gt;Interoperability is a fundamental concept in Industry 4.0, emphasizing seamless communication and data exchange among systems, devices, and software platforms within an industrial environment.&lt;/p&gt;

&lt;p&gt;As Industry 4.0 relies heavily on integrating diverse technologies such as IoT, AI, and cloud computing, ensuring these components work effectively together is crucial to realizing the full potential of a connected, intelligent manufacturing ecosystem.&lt;/p&gt;

&lt;p&gt;Interoperability enables businesses to break down silos, streamline processes, and make better-informed decisions, ultimately leading to increased efficiency, productivity, and competitiveness.&lt;/p&gt;

&lt;p&gt;To achieve interoperability, manufacturers must adopt standardized communication protocols, open architectures, and flexible data formats to facilitate a smooth flow of information across the entire production chain.&lt;/p&gt;

&lt;h4 id="virtualization"&gt;Virtualization&lt;/h4&gt;

&lt;p&gt;Virtualization is the creation of virtual representations of physical assets, processes, and systems within the industrial environment.&lt;/p&gt;

&lt;p&gt;By using advanced technologies such as &lt;a href="https://www.influxdata.com/glossary/digital-twins/"&gt;digital twins&lt;/a&gt;, simulation software, and augmented reality, virtualization enables manufacturers to test, analyze, and optimize their operations without impacting the actual production process.&lt;/p&gt;

&lt;p&gt;Virtualization not only allows more efficient planning and decision making but also helps businesses identify potential bottlenecks or issues before they occur, resulting in reduced downtime, lower costs, and enhanced product quality.&lt;/p&gt;

&lt;p&gt;At the same time, it promotes remote monitoring and control of industrial processes, allowing experts to collaborate and troubleshoot issues from any location, which improves overall operational efficiency.&lt;/p&gt;

&lt;h4 id="cyber-physical-systems"&gt;Cyber-Physical Systems&lt;/h4&gt;

&lt;p&gt;Cyber-physical systems (CPS) are a core part of Industry 4.0, representing the seamless integration of computational and physical components. These systems enable real-time communication and data exchange between machines, humans, and digital networks, resulting in smarter, more efficient, and autonomous industrial processes.&lt;/p&gt;

&lt;h4 id="decentralization"&gt;Decentralization&lt;/h4&gt;

&lt;p&gt;Decentralization involves the shift towards distributed decision-making and autonomous control within industrial systems.&lt;/p&gt;

&lt;p&gt;In the context of manufacturing, decentralization empowers machines, devices, and production units to make decisions and perform tasks independently, without centralized supervision or control.&lt;/p&gt;

&lt;p&gt;This approach increases the agility and resilience of manufacturing operations and enables businesses to scale more effectively, as new components or devices can be seamlessly integrated into the existing network.&lt;/p&gt;

&lt;h4 id="modularity"&gt;Modularity&lt;/h4&gt;

&lt;p&gt;Modularity, the ability to adjust production lines, processes, and equipment with minimal effort and downtime, is a key concept in Industry 4.0.&lt;/p&gt;

&lt;p&gt;It emphasizes the importance of designing flexible, scalable, and adaptable systems that can be easily reconfigured or upgraded to meet changing market demands and technological advancements.&lt;/p&gt;

&lt;p&gt;By embracing modularity, manufacturers can rapidly adapt to fluctuations in product demand, introduce new products, or incorporate emerging technologies, ensuring their operations remain agile and competitive.&lt;/p&gt;

&lt;p&gt;Modularity also enables greater customization, as production lines can be adjusted to accommodate unique customer requirements or preferences.&lt;/p&gt;

&lt;h2 id="what-technologies-are-driving-industry-40"&gt;What technologies are driving Industry 4.0?&lt;/h2&gt;

&lt;h4 id="internet-of-things"&gt;Internet of Things&lt;/h4&gt;

&lt;p&gt;IoT is an important part of Industry 4.0, enabling businesses to optimize processes and become more efficient. With this technology, companies can deploy intelligent machines to automate processes and workflows, leading to higher accuracy and productivity.&lt;/p&gt;

&lt;p&gt;IoT technology also makes it possible for machines and databases to communicate, allowing businesses to access real-time data. This improved data collection has enabled insights about productivity and efficiency, streamlining many processes in Industry 4.0.&lt;/p&gt;

&lt;h4 id="cloud-computing"&gt;Cloud Computing&lt;/h4&gt;

&lt;p&gt;Cloud computing enables new ways for organizations to develop agile digital operations. By using cloud computing, companies can reduce the time needed to deploy or upgrade applications and further benefit from scalability.&lt;/p&gt;

&lt;p&gt;With cloud computing, manufacturers now have access to analytics data they did not previously have, enabling them to make informed, real-time decisions.&lt;/p&gt;

&lt;h4 id="edge-computing"&gt;Edge Computing&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/glossary/edge-computing/"&gt;Edge computing&lt;/a&gt; is the process of collecting and analyzing data at the edge of a network, closer to where it is generated. It’s at the opposite end of the spectrum from cloud computing, but it’s just as important for Industry 4.0 workloads.&lt;/p&gt;

&lt;p&gt;Because data is processed close to where it’s generated, edge computing keeps latency low, making it ideal for applications that require real-time analytics, such as autonomous robotic systems and self-driving cars.&lt;/p&gt;

&lt;p&gt;Edge computing also helps reduce network traffic by minimizing the need to send large amounts of data back and forth between devices and centralized data centers.&lt;/p&gt;

&lt;h4 id="g-networking"&gt;5G Networking&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/customer/5g-test-network-and-influxdb/"&gt;5G networks&lt;/a&gt; allow for faster communication and data transfer speeds, a huge factor in making Industry 4.0 viable. This ultimately makes the technology more accessible to businesses of all sizes and enables them to deploy IoT solutions at scale.&lt;/p&gt;

&lt;p&gt;5G can enable companies to increase operational efficiency by supporting real-time decision-making and remote monitoring capabilities.&lt;/p&gt;

&lt;h4 id="ai-and-machine-learning"&gt;AI and Machine Learning&lt;/h4&gt;

&lt;p&gt;AI and machine learning are another key part of making Industry 4.0 possible. Using AI, companies can automate processes, improve decision-making, and better analyze data.&lt;/p&gt;

&lt;p&gt;Many industries &lt;a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai"&gt;are already using AI&lt;/a&gt; to increase efficiency, accelerate innovation, and reduce costs. In manufacturing, for example, AI can be used to optimize production lines, predict maintenance needs, and schedule resources more efficiently.&lt;/p&gt;

&lt;h4 id="cybersecurity"&gt;Cybersecurity&lt;/h4&gt;

&lt;p&gt;Collecting and analyzing more data is great, but it also opens up numerous potential vulnerabilities for businesses. No company wants to be in the news for leaking internal or customer data, or for not being able to function because critical infrastructure has been hacked.&lt;/p&gt;

&lt;p&gt;Industry 4.0 requires sophisticated cybersecurity solutions that protect data at rest and in transit, detect malicious activity before it becomes a problem, and alert users when something is amiss. This can be accomplished through various measures such as encryption, intrusion detection systems, two-factor authentication (2FA), and network segmentation.&lt;/p&gt;

&lt;p&gt;In addition to implementing security solutions, organizations should also develop a comprehensive cybersecurity strategy that covers personnel training and processes for responding to emergency situations. This way, businesses can be more prepared for any potential attacks or data breaches.&lt;/p&gt;

&lt;h4 id="digital-twins"&gt;Digital Twins&lt;/h4&gt;

&lt;p&gt;Digital twins enable engineers to create virtual models of systems and processes that can be used to measure performance, anticipate variation, and even detect defects or dangers before they become issues in the physical world.&lt;/p&gt;

&lt;p&gt;As a result of this technology’s high accuracy, digital twin simulations can substantially reduce design costs, improve operational efficiency and sustainability, enhance product quality, and promote workplace safety.&lt;/p&gt;

&lt;p&gt;Furthermore, companies are leveraging the combination of digital twins’ advanced analytics capabilities and connected devices to optimize factory operations through remote commissioning, proactive maintenance, and streamlined troubleshooting.&lt;/p&gt;

&lt;h4 id="real-time-data-analytics"&gt;Real-Time Data Analytics&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/influxdb-3-ideal-solution-real-time-analytics/"&gt;Real-time analytics&lt;/a&gt; is an essential part of Industry 4.0, enabling businesses to monitor, analyze, and respond to operational and process changes with unprecedented speed and accuracy.&lt;/p&gt;

&lt;p&gt;By utilizing IoT devices, sensors, and advanced analytics models, manufacturers can collect and process data in real time, allowing them to make data-driven decisions and adjustments on the fly.&lt;/p&gt;

&lt;h4 id="d-printing-and-additive-manufacturing"&gt;3D Printing and Additive Manufacturing&lt;/h4&gt;

&lt;p&gt;3D printing and additive manufacturing are quickly becoming essential tools for businesses to maximize efficiency, reduce costs, and create complicated designs with ease.&lt;/p&gt;

&lt;p&gt;For example, factories can print replacement parts on-site without having to call a supplier and wait for them to arrive. This means faster repairs and less downtime overall.&lt;/p&gt;

&lt;p&gt;Additive manufacturing also allows companies to manufacture complex designs that were previously impossible with traditional manufacturing methods.&lt;/p&gt;

&lt;h4 id="robotics"&gt;Robotics&lt;/h4&gt;

&lt;p&gt;In the context of Industry 4.0, robotics goes beyond traditional automation, incorporating advanced capabilities such as AI, machine learning, and sensor integration to create intelligent, adaptive, and versatile machines capable of performing complex tasks with precision and consistency.&lt;/p&gt;

&lt;p&gt;This also includes collaborative robots, or “cobots,” which are designed to work alongside human operators, enhancing their capabilities and ensuring a safer, more ergonomic work environment.&lt;/p&gt;

&lt;p&gt;By using robotics, manufacturers can automate repetitive tasks, reduce human error, and reduce labor costs, while also enabling greater flexibility and customization in production.&lt;/p&gt;

&lt;h2 id="benefits-of-industry-40"&gt;Benefits of Industry 4.0&lt;/h2&gt;

&lt;h5 id="improved-productivity"&gt;1. Improved productivity&lt;/h5&gt;

&lt;p&gt;One of the primary benefits of Industry 4.0 is improved productivity. Key 4.0 technologies, such as data analytics and machine learning, can be used to identify inefficiencies and optimize production processes.&lt;/p&gt;

&lt;p&gt;Similarly, robotics and 3D printing can automate tasks, reducing the need for human labor and increasing manufacturing output.&lt;/p&gt;

&lt;h5 id="increased-efficiency"&gt;2. Increased efficiency&lt;/h5&gt;

&lt;p&gt;By enabling smarter use of resources and more efficient processes, Industry 4.0 contributes significantly to reducing energy consumption, waste generation, and greenhouse gas emissions.&lt;/p&gt;

&lt;p&gt;When companies adopt Industry 4.0 technologies, they can actively contribute to global sustainability goals while simultaneously improving their bottom line.&lt;/p&gt;

&lt;p&gt;Predictive maintenance is a prime example. This proactive approach allows companies to monitor equipment performance in real time, identify potential issues before they escalate, and schedule maintenance activities based on actual equipment conditions rather than fixed intervals.&lt;/p&gt;

&lt;p&gt;Predictive maintenance minimizes unexpected downtime and costly repairs, extends equipment lifespan, reduces the need for frequent replacements, and reduces associated environmental impact. As an added bonus, equipment that is properly maintained also tends to run more efficiently in terms of power consumption and greenhouse gas emissions.&lt;/p&gt;
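&lt;p&gt;As a minimal sketch of how such a condition check might work (the readings, window size, and threshold below are hypothetical, not from any real deployment), a rolling z-score can flag sensor values that deviate sharply from recent history:&lt;/p&gt;

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Simulated vibration readings: steady around 1.0 with one spike.
data = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0, 5.0, 1.01]
print(detect_anomalies(data))  # [6] -- the spike is flagged
```

&lt;p&gt;In production, a check like this would run continuously against streaming equipment data rather than a fixed list, and would trigger a maintenance work order instead of printing.&lt;/p&gt;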

&lt;h5 id="improved-quality"&gt;3. Improved quality&lt;/h5&gt;

&lt;p&gt;By analyzing collected sensor data to catch errors, Industry 4.0 can also help improve product quality. Additionally, 3D printing can create prototypes that can be tested for quality before mass production begins.&lt;/p&gt;

&lt;h5 id="reduced-costs"&gt;4. Reduced costs&lt;/h5&gt;

&lt;p&gt;Industry 4.0 technologies help minimize expenses by improving productivity and efficiency, which leads to reduced labor costs and less waste.&lt;/p&gt;

&lt;h5 id="increased-flexibility"&gt;5. Increased flexibility&lt;/h5&gt;

&lt;p&gt;Industry 4.0 helps to increase flexibility within manufacturing operations. Technologies such as 3D printing and robotics can be used to create customized products quickly and with minimal human labor.&lt;/p&gt;

&lt;p&gt;The use of data analytics also helps companies respond to changes in customer demand, scaling production up or down when needed.&lt;/p&gt;

&lt;h5 id="enhanced-safety"&gt;6. Enhanced safety&lt;/h5&gt;

&lt;p&gt;Thanks to advances such as robotics and machine learning, dangerous tasks can now be automated. This reduces the risk of worker injury and helps create a safer working environment.&lt;/p&gt;

&lt;h5 id="more-resilient-supply-chains"&gt;7. More resilient supply chains&lt;/h5&gt;

&lt;p&gt;Adopting many Industry 4.0 technologies can help businesses strengthen their supply chains. By leveraging data analytics, businesses can monitor the production process in real time and detect small issues before they escalate into larger problems.&lt;/p&gt;

&lt;p&gt;3D printing and additive manufacturing can also be used to quickly produce replacement parts or components for machinery with little to no downtime. This helps companies maintain operations without disruption due to supply chain problems.&lt;/p&gt;

&lt;h5 id="improved-customer-experience"&gt;8. Improved customer experience&lt;/h5&gt;

&lt;p&gt;Industry 4.0 can help businesses improve their customer experience by providing insights into customer behaviors and preferences. Through data analysis, companies can identify areas where they need to focus their efforts in order to provide the best possible service or product.&lt;/p&gt;

&lt;p&gt;Data can also help during the manufacturing process to help identify potential defects early, so customers don’t receive a faulty product.&lt;/p&gt;

&lt;h2 id="industry-40-challenges-and-risks"&gt;Industry 4.0 challenges and risks&lt;/h2&gt;

&lt;h5 id="implementation-costs"&gt;1. Implementation costs&lt;/h5&gt;

&lt;p&gt;Implementing Industry 4.0 technologies and practices can be expensive, particularly for smaller businesses. If a business doesn’t have the necessary financial resources to invest in these technologies, it may not see a return on the investment.&lt;/p&gt;

&lt;h5 id="cybersecurity-risks"&gt;2. Cybersecurity risks&lt;/h5&gt;

&lt;p&gt;The integration of advanced technologies and the reliance on connected systems increase the risk of cybersecurity threats. Without robust cybersecurity measures in place, a business may be vulnerable to attacks, which can have serious consequences.&lt;/p&gt;

&lt;h5 id="culture-challenges"&gt;3. Culture challenges&lt;/h5&gt;

&lt;p&gt;Some businesses may be hesitant to adopt new technologies and practices due to concerns about costs and disruptions to their existing operations. If a business isn’t willing to adapt to new technologies and processes, it may struggle to compete with competitors that are more forward-thinking.&lt;/p&gt;

&lt;p&gt;This can also apply to employees who aren’t familiar with new technologies and may be resistant to change, making it important to ensure that employees at all levels of the company understand how and why changes are being made.&lt;/p&gt;

&lt;h2 id="common-industry-40-use-cases"&gt;Common Industry 4.0 use cases&lt;/h2&gt;

&lt;h5 id="smart-manufacturing"&gt;1. Smart manufacturing&lt;/h5&gt;

&lt;p&gt;Smart manufacturing and smart factories are common Industry 4.0 use cases where adopting new technologies can improve productivity, make products more reliable, and keep workers safer.&lt;/p&gt;

&lt;p&gt;Beyond the direct benefits to the company, smart manufacturing can benefit the environment by reducing waste and making production more efficient.&lt;/p&gt;

&lt;h5 id="agriculture"&gt;2. Agriculture&lt;/h5&gt;

&lt;p&gt;The advantages of incorporating Industry 4.0 in agriculture are substantial.&lt;/p&gt;

&lt;p&gt;Precision farming techniques, powered by IoT sensors and data analytics, facilitate the targeted application of fertilizers, pesticides, and irrigation, reducing waste and minimizing environmental impact.&lt;/p&gt;

&lt;p&gt;Robotics and autonomous machinery can also perform repetitive tasks, such as planting, harvesting, and monitoring, improving efficiency and freeing up valuable human resources.&lt;/p&gt;

&lt;p&gt;Advanced data analysis also enables predictive modeling and forecasting, helping farmers make informed decisions on crop selection, planting schedules, and resource allocation.&lt;/p&gt;

&lt;h5 id="healthcare"&gt;3. Healthcare&lt;/h5&gt;

&lt;p&gt;IoT devices that collect health data enable patients to receive more personalized and effective healthcare. This can include everything from detecting emergency situations, such as a heart attack, to enabling the detection and mitigation of diseases before they become severe.&lt;/p&gt;

&lt;p&gt;Robotics is also increasingly used during surgery to reduce human error and improve outcomes.&lt;/p&gt;

&lt;h5 id="supply-chain-management"&gt;4. Supply chain management&lt;/h5&gt;

&lt;p&gt;Adopting Industry 4.0 technologies can enhance supply chain management by enabling better visibility, efficiency, and resilience.&lt;/p&gt;

&lt;p&gt;Connecting participants such as suppliers, manufacturers, distributors, and retailers enables smoother information exchange, ensuring that all stakeholders have access to accurate and up-to-date data.&lt;/p&gt;

&lt;p&gt;Predictive analytics and machine learning can help forecast demand patterns, optimize inventory levels, and identify potential disruptions, allowing supply chain managers to address issues and minimize risks.&lt;/p&gt;

&lt;h2 id="industry-40-tools"&gt;Industry 4.0 tools&lt;/h2&gt;

&lt;p&gt;In this section, we’ll examine some tools useful for a variety of tasks involved in adopting Industry 4.0 technology.&lt;/p&gt;

&lt;h5 id="data-storage"&gt;1. Data storage&lt;/h5&gt;

&lt;p&gt;Storing Industry 4.0 data at scale requires scalable, efficient solutions that can handle the high volume of data generated by interconnected devices and systems. Here are a few different options for storing your data:&lt;/p&gt;

&lt;h5 id="time-series-databases"&gt;2. Time series databases&lt;/h5&gt;

&lt;p&gt;Time series databases (TSDBs) are specifically designed to store time-stamped data from sensors and IoT devices. They offer high write and query performance, making them ideal for handling the high-frequency data typical of Industry 4.0 use cases. An example of a TSDB is &lt;a href="https://www.influxdata.com/?utm_source=website&amp;amp;utm_medium=industry_4_0_update_2026&amp;amp;utm_content=blog"&gt;InfluxDB&lt;/a&gt;.&lt;/p&gt;
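&lt;p&gt;To illustrate what time-stamped sensor data looks like on the wire, here’s a small sketch that renders one reading in InfluxDB’s line protocol (the measurement, tag, and field names below are made up for the example):&lt;/p&gt;

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Render one numeric data point in InfluxDB line protocol:
    measurement,tag1=v1,tag2=v2 field1=v1 timestamp
    (string field values would additionally need quoting/escaping)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "machine_temp",                     # measurement
    {"plant": "berlin", "line": "a3"},  # tags: indexed metadata
    {"celsius": 71.3},                  # fields: the measured values
    1700000000000000000,                # nanosecond timestamp
)
print(line)
# machine_temp,line=a3,plant=berlin celsius=71.3 1700000000000000000
```

&lt;p&gt;In practice a client library builds strings like this for you and batches them to the database’s write endpoint, but the shape shows why TSDBs can index and compress this data so efficiently.&lt;/p&gt;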

&lt;h5 id="data-historians"&gt;3. Data historians&lt;/h5&gt;

&lt;p&gt;Data historians are specialized databases for storing and retrieving historical process data from industrial systems. They are optimized for handling time series data and offer capabilities like data compression, aggregation, and real-time querying. An example of a data historian is OSI PI.&lt;/p&gt;

&lt;h5 id="columnar-databases"&gt;4. Columnar databases&lt;/h5&gt;

&lt;p&gt;Columnar databases store data in columns rather than rows, which is well-suited for analytics and processing large datasets and is often used as a data warehouse. Columnar databases offer high query performance and data compression, making them suitable for storing and analyzing the vast amounts of structured data generated by Industry 4.0 systems.&lt;/p&gt;

&lt;h5 id="communication-protocols"&gt;5. Communication protocols&lt;/h5&gt;

&lt;p&gt;Several communication protocols are well-suited for Industry 4.0 systems, providing efficient and reliable data transfer between interconnected devices, machines, and software platforms. Here are some good options for communication protocols in Industry 4.0:&lt;/p&gt;

&lt;h5 id="mqtt"&gt;6. MQTT&lt;/h5&gt;

&lt;p&gt;MQTT is a lightweight, publish-subscribe messaging protocol designed for low-bandwidth, high-latency, and unreliable networks. Its low overhead and minimal resource requirements make it ideal for IoT devices and Industry 4.0 applications.&lt;/p&gt;

&lt;p&gt;MQTT is widely used to connect sensors, actuators, and other devices to cloud platforms, enabling efficient data exchange and remote monitoring.&lt;/p&gt;
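&lt;p&gt;A minimal publishing sketch using the paho-mqtt client library (the broker address, topic layout, and sensor names below are placeholders, not a real deployment):&lt;/p&gt;

```python
import json

def make_payload(sensor_id, value):
    """Serialize a sensor reading as a JSON string for publishing."""
    return json.dumps({"sensor": sensor_id, "value": value})

def publish_reading(broker_host, sensor_id, value):
    """Publish one reading under a hypothetical factory/<id>/temp topic."""
    import paho.mqtt.client as mqtt  # pip install paho-mqtt
    # Note: paho-mqtt 2.x also expects a CallbackAPIVersion argument here.
    client = mqtt.Client()
    client.connect(broker_host, 1883)  # 1883 is the default MQTT port
    client.loop_start()                # background thread handles the network
    client.publish("factory/" + sensor_id + "/temp",
                   make_payload(sensor_id, value), qos=1)
    client.loop_stop()
    client.disconnect()

# publish_reading("broker.example.com", "press-07", 71.3)
```

&lt;p&gt;The publish-subscribe model means the sensor never needs to know who consumes its data; any number of dashboards, databases, or alerting services can subscribe to the same topic.&lt;/p&gt;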

&lt;h5 id="opc-unified-architecture-opc-ua"&gt;7. OPC Unified Architecture (OPC UA)&lt;/h5&gt;

&lt;p&gt;OPC UA is a platform-independent, service-oriented architecture developed specifically for industrial automation and communication. It provides secure and reliable data exchange between devices, machines, and software applications, regardless of the underlying platform or programming language.&lt;/p&gt;

&lt;p&gt;OPC UA supports a wide range of data types and features with built-in security mechanisms, making it a popular choice for Industry 4.0 systems.&lt;/p&gt;

&lt;h5 id="advanced-message-queuing-protocol-amqp"&gt;8. Advanced Message Queuing Protocol (AMQP)&lt;/h5&gt;

&lt;p&gt;AMQP is an open standard, application-layer protocol for message-oriented middleware. It supports flexible messaging patterns and offers reliable, secure communication between devices and applications. AMQP is well-suited to scenarios that require complex routing and guaranteed message delivery, making it a good fit for many Industry 4.0 applications.&lt;/p&gt;

&lt;h4 id="data-collection-and-integration"&gt;Data Collection and Integration&lt;/h4&gt;

&lt;p&gt;One of the big challenges for Industry 4.0 is collecting data from a variety of devices that may communicate over different protocols, then sending it to various tools for storage and analysis. Let’s take a look at some options that make collecting and integrating data easier:&lt;/p&gt;

&lt;h5 id="node-red"&gt;1. Node-RED&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://nodered.org/"&gt;Node-RED&lt;/a&gt; is an open-source, flow-based programming tool for wiring together devices, APIs, and online services. It provides a browser-based visual interface for designing and deploying data flows, making it easy to connect and integrate various data sources, such as IoT devices, industrial sensors, and web services.&lt;/p&gt;

&lt;p&gt;With a large library of prebuilt nodes and support for custom nodes, Node-RED allows users to build complex data pipelines and perform data transformations with &lt;a href="https://www.influxdata.com/blog/node-red-dashboard-tutorial/"&gt;minimal coding effort&lt;/a&gt;.&lt;/p&gt;

&lt;h5 id="telegraf"&gt;2. Telegraf&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/time-series-platform/telegraf/?utm_source=website&amp;amp;utm_medium=industry_4_0_update_2026&amp;amp;utm_content=blog"&gt;Telegraf&lt;/a&gt; is an open source, plugin-driven server agent for collecting and reporting metrics from different data sources. Telegraf supports a wide range of input, output, and processing plugins, allowing it to gather and transmit data from various devices, systems, and APIs to different storage platforms.&lt;/p&gt;

&lt;p&gt;Its flexibility and extensibility make it suitable for Industry 4.0 applications, where diverse data sources are common.&lt;/p&gt;
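&lt;p&gt;A minimal telegraf.conf sketch shows the plugin-driven model: one input plugin subscribes to an MQTT broker and one output plugin writes the collected metrics to InfluxDB (all addresses, tokens, and names below are placeholders):&lt;/p&gt;

```toml
# Minimal Telegraf pipeline sketch -- all values are placeholders.
[[inputs.mqtt_consumer]]
  servers = ["tcp://broker.example.com:1883"]
  topics = ["factory/+/temp"]
  data_format = "json"

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "$INFLUX_TOKEN"
  organization = "my-org"
  bucket = "machines"
```

&lt;p&gt;Swapping data sources or destinations is a matter of adding or removing plugin blocks, which is what makes this approach practical when a plant floor mixes many device types and protocols.&lt;/p&gt;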

&lt;h5 id="apache-nifi"&gt;3. Apache NiFi&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://nifi.apache.org/"&gt;Apache NiFi&lt;/a&gt; is an open source, web-based data integration tool for designing, deploying, and managing data flows. It offers a visual interface for designing data pipelines and supports a wide range of data sources, processors, and sinks.&lt;/p&gt;

&lt;p&gt;NiFi is particularly well-suited to use cases that require complex data routing, transformation, and enrichment. With built-in security features and support for data provenance, NiFi ensures data integrity and traceability in Industry 4.0 environments.&lt;/p&gt;

&lt;h2 id="industry-40-best-practices"&gt;Industry 4.0 best practices&lt;/h2&gt;

&lt;p&gt;Moving towards Industry 4.0 is a major endeavor for existing businesses and requires every area of the business to work together. In this section, let’s explore some best practices that can help you avoid major pitfalls that could hurt your business.&lt;/p&gt;

&lt;h5 id="have-a-clear-strategy-and-goals"&gt;1. Have a clear strategy and goals&lt;/h5&gt;

&lt;p&gt;Above all else, you need a clear understanding of how adopting these new technologies will help achieve your business goals. If you can’t find concrete ways they will help your business, don’t blindly invest resources in them. Things worth identifying up front:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Specific technologies that will be used&lt;/li&gt;
  &lt;li&gt;Which processes could be automated&lt;/li&gt;
  &lt;li&gt;Metrics to measure success&lt;/li&gt;
  &lt;li&gt;Cybersecurity focus&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The integration of advanced technologies and the reliance on connected systems increase the risk of cybersecurity threats. Implement robust cybersecurity measures to protect against these threats from day one, so you don’t regret it later on.&lt;/p&gt;

&lt;h5 id="collaboration"&gt;2. Collaboration&lt;/h5&gt;

&lt;p&gt;Industry 4.0 technologies often involve integrating systems and processes across different organizations. It’s important to collaborate with suppliers and partners to ensure that these systems and processes are integrated effectively.&lt;/p&gt;

&lt;h5 id="track-results-and-iterate"&gt;3. Track results and iterate&lt;/h5&gt;

&lt;p&gt;Establish metrics before starting so you can measure progress against expected results. Based on progress, you need to be willing and able to change your strategy if necessary.&lt;/p&gt;

&lt;h2 id="faqs"&gt;FAQs&lt;/h2&gt;

&lt;div id="accordion_second"&gt;
    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-1"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;What are the origins of Industry 4.0?&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-1" class="message-body is-collapsible is-active" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Industry 4.0 as a concept dates back to 2006, when the German government laid out a plan to maintain its manufacturing dominance in a paper that looked into the future of manufacturing and how companies would be impacted and need to adapt to emerging technologies. The concept was further refined in 2010 when the German Cabinet laid out their High-Tech Strategy 2020 plan, which defined five priorities that would be used to direct billions of dollars in government investment.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-2"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;How are digital transformation and Industry 4.0 related?&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-2" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                &lt;a href="https://www.influxdata.com/customers/iot-data-platform/"&gt;Digital transformation&lt;/a&gt; and Industry 4.0 are often used interchangeably, but it's crucial to understand their unique characteristics and how they relate to each other. While both concepts involve adopting advanced technologies to improve business operations, Industry 4.0 specifically focuses on the manufacturing sector, whereas digital transformation encompasses a broader range of industries and applications. Digital transformation is the process of integrating digital technologies across a business's customer service, marketing, supply chain management, and internal operations. The goal of digital transformation is to optimize processes, enhance efficiency, and create new business models that drive growth and competitiveness. This transformation is achieved through the implementation of technologies such as cloud computing, data analytics, artificial intelligence, and IoT. Industry 4.0, on the other hand, is a subset of digital transformation that targets the manufacturing industry. It is often referred to as the Fourth Industrial Revolution, representing a new era of intelligent, connected, and autonomous manufacturing systems. Industry 4.0 leverages technologies like IoT, advanced analytics, robotics, and additive manufacturing to optimize production processes, improve product quality, and increase overall efficiency. Despite their differences, digital transformation and Industry 4.0 are closely related, as both aim to drive innovation and create value through the adoption of advanced technologies. In fact, Industry 4.0 can be considered a specific application of digital transformation within the manufacturing sector. As companies embark on their digital transformation journeys, embracing Industry 4.0 principles can provide a solid foundation for growth and success in manufacturing.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-3"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;What is IT/OT convergence?&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-3" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Businesses have traditionally been siloed between information technology (IT) and operational technology (OT). But in recent years, these worlds have started to merge in a process commonly referred to as IT/OT convergence. Better collaboration between IT and OT can add tremendous value to any business by providing greater visibility across the organization, improved data analysis capabilities, fewer manual processes, and a faster response to customer needs. By leveraging both sets of technologies, businesses can gain unprecedented control over their operations.&lt;br /&gt;&lt;br /&gt;
                IT/OT convergence involves integrating hardware, software, and networks traditionally used in OT with those used in IT. This integration synchronizes the two disconnected systems, allowing them to exchange data and information. For example, an IT system can enable operators to access real-time operational data from OT systems, such as sensors and actuators.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-4"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;What is Industry 5.0?&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-4" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Industry 5.0 is a term used to describe the next phase of the Fourth Industrial Revolution, characterized by the integration of advanced technologies such as AI, IoT, and &lt;a href="https://www.ibm.com/think/topics/quantum-computing"&gt;quantum computing&lt;/a&gt; into manufacturing and other industries. There isn't a universally accepted definition of Industry 5.0, and the concept is still evolving. However, it's generally seen as a continuation of the trend towards increased automation and data exchange that began with Industry 4.0, with a focus on even more advanced technologies and their integration across sectors.&lt;br /&gt;&lt;br /&gt;
                One key difference between Industry 4.0 and Industry 5.0 is the focus on sustainability and social responsibility. Industry 5.0 is expected to involve the development of technologies that are more environmentally friendly and that promote social equity. This could include using renewable energy sources and developing technologies to reduce waste and pollution.&lt;br /&gt;&lt;br /&gt;
                Overall, the main difference between Industry 4.0 and Industry 5.0 is the level of technological advancement. Industry 5.0 involves the integration of even more advanced technologies, such as quantum computing, which have the potential to significantly impact and transform various industries.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;
&lt;/div&gt;
</description>
      <pubDate>Fri, 13 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/industry-4-0-update-2026/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/industry-4-0-update-2026/</guid>
      <category>IoT</category>
      <category>Developer</category>
      <author>Company (InfluxData)</author>
    </item>
    <item>
      <title>When Your Plant Talks Back: Conversational AI with InfluxDB 3</title>
      <description>&lt;p&gt;No one wants to stare at a plant and guess if it needs water. It’s much easier if the plant can say, “I’m thirsty.” A few years ago, we built &lt;a href="https://www.influxdata.com/blog/prototyping-iot-with-influxdb-cloud-2-0/?utm_source=website&amp;amp;utm_medium=plant_buddy_influxdb_3&amp;amp;utm_content=blog"&gt;Plant Buddy using InfluxDB Cloud 2.0&lt;/a&gt;. The linked article is still a great guide for cloud-first IoT prototyping as it shows how quickly you can connect devices, store time series data, and build dashboards in the cloud with the previous version of InfluxDB.&lt;/p&gt;

&lt;p&gt;But this time, the goal was different. Instead of sending soil moisture data to the cloud, the entire system runs locally on the latest InfluxDB 3 Core, much like a modern industrial edge setup, with an LLM providing a natural conversational interface.&lt;/p&gt;

&lt;h2 id="the-architecture-the-factory-at-home"&gt;The architecture: the “factory” at home&lt;/h2&gt;

&lt;p&gt;In real factories, raw PLC data is captured at the edge, often using MQTT and a local database. That same architecture now powers Plant Buddy v3 with the following setup.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Edge Device (ESP32 / Arduino)&lt;/strong&gt;: Works like a small PLC. It reads soil moisture and publishes the plant’s state to the network without knowing anything about the database.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Soil Moisture Sensor (Analog)&lt;/strong&gt;: Outputs an analog signal based on soil moisture. The microcontroller converts it to digital using its built-in ADC.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Message Bus (Mosquitto MQTT)&lt;/strong&gt;: Handles publish/subscribe communication. The Arduino publishes data to the broker (running locally), and Telegraf subscribes to forward data to the database.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Database (InfluxDB 3 Core)&lt;/strong&gt;: Runs locally in Docker as a high-performance time series database storing all sensor readings.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;User Interface (Claude + MCP)&lt;/strong&gt;: Enables natural language queries. Instead of writing SQL, questions about plant health can be asked conversationally.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1ZSbIHFEYUbPMC1AdqrrST/ea99e0486c676472a7f68eec9b8b7d7e/Screenshot_2026-02-19_at_9.59.35â__AM.png" alt="Plant Buddy architecture" /&gt;&lt;/p&gt;

&lt;h4 id="collecting--sending-data-from-the-edge"&gt;1. Collecting &amp;amp; Sending Data from the Edge&lt;/h4&gt;

&lt;p&gt;To make this scalable, I treat the sensor data like an industrial payload. It’s not just a number; it’s a structured JSON object containing the ID, raw metrics, and a pre-calculated status flag.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Arduino Payload&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-xml"&gt;{ 
"id": "pothos_01",    // Device identifier (like a PLC tag) 
"raw": 715,  		// Raw ADC value (0-1023) 
"pct": 19,  		// Calculated moisture percentage 
"stat": "DRY_ALERT"   // Pre-computed status 
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Why compute status at the edge?&lt;/strong&gt; In factories, PLCs make local decisions (e.g., stop motor, trigger alarm). Here, the Arduino determines “DRY_ALERT” so the database can trigger alerts without recalculating thresholds.&lt;/p&gt;
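&lt;p&gt;The edge logic can be sketched in a few lines. Below is a minimal Python simulation of the microcontroller’s behavior, handy for testing the rest of the pipeline without hardware. The calibration constants and the alert threshold are assumptions for illustration, not values from the project:&lt;/p&gt;

```python
import json

# Hypothetical calibration values; a real sensor needs its own dry/wet readings
ADC_DRY = 800        # raw ADC reading in dry air
ADC_WET = 350        # raw ADC reading in saturated soil
DRY_ALERT_PCT = 25   # below this percentage the plant is flagged as dry

def build_payload(device_id: str, raw: int) -> str:
    """Mirror the edge logic: map a raw ADC value to a percentage and a status flag."""
    pct = round(100 * (ADC_DRY - raw) / (ADC_DRY - ADC_WET))
    pct = max(0, min(100, pct))  # clamp to the 0-100 range
    stat = "DRY_ALERT" if pct < DRY_ALERT_PCT else "OK"
    return json.dumps({"id": device_id, "raw": raw, "pct": pct, "stat": stat})

print(build_payload("pothos_01", 715))
# {"id": "pothos_01", "raw": 715, "pct": 19, "stat": "DRY_ALERT"}
```

On the real device, the same computation runs on the microcontroller and the resulting JSON string is published to the MQTT topic.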

&lt;h4 id="the-ingest-pipeline"&gt;2. The Ingest Pipeline&lt;/h4&gt;

&lt;p&gt;Below are two approaches to send plant data to InfluxDB. In this project, I went with MQTT and Telegraf, which are more standard for an industrial setup.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5McEkD3dooB2Ii4nfJQ6D1/2d370c54ba97a41a460a66ec05c07af1/Screenshot_2026-02-19_at_10.02.34â__AM.png" alt="Plant Buddy Ingest Pipeline" /&gt;&lt;/p&gt;

&lt;p&gt;Telegraf acts as the gateway, subscribing to the MQTT broker and translating the JSON into InfluxDB Line Protocol. This configuration is identical to what you’d see in a manufacturing plant monitoring vibration sensors.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-toml"&gt;# telegraf.conf - Complete Working Example
[agent]
  interval = "10s"
  flush_interval = "10s"

[[inputs.mqtt_consumer]]
  servers = ["tcp://127.0.0.1:1883"]
  topics = ["home/livingroom/plant/moisture"]
  data_format = "json"

  # Tags become indexed dimensions (fast filtering)
  tag_keys = ["id", "stat"]

  # Fields become measured values
  json_string_fields = ["raw", "pct"]

[[outputs.influxdb_v2]]
  urls = ["http://127.0.0.1:8181"]
  token = "$INFLUX_TOKEN"
  organization = "my-org"
  bucket = "plant_data"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If Telegraf runs in Docker, use &lt;code class="language-markup"&gt;host.docker.internal:8181&lt;/code&gt; to reach the database.&lt;/p&gt;
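&lt;p&gt;To make the translation concrete, here is a rough Python sketch of what Telegraf does with each payload. The measurement name and the exact tag/field split are assumptions based on the config above (the mqtt_consumer input names the measurement after the plugin unless &lt;code class="language-markup"&gt;name_override&lt;/code&gt; is set):&lt;/p&gt;

```python
import json

def to_line_protocol(measurement: str, payload: str, ts_ns: int) -> str:
    """Turn one JSON payload into one InfluxDB line protocol record."""
    doc = json.loads(payload)
    tags = ",".join(f"{k}={doc[k]}" for k in ("id", "stat"))    # indexed dimensions
    fields = ",".join(f"{k}={doc[k]}" for k in ("raw", "pct"))  # measured values
    return f"{measurement},{tags} {fields} {ts_ns}"

line = to_line_protocol(
    "mqtt_consumer",
    '{"id": "pothos_01", "raw": 715, "pct": 19, "stat": "DRY_ALERT"}',
    1760000000000000000,
)
print(line)
# mqtt_consumer,id=pothos_01,stat=DRY_ALERT raw=715,pct=19 1760000000000000000
```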

&lt;h4 id="time-series-database-influxdb-3-docker-container"&gt;3. Time Series Database: InfluxDB 3 (Docker Container)&lt;/h4&gt;

&lt;p&gt;InfluxDB 3 Core runs locally in Docker as the time series database. It stores soil moisture readings and enables real-time analytics, all without depending on external cloud connectivity.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Create persistent storage 
mkdir -p ~/influxdb3-data

# Run InfluxDB 3 Core with proper configuration
docker run --rm -p 8181:8181 \
  -v $PWD/data:/var/lib/influxdb3/data \
  -v $PWD/plugins:/var/lib/influxdb3/plugins \
  influxdb:3-core influxdb3 serve \
    --node-id=my-node-0 \
    --object-store=file \
    --data-dir=/var/lib/influxdb3/data \
    --plugin-dir=/var/lib/influxdb3/plugins&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="the-ai-interface-influxdb-mcp--claude"&gt;4. The “AI” Interface: InfluxDB MCP &amp;amp; Claude&lt;/h4&gt;

&lt;p&gt;Instead of writing SQL queries or building dashboards, the system connects an LLM to InfluxDB through the Model Context Protocol (MCP). I’ve written a separate blog post that covers connecting InfluxDB 3 to MCP in detail.&lt;/p&gt;

&lt;p&gt;Now the question isn’t:
&lt;strong&gt;“What’s the SQL query for average soil moisture over the last 24 hours?”&lt;/strong&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;It becomes:
&lt;strong&gt;“Has the plant been dry today?”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The LLM generates the correct SQL under the hood. If needed, the generated query can be inspected. This makes time series analytics accessible through conversation.&lt;/p&gt;
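&lt;p&gt;For example, “Has the plant been dry today?” might translate into a query along these lines. The table name comes from the Telegraf config above and should be treated as an assumption, since your setup may override the measurement name:&lt;/p&gt;

```sql
-- Count how many readings were flagged dry in the last 24 hours
SELECT count(*) AS dry_readings
FROM "mqtt_consumer"
WHERE "stat" = 'DRY_ALERT'
  AND time >= now() - INTERVAL '1 day';
```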

&lt;p&gt;&lt;code class="language-markup"&gt;claude_desktop_config.json&lt;/code&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;{
  "mcpServers": {
    "influxdb": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "--interactive",
        "--add-host=host.docker.internal:host-gateway",
        "--env",
        "INFLUX_DB_PRODUCT_TYPE",
        "--env",
        "INFLUX_DB_INSTANCE_URL",
        "--env",
        "INFLUX_DB_TOKEN",
        "influxdata/influxdb3-mcp-server"
      ],
      "env": {
        "INFLUX_DB_PRODUCT_TYPE": "core",
        "INFLUX_DB_INSTANCE_URL": "http://host.docker.internal:8181",
        "INFLUX_DB_TOKEN": "YOUR_RESOURCE_TOKEN"
      }
    }
  }
}&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="the-result"&gt;The Result:&lt;/h4&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5ic88rDutPS2omn2Z6tD1k/908b17ccb43b429d80c7dfa134de9dd2/Screenshot_2026-02-19_at_10.08.18â__AM.png" alt="Plant Buddy result" /&gt;&lt;/p&gt;

&lt;h2 id="whats-next"&gt;What’s next&lt;/h2&gt;

&lt;p&gt;In the next post, we will upgrade this Plant Buddy project to do more than passively monitor. New features will include:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;A water pump, motor, and small display&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Automatic watering&lt;/strong&gt; when the plant enters &lt;code class="language-markup"&gt;DRY_ALERT&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;An extended system with &lt;strong&gt;light and temperature sensors&lt;/strong&gt; to determine how placement of the potted plant affects its health, especially during trips when no one is home.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Try building one yourself with &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=plant_buddy_influxdb_3&amp;amp;utm_content=blog"&gt;InfluxDB 3&lt;/a&gt;! We would love to hear your questions and comments in our &lt;a href="https://community.influxdata.com"&gt;community forum&lt;/a&gt;, &lt;a href="https://join.slack.com/t/influxcommunity/shared_invite/zt-3hevuqtxs-3d1sSfGbbQgMw2Fj66rZsA"&gt;Slack&lt;/a&gt;, or Discord.&lt;/p&gt;
</description>
      <pubDate>Tue, 10 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/plant-buddy-influxdb-3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/plant-buddy-influxdb-3/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>From Reactive to Predictive: Preserving BESS Uptime at Scale</title>
      <description>&lt;p&gt;Battery Energy Storage Systems (BESS) operate as revenue-generating grid assets that capture surplus electricity, deploy power during demand spikes, and support frequency control. By shifting energy across time, they stabilize grid conditions, enable renewable integration, and execute market dispatch commitments. When systems respond as designed, stored capacity becomes a flexible, monetizable supply.&lt;/p&gt;

&lt;p&gt;But BESS performance depends on precision and availability. When deviations in temperature, voltage, or current go undetected, instability can propagate across battery modules and supporting systems. Dispatch commitments fail, contractual penalties follow, and safety exposure increases.&lt;/p&gt;

&lt;p&gt;In large-scale deployments, uptime becomes a financial and operational control variable rather than a maintenance metric. Preserving availability requires more than reacting to alarms after limits are breached. As fleets expand and system complexity grows, reactive monitoring reaches its ceiling.&lt;/p&gt;

&lt;h2 id="what-is-a-bess"&gt;What is a BESS?&lt;/h2&gt;

&lt;p&gt;A Battery Energy Storage System (BESS) is a grid-connected battery infrastructure that stores electricity when supply exceeds demand and deploys it when demand rises. By shifting energy across time, these systems help balance generation and consumption while supporting market commitments and frequency control. Their value lies not only in storing energy, but in responding precisely when grid conditions change.&lt;/p&gt;

&lt;p&gt;Electrical supply and demand must remain balanced at all times. When surplus power enters the grid, a BESS absorbs that energy and holds it until demand increases, at which point stored electricity is released back into the network. This coordinated charge-and-discharge cycle enables controlled energy movement that stabilizes supply, supports renewable energy sources, and maintains consistent grid performance.&lt;/p&gt;

&lt;p&gt;Storage systems adjust output within seconds to correct short-term imbalances. Rapid response smooths fluctuations from wind and solar generation and helps maintain grid stability. As more renewable energy comes online and demand patterns shift, reliance on storage systems increases. In this environment, availability and response speed directly influence reliability and financial performance.&lt;/p&gt;

&lt;h4 id="availability-as-an-operational-variable"&gt;Availability as an Operational Variable&lt;/h4&gt;

&lt;p&gt;The value of a BESS depends on its availability. When a system goes offline, dispatch capacity contracts immediately, and stored energy cannot be delivered as planned. Market commitments may go unmet, and replacement capacity must be sourced elsewhere, resulting in lost revenue, potential penalties, and increased operational expenses.&lt;/p&gt;

&lt;p&gt;In large-scale deployments, availability becomes more complex to manage. Thousands of battery modules operate simultaneously, each producing continuous temperature, voltage, and current data. These modules function as a coordinated system, in which small issues in one area can influence overall performance. As fleet size grows, operational oversight becomes more demanding.&lt;/p&gt;

&lt;p&gt;Uptime is more than a maintenance metric. It directly affects revenue performance, capacity payments, and grid commitments. Even small disruptions can reduce dispatch capability before a full outage occurs. Preserving availability requires visibility that scales with system complexity.&lt;/p&gt;

&lt;h2 id="the-limits-of-reactive-monitoring"&gt;The limits of reactive monitoring&lt;/h2&gt;

&lt;p&gt;Operational failures in BESS environments rarely begin as sudden outages. They often start as gradual shifts in temperature, voltage, or current that move systems toward instability while remaining within acceptable limits. These early changes can appear normal when viewed in isolation.&lt;/p&gt;

&lt;p&gt;Most monitoring systems rely on predefined thresholds to detect abnormal conditions. An alert is triggered only after a value crosses a set boundary, confirming that a limit has already been breached. By the time an alarm activates, the underlying condition may have been developing for hours or days. The opportunity for intervention narrows.&lt;/p&gt;

&lt;p&gt;Telemetry is often distributed across battery management systems, inverter controls, and environmental monitoring platforms, creating &lt;a href="https://www.influxdata.com/blog/breaking-data-silos-influxdb-3/"&gt;data silos&lt;/a&gt; across operational layers. Each system captures a portion of operational behavior, but signals are reviewed separately and correlated manually. This separation makes it difficult to see how conditions evolve across modules. Engineers spend valuable time assembling context rather than acting on it.&lt;/p&gt;

&lt;p&gt;As deviations compound, risk increases. Capacity can drop offline, dispatch commitments may fail, and safety exposure rises. Reactive monitoring preserves awareness of failure, but does not preserve control.&lt;/p&gt;

&lt;h4 id="thermal-runway"&gt;Thermal Runway&lt;/h4&gt;

&lt;p&gt;Thermal runaway is one example of how small battery deviations can escalate when not addressed early. A gradual rise in temperature can accelerate internal reactions and generate additional heat. Without timely correction, this cycle can intensify and spread to neighboring cells.&lt;/p&gt;

&lt;p&gt;What begins as minor drift can trigger protective shutdown mechanisms designed to prevent damage. While necessary for safety, shutdown interrupts dispatch commitments and reduces available capacity. Lost availability affects revenue performance and may introduce regulatory and safety exposure. The longer instability goes undetected, the greater the operational impact.&lt;/p&gt;

&lt;h2 id="predictive-monitoring-extends-control"&gt;Predictive monitoring extends control&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/glossary/predictive-maintenance/"&gt;Predictive monitoring&lt;/a&gt; evaluates how operational signals change over time rather than reacting only after limits are breached. Temperature, voltage, and current readings are analyzed as evolving trends across battery modules, allowing engineers to see how conditions develop instead of viewing each signal in isolation. The value lies not only in collecting data, but in understanding how system behavior shifts as signals change together.&lt;/p&gt;

&lt;p&gt;In large BESS deployments, thousands of modules generate high-frequency telemetry that reflects thermal and electrical conditions. When these signals are reviewed independently or only against static thresholds, gradual drift can appear routine. Evaluated within a shared time context, emerging patterns become visible across modules and clarify where intervention is required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/what-is-time-series-data/"&gt;Time series data&lt;/a&gt; reflects current operating conditions, while historical data preserves baseline behavior and long-term performance trends. Comparing live readings against historical baselines distinguishes normal variation from early signs of degradation. By combining immediate visibility with long-term context, operators can intervene before instability propagates.&lt;/p&gt;

&lt;h4 id="real-time-analysis-with-influxdb"&gt;Real-time Analysis with InfluxDB&lt;/h4&gt;

&lt;p&gt;InfluxDB is purpose-built for time-series workloads that require high ingestion rates, scalable retention, and fast analytical queries. It captures continuous telemetry from distributed battery systems and organizes it using &lt;a href="https://www.influxdata.com/glossary/database-indexing/"&gt;time-based indexing&lt;/a&gt; and &lt;a href="https://www.influxdata.com/glossary/column-database/"&gt;columnar storage&lt;/a&gt; structures optimized for time-stamped data. Its value lies not only in storing operational signals, but in preserving query efficiency as data volume increases.&lt;/p&gt;

&lt;p&gt;As BESS fleets expand, ingestion and query demand rise simultaneously. Temperature, voltage, and current streams must be written at scale while remaining immediately available for investigation. InfluxDB applies compression and retention policies that balance long-term historical context with storage growth. This design maintains visibility at scale without slowing down dashboards or investigative workflows.&lt;/p&gt;

&lt;p&gt;Real-time analysis and historical comparison occur within the same execution path. Engineers can evaluate gradual drift and investigate emerging instability without exporting data to separate systems. Downsampling strategies preserve long-term trend visibility while keeping high-resolution data available for recent events. This unified architecture reduces operational overhead and preserves intervention windows under load.&lt;/p&gt;

&lt;h2 id="predictive-monitoring-in-action"&gt;Predictive monitoring in action&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/siemens-energy-standardizes-predictive-maintenance-influxdb/"&gt;Siemens Energy&lt;/a&gt; uses InfluxDB to standardize predictive maintenance across distributed energy and battery storage operations. High-frequency sensor telemetry from production systems and battery deployments is ingested into a unified time-series platform that preserves both real-time visibility and long-term historical context. Its value lies not only in collecting large volumes of operational data, but in maintaining consistent access as systems expand across sites and regions.&lt;/p&gt;

&lt;p&gt;Across more than 70 global locations and approximately 23,000 battery modules, continuous temperature, voltage, and performance signals are captured and stored within the same environment. Time-based indexing and scalable retention policies ensure that high-resolution data remains accessible for immediate analysis while preserving long-term degradation trends. This coordinated data architecture enables engineers to evaluate system behavior across modules rather than reviewing signals in isolation.&lt;/p&gt;

&lt;h2 id="the-verdict"&gt;The verdict&lt;/h2&gt;

&lt;p&gt;BESS assets operate within narrow operational and financial tolerances where availability directly influences revenue, safety, and grid reliability. Reactive monitoring confirms when limits are crossed, but predictive monitoring preserves visibility into how conditions evolve before capacity is affected. As fleets expand and telemetry volume increases, infrastructure must ingest high-frequency signals, retain historical context, and return results without latency. When time-series architecture aligns with the structure of operational data, predictive maintenance scales with system complexity rather than breaking under it, preserving uptime across large BESS environments.&lt;/p&gt;

&lt;p&gt;Ready to move from reactive monitoring to predictive control? Get started with a free download of InfluxDB 3 &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=preserving_bess_uptime&amp;amp;utm_content=blog"&gt;Core OSS&lt;/a&gt; or a trial of InfluxDB 3 &lt;a href="https://www.influxdata.com/products/influxdb3-enterprise/?utm_source=website&amp;amp;utm_medium=preserving_bess_uptime&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Thu, 05 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/preserving-bess-uptime/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/preserving-bess-uptime/</guid>
      <category>Developer</category>
      <author>Allyson Boate (InfluxData)</author>
    </item>
    <item>
      <title>A Practical Guide to SCADA Security</title>
      <description>&lt;p&gt;Critical infrastructure is under siege. The systems that control our power grids, water treatment plants, and oil pipelines weren’t designed for a connected world. This post covers what security measures teams need to understand and how &lt;a href="https://www.influxdata.com/what-is-time-series-data/?utm_source=website&amp;amp;utm_medium=scada_security_guide&amp;amp;utm_content=blog"&gt;time series&lt;/a&gt; monitoring can help turn SCADA’s weaknesses into a security advantage.&lt;/p&gt;

&lt;h2 id="the-stakes-for-scada-security-have-never-been-higher"&gt;The stakes for SCADA security have never been higher&lt;/h2&gt;

&lt;p&gt;Somewhere right now, a programmable logic controller is opening a valve, adjusting a turbine’s speed, or regulating the chlorine levels in a city’s drinking water. These actions are orchestrated by Supervisory Control and Data Acquisition (SCADA) systems. They run power grids, water treatment facilities, oil and gas pipelines, manufacturing plants, and transportation networks.&lt;/p&gt;

&lt;p&gt;For decades, these systems operated in relative obscurity. They sat on isolated networks, spoke proprietary protocols, and were managed by operational technology (OT) engineers who rarely crossed paths with the IT security team.&lt;/p&gt;

&lt;p&gt;The convergence of IT and OT networks, driven by the demand for remote access, operational analytics, and cost efficiency, has dragged &lt;a href="https://www.influxdata.com/glossary/SCADA-supervisory-control-and-data-acquisition/"&gt;SCADA&lt;/a&gt; systems into a threat landscape they were never built to survive. The results have been dramatic. In 2015 and 2016, coordinated cyberattacks took down portions of Ukraine’s power grid, leaving hundreds of thousands without electricity. In 2021, the Colonial Pipeline ransomware attack shut down fuel distribution across the U.S. East Coast, triggering panic buying and fuel shortages.&lt;/p&gt;

&lt;p&gt;These aren’t theoretical risks. They’re documented events, and they only represent the incidents that became public. The reality is that SCADA systems are being probed, scanned, and targeted every day, and many operators lack the visibility to even know it’s happening.&lt;/p&gt;

&lt;h2 id="scada-security-challenges"&gt;SCADA security challenges&lt;/h2&gt;

&lt;p&gt;Securing SCADA and industrial control systems is fundamentally different from securing a corporate IT environment. The assumptions, priorities, and constraints are almost inverted.&lt;/p&gt;

&lt;h4 id="availability-over-confidentiality"&gt;Availability Over Confidentiality&lt;/h4&gt;

&lt;p&gt;In IT security, the classic triad is confidentiality, integrity, and availability, usually prioritized in roughly that order. In OT, the priorities flip. A power plant cannot tolerate downtime. A water treatment facility cannot go offline for a patch cycle. The consequences of a disrupted industrial process aren’t a lost spreadsheet; they’re potential physical harm, environmental damage, or loss of life. This means that many standard IT security practices, such as aggressive patching, frequent reboots, and network scanning, can be dangerous or even impossible in OT environments.&lt;/p&gt;

&lt;h4 id="legacy-systems-and-long-lifecycles"&gt;Legacy Systems and Long Lifecycles&lt;/h4&gt;

&lt;p&gt;SCADA components often have operational lifecycles of 20 to 30 years. It’s not uncommon to find PLCs running firmware from the early 2000s, human-machine interfaces (HMIs) on Windows XP, or historians on unsupported database platforms. These systems were engineered for reliability and determinism, not security. Replacing them is expensive and operationally risky, so they persist despite the vulnerabilities.&lt;/p&gt;

&lt;h4 id="protocols-without-security"&gt;Protocols Without Security&lt;/h4&gt;

&lt;p&gt;Modbus, DNP3, and &lt;a href="https://www.influxdata.com/glossary/opc-ua/"&gt;OPC&lt;/a&gt; Classic are the workhorses of industrial communication, but they were designed in an era when network isolation was considered sufficient protection. Modbus, for instance, has no authentication, no encryption, and no way to verify the identity of a device sending commands. These protocols are deeply embedded in operational infrastructure and cannot be easily replaced.&lt;/p&gt;

&lt;h4 id="the-air-gap-myth"&gt;The Air Gap Myth&lt;/h4&gt;

&lt;p&gt;Many organizations still believe their OT networks are air-gapped. In practice, true air gaps are rare. Remote access solutions, vendor support connections, shared file servers, USB drives, and even cellular modems on RTUs create pathways between networks.&lt;/p&gt;

&lt;h2 id="key-strategies-for-scada-security"&gt;Key strategies for SCADA security&lt;/h2&gt;

&lt;p&gt;Effective SCADA security is layered, OT-aware, and built around the operational realities of industrial environments. There is no single solution, but a combination of strategies dramatically reduces risk.&lt;/p&gt;

&lt;h4 id="network-segmentation"&gt;Network Segmentation&lt;/h4&gt;

&lt;p&gt;The foundation of SCADA security is proper network architecture. At a minimum, there should be a demilitarized zone (DMZ) between the corporate IT network and the OT network, with no direct traffic flowing between them. Within the OT network, further segmentation between supervisory systems, control systems, and field devices helps limit lateral movement.&lt;/p&gt;

&lt;h4 id="asset-inventory-and-visibility"&gt;Asset Inventory and Visibility&lt;/h4&gt;

&lt;p&gt;You cannot protect what you don’t know exists. Many organizations lack a complete, accurate inventory of their OT assets, including &lt;a href="https://www.influxdata.com/resources/overcoming-iiot-data-challenges-data-injection-from-plcs-to-influxdb/"&gt;PLCs&lt;/a&gt;, RTUs, HMIs, &lt;a href="https://www.influxdata.com/glossary/data-historian/"&gt;historians&lt;/a&gt;, network switches, and communication links. Passive network discovery tools designed for OT environments can build and maintain this inventory without disrupting operations.&lt;/p&gt;

&lt;h4 id="access-control-and-authentication"&gt;Access Control and Authentication&lt;/h4&gt;

&lt;p&gt;Every access point into the OT environment should require strong authentication, ideally multi-factor. Least-privilege principles should govern who can access what, and remote access should be tightly controlled, monitored, and time-limited. Shared accounts should be eliminated wherever possible.&lt;/p&gt;

&lt;h4 id="ot-aware-patch-management"&gt;OT-Aware Patch Management&lt;/h4&gt;

&lt;p&gt;Patching in OT requires a risk-based approach. Not every vulnerability needs an immediate patch, and not every system can be patched without operational impact. Organizations need a process that evaluates vulnerability severity in the context of their specific environment, tests patches in a staging environment where possible, and schedules maintenance windows that align with operational needs.&lt;/p&gt;

&lt;h4 id="deep-packet-inspection-for-industrial-protocols"&gt;Deep Packet Inspection for Industrial Protocols&lt;/h4&gt;

&lt;p&gt;Traditional firewalls see Modbus traffic as TCP on port 502 and nothing more. OT-aware firewalls and intrusion detection systems can parse the actual protocol content, inspecting function codes and register addresses to enforce policy.&lt;/p&gt;

&lt;h4 id="incident-response-planning"&gt;Incident Response Planning&lt;/h4&gt;

&lt;p&gt;OT incident response is not IT incident response; the playbook must account for the physical consequences of containment actions. Isolating a network segment might stop an attacker, but could also trip a safety system or halt a process. Response plans need to be developed collaboratively between security teams, OT engineers, and plant operations.&lt;/p&gt;

&lt;h2 id="continuous-monitoring-for-scada-security"&gt;Continuous monitoring for SCADA security&lt;/h2&gt;

&lt;p&gt;All of the strategies above are essential, but there’s a fundamental truth about SCADA security that defenders can exploit: &lt;strong&gt;industrial processes are inherently predictable&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A temperature sensor in a chemical reactor reports a value every second. A PLC cycles through its logic on a fixed schedule. A pump runs at a consistent speed. Network traffic between a SCADA server and its RTUs follows regular, repeatable patterns. This predictability means that anomalies like equipment failure, operator error, or a cyberattack create detectable deviations from established baselines.&lt;/p&gt;

&lt;p&gt;This is where time series data becomes a security team’s most powerful tool.&lt;/p&gt;

&lt;h4 id="baselining-normal-behavior"&gt;Baselining Normal Behavior&lt;/h4&gt;

&lt;p&gt;By collecting and storing high-resolution time series data from sensors, PLCs, network flows, and protocol logs, you can build a detailed behavioral profile of “normal” for every asset and process in your environment. What does normal Modbus traffic look like between the SCADA server and PLC-07? What’s the typical temperature range for reactor vessel 3 during a batch run? How often does the engineering workstation initiate write commands?&lt;/p&gt;

&lt;p&gt;With enough historical data, these baselines become remarkably precise, and deviations become immediately apparent.&lt;/p&gt;
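&lt;p&gt;As a rough illustration of this idea (not tied to any particular database), the following Python sketch flags readings that drift outside a rolling statistical baseline. The sensor values, window size, and threshold are made up for the example:&lt;/p&gt;

```python
# Sketch: flag sensor readings that deviate from a rolling baseline.
# Illustrative only -- the readings and thresholds are hypothetical.
from statistics import mean, stdev

def flag_anomalies(readings, window=10, sigma=3.0):
    """Return indices whose value falls more than `sigma` standard
    deviations outside the baseline of the previous `window` points."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd > 0 and abs(readings[i] - mu) > sigma * sd:
            anomalies.append(i)
    return anomalies

# A steady reactor temperature with one sudden excursion
temps = [72.0, 72.1, 71.9, 72.0, 72.2, 71.8, 72.1, 72.0, 71.9, 72.1,
         72.0, 72.2, 85.0, 72.1]
print(flag_anomalies(temps))  # -> [12]: the spike stands out immediately
```

&lt;p&gt;In production, the same logic would run continuously against high-resolution telemetry, with baselines learned per asset rather than hard-coded.&lt;/p&gt;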

&lt;h4 id="detecting-process-manipulation"&gt;Detecting Process Manipulation&lt;/h4&gt;

&lt;p&gt;An attacker who gains access to a SCADA system may try to subtly alter process parameters, such as changing a setpoint, opening a valve, or adjusting a chemical dosing rate. If you’re monitoring time series data from those processes, you can detect changes that fall outside historical norms.&lt;/p&gt;

&lt;h4 id="spotting-anomalous-network-behavior"&gt;Spotting Anomalous Network Behavior&lt;/h4&gt;

&lt;p&gt;Industrial network traffic is highly structured. By logging protocol-level metadata, you can detect unusual patterns. A “write multiple registers” command from an IP address that has only ever issued read commands is suspicious. A burst of DNP3 unsolicited responses at an unusual time deserves investigation. These signals are only visible if you’re capturing and analyzing this data.&lt;/p&gt;
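&lt;p&gt;A minimal sketch of that kind of detection in Python, assuming a baseline period followed by live monitoring; the function codes follow Modbus conventions, but the IPs, timestamps, and log shape are hypothetical:&lt;/p&gt;

```python
# Sketch: flag write commands from sources that only read during a
# baseline period. Illustrative log data, not a real capture.
WRITE_CODES = {6, 16}  # write single register / write multiple registers

def baseline_writers(events):
    """Collect sources observed writing during the baseline window."""
    return {src for _, src, code in events if code in WRITE_CODES}

def detect_new_writers(events, known_writers):
    """Alert on any write command from a source with no write history."""
    return [(ts, src, code) for ts, src, code in events
            if code in WRITE_CODES and src not in known_writers]

baseline = [
    ("day 1", "10.0.5.21", 16),  # engineering workstation routinely writes
    ("day 1", "10.0.5.20", 3),   # SCADA poller only ever reads
]
live = [
    ("01:02", "10.0.5.20", 3),   # normal polling
    ("01:47", "10.0.5.20", 16),  # the poller suddenly writes -- suspicious
    ("01:50", "10.0.5.21", 6),   # known writer, not flagged
]
known = baseline_writers(baseline)
print(detect_new_writers(live, known))  # [('01:47', '10.0.5.20', 16)]
```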

&lt;h4 id="correlating-across-it-and-ot"&gt;Correlating Across IT and OT&lt;/h4&gt;

&lt;p&gt;The most sophisticated attacks traverse the IT/OT boundary. Detecting them requires correlating events across both domains on a unified timeline. For example, a failed VPN login attempt at 1:47 AM, followed by a successful login at 1:52 AM, followed by an unusual engineering workstation session at 1:55 AM, followed by a PLC configuration change at 2:03 AM. While each of these events in isolation might not trigger an alert, together, on a single timeline, the pattern is unmistakable. Time series data makes this correlation possible.&lt;/p&gt;

&lt;h2 id="why-a-time-series-database-beats-a-siem-or-relational-database-for-ot-security-data"&gt;Why a time series database beats a SIEM or relational database for OT security data&lt;/h2&gt;

&lt;p&gt;If you’re convinced that this kind of monitoring is critical for SCADA security, the next question is where to store and analyze all this data. The three common options are a traditional relational database, a Security Information and Event Management (SIEM) platform, or a time series database like InfluxDB. For OT security data, the &lt;a href="https://www.influxdata.com/time-series-database/?utm_source=website&amp;amp;utm_medium=scada_security_guide&amp;amp;utm_content=blog"&gt;time series database&lt;/a&gt; wins decisively. Here’s why.&lt;/p&gt;

&lt;h4 id="data-volume"&gt;Data Volume&lt;/h4&gt;

&lt;p&gt;A single SCADA environment can generate enormous volumes of data. Consider a modest facility with 500 sensors reporting every second, 20 PLCs, a network tap capturing protocol metadata, and authentication logs from access points. That’s easily millions of data points per day, and larger environments generate orders of magnitude more.&lt;/p&gt;

&lt;p&gt;Relational databases like PostgreSQL or MySQL were designed for transactional workloads: inserts, updates, deletes, and joins across normalized tables. They handle time series data poorly at scale. Write throughput degrades as tables grow, and time-based queries over millions of rows become expensive without careful indexing and partitioning, which creates operational complexity.&lt;/p&gt;

&lt;p&gt;SIEMs are built for log ingestion, but they’re optimized for text-based event logs, not numerical telemetry. Ingesting raw sensor data at one-second intervals into a SIEM is technically possible but economically painful: SIEM licensing is typically based on data volume, and the cost of ingesting OT data can be prohibitive. Many organizations end up sampling or aggregating data before it reaches the SIEM, losing the granularity needed for effective &lt;a href="https://www.influxdata.com/blog/IOT-anomaly-detection-primer-influxdb/"&gt;anomaly detection&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;InfluxDB and other time series databases are built for this workload. They use storage engines optimized for high-volume writes of timestamped data and compressed, columnar storage that keeps disk usage manageable even at scale. InfluxDB can handle hundreds of thousands of writes per second on modest hardware.&lt;/p&gt;

&lt;h4 id="query-performance"&gt;Query Performance&lt;/h4&gt;

&lt;p&gt;OT security analysis is fundamentally time-focused. You need to answer questions like: “What was the average pressure in vessel 4 between 2:00 and 2:15 AM?” or “Show me all Modbus write commands to PLC-12 in the last 24 hours alongside the corresponding sensor readings.” or “Alert me if the rate of change of this temperature exceeds the 99th percentile of its 30-day historical distribution.”&lt;/p&gt;

&lt;p&gt;In a relational database, these queries require careful SQL with window functions, CTEs, and often materialized views to perform well. The query language wasn’t designed for time series operations, and performance tuning is an ongoing burden.&lt;/p&gt;

&lt;p&gt;SIEMs offer search languages that handle event correlation well but are awkward for continuous numerical analysis. Calculating rolling averages, derivatives, or statistical distributions over sensor data in a SIEM is possible but cumbersome.&lt;/p&gt;

&lt;p&gt;Time series databases provide native query primitives for exactly these operations. InfluxDB includes built-in functions for windowed aggregation, moving averages, derivatives, percentiles, and histogram analysis. A query that would require 30 lines of carefully optimized SQL can often be expressed in a few lines with InfluxDB. This matters not just for convenience but for enabling security analysts and OT engineers to explore data and build detection logic without being database specialists.&lt;/p&gt;
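&lt;p&gt;The third question above, alerting when the rate of change exceeds the 99th percentile of its history, can be sketched in plain Python standing in for a time series query engine. The temperature stream below is synthetic:&lt;/p&gt;

```python
# Sketch: "alert if the rate of change exceeds the 99th percentile of its
# historical distribution." Synthetic sensor data, illustrative thresholds.
import random

random.seed(7)
history = [80 + random.gauss(0, 0.2) for _ in range(10_000)]  # steady sensor
history.append(history[-1] + 5.0)                             # sudden jump

rates = [abs(b - a) for a, b in zip(history, history[1:])]    # rate of change
sorted_rates = sorted(rates[:-1])                             # history only
p99 = sorted_rates[int(0.99 * len(sorted_rates))]             # 99th percentile
alert = rates[-1] > p99
print(f"latest rate={rates[-1]:.2f}, p99={p99:.2f}, alert={alert}")
```

&lt;p&gt;A time series database expresses the same logic in a few lines of query instead of application code, evaluated continuously as data arrives.&lt;/p&gt;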

&lt;h4 id="data-retention"&gt;Data Retention&lt;/h4&gt;

&lt;p&gt;OT security data has a natural tiered value structure. The last 24 hours of raw sensor data are extremely valuable for investigating an active incident. The last 30 days at full resolution are important for anomaly detection baselines. Data from six months ago is useful for trend analysis, but doesn’t need high granularity. Data from a year ago might only need hourly averages for compliance purposes.&lt;/p&gt;

&lt;p&gt;Relational databases require you to manage this lifecycle manually by writing ETL jobs to downsample old data, archive tables, and manage storage. SIEMs typically offer hot/warm/cold storage tiers, but with limited control over how data is aggregated as it ages.&lt;/p&gt;

&lt;p&gt;InfluxDB has retention policies and downsampling built into the database itself. You can define policies that automatically downsample data from one-second to one-minute resolution after 30 days, then to five-minute resolution after 90 days, and delete raw data after a year. This happens transparently, without external tooling, and keeps storage costs predictable while preserving long-term visibility.&lt;/p&gt;

&lt;h2 id="moving-forward"&gt;Moving forward&lt;/h2&gt;

&lt;p&gt;SCADA security is not a problem that can be solved with a single product, a one-time assessment, or a policy document. It requires sustained commitment to understanding your environment, monitoring it continuously, and building the organizational capacity to detect and respond to threats.&lt;/p&gt;

&lt;p&gt;The good news is that the same characteristic that makes SCADA systems vulnerable (their reliance on predictable, deterministic processes) is also what makes them uniquely defensible through data-driven monitoring. Industrial processes generate time series data that reveals anomalies clearly when you have the right tools to capture and analyze it.&lt;/p&gt;

&lt;p&gt;A time series database like &lt;a href="https://www.influxdata.com/products/influxdb-overview/?utm_source=website&amp;amp;utm_medium=scada_security_guide&amp;amp;utm_content=blog"&gt;InfluxDB&lt;/a&gt;, paired with a well-designed collection pipeline and visualization layer, enables security teams to see their OT environment with a level of clarity that was previously impractical. Not as a replacement for network segmentation, access control, and the other foundational security measures, but as the monitoring layer that ties everything together and ensures that when something goes wrong, you know about it in seconds rather than weeks.&lt;/p&gt;
</description>
      <pubDate>Tue, 03 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/scada-security-guide/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/scada-security-guide/</guid>
      <category>Developer</category>
      <author>Charles Mahler (InfluxData)</author>
    </item>
    <item>
      <title>The "Now" Problem: Why BESS Operations Demand Last Value Caching</title>
      <description>&lt;p&gt;Battery Energy Storage Systems (BESS) represent one of the most unforgiving environments for real-time data. Unlike a passive asset, a battery is a complex electrochemical system where safety and revenue are determined by split-second decisions. In this context, “average” latency can become a serious problem. Performance depends entirely on one key question:&lt;/p&gt;

&lt;h2 id="what-is-happening-right-now"&gt;“What is happening right now?”&lt;/h2&gt;

&lt;p&gt;For grid operators, Energy Management Systems (EMS), and trading desks, this is the most critical question. To answer it, operations teams rely on dashboards that address:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Safety &amp;amp; Health&lt;/strong&gt;: What is the current State of Health (SoH) of my BESS operations? Is the site healthy, or are there active thermal alarms?&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Bottlenecks&lt;/strong&gt;: What is limiting performance right now? (Is it a Power Conversion System [PCS] derate, a specific rack, or a container-level issue?)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Revenue&lt;/strong&gt;: What is the precise State of Charge (SoC) available for immediate dispatch?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="the-challenge-the-latest-value-bottleneck"&gt;The challenge: the “latest value” bottleneck&lt;/h2&gt;

&lt;p&gt;“Current state” dashboards create a punishing workload for standard time series databases. A single utility-scale site might generate 50,000+ distinct signals (high cardinality) from cells, inverters, and meters. To display a “Live View,” the database must repeatedly scan recent data on disk to find the most recent timestamp for every single one of those signals.&lt;/p&gt;

&lt;p&gt;At the site level, this is difficult. &lt;strong&gt;At fleet scale with more assets, more concurrent users, and millions of streams, this “scan-for-latest” pattern becomes a crippling bottleneck.&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id="the-solution-last-value-cache"&gt;The solution: Last Value Cache&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 solves this architectural conflict with its built-in &lt;strong&gt;Last Value Cache (LVC)&lt;/strong&gt;. Instead of scanning historical data to compute the current state, the LVC automatically caches the most recent value (or the last N values) in memory for your critical signals. This keeps “current state” queries consistently fast (&amp;lt; 10ms) regardless of write throughput or fleet size, bridging the gap between historical analysis and real-time situational awareness.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3P8QsCW6bSfmliLYxMmNVP/5b074db94e9b2f58b57a9f18c65922cb/Image-2026-02-23_16_33_24.png" alt="BESS LVC solution" /&gt;&lt;/p&gt;

&lt;h2 id="how-to-use-influxdbs-last-value-cache-lvc-in-memory-for-bess-operations"&gt;How to use InfluxDB’s Last Value Cache (LVC) in memory for BESS operations&lt;/h2&gt;

&lt;h4 id="define-your-hot-signals"&gt;1. Define Your “Hot” Signals&lt;/h4&gt;

&lt;p&gt;Don’t cache everything. Pick the specific high-leverage fields that power your “Current State” dashboards and safety alerts, for example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Safety&lt;/strong&gt;: Cell Temperature (&lt;code class="language-markup"&gt;temp_c&lt;/code&gt;), Voltage (&lt;code class="language-markup"&gt;volts&lt;/code&gt;), Alarm Severity (&lt;code class="language-markup"&gt;alarm_level&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Performance&lt;/strong&gt;: State of Charge (&lt;code class="language-markup"&gt;soc&lt;/code&gt;), State of Health (&lt;code class="language-markup"&gt;soh&lt;/code&gt;), Inverter Mode (&lt;code class="language-markup"&gt;inv_state&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Ops&lt;/strong&gt;: Comms Heartbeat (&lt;code class="language-markup"&gt;last_seen&lt;/code&gt;), Charge/Discharge Limits (&lt;code class="language-markup"&gt;p_limit_kw&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="design-your-keys"&gt;2. Design Your Keys&lt;/h4&gt;

&lt;p&gt;Choose the columns that define how operators slice the system. These will become your cache keys.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Best Practice&lt;/strong&gt;: Match your dashboard filters. If your dashboard filters by &lt;code class="language-markup"&gt;site_id → container_id → rack_id&lt;/code&gt;, those are your keys.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cardinality Note&lt;/strong&gt;: Keep keys efficient. While InfluxDB 3 handles high cardinality exceptionally well, unnecessary keys (like a unique &lt;code class="language-markup"&gt;transaction_id&lt;/code&gt; per second) waste memory. Stick to asset identifiers.&lt;/p&gt;

&lt;h4 id="shape-the-cache-behavior"&gt;3. Shape the Cache Behavior&lt;/h4&gt;

&lt;p&gt;Configure the cache to match your visualization needs:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;count&lt;/code&gt;:
    &lt;ul&gt;
      &lt;li&gt;Set to &lt;strong&gt;1&lt;/strong&gt; for Gauges, Status Lights, and “Single Value” tiles.&lt;/li&gt;
      &lt;li&gt;Set to &lt;strong&gt;3–10&lt;/strong&gt; for “Sparklines” (mini-charts) where operators need to see the immediate trend (e.g., “Is voltage diving or stable?”).&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;ttl&lt;/code&gt; (&lt;strong&gt;time-to-live&lt;/strong&gt;): Define when data becomes “stale.” If a sensor stops reporting, how long should the dashboard show the last value before switching to “Offline/Unknown”? (e.g., &lt;code class="language-markup"&gt;30s&lt;/code&gt; for safety, &lt;code class="language-markup"&gt;1h&lt;/code&gt; for capacity).&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="create-the-cache"&gt;4. Create the Cache&lt;/h4&gt;

&lt;p&gt;Create the Last Value Cache using the UI explorer, the HTTP API, or the CLI.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create last_cache \
  --database bess_db \
  --table bess_telemetry \
  --token AUTH_TOKEN \
  --key-columns site_id,rack_id \
  --value-columns soc,temp_max,alarm_state \
  --count 5 \
  --ttl 30s \
  bess_ops_lvc&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Key arguments:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Database name: bess_db&lt;/li&gt;
  &lt;li&gt;Table name: bess_telemetry&lt;/li&gt;
  &lt;li&gt;Cache name: bess_ops_lvc&lt;/li&gt;
  &lt;li&gt;Key columns: site_id, rack_id (columns used as the cache key)&lt;/li&gt;
  &lt;li&gt;Value columns: soc, temp_max, alarm_state (field values to cache)&lt;/li&gt;
  &lt;li&gt;Count: 5 (the number of values to cache per unique key column combination, range 1-10)&lt;/li&gt;
  &lt;li&gt;TTL: 30s (time duration until data becomes stale)&lt;/li&gt;
  &lt;li&gt;Token: InfluxDB 3 authentication token&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="the-warm-cache-advantage"&gt;5. The “Warm Cache” Advantage&lt;/h4&gt;

&lt;p&gt;Unlike a standard cache that starts empty, LVC in InfluxDB 3 is “warm” by default.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;On creation&lt;/strong&gt;: It instantly backfills from existing data on disk.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;On restart&lt;/strong&gt;: It automatically reloads the state.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;: Ops teams never see “blank” dashboards after a maintenance window. The system is ready the moment it comes back online.&lt;/p&gt;

&lt;h4 id="querying-the-cache"&gt;6. Querying the Cache&lt;/h4&gt;

&lt;p&gt;Use standard SQL with the &lt;code class="language-markup"&gt;last_cache()&lt;/code&gt; function, which replaces a complex analytical query with a simple lookup.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create last_cache \
  --database bess_db \
  --table bess_telemetry \
  --token AUTH_TOKEN \
  --key-columns site_id,rack_id \
  --value-columns soc,temp_max,alarm_state \
  --count 5 \
  --ttl 30s \
  bess_ops_lvc&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="architecture-built-for-scale-using-influxdb-3-enterprise"&gt;7. Architecture: Built for Scale Using InfluxDB 3 Enterprise&lt;/h4&gt;

&lt;p&gt;The Last Value Cache can help separate heavy “write” workloads from “read” workloads:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Dedicated Ingest Nodes&lt;/strong&gt;: Handle the massive flood of 1Hz sensor data.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Dedicated Query Nodes&lt;/strong&gt;: Host the LVC in memory to serve dashboards instantly.&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create last_cache \
  --database bess_db \
  --table bess_telemetry \
  --token AUTH_TOKEN \
  --node-spec "nodes:query-01,query-02" \
  --key-columns site_id,rack_id \
  --value-columns soc,temp_max,alarm_state \
  --count 5 \
  --ttl 30s \
  bess_ops_lvc&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;The benefit&lt;/strong&gt;: Heavy write loads (e.g., a fleet-wide firmware update logging millions of events) will never slow down the control room’s live view.&lt;/p&gt;

&lt;h4 id="the-value-of-lvc"&gt;The value of LVC&lt;/h4&gt;

&lt;p&gt;In BESS operations, latency isn’t just a delay; it’s a risk. InfluxDB 3’s Last Value Cache eliminates that risk by serving the “current state” of your entire fleet instantly from memory, removing the need for complex external caching.&lt;/p&gt;

&lt;p&gt;When you’re ready to start building, &lt;a href="https://www.influxdata.com/products/influxdb3-enterprise/?utm_source=website&amp;amp;utm_medium=bess_last_value_caching&amp;amp;utm_content=blog"&gt;download InfluxDB 3 Enterprise&lt;/a&gt;, or &lt;a href="https://www.influxdata.com/contact-sales-enterprise/?utm_source=website&amp;amp;utm_medium=bess_last_value_caching&amp;amp;utm_content=blog"&gt;contact us&lt;/a&gt; to talk about running a proof of concept.&lt;/p&gt;
</description>
      <pubDate>Thu, 26 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/bess-last-value-caching/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/bess-last-value-caching/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>What Is Predictive Analytics? A Complete Guide for 2026</title>
      <description>&lt;p&gt;In simple terms, predictive analytics is a form of analytics that tries to predict future events, trends, or behaviors based on historical and present data. You can achieve this goal in different ways, each involving trade-offs between accuracy and cost.&lt;/p&gt;

&lt;h2 id="why-is-predictive-analytics-important"&gt;Why is predictive analytics important?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/predictive-analytics-pipelines-real-world-ai-predictive-maintenance-time-series-data/"&gt;Predictive analytics&lt;/a&gt; enables organizations to be more efficient and accurate in how they plan for the future. The end result of a properly implemented predictive analytics system will depend on the industry, but at a high level, here are some common benefits:&lt;/p&gt;

&lt;h4 id="improved-strategic-decision-making"&gt;Improved Strategic Decision-Making&lt;/h4&gt;

&lt;p&gt;Predictive analytics provides insight into future trends, so business leaders can make better decisions faster rather than relying on reactivity.&lt;/p&gt;

&lt;h4 id="increased-operational-efficiency"&gt;Increased Operational Efficiency&lt;/h4&gt;

&lt;p&gt;Using predictive analytics can help businesses improve their profit margins and efficiency by predicting equipment failures and reducing downtime.&lt;/p&gt;

&lt;h4 id="improved-risk-management"&gt;Improved Risk Management&lt;/h4&gt;

&lt;p&gt;By looking at historical data where things went wrong, a business can reduce its risk by finding data that correlates with negative outcomes and avoiding them proactively. An example would be a bad investment in the finance industry.&lt;/p&gt;

&lt;h4 id="happier-customers"&gt;Happier customers&lt;/h4&gt;

&lt;p&gt;Predicting potential churn and proactively reaching out to customers, or keeping items in stock through more accurate inventory forecasts, both help enhance the customer experience.&lt;/p&gt;

&lt;h2 id="how-does-predictive-analytics-work"&gt;How does predictive analytics work?&lt;/h2&gt;

&lt;p&gt;The end goal of predictive analytics is to make accurate predictions based on historical data. Here is a general outline of the process for building a predictive analytics system:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Determine the goal for the project&lt;/strong&gt;.
The first step is to identify the problem or opportunity you are trying to address via predictive analytics. Define your goals and success metrics upfront.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Organize and collect data&lt;/strong&gt;.
The next step will be gathering the data to build your predictive analytics model, as well as the pipeline that will send fresh data to your model for generating predictions. 
This will typically be a combination of public data similar to your own, third-party data relevant to your use case, and your own unique business data for fine-tuning your model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Process data&lt;/strong&gt;.
Once you have your data, one of the biggest challenges is often processing and cleaning it so it’s ready for your model. This can involve removing invalid data, filling in missing data, or transforming data into a standard format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Develop a predictive analytics model&lt;/strong&gt;.
Now that your data has been collected and cleaned, you are ready to actually develop your predictive model. The model you use will depend on your business requirements, including accuracy requirements and the type of modeling you will be doing.&lt;/p&gt;

&lt;p&gt;A predictive model can be used for trend detection, classification, clustering, and more. You can create these models using statistical methods or modern machine learning techniques.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Validate results&lt;/strong&gt;.
Creating your model is just the first step; you will need to validate the results to confirm it works as expected. 
This generally involves testing against a separate dataset for accuracy, as well as running the model against live production data and evaluating the results based on the output. 
If the results aren’t as good as desired, you may need to return to the previous steps and modify factors like how data is processed and the type of model used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Deploy to production&lt;/strong&gt;.
If your predictive analytics model produces accurate, valuable results, you can now deploy it to production, where people will actually use the results. The system may need a human to confirm the action, or it may be fully automated, taking action solely based on the model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Update and improve the model over time&lt;/strong&gt;.
Predictive analytics isn’t a one-time deal. You will want to constantly feed your model recent data so it stays up to date and can be aware of potential changes that need to be integrated. 
Typical tasks would involve retraining the model, adjusting parameters, or providing it with additional data to improve accuracy. The entire system can also be fine-tuned over time to be more efficient and affordable.&lt;/p&gt;
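&lt;p&gt;Steps 4 and 5 above can be sketched with a toy example: fit a simple model on historical data, then check it against a held-out set. The one-variable linear trend and the sales figures below are purely illustrative; a real project would choose a model to match the use case:&lt;/p&gt;

```python
# Minimal sketch of "develop a model" and "validate results."
# Synthetic monthly sales data; a linear trend stands in for the model.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Monthly sales (synthetic): steady growth with small fluctuations
sales = [100, 104, 109, 111, 118, 121, 124, 130, 133, 137, 141, 146]
train, test = sales[:9], sales[9:]          # hold out the last 3 months

a, b = fit_linear(range(len(train)), train)  # step 4: develop the model
preds = [a * x + b for x in range(len(train), len(sales))]
mae = sum(abs(p - t) for p, t in zip(preds, test)) / len(test)  # step 5
print(f"slope={a:.2f}/month, holdout MAE={mae:.2f}")
```

&lt;p&gt;If the holdout error is unacceptable, you loop back: reprocess the data, pick a different model, or gather more history, exactly as step 5 describes.&lt;/p&gt;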

&lt;h2 id="predictive-analytics-use-cases"&gt;Predictive analytics use cases&lt;/h2&gt;

&lt;p&gt;Predictive analytics is useful across almost every industry, but let’s look at a few specific examples where it is particularly valuable. 
An ideal use case is any situation where data is relatively easy to collect and more accurate predictions will generate a significant business impact, such as increased revenue or reduced costs.&lt;/p&gt;

&lt;h4 id="manufacturing"&gt;Manufacturing&lt;/h4&gt;

&lt;p&gt;In the &lt;a href="https://www.influxdata.com/resources/advanced-manufacturing-monitoring-using-ctrlxos-with-influxdb/"&gt;manufacturing&lt;/a&gt; sector, predictive analytics can be used to predict and prevent machinery malfunctions before they occur. This reduces maintenance costs and improves factory efficiency, resulting in higher profit margins.&lt;/p&gt;

&lt;h4 id="healthcare"&gt;Healthcare&lt;/h4&gt;

&lt;p&gt;Governments and businesses both use predictive analytics to improve healthcare. Governments create predictive models to predict and prevent the spread of diseases and to determine investments in healthcare programs. 
Hospitals can use predictive models on patient medical records to create personalized treatment plans.&lt;/p&gt;

&lt;h4 id="marketing"&gt;Marketing&lt;/h4&gt;

&lt;p&gt;Predictive analytics can be used in marketing to predict trends in consumer demand, improve customer engagement to prevent churn, and boost sales by recommending products customers might like based on their past purchases and those of similar customers.&lt;/p&gt;

&lt;h4 id="supply-chain-management"&gt;Supply Chain Management&lt;/h4&gt;

&lt;p&gt;Predictive analytics can help with supply chain management by forecasting changes in product supply and demand driven by factors such as time of year or location. It can also be used to optimize logistics and manage risk.&lt;/p&gt;

&lt;h4 id="finance"&gt;Finance&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://sigmatechnology.com/articles/predictive-analytics-for-finance-insights-and-case-studies/"&gt;finance&lt;/a&gt; industry uses predictive analytics in a number of ways, ranging from predicting stock prices to detecting fraudulent transactions. Banks can use predictive analytics to assess loan applicants’ risk by comparing historical data with the applicant’s personal history.&lt;/p&gt;

&lt;h2 id="predictive-analytics-challenges"&gt;Predictive analytics challenges&lt;/h2&gt;

&lt;p&gt;While predictive analytics can offer many business benefits, implementing it can be challenging, especially if a company lacks in-house expertise or infrastructure. Here are some of the key roadblocks to consider when getting started.&lt;/p&gt;

&lt;h4 id="data-quality"&gt;Data Quality&lt;/h4&gt;
&lt;p&gt;To make accurate predictions, you will need a large volume of high-quality data relevant to your predictive analytics use case. This means you need to have a way to collect data and store it in a long-term format that is easy to access for teams creating predictive analytics models.&lt;/p&gt;

&lt;h4 id="integration-with-legacy-systems"&gt;Integration with Legacy Systems&lt;/h4&gt;
&lt;p&gt;Many established businesses will have systems that may not be seamlessly integrated. This means engineering effort will be required to ensure that data is not siloed and that the predictive analytics team can access the systems and data they require.&lt;/p&gt;

&lt;h4 id="accuracy-of-results"&gt;Accuracy of Results&lt;/h4&gt;
&lt;p&gt;The biggest challenge with predictive analytics is creating a model that produces results accurate enough to justify the investment in building it and that drives business value.&lt;/p&gt;

&lt;p&gt;This will require not only the initial creation of the model but also constant updates with new data to keep it accurate as conditions change.&lt;/p&gt;

&lt;h4 id="hiring-talent"&gt;Hiring Talent&lt;/h4&gt;
&lt;p&gt;Solving all of the above problems requires highly skilled employees. These skills are in demand across many industries, making it difficult to attract and retain the workers needed to implement a predictive analytics system.&lt;/p&gt;

&lt;h4 id="security"&gt;Security&lt;/h4&gt;
&lt;p&gt;Another challenge with predictive analytics is ensuring that all the new data collected and stored is secure. This data can contain sensitive information about customers or about your business, so security must be a top priority.&lt;/p&gt;

&lt;h2 id="predictive-analytics-techniques"&gt;Predictive analytics techniques&lt;/h2&gt;

&lt;p&gt;There are a number of models available for generating insights via predictive analytics. The right model for your organization depends on the data you are working with, as well as factors such as the cost to develop the model and your accuracy requirements.
Let’s take a look at some of the most common predictive analytics techniques and models.&lt;/p&gt;

&lt;h4 id="machine-learningai-models"&gt;Machine Learning/AI Models&lt;/h4&gt;

&lt;p&gt;In the past, classical statistical models have dominated predictive analytics and forecasting because of their ease of interpretation, lower computational costs, and accuracy. 
However, in recent years, ML/AI-based models have begun to surpass traditional forecasting methods in accuracy. They also offer the benefit of being easier to generalize across different predictions and of requiring less fine-tuning by highly trained statisticians.&lt;/p&gt;

&lt;h4 id="time-series-models"&gt;Time Series Models&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/time-series-influxdb-vector-database/"&gt;Time series models&lt;/a&gt; are used to analyze temporal data and forecast future values. They are particularly useful when data shows sequential patterns or seasonality, such as stock prices, weather patterns, or sales data.&lt;/p&gt;

&lt;p&gt;Time series models are ideal for data that has seasonal variations and time-based dependencies, making them useful for forecasting.&lt;/p&gt;

&lt;p&gt;Some downsides of time series models are that they can struggle when the data isn’t at regular intervals and may assume past trends will continue, which can make them inaccurate at predicting drastic changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/python-ARIMA-tutorial-influxDB/"&gt;ARIMA&lt;/a&gt; and exponential smoothing are examples of time series models. An easy way to start testing these models for predictive analytics is to use a library like Python Statsmodels.&lt;/p&gt;
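&lt;p&gt;As an illustration of what such a library fits under the hood, here is a minimal AR(1) estimate done by hand in plain Python (toy, noise-free data made up for the example; in practice you’d use statsmodels rather than rolling your own):&lt;/p&gt;

```python
# AR(1) sketch: regress each value on its previous value.
# Model: x[t] = phi * x[t-1] + noise, for a zero-mean series.

def fit_ar1(series):
    """Least-squares estimate of phi."""
    num = sum(series[t - 1] * series[t] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def forecast(last_value, phi, steps):
    """Iterate the fitted relationship forward to predict future values."""
    preds = []
    for _ in range(steps):
        last_value = phi * last_value
        preds.append(last_value)
    return preds

# Toy series generated by x[t] = 0.8 * x[t-1] with no noise,
# so the least-squares fit recovers phi = 0.8.
series = [1.0]
for _ in range(19):
    series.append(0.8 * series[-1])

phi = fit_ar1(series)
next_two = forecast(series[-1], phi, 2)
```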

&lt;h4 id="regression-models"&gt;Regression Models&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/guide-regression-analysis-time-series-data/"&gt;Regression models&lt;/a&gt; predict a continuous outcome variable based on one or more predictor variables. They are widely used in predictive analytics, from predicting house prices to estimating stock returns.&lt;/p&gt;

&lt;p&gt;Regression models are useful for providing results that are easy to interpret and for identifying clear relationships between variables. Some downsides of regression models are that they do require a decent level of statistics knowledge and can struggle with non-linear relationships and datasets with many variables.&lt;/p&gt;

&lt;p&gt;Linear and logistic regression are examples of regression models. You can get started with regression models using the Python scikit-learn library.&lt;/p&gt;
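&lt;p&gt;Under the hood, simple linear regression reduces to a closed-form least-squares fit. Here is a minimal sketch with made-up numbers (scikit-learn’s LinearRegression handles the general, multi-variable case):&lt;/p&gt;

```python
# Simple linear regression: find the slope and intercept that
# minimize squared error for y = slope * x + intercept.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # classic least-squares formulas for one predictor
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data lying exactly on y = 2x + 1, so the fit recovers it.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
# slope -> 2.0, intercept -> 1.0
```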

&lt;h4 id="decision-tree-models"&gt;Decision Tree Models&lt;/h4&gt;

&lt;p&gt;Decision tree models make predictions by learning simple decision rules from the data. They can be used for both regression and classification problems. 
Decision tree models offer results that are easier to understand than those from more complex machine learning models. A challenge is that they can easily be over- or underfit and can be sensitive to small changes in the data.&lt;/p&gt;

&lt;h4 id="gradient-boosting-model"&gt;Gradient Boosting Model&lt;/h4&gt;

&lt;p&gt;Gradient boosting involves creating an ensemble of prediction models, typically from decision tree models. This method can be extremely accurate and has been used in recent years to win many machine learning competitions.&lt;/p&gt;

&lt;p&gt;Gradient boosting is good at providing accurate predictions for data with non-linear relationships between variables and datasets with high dimensionality.&lt;/p&gt;

&lt;p&gt;One weakness is that gradient boosting models can overfit when they aren’t tuned properly, and they are more of a black box compared to traditional statistical models. XGBoost and LightGBM are libraries that can be used to create gradient boosting models.&lt;/p&gt;
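&lt;p&gt;The core boosting loop is simple to sketch: fit a weak learner to the current residuals, add a damped version of it to the ensemble, and repeat. The toy below uses decision stumps on one-dimensional made-up data, purely for illustration; XGBoost and LightGBM add regularization, second-order gradients, and much more:&lt;/p&gt;

```python
# Gradient boosting sketch for squared loss with decision stumps.

def fit_stump(xs, residuals):
    """Pick the split threshold that best reduces squared error."""
    best = None
    for split in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def gradient_boost(xs, ys, rounds=20, lr=0.5):
    base = sum(ys) / len(ys)              # start from the mean
    preds = [base] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)  # each stump fits current errors
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

model = gradient_boost([1, 2, 3, 4, 5, 6], [1, 1, 1, 5, 5, 5])
# model(1) is close to 1 and model(6) is close to 5
```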

&lt;h4 id="random-forest-models"&gt;Random Forest Models&lt;/h4&gt;

&lt;p&gt;Random forests are similar to gradient boosting in that they are ensemble models that use decision trees for making predictions. The main difference is that gradient boosting models generally use far more decision trees, and they are also trained sequentially so that errors from previous trees can be corrected.&lt;/p&gt;

&lt;p&gt;In comparison, random forest decision trees make predictions independently, and then the final prediction is created by aggregating those predictions. This makes the results easier to interpret because each decision tree’s prediction can be analyzed. You can test out random forest models on your data using a library like scikit-learn.&lt;/p&gt;

&lt;h4 id="clustering-models"&gt;Clustering Models&lt;/h4&gt;

&lt;p&gt;Clustering models, such as k-means clustering, can be used to group data points. While this is generally used for data analysis, these clusters can also serve as input features for predictive models like the ones mentioned above.&lt;/p&gt;

&lt;p&gt;Cluster modeling can help identify hidden patterns or relationships in your data, but to work, it requires a way to measure how similar data points are, and the number of clusters must be chosen ahead of time.&lt;/p&gt;
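&lt;p&gt;The k-means loop itself is short: assign each point to its nearest centroid, recompute the centroids, repeat. A one-dimensional sketch with made-up readings (scikit-learn’s KMeans is the practical choice, and note that k is fixed up front, as mentioned above):&lt;/p&gt;

```python
# Minimal 1-D k-means: alternate assignment and centroid updates.

def kmeans_1d(points, k, iterations=10):
    ordered = sorted(points)
    # spread the initial centroids across the data range (assumes k >= 2)
    centroids = [ordered[i * (len(ordered) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assignment step: nearest centroid by absolute distance
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # update step: move each centroid to its cluster's mean
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

readings = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
centers = kmeans_1d(readings, 2)   # two obvious groups: near 1 and near 10
```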

&lt;h2 id="future-trends-in-predictive-analytics"&gt;Future trends in predictive analytics&lt;/h2&gt;

&lt;p&gt;The predictive analytics landscape is changing rapidly as technology advances and impacts all industries. Here are a few trends to look out for in the future:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Increased demand for real-time data&lt;/strong&gt;. To get the most accurate results, models need to be updated as frequently as possible so they aren’t out of sync with reality. This means that real-time data and systems that support it will become increasingly important.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Prescriptive analytics&lt;/strong&gt;. The term prescriptive analytics refers to the next step beyond predictive analytics. This involves taking action based on a predicted outcome before it occurs to try to influence the outcome.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Synthetic data&lt;/strong&gt;. Data is the key to making accurate predictions. The problem is that many businesses haven’t collected the data they need. 
A number of tools have been created to generate “synthetic” data, which can help get a predictive analytics system off the ground using artificial data that mimics the use case.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Further adoption of machine learning and AI&lt;/strong&gt;. While most businesses still rely on traditional methods for prediction, cutting-edge practitioners are using ML/AI to win competitions because of its accuracy.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;&lt;strong&gt;Easier-to-use predictive analytics tools&lt;/strong&gt;. Currently, implementing and using predictive analytics requires specialized skills, yet domain knowledge is just as important for making accurate predictions.&lt;/p&gt;

&lt;p&gt;Future tools will focus on usability and enabling non-technical users to make predictions based on their data. This will make implementation more affordable and drive more business value.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="best-practices"&gt;Best practices&lt;/h2&gt;

&lt;p&gt;Here are some helpful tips for using predictive analytics.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
&lt;p&gt;&lt;strong&gt;Have a well-defined objective&lt;/strong&gt;. Predictive analytics only generates value when it influences a decision, so start with the why before the model. Without a goal, you’ll optimize things that make no difference. To implement this, clearly state what you want to predict, where you will apply the prediction, and what action you will take.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;&lt;strong&gt;Focus more on feature engineering than model complexity&lt;/strong&gt;. Features convert raw data into signals the model can learn from, and this step often determines success more than the choice of algorithm. To do this effectively, design domain-aware features such as rolling averages, lagged values, and behavioral features like frequency and recency.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;&lt;strong&gt;Measure models based on business impact&lt;/strong&gt;. Conventional measures such as accuracy can be misleading, particularly on skewed problems, and a technically “correct” model can still be expensive or risky to deploy. Use metrics that reflect the actual trade-offs, such as precision and recall for fraud detection, or mean absolute error for demand forecasting.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start simple; add complexity only when performance demands it&lt;/strong&gt;. Complex models may be appealing, but they are harder to maintain, debug, and explain, which matters in production settings where stability and interpretability are paramount. Start with baselines and simple models, and add complexity only when it measurably improves performance.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;&lt;strong&gt;Provide quality, time-accurate data&lt;/strong&gt;. Predictive models learn patterns from past records, and poor-quality or poorly ordered records can lead to misleading results. Problems such as missing values, data leakage, or irregular timestamps can inflate model performance during testing without carrying over to production.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
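&lt;p&gt;The rolling-average and lagged-value features from the feature-engineering tip above can be built in a few lines. A plain-Python sketch with made-up sales numbers (pandas’ shift() and rolling() are the usual tools for this):&lt;/p&gt;

```python
# Turn a raw series into (lagged value, rolling mean) feature rows.

def make_features(series, lag=1, window=3):
    rows = []
    for t in range(window, len(series)):
        lagged = series[t - lag]                           # value `lag` steps back
        rolling_mean = sum(series[t - window:t]) / window  # mean of last `window`
        rows.append((lagged, rolling_mean))
    return rows

sales = [10, 12, 11, 13, 15, 14]
features = make_features(sales)
# first row (for t=3): lagged = sales[2] = 11,
# rolling mean of [10, 12, 11] = 11.0
```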

&lt;h2 id="common-pitfalls-to-avoid-in-predictive-analytics-projects"&gt;Common pitfalls to avoid in predictive analytics projects&lt;/h2&gt;

&lt;h4 id="overfitting-the-model"&gt;Overfitting the Model&lt;/h4&gt;

&lt;p&gt;Overfitting occurs when a model fits noise rather than general patterns, usually because of too much complexity or too little data. This matters because such models perform well on training data but poorly on new data.&lt;/p&gt;

&lt;p&gt;For example, a deep neural network trained on a small sample of customers might work flawlessly at explaining the past but fail to predict what customers will purchase in the future, whereas a simpler model would generalize better.&lt;/p&gt;

&lt;h4 id="data-leakage"&gt;Data Leakage&lt;/h4&gt;

&lt;p&gt;Data leakage occurs when information from the future accidentally influences the model during training. This happens when the model includes features containing data that cannot be known at prediction time, producing unrealistically high test performance but failing in practice.&lt;/p&gt;

&lt;p&gt;One example is using the account closed date or an order completion status as an input to a churn or demand prediction model, which makes the model seem very accurate but leaves it unusable in practice.&lt;/p&gt;
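&lt;p&gt;A small, contrived churn example (made-up customers and field names) makes the trap visible: the leaky feature looks perfect in backtesting precisely because it encodes the outcome:&lt;/p&gt;

```python
# Leakage demo: "account_closed" is only known *after* churn happens,
# so it cannot actually be used at prediction time.

customers = [
    {"logins_last_30d": 2,  "account_closed": True,  "churned": True},
    {"logins_last_30d": 20, "account_closed": False, "churned": False},
    {"logins_last_30d": 1,  "account_closed": True,  "churned": True},
    {"logins_last_30d": 25, "account_closed": False, "churned": False},
    {"logins_last_30d": 3,  "account_closed": False, "churned": False},
]

# Leaky "model": predicts churn from the closed-account flag.
leaky_accuracy = sum(
    c["account_closed"] == c["churned"] for c in customers) / len(customers)
# -> 1.0 in backtesting, but useless in production

# Legitimate feature: low recent logins, known *before* churn occurs.
honest_accuracy = sum(
    (c["logins_last_30d"] < 10) == c["churned"] for c in customers) / len(customers)
# -> 0.8: less impressive, but actually usable
```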

&lt;h4 id="using-the-wrong-evaluation-metrics"&gt;Using the Wrong Evaluation Metrics&lt;/h4&gt;

&lt;p&gt;Accuracy alone can be a poor way to measure model performance, especially for use cases where positives are rare and costly when missed. In fraud detection, for example, a model that simply classifies all transactions as non-fraud would be very accurate (since over 99% of transactions are legitimate), yet it would still miss every case of fraud. For use cases like this, teams need metrics that track actual business impact when evaluating their models.&lt;/p&gt;
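&lt;p&gt;Putting numbers on the fraud example shows how hollow raw accuracy can be (made-up data with a 1% fraud rate):&lt;/p&gt;

```python
# A "model" that labels everything legitimate is 99% accurate
# yet catches zero fraud.

actual = [0] * 990 + [1] * 10           # 1 = fraud, 1% of transactions
predicted = [0] * len(actual)           # always predict "not fraud"

accuracy = sum(p == a for p, a in zip(predicted, actual)) / len(actual)
# -> 0.99

true_positives = sum(p == 1 and a == 1 for p, a in zip(predicted, actual))
recall = true_positives / sum(actual)   # fraction of fraud actually caught
# -> 0.0
```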

&lt;h4 id="ignoring-changes-in-data-patterns"&gt;Ignoring Changes in Data Patterns&lt;/h4&gt;

&lt;p&gt;Predictive models assume that future data will behave like past data; in reality, systems keep evolving. This is particularly problematic in areas such as retail or finance, where seasonality, promotions, and shifts in user behavior frequently alter the underlying patterns.&lt;/p&gt;

&lt;h2 id="faqs"&gt;FAQs&lt;/h2&gt;

&lt;div id="accordion_second"&gt;
    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-1"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;Predictive Analytics vs Predictive Maintenance&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-1" class="message-body is-collapsible is-active" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Predictive analytics is a broad field that uses statistical algorithms, machine learning, and data to anticipate future events across many domains. It identifies patterns in historical and current data to predict future trends, behaviors, and activities. Predictive analytics is used across industries such as finance, healthcare, and marketing to inform decision-making and develop proactive strategies. Predictive maintenance, on the other hand, is a specific application of predictive analytics in maintenance and asset management. It uses predictive analytics techniques to anticipate when equipment might fail or require maintenance. By analyzing data from sensors, logs, and historical maintenance records, predictive maintenance models can forecast equipment failures before they happen. The goal is to perform maintenance in time to prevent failures, improving efficiency and reducing downtime. In short, predictive maintenance is a subset of the broader predictive analytics ecosystem.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-2"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;Traditional Statistical Models vs Machine Learning and AI Models for Predictive Analytics&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-2" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                More traditional techniques, such as regression models and decision trees, have been used for decades in predictive analytics. This is due to their simplicity, lower computational requirements, and ability to show the relationship between specific variables and the impact of changing them on business outcomes. In recent years, AI/ML techniques like neural networks and gradient boosting have grown in popularity for predictive analytics use cases. The primary reason is that ML techniques can perform better with higher-dimensional data, where relationships among numerous variables are harder to define. These AI/ML models can learn from data without explicit tuning and can uncover relationships between variables that aren't obvious, resulting in higher accuracy. Some downsides of AI/ML for predictive analytics are that they tend to require more hardware for computation and are harder to interpret in terms of how they produce results, in some ways acting as black boxes.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;
&lt;/div&gt;
</description>
      <pubDate>Tue, 24 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/predictive-analytics-guide-2026/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/predictive-analytics-guide-2026/</guid>
      <category>Getting Started</category>
      <category>Developer</category>
      <author>Company (InfluxData)</author>
    </item>
    <item>
      <title>Node-RED Dashboard Tutorial</title>
      <description>&lt;p&gt;If you’re already familiar with &lt;a href="https://www.influxdata.com/blog/iot-easy-node-red-influxdb/"&gt;Node-RED&lt;/a&gt;, you know how useful it is for automation. This post is a guide to getting started with the Node-RED dashboard 2.0. We’ll cover how to install the Node-RED dashboard 2.0 nodes and provide examples of how to create a graphical user interface (GUI) for your automation.&lt;/p&gt;

&lt;h2 id="what-is-node-red"&gt;What is Node-RED?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://nodered.org/"&gt;Node-RED&lt;/a&gt; is a programming tool built on Node.js that lets you create automated workflows with minimal code. You wire different nodes together, each performing a specific function, to create a flow that carries out an automation task (e.g., switching off the light in a room or closing a door).&lt;/p&gt;

&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;We assume you’ve already installed and set up Node-RED. If that isn’t the case, here are some guides that can help you get set up in a few different ways:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://nodered.org/docs/getting-started/raspberrypi"&gt;Run Node-RED on Raspberry PI&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://randomnerdtutorials.com/getting-started-node-red-raspberry-pi/"&gt;Getting Started with Node-RED on Raspberry Pi&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://nodered.org/docs/getting-started/local"&gt;Running Node-RED Locally&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="node-red-dashboard-installation"&gt;Node-RED dashboard installation&lt;/h2&gt;

&lt;p&gt;The first step in the process is installing the dashboard module in Node-RED.&lt;/p&gt;

&lt;p&gt;In the browser’s Node-RED window, click on the &lt;strong&gt;Menu&lt;/strong&gt; icon at the top right corner of the page, find &lt;strong&gt;Manage Palette&lt;/strong&gt; on the menu items, click &lt;strong&gt;Install&lt;/strong&gt;, and search for &lt;code class="language-markup"&gt;@flowfuse/node-red-dashboard&lt;/code&gt;. Install it and make sure the module reads &lt;strong&gt;@flowfuse/node-red-dashboard&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1Hk7SrlPAjmL8wC7zA40Pa/665ae128d3617f53fd3a1662f8f3e5f5/Node_red_1.png" alt="Node Red 1" /&gt;&lt;/p&gt;

&lt;p&gt;After successful installation, all the Dashboard 2.0 nodes will appear in the palette section. Each dashboard node provides a widget that you can display in a user interface (UI) (e.g., a gauge, chart, or button).&lt;/p&gt;

&lt;h2 id="creating-a-user-interface"&gt;Creating a user interface&lt;/h2&gt;

&lt;p&gt;In this section, we’ll walk through how to create a dashboard UI using the Dashboard 2.0 nodes on Node-RED. But first, let’s understand the dashboard hierarchy.&lt;/p&gt;

&lt;h4 id="dashboard-hierarchy"&gt;Dashboard Hierarchy&lt;/h4&gt;

&lt;p&gt;Dashboard 2.0 uses this hierarchical structure:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Base&lt;/strong&gt;: Defines the base URL for your dashboard. The default is /dashboard.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Page&lt;/strong&gt;: Individual pages that users can navigate to.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Theme&lt;/strong&gt;: Offers different options for styling the dashboard.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Group&lt;/strong&gt;: Collections of widgets within a page.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Widget&lt;/strong&gt;: Individual UI elements, like buttons, text, charts, and others.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We use pages and groups to arrange the UI in the Node-RED Dashboard 2.0. Pages are the main navigation sections that hold and display different groups, which divide a page into several sections. In each group, you organize your node widgets (buttons, text, charts, etc.).&lt;/p&gt;

&lt;h2 id="creating-your-first-widget"&gt;Creating your first widget&lt;/h2&gt;

&lt;p&gt;Now let’s create your first widget.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Grab a button node&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Look at your palette on the left side and find the button node (it’s in the Dashboard 2 section). Click and drag it to your workspace (the big grid area in the middle of your screen).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Open the configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Double-click on the button you just placed. A configuration window will pop up with a bunch of options.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2PvqWP8G2jK0ohaYL6fKh6/85ceb83ec7758be4f3d0105021dcc925/NR_2.png" alt="NR 2" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Create a group&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;See &lt;strong&gt;Group&lt;/strong&gt; near the top? Click the pencil icon next to the dropdown. This edits the default group Node-RED created when you added the widget. Give it a name like “My Controls.” If you want to create a completely new group instead, click the plus sign next to the pencil icon.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1u10hF9modcL7DKo4njkjb/83b114b00cd2d836625ffcf7b1393866/NR_3.png" alt="NR 3" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Create a page&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, in that same group window, you’ll see a field that says &lt;strong&gt;Page&lt;/strong&gt;. Click the pencil icon next to it and then give the page a name, maybe “Dashboard,” for your first one. Now click &lt;strong&gt;Add&lt;/strong&gt; (or &lt;strong&gt;Update&lt;/strong&gt; if you’re editing the default, as in my case).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6nGJkbHPoosBuPvGvpqb92/ae280d129c9d3304ca70439dbfcabfae/NR_4.png" alt="NR 4" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Finish up&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You’ll be back at the group window. Click &lt;strong&gt;Add&lt;/strong&gt; (or &lt;strong&gt;Update&lt;/strong&gt;) there as well, and you should be returned to the button configuration window. Click &lt;strong&gt;Done&lt;/strong&gt;, and everything should be set.&lt;/p&gt;

&lt;p&gt;Now we have successfully created our first page and group. The next step will be displaying something on the user interface.&lt;/p&gt;

&lt;h2 id="display-a-widget-on-the-ui"&gt;Display a widget on the UI&lt;/h2&gt;

&lt;p&gt;In this example, we want to create a button that displays text when clicked.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Add a text node from your palette and place it near your button.&lt;/li&gt;
  &lt;li&gt;Wire them together. You can do this by dragging from the button’s right port to the text node’s left port.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3FwpfXSQRz6GJwAgS8iEpU/ede789b4db8464019b0d5151cef6b231/NR_5.png" alt="NR 5" /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Double-click the text node and set the &lt;strong&gt;Group&lt;/strong&gt; to match your button’s group (in my case, “My Controls”). Also, give it a name and a label, then click &lt;strong&gt;Done&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6skwulFLfLIcIBEVqdbz5G/452c539b825ad4b88ea376b34d7daa96/NR_6.png" alt="NR 6" /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Next, double-click the button, scroll to &lt;strong&gt;Payload&lt;/strong&gt;, type “Welcome to Dashboard 2.0!”, enter a name and a label like “Click Me,” and click &lt;strong&gt;Done&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;Now click the red &lt;strong&gt;Deploy&lt;/strong&gt; button in the top right corner.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3Xz3OftjpYJ7yBIH0spfa0/384d8a9acea4f35f56216caa79ec5c8c/NR_7.png" alt="NR 7" /&gt;&lt;/p&gt;

&lt;p&gt;To view the dashboard UI, follow this URL: &lt;strong&gt;http://localhost:1880/dashboard&lt;/strong&gt; or &lt;strong&gt;http://Your_RPi_IP_address:1880/dashboard&lt;/strong&gt;, where Your_RPi_IP_address is the address of the Raspberry Pi machine you’re using, 1880 is the port where Node-RED is exposed, and /dashboard displays the dashboard user interface.&lt;/p&gt;

&lt;p&gt;If everything works correctly, you’ll see the same window shown below.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/38YMP6KccaYsNuKWmHQKrL/488b0a9d1366c1aa8393d4d492e7fec4/NR_8.png" alt="NR 8" /&gt;&lt;/p&gt;

&lt;p&gt;When you click the button, the “Welcome to Dashboard 2.0” text will appear as shown in the image below.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6ajhqQpArpTyhOstyjuR7U/484fbd9e7f66bef4a1b1a59ceb5c17db/NR_9.png" alt="NR 9" /&gt;&lt;/p&gt;

&lt;p&gt;You might see a light theme background. We’ll get to how to change themes later in the post.&lt;/p&gt;

&lt;h2 id="more-examples-of-widgets"&gt;More examples of widgets&lt;/h2&gt;

&lt;p&gt;Let’s create a simple dashboard example with two tabs and UI elements (widgets) in each.&lt;/p&gt;

&lt;h4 id="creating-additional-pages"&gt;Creating Additional Pages&lt;/h4&gt;

&lt;p&gt;Open the Dashboard 2.0 sidebar (right side of your editor) and click the &lt;strong&gt;+ Page&lt;/strong&gt; button at the top.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6ySE2mN8LlxueXAWBj63TN/1ba40be315933251dda9a2130b294369/NR_10.png" alt="NR 10" /&gt;&lt;/p&gt;

&lt;p&gt;This will create a new page and pop up a window with options to edit the page’s properties (such as name, path, and others). Remember, you can also create pages when configuring a widget (as we did earlier).&lt;/p&gt;

&lt;h2 id="building-an-interactive-data-visualization"&gt;Building an interactive data visualization&lt;/h2&gt;

&lt;p&gt;Now, we’ll create a slider that controls both a gauge and a chart in real time. This is the setup:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Drag a slider, gauge, and chart node into your workspace.
&lt;strong&gt;Step 2:&lt;/strong&gt; Double-click the slider, set min to 0, max to 100, and assign to a new group on your second page, which you just created (mine is “Monitoring Dashboard”).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/TnzXMlz6PJKg9uKWbTzQB/7d0b08a5a0205421c8ecfee3d4e3df1e/NR_11.png" alt="NR 11" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Next, double-click the gauge. Assign it to the same group, set its range to 0-100, and add color segments (green 0-30, yellow 30-70, red 70-100).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/70sDgKnPHNwFm4rXnwcqJW/30dac2bdc5faafec5b54f4a916459cb8/NR_12.png" alt="NR 12" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Next, configure the chart: assign it to the same group and set Type to “Line.”&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/OGKPTHO8ZP9xCvJmdZmRR/5197b48b02bfebc85473f51ef7b100aa/NR_13.png" alt="NR 13" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; After this, you need to wire the gauge and chart to the slider, as shown in the image below.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5cuUQySAXoXAAsGgbusG9n/dbad57c8d2fd66739419306b0ff0d7fa/Screenshot_2026-02-18_at_1.06.21â__AM.png" alt="NR 14" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Deploy and go to your dashboard UI (&lt;strong&gt;http://localhost:1880/dashboard&lt;/strong&gt; or &lt;strong&gt;http://Your_RPi_IP_address:1880/dashboard&lt;/strong&gt;).
&lt;strong&gt;Step 7:&lt;/strong&gt; In your dashboard UI, navigate to the next page. Click the hamburger icon in the top left corner, and you will see a list of your pages. Select your second page (mine is “Monitoring Dashboard”).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4FQZQuKkk7qWwU83iVzMDG/88f9a3e773ab36eeeb727d32a0bfc9b1/NR_14.png" alt="NR 14/2" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; On your second page, notice the initial state of your dashboard.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2MaKTPvxzcfryDshVRmMR9/1a856b4321821f56825f83690c5d9b9c/NR_15.png" alt="NR 15" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 9:&lt;/strong&gt; Move the slider on your dashboard and watch everything update instantly (the gauge changes color and the chart plots the values).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4xzX407VuoK6ahonnh0dS7/4b21bbca2f5b5b5ba22d0a1abfc62c49/NR_16.png" alt="NR 16" /&gt;&lt;/p&gt;

&lt;h2 id="connecting-to-real-weather-data"&gt;Connecting to real weather data&lt;/h2&gt;

&lt;p&gt;Now, let’s enhance our example by connecting to a real data source. In this example, we’ll fetch real weather data and display it on our dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Open &lt;strong&gt;Manage Palette&lt;/strong&gt; from the &lt;strong&gt;Menu&lt;/strong&gt;, then search for and install node-red-node-openweathermap (like we did with @flowfuse/node-red-dashboard in the previous example).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Get an API key from http://openweathermap.org (new keys can take up to two hours to activate, so be patient if you’ve just signed up).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Drag &lt;strong&gt;openweathermap&lt;/strong&gt; node to your workspace, double-click it, paste your API key and enter your city and country (e.g., “New York City, US”), and then click &lt;strong&gt;Done&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/13EmvDO0POWTT0cHNS3aXF/e1b1122067660cc2df5e851f910d930d/NR_17.png" alt="NR 17" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Next, drag an &lt;strong&gt;inject&lt;/strong&gt; node to your workspace, double-click it, and set &lt;strong&gt;Repeat&lt;/strong&gt; to “interval” (every 10 minutes works well for weather data). Also, check “Inject once after” so it runs on startup. Then wire the &lt;strong&gt;inject&lt;/strong&gt; node to your &lt;strong&gt;openweathermap&lt;/strong&gt; node.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/27d2Ljk4kAuUMRCyGzN7nv/9e307ac565fc1293fc7c9f22ca29b232/NR_18.png" alt="NR 18" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Drag a &lt;strong&gt;debug&lt;/strong&gt; node, wire it to the &lt;strong&gt;openweathermap&lt;/strong&gt; node, and &lt;strong&gt;Deploy&lt;/strong&gt;. Open the &lt;strong&gt;Debug&lt;/strong&gt; panel (in the right sidebar, bug icon). You’ll see the weather data structure showing what’s available.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1lSaUl4BslgxhxEEAlNeWB/98a339bb4cfdc518259e2df11d635362/NR_19.png" alt="NR 19" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Next, add three function nodes to extract data:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Double-click the first function node, name it “Get Temperature,” and add the code below:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-javascript"&gt;msg.payload = msg.payload.tempc; 
return msg;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4zn26Af28J5AqGKEFSfgxY/e97e11f202952c7d2e02b43e6fad0472/NR_20.png" alt="NR 20" /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Name the second “Get Humidity” and add:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-javascript"&gt;msg.payload = msg.payload.humidity; 
return msg;&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
  &lt;li&gt;Name the last function “Get Conditions” and add:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-javascript"&gt;msg.payload = msg.payload.description;
return msg;&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
  &lt;li&gt;Wire all three functions from the &lt;strong&gt;openweathermap&lt;/strong&gt; node.&lt;/li&gt;
&lt;/ul&gt;
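&lt;p&gt;As an alternative to three separate nodes, the same extractions can run in a single function node configured with three outputs (set “Outputs” to 3 in the node’s settings). The sketch below simulates the msg object that the Node-RED runtime normally provides; the sample payload values are illustrative only:&lt;/p&gt;

```javascript
// One function node, three outputs: return an array and Node-RED routes
// element 0 to output 1, element 1 to output 2, and so on.
function extractWeather(msg) {
  const temp = { payload: msg.payload.tempc };             // output 1: temperature gauge
  const humidity = { payload: msg.payload.humidity };      // output 2: humidity gauge
  const conditions = { payload: msg.payload.description }; // output 3: text widget
  return [temp, humidity, conditions];
}

// Simulated message shaped like the node-red-node-openweathermap output
const sample = { payload: { tempc: 21.5, humidity: 60, description: "clear sky" } };
const [t, h, c] = extractWeather(sample);
console.log(t.payload, h.payload, c.payload); // prints: 21.5 60 clear sky
```

&lt;p&gt;In Node-RED itself you would return the array directly from the node body; the wrapper function and sample message here just make the sketch runnable outside the runtime.&lt;/p&gt;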

&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Next, add display widgets:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Add two &lt;strong&gt;gauge&lt;/strong&gt; nodes for temperature (set unit as “°C”, range -10 to 40) and humidity (% as unit, range 0-100).&lt;/li&gt;
  &lt;li&gt;Add one &lt;strong&gt;text&lt;/strong&gt; node for the weather conditions.&lt;/li&gt;
  &lt;li&gt;Assign all to the same group (preferably a new group and page) and wire each function to its corresponding display widget.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6l9J6Yp69hbfXM4JjXlOEq/ece07849ccc75a8e5aef240af5835735/NR_21.png" alt="NR 21" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; Finally, deploy and visit your dashboard UI.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3Hf47PDNUor8qqH9cSHRnE/fa717fc26ff025fd1c50a5d392aaa4b7/NR_22.png" alt="NR 22" /&gt;&lt;/p&gt;

&lt;p&gt;You now have a live weather dashboard that updates every 10 minutes!&lt;/p&gt;

&lt;h2 id="dashboard-theme-and-styling"&gt;Dashboard theme and styling&lt;/h2&gt;

&lt;p&gt;By default, the Node-RED dashboard comes with a light theme, but you can also customize it. To edit the theme on your UI, open the Dashboard 2.0 sidebar, click on the Theme section, and you can:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Customize themes and colors&lt;/li&gt;
  &lt;li&gt;Set custom fonts&lt;/li&gt;
  &lt;li&gt;Adjust spacing and sizing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6Z3yPbMMtfPMO3KaaAlMXO/a7e206cc780611432cb984128abf8280/NR_23.png" alt="NR 23" /&gt;&lt;/p&gt;

&lt;p&gt;After you’re done, don’t forget to deploy so you can view your changes.&lt;/p&gt;

&lt;h2 id="common-issues-and-fixes"&gt;Common issues and fixes&lt;/h2&gt;

&lt;p&gt;While following this tutorial, here are some common hurdles you might encounter and how to fix them:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;If your dashboard fails to open, check the URL. Try http://localhost:1880/dashboard; if Node-RED is running on a Raspberry Pi, use the Pi’s IP address instead.&lt;/li&gt;
  &lt;li&gt;If your widgets are not appearing on your dashboard, ensure that they’re connected to a group and a page before deploying.&lt;/li&gt;
  &lt;li&gt;If OpenWeatherMap returns “Invalid API key,” wait a while; new keys can take up to two hours to activate.&lt;/li&gt;
  &lt;li&gt;Clicking buttons but getting no response? Check that your nodes are wired correctly.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id="wrapping-up-node-red-dashboard"&gt;Wrapping up Node-RED dashboard&lt;/h2&gt;

&lt;p&gt;This guide aimed to give you a working knowledge of Node-RED Dashboard 2.0 on the Raspberry Pi. With the examples we covered in this post, you should have a good idea of what to build next with the Node-RED dashboard on your journey to an automated life.&lt;/p&gt;

&lt;h2 id="node-red-dashboard-faqs"&gt;Node-RED Dashboard FAQs&lt;/h2&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;div id="accordion_second"&gt;
    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-1"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What is the difference between Node-RED Dashboard 1.0 and Dashboard 2.0?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-1" class="message-body is-collapsible is-active" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Dashboard 2.0, published by FlowFuse under the package &lt;code&gt;@flowfuse/node-red-dashboard&lt;/code&gt;, is a ground-up rewrite of the original Node-RED dashboard. It introduces a new hierarchy (Base, Page, Theme, Group, Widget), improved theming and styling controls, and a more modern component set. If you're starting a new project, Dashboard 2.0 is the recommended choice, as it is actively maintained and better suited for complex, multi-page UIs.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-2"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;Can I run the Node-RED dashboard without a Raspberry Pi?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-2" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Yes. Node-RED runs on any system that supports Node.js, including Windows, macOS, and Linux. The dashboard is accessed through a browser at &lt;code&gt;http://localhost:1880/dashboard&lt;/code&gt; regardless of which machine Node-RED is installed on. Raspberry Pi is a popular choice for home automation projects, but it is not required to use the Node-RED dashboard.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-3"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;Why are my Node-RED dashboard widgets not showing up?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-3" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                The most common reason widgets don't appear is that they haven't been assigned to both a Group and a Page before deploying. Every widget in Dashboard 2.0 must be linked to a group, which must itself be linked to a page. Double-click the widget node to verify those assignments, then click the red Deploy button and refresh your dashboard URL.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-4"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;How do I display live data from an external API in the Node-RED dashboard?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-4" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Use an inject node set to a repeating interval to trigger the flow on a schedule, then connect it to a data-source node (such as &lt;code&gt;node-red-node-openweathermap&lt;/code&gt; for weather data). Wire one or more function nodes after the data source to extract the specific fields you need from the payload, then connect each function node to a corresponding display widget — such as a gauge or text node — on your dashboard. Deploy the flow and the dashboard will update automatically at each interval.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-5"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;How do I add multiple pages to my Node-RED dashboard?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-5" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Open the Dashboard 2.0 sidebar on the right side of the Node-RED editor and click the &lt;strong&gt;+ Page&lt;/strong&gt; button to create a new page. You can also create a page directly when configuring any widget node — click the pencil icon next to the Page field to edit or create one inline. Once pages are created, users can navigate between them via the hamburger menu in the top-left corner of the dashboard UI.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-6"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;How do I change the theme or colors of my Node-RED dashboard?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-6" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                In the Dashboard 2.0 sidebar, navigate to the Theme section. From there you can customize colors, set custom fonts, and adjust spacing and sizing. The default is a light theme, but dark and custom themes are supported. After making changes, click Deploy to apply them to your live dashboard.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-7"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What types of widgets are available in Node-RED Dashboard 2.0?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-7" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Dashboard 2.0 includes a range of UI widgets covering inputs (buttons, sliders), displays (gauges, charts, text), and layout controls (groups, pages). Gauges support color-coded segments, charts support line and other types, and sliders can drive real-time updates to other widgets. All widgets appear in the palette under the "dashboard 2" section after installing &lt;code&gt;@flowfuse/node-red-dashboard&lt;/code&gt;.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

&lt;/div&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
</description>
      <pubDate>Wed, 18 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/node-red-dashboard-tutorial/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/node-red-dashboard-tutorial/</guid>
      <category>Developer</category>
      <author>Community (InfluxData)</author>
    </item>
    <item>
      <title>From Legacy Data Historians to a Modern, Open Industrial Data Stack</title>
      <description>&lt;p&gt;We recently sat down with founder and principal consultant at recultiv8, &lt;a href="https://za.linkedin.com/in/coenraadpretorius"&gt;Coenraad Pretorius&lt;/a&gt;, who drew on his years of data engineering experience in the manufacturing and energy sectors to share key industrial IoT insights. In this article, I list the top takeaways; you can also watch the full webinar recording &lt;a href="https://www.influxdata.com/resources/modernizing-industrial-data-stacks-energy-optimization-with-recultiv8-influxdb"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="the-challenge-with-traditional-data-historians"&gt;The challenge with traditional data historians&lt;/h2&gt;

&lt;p&gt;Industrial systems generate large volumes of time series data from machines, sensors, and control systems. Historically, this data has been managed using proprietary data historian platforms.&lt;/p&gt;

&lt;p&gt;These systems often lead to the following challenges:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Complexity&lt;/strong&gt;: Traditional stacks involve many tightly coupled components: SCADA systems, OPC servers, historians, data extraction tools, and analytics layers. Each layer requires specialized skills, making systems difficult to debug, extend, or modernize.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;High cost&lt;/strong&gt;: Per-tag licensing, annual maintenance fees, and specialized training significantly increase the total cost of ownership, particularly as systems scale.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Slow time to insight&lt;/strong&gt;: Extracting and analyzing data often takes days or weeks, delaying decisions and limiting optimization opportunities.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The analytics gap&lt;/strong&gt;: Traditional historians prioritize &lt;strong&gt;data storage&lt;/strong&gt;, not &lt;strong&gt;data analysis&lt;/strong&gt;. Common pain points include proprietary query languages, reliance on Excel exports, overloaded BI integrations, and additional licensing for advanced features. As a result, time to action is measured in days or weeks rather than hours, which is an unacceptable delay for modern industrial operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="data-historian-technical-architecture"&gt;Data Historian Technical Architecture&lt;/h4&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6Atyss4Y4ewXA83dyvy5tP/210bc838fdb9f5c416b8c1af0603f021/Screenshot_2026-02-11_at_9.41.38â__PM.png" alt="Data Historian Traditional Architecture" /&gt;&lt;/p&gt;

&lt;h2 id="a-modern-open-architecture-edge--cloud"&gt;A modern, open architecture: edge + cloud&lt;/h2&gt;

&lt;p&gt;To address these limitations, Coenraad presented a modern architecture built around InfluxDB 3, open source tooling, and cloud analytics. The core idea is a &lt;strong&gt;clear separation of responsibilities&lt;/strong&gt; that leads to improved performance, security, cost efficiency, and scalability while keeping systems simpler and easier to operate.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Edge systems&lt;/strong&gt; handle real-time ingestion, short-term storage, and operational dashboards close to the data source.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Cloud systems&lt;/strong&gt; handle long-term storage, historical analysis, and advanced analytics without impacting operational performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="modern-iiot-technical-architecture"&gt;Modern IIoT Technical Architecture&lt;/h4&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3LQnfjtGdArcpRRAPgZnel/a49068e5d20e85b5f92e3f927231f93b/Screenshot_2026-02-11_at_9.56.11â__PM.png" alt="Modern Stack Overview" /&gt;&lt;/p&gt;

&lt;h2 id="example-from-coenraads-case-study"&gt;Example from Coenraad’s case study&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Typical deployment setup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Four OPC UA servers&lt;/li&gt;
  &lt;li&gt;10k+ tags&lt;/li&gt;
  &lt;li&gt;Windows-based servers&lt;/li&gt;
  &lt;li&gt;Telegraf running as Windows service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Configuration approach&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Split config files (agent, inputs, outputs)&lt;/li&gt;
  &lt;li&gt;Custom Starlark processor for schema management&lt;/li&gt;
  &lt;li&gt;Environment variables for cloud credentials&lt;/li&gt;
&lt;/ul&gt;
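&lt;p&gt;To make the split-file approach concrete, here is a minimal, hypothetical sketch of what the input and output fragments might look like; the server address, tag identifiers, and bucket name are illustrative, not taken from the case study:&lt;/p&gt;

```toml
# telegraf.d/inputs-opcua.conf -- one file per OPC UA server
[[inputs.opcua]]
  endpoint = "opc.tcp://opc-server-01:4840"   # illustrative server address
  security_policy = "None"
  [[inputs.opcua.nodes]]
    name = "line1_temperature"
    namespace = "2"
    identifier_type = "s"
    identifier = "Line1.Temperature"          # illustrative tag

# telegraf.d/outputs-influxdb.conf
[[outputs.influxdb_v2]]
  urls = ["${INFLUX_URL}"]       # cloud credentials come from the environment
  token = "${INFLUX_TOKEN}"
  organization = "${INFLUX_ORG}"
  bucket = "plant-metrics"       # illustrative bucket name
```

&lt;p&gt;Telegraf expands ${INFLUX_TOKEN}-style references from the environment at startup, which keeps credentials out of version-controlled config files.&lt;/p&gt;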

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;: Rapid implementation of the modern data stack using an open source solution resulted in a one-off saving of $70k and $5 in annual savings.&lt;/p&gt;

&lt;h2 id="why-this-approach-works"&gt;Why this approach works&lt;/h2&gt;

&lt;p&gt;This modern stack delivers several practical benefits:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Simpler systems&lt;/strong&gt; built with tools most developers already know, like SQL and Python.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Faster dashboards&lt;/strong&gt; move from multi-second load times to near-instant response, as detailed in this &lt;a href="https://h3xagn.com/blazingly-fast-dashboards-with-influxdb"&gt;blog post&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Lower costs&lt;/strong&gt; are incurred by replacing proprietary licensing with open source and consumption-based services.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Flexible data pipelines&lt;/strong&gt; use Telegraf to ingest data from industrial protocols such as OPC UA, MQTT, and Modbus into InfluxDB Core with optional streaming to the cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="recap"&gt;Recap&lt;/h2&gt;

&lt;p&gt;The difference is fairly cut and dried: traditional data historians often limit agility and slow down insights, while modern industrial data stacks focus on speed, openness, and maintainability by separating edge operations from cloud analytics and using familiar, developer-friendly tools. For industrial and IIoT teams, modernizing the data pipeline is now foundational. To learn more, read the Teréga &lt;a href="https://www.influxdata.com/blog/terega-replaced-legacy-data-historian-with-influxdb-aws-io-base/"&gt;case study&lt;/a&gt; and connect with our community in the InfluxDB forums.&lt;/p&gt;
</description>
      <pubDate>Thu, 12 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/modern-industrial-stack-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/modern-industrial-stack-influxdb/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
  </channel>
</rss>
