<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>InfluxData Blog</title>
    <description>The place for technical guides, customer observability &amp; IoT use cases, product info, and news on leading time series platform InfluxDB, Telegraf, SQL, &amp; more.</description>
    <link>https://www.influxdata.com/blog/</link>
    <language>en-us</language>
    <lastBuildDate>Thu, 02 Apr 2026 12:00:00 +0000</lastBuildDate>
    <pubDate>Thu, 02 Apr 2026 12:00:00 +0000</pubDate>
    <ttl>1800</ttl>
    <item>
      <title>What’s New in InfluxDB 3.9: More Operational Control and a New Performance Preview</title>
      <description>&lt;p&gt;We’ve spent the last few months listening to how teams are running InfluxDB 3 in the wild. The feedback was clear: as you scale, you need less “guesswork” and more control. Today’s release of InfluxDB 3.9 is our answer to that.&lt;/p&gt;

&lt;p&gt;As more teams move InfluxDB 3 into production, our focus has shifted toward the operational experience: how you manage the database at scale, how you ensure it remains secure, and how you provide a seamless experience for users. This release delivers a host of quality-of-life improvements and a beta preview of the key features we have planned for upcoming releases.&lt;/p&gt;

&lt;p&gt;Whether you’re using the open source &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=influxdb_3_9&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; for recent data and local workloads or scaling with &lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/?utm_source=website&amp;amp;utm_medium=influxdb_3_9&amp;amp;utm_content=blog"&gt;InfluxDB 3 Enterprise&lt;/a&gt; for the full clustering and security suite, these 3.9 updates are designed to make your stack more predictable.&lt;/p&gt;

&lt;h2 id="operational-maturity-and-system-transparency"&gt;Operational maturity and system transparency&lt;/h2&gt;

&lt;p&gt;In 3.9, we’ve focused on making the database more predictable and transparent for operators. We have organized these refinements into three key areas:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Advanced CLI &amp;amp; Automation&lt;/strong&gt;: We’ve expanded the CLI to better support complex, headless environments. This includes new flags for non-interactive automation and data validation, alongside support for unique host overrides to target specific node types in a cluster. We’ve also improved how Parquet query outputs are piped, making it easier to integrate InfluxDB into automated data pipelines (a short sketch follows this list).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;System Reliability &amp;amp; Resource Management&lt;/strong&gt;: We’ve refined how the database handles resources and large-scale schemas. To better support complex data, we’ve increased the default string field limit to 1MB. We’ve also hardened the database lifecycle; administrative controls are now more rigorous, and we’ve ensured that background resources, such as triggers, are cleanly decommissioned whenever a database is removed.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Visibility &amp;amp; Under-the-Hood Infrastructure&lt;/strong&gt;: We’ve upgraded our core infrastructure to improve both security and operational clarity. This includes upgrading DataFusion and the bundled Python for more efficient query execution and plugin security. Additionally, the system now provides better visibility into access control and product identity, updating metrics, headers, and metadata access to clearly distinguish between Core and Enterprise builds across your stack.&lt;/li&gt;
&lt;/ul&gt;
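
&lt;p&gt;As a quick illustration of that Parquet piping workflow, here’s a minimal sketch using the &lt;code&gt;influxdb3-python&lt;/code&gt; client; the host, token, database, and table names are placeholders for your own deployment.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A minimal sketch, assuming the influxdb3-python client; host, token,
# and database values are placeholders.
from influxdb_client_3 import InfluxDBClient3
import pyarrow.parquet as pq

client = InfluxDBClient3(
    host="http://localhost:8181",
    token="MY_TOKEN",    # placeholder credential
    database="sensors",  # placeholder database name
)

# The query returns an Arrow table, which writes straight to Parquet --
# handy for feeding automated downstream pipelines.
table = client.query("SELECT * FROM cpu WHERE time >= now() - INTERVAL '1 hour'")
pq.write_table(table, "cpu_last_hour.parquet")
&lt;/code&gt;&lt;/pre&gt;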

&lt;p&gt;Collectively, these refinements remove the subtle points of friction that can accumulate as a system scales in production. By hardening resource management and streamlining automation, we’re ensuring that InfluxDB 3 remains a predictable, “set-it-and-forget-it” core for your infrastructure.&lt;/p&gt;

&lt;h2 id="now-in-beta-a-new-performance-preview"&gt;Now in beta: A new performance preview&lt;/h2&gt;

&lt;p&gt;Behind the scenes, we’ve been working on performance updates to InfluxDB 3. These improvements support large-scale time series workloads without sacrificing predictability or operational simplicity. This work lays the foundation for what’s coming in 3.10 and 3.11, specifically focusing on smoothing behavior under load and expanding the range of schemas InfluxDB 3 can handle.&lt;/p&gt;

&lt;p&gt;Because performance in time series is highly dependent on specific workloads and cardinality, we are introducing these updates as a beta in InfluxDB 3 Enterprise. The beta is intended for testing in staging or development environments only. It allows you to explore and provide feedback on:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Optimized single-series queries&lt;/strong&gt;: Targeting reduced latency when fetching single-series data over long time windows (see the query sketch after this list).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Resource smoothing&lt;/strong&gt;: Testing reduced CPU and memory spikes during heavy compaction or ingestion bursts.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Wide-and-sparse table support&lt;/strong&gt;: For handling schemas ranging from extreme column counts to ultra-sparse data tables (or any combination).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Automatic distinct value caches&lt;/strong&gt;: Early-stage automatic creation of caches designed to reduce friction and eliminate metadata query latency.&lt;/li&gt;
&lt;/ul&gt;
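
&lt;p&gt;To make the first item concrete, this is the shape of query the optimized single-series path targets. The beta’s internals aren’t covered here; the table and tag names below are illustrative, and the query runs through the &lt;code&gt;influxdb3-python&lt;/code&gt; client.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Shape of a single-series, long-window query -- the pattern the beta's
# optimized path targets. Table and tag names are illustrative.
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host="http://localhost:8181",
                         token="MY_TOKEN", database="telemetry")

sql = """
SELECT time, temperature
FROM machine_metrics
WHERE machine_id = 'press-07'            -- one series (a single tag value)
  AND time >= now() - INTERVAL '90 days' -- long time window
ORDER BY time
"""
table = client.query(sql)
&lt;/code&gt;&lt;/pre&gt;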

&lt;p&gt;These updates are available as an optional, flag-gated preview in InfluxDB 3.9 Enterprise. &lt;strong&gt;They are not recommended for production workloads&lt;/strong&gt;. We encourage Enterprise users to test these capabilities against their specific use cases to help us refine the features for GA. InfluxDB 3 Core will also support many of these new features in the coming releases.&lt;/p&gt;

&lt;p&gt;For instructions on how to enable these preview flags and to view the full technical requirements, visit our &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/?utm_source=website&amp;amp;utm_medium=influxdb_3_9&amp;amp;utm_content=blog"&gt;official Enterprise documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h5 id="get-started-and-share-your-feedback"&gt;Get started and share your feedback:&lt;/h5&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Download InfluxDB 3.9&lt;/strong&gt;: Available now via our &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=influxdb_3_9&amp;amp;utm_content=blog"&gt;downloads page&lt;/a&gt; or latest Docker images.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Join the beta&lt;/strong&gt;: If you are an InfluxDB 3 Enterprise Trial user, reach out to me in our &lt;a href="https://discord.com/invite/9zaNCW2PRT"&gt;Discord&lt;/a&gt; or &lt;a href="https://influxcommunity.slack.com/join/shared_invite/zt-3hevuqtxs-3d1sSfGbbQgMw2Fj66rZsA#/shared-invite/email"&gt;Community Slack&lt;/a&gt; to learn how to enable these beta features.&lt;/li&gt;
&lt;/ul&gt;
</description>
      <pubDate>Thu, 02 Apr 2026 12:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-3-9/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-3-9/</guid>
      <category>Product</category>
      <category>Developer</category>
      <category>news</category>
      <author>Peter Barnett (InfluxData)</author>
    </item>
    <item>
      <title>What is MRO? Maintenance, Repair, and Operations Explained</title>
      <description>&lt;p&gt;MRO stands for &lt;strong&gt;maintenance, repair, and operations&lt;/strong&gt;. It refers to the activities, supplies, and services that keep equipment, facilities, and infrastructure running safely and efficiently. Every industry that relies on physical assets depends on MRO, whether that means replacing a worn bearing on a production line, restocking safety gloves in a warehouse, or servicing an HVAC system in a hospital.&lt;/p&gt;

&lt;p&gt;Despite being one of the largest categories of indirect spending in most organizations, MRO is chronically under-managed. This article explains what MRO covers, why it matters, how maintenance strategies differ, and how it plays out across industries.&lt;/p&gt;

&lt;h2 id="what-is-mro"&gt;What is MRO?&lt;/h2&gt;

&lt;p&gt;MRO is a broad category that encompasses the indirect materials, maintenance activities, and operational support required to keep a business functioning. MRO includes everything from spare parts and lubricants to safety equipment, cleaning supplies, and the labor required to inspect, fix, and service physical assets.&lt;/p&gt;

&lt;p&gt;The scope of MRO varies by organization, but it always sits outside of direct production. A replacement motor for a conveyor belt is an MRO item. The raw steel that travels on that conveyor is not. This distinction matters for accounting, procurement strategy, and inventory management.&lt;/p&gt;

&lt;h4 id="common-mro-supplies-and-activities"&gt;Common MRO Supplies and Activities&lt;/h4&gt;

&lt;p&gt;MRO is easier to understand through concrete examples:&lt;/p&gt;

&lt;div&gt;
  &lt;table&gt;
    &lt;thead&gt;
      &lt;tr&gt;
        &lt;th&gt;Category&lt;/th&gt;
        &lt;th&gt;Description&lt;/th&gt;
        &lt;th&gt;Examples&lt;/th&gt;
      &lt;/tr&gt;
    &lt;/thead&gt;
    &lt;tbody&gt;
      &lt;tr&gt;
        &lt;td&gt;MRO supplies&lt;/td&gt;
        &lt;td&gt;Parts, materials, and consumables used to maintain equipment and facilities.&lt;/td&gt;
        &lt;td&gt;Spare parts (bearings, seals, belts, filters, motors), lubricants and greases, fasteners, hand and power tools, electrical components (fuses, contactors, wiring), safety equipment (gloves, goggles, hard hats, respirators), cleaning and janitorial products, adhesives and tapes, and facility consumables (light bulbs, HVAC filters).&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;MRO activities&lt;/td&gt;
        &lt;td&gt;Hands-on maintenance and repair work performed on assets.&lt;/td&gt;
        &lt;td&gt;Routine inspections, lubrication, electrical testing, equipment alignment, welding repairs, painting and corrosion protection, calibration, and full equipment rebuilds.&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;MRO services&lt;/td&gt;
        &lt;td&gt;Outsourced or contracted maintenance support.&lt;/td&gt;
        &lt;td&gt;Third-party maintenance contracts, on-call repair technicians, specialized inspections (non-destructive testing), and outsourced maintenance for complex assets.&lt;/td&gt;
      &lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;h2 id="why-mro-matters"&gt;Why MRO matters&lt;/h2&gt;

&lt;p&gt;MRO spending often accounts for a significant share of an organization’s operating costs, yet it receives a fraction of the strategic attention that direct materials get. The numbers make a compelling case for changing that.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;The market is massive&lt;/strong&gt;. The global MRO market was valued at roughly $715 billion in 2025 and is projected to grow steadily through the next decade, driven by aging infrastructure, the rise of predictive maintenance, and increasing demand for operational efficiency.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Downtime is extraordinarily expensive&lt;/strong&gt;. &lt;a href="https://www.ismworld.org/supply-management-news-and-reports/news-publications/inside-supply-management-magazine/blog/2024/2024-08/the-monthly-metric-unscheduled-downtime/"&gt;A 2024 Siemens report&lt;/a&gt; found that unplanned downtime costs the world’s 500 largest companies a combined $1.4 trillion per year, roughly 11% of their annual revenues. At a facility level, costs vary by industry, but the averages are sobering: approximately $260,000 per hour in general manufacturing, and over $2 million per hour in automotive production. Even smaller manufacturers typically lose over $100,000 per hour of unexpected downtime.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Equipment failure is the leading cause of downtime&lt;/strong&gt;. The average manufacturer faces an estimated 800 hours of equipment downtime annually. Equipment failure accounts for roughly 42% of unplanned downtime incidents, and base components like bearings, seals, and motors are the most common culprits. These are precisely the kinds of failures that a well-run MRO program is designed to prevent.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Proactive maintenance pays for itself&lt;/strong&gt;. Research from McKinsey and others consistently shows that organizations implementing predictive maintenance programs see &lt;a href="https://www.iiot-world.com/predictive-analytics/predictive-maintenance/predictive-maintenance-cost-savings/"&gt;18–25% reductions&lt;/a&gt; in overall maintenance costs and 30–50% reductions in unplanned downtime. The U.S. Department of Energy has reported a potential &lt;strong&gt;ROI of up to 10x on predictive maintenance investments&lt;/strong&gt;. Reactive repairs, by contrast, cost three to five times more than planned maintenance once you account for emergency labor, expedited parts shipping, and cascading production losses.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Safety and compliance depend on it&lt;/strong&gt;. Regulatory bodies across industries mandate specific maintenance activities and intervals. Falling behind on MRO creates safety hazards for workers, compliance risk for the organization, and potential legal liability.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="maintenance-strategies-preventive-predictive-planned-and-condition-based"&gt;Maintenance strategies: preventive, predictive, planned, and condition-based&lt;/h2&gt;

&lt;p&gt;Organizations typically employ a mix of strategies, and the trend across industries is a steady shift from reactive to proactive, data-driven approaches.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3xBRG5cCTK4CqGAImWHorU/6d8cafbd1630cb9d3bfdddcd1218e482/Diagram_01.png" alt="Reactive to Predictive MRO" /&gt;&lt;/p&gt;

&lt;h4 id="preventive-maintenance"&gt;Preventive Maintenance&lt;/h4&gt;

&lt;p&gt;Preventive maintenance is scheduled work performed at fixed intervals to reduce the likelihood of failure. Oil changes every 500 operating hours, filter replacements every quarter, and belt inspections every month are all preventive activities. The advantage is predictability: you know what work is coming and can plan parts and labor accordingly. The drawback is that you may be replacing components that still have significant useful life remaining, which wastes money and materials.&lt;/p&gt;

&lt;h4 id="planned-maintenance"&gt;Planned Maintenance&lt;/h4&gt;

&lt;p&gt;Planned maintenance is a broader category that includes any maintenance activity scheduled in advance, whether it follows a calendar-based interval, a usage-based trigger, or a condition-based alert. The defining characteristic is that the work is anticipated and resourced before it begins, as opposed to reactive or emergency maintenance. Planned maintenance also encompasses scheduled shutdowns and turnarounds, where equipment is taken offline deliberately for extensive servicing.&lt;/p&gt;

&lt;h4 id="condition-based-maintenance"&gt;Condition-Based Maintenance&lt;/h4&gt;

&lt;p&gt;Condition-based maintenance (CBM) uses real-time monitoring of equipment health indicators like vibration, temperature, oil quality, and electrical signatures to trigger maintenance only when those indicators show that maintenance is actually needed. Rather than replacing a bearing on a fixed schedule, CBM replaces it when vibration analysis shows degradation has reached a threshold. This approach eliminates much of the waste inherent in time-based schedules while still catching problems before failure.&lt;/p&gt;
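
&lt;p&gt;The logic behind CBM is simple enough to sketch. The snippet below is illustrative only: the threshold, sample window, and data shape are assumptions, not a recommendation for any specific machine.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A minimal condition-based trigger, assuming vibration readings arrive as
# (timestamp, mm/s) pairs from a monitoring pipeline. The 7.1 mm/s threshold
# is illustrative; real limits come from standards and vendor specs.
VIBRATION_THRESHOLD_MM_S = 7.1

def check_bearing(readings):
    """Flag a work order when sustained vibration exceeds the threshold."""
    recent = readings[-10:]  # last 10 samples, to ignore one-off spikes
    if all(v > VIBRATION_THRESHOLD_MM_S for _, v in recent):
        return "CREATE_WORK_ORDER"
    return "OK"
&lt;/code&gt;&lt;/pre&gt;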

&lt;h4 id="predictive-maintenance"&gt;Predictive Maintenance&lt;/h4&gt;

&lt;p&gt;Predictive maintenance takes condition-based monitoring a step further by applying machine learning, statistical models, and trend analysis to forecast when a component is likely to fail. Where CBM reacts to current conditions, predictive maintenance anticipates future conditions based on patterns in historical and real-time data. Sensors tracking vibration, temperature, pressure, and acoustic signatures feed data into analytics platforms that can predict failures days or weeks in advance.&lt;/p&gt;
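
&lt;p&gt;Here is a toy version of that forecasting step, assuming a history of daily vibration averages: fit a linear trend and estimate how many days remain until the failure threshold is crossed. Production systems use far richer models, but the idea is the same.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative only: extrapolate a linear trend to estimate days until a
# vibration threshold is crossed. Threshold and data shape are assumptions.
import numpy as np

def days_until_threshold(daily_vibration, threshold=7.1):
    days = np.arange(len(daily_vibration))
    slope, intercept = np.polyfit(days, daily_vibration, 1)
    if slope > 0:
        crossing_day = (threshold - intercept) / slope
        return max(0.0, crossing_day - (len(daily_vibration) - 1))
    return None  # flat or improving trend: no predicted crossing
&lt;/code&gt;&lt;/pre&gt;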

&lt;p&gt;The results are striking: organizations with mature predictive maintenance programs report 35–45% reductions in unplanned downtime and an average ROI of around 250% within the first 18 months.&lt;/p&gt;

&lt;p&gt;The movement from reactive to predictive maintenance is one of the defining trends in MRO. As IIoT sensors become cheaper and more accessible, even smaller manufacturers can begin shifting toward condition-based and predictive approaches.&lt;/p&gt;

&lt;h3 id="mro-in-manufacturing"&gt;MRO in manufacturing&lt;/h3&gt;

&lt;p&gt;In the manufacturing industry, MRO encompasses all indirect materials and maintenance activities required to keep a production facility running. It is everything that supports the production process without becoming part of the finished product.&lt;/p&gt;

&lt;p&gt;Manufacturing MRO spending is often highly fragmented. A single plant might purchase thousands of distinct SKUs, such as motor drives, conveyor belts, lubricants, rags, and safety boots, from dozens of suppliers. The proportion of organizations using more than 250 MRO suppliers has grown from 6% to 15% in recent years. This fragmentation makes it difficult to negotiate volume discounts, track usage, or identify waste.&lt;/p&gt;

&lt;p&gt;Common MRO priorities in manufacturing include reducing unplanned downtime on production lines, maintaining critical spares inventory for high-impact equipment, shifting from reactive to preventive or predictive maintenance, standardizing parts and suppliers to simplify procurement, and ensuring compliance with OSHA and environmental regulations.&lt;/p&gt;

&lt;p&gt;Manufacturers that invest in structured MRO programs typically see improvements in overall equipment effectiveness (OEE), lower maintenance costs per unit of output, and fewer safety incidents.&lt;/p&gt;

&lt;h3 id="mro-in-aviation"&gt;MRO in aviation&lt;/h3&gt;

&lt;p&gt;Aviation has one of the most rigorous and regulated MRO environments of any industry. Aircraft MRO is governed by strict regulatory frameworks like the FAA in the United States and EASA in Europe. Every maintenance activity must be performed by certified repair stations, documented in detail, and traceable.&lt;/p&gt;

&lt;p&gt;The four main categories of aviation MRO are airframe maintenance, engine maintenance, component maintenance, and line maintenance.&lt;/p&gt;

&lt;p&gt;Aviation MRO is also where data-driven maintenance has seen some of its most advanced applications. Airlines use predictive maintenance platforms that analyze sensor data from aircraft systems to forecast component failures before they occur, minimizing aircraft-on-ground events and improving safety.&lt;/p&gt;

&lt;h3 id="mro-in-energy-and-utilities"&gt;MRO in energy and utilities&lt;/h3&gt;

&lt;p&gt;Energy and utilities represent one of the most asset-intensive sectors for MRO. Power plants, refineries, pipelines, water treatment facilities, and electrical grids all require continuous maintenance to remain operational and safe.&lt;/p&gt;

&lt;p&gt;The consequences of downtime in energy are particularly severe. Utilities face additional complexity from regulatory oversight and public safety requirements; a failed transformer or water treatment system affects entire communities.&lt;/p&gt;

&lt;p&gt;This sector has been an early adopter of IIoT and predictive maintenance technologies. Real-time monitoring of turbines, generators, transformers, and pipeline infrastructure allows operators to detect degradation early and schedule maintenance during planned outages rather than responding to emergencies.&lt;/p&gt;

&lt;h2 id="mro-procurement-inventory-and-software"&gt;MRO procurement, inventory, and software&lt;/h2&gt;

&lt;p&gt;Three operational areas determine how well an MRO program actually performs on a day-to-day basis.&lt;/p&gt;

&lt;div&gt;
  &lt;table&gt;
    &lt;thead&gt;
      &lt;tr&gt;
        &lt;th&gt;Area&lt;/th&gt;
        &lt;th&gt;Description and Key Strategies&lt;/th&gt;
      &lt;/tr&gt;
    &lt;/thead&gt;
    &lt;tbody&gt;
      &lt;tr&gt;
        &lt;td&gt;Procurement&lt;/td&gt;
        &lt;td&gt;The process of sourcing and purchasing indirect materials. High transaction volume but low individual dollar value. Improvement strategies include consolidating suppliers, using blanket purchase orders, and implementing e-procurement platforms.&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;Inventory&lt;/td&gt;
        &lt;td&gt;Balancing part availability against carrying costs. Effective management relies on criticality-based stocking, min/max levels, and regular cycle counts. MRO inventory supports production but is not part of the finished product.&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;Software&lt;/td&gt;
        &lt;td&gt;Tools to plan, track, and optimize maintenance. Includes CMMS for work orders, EAM for lifecycle planning, and e-procurement tools to streamline purchasing.&lt;/td&gt;
      &lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;


&lt;h2 id="where-time-series-databases-fit-in-an-mro-strategy"&gt;Where time series databases fit in an MRO strategy&lt;/h2&gt;

&lt;p&gt;The shift toward predictive maintenance creates a data infrastructure challenge that traditional systems were never designed to handle. A modern manufacturing facility with thousands of IIoT sensors can generate billions of data points daily. This is time series data, and it requires specialized tools at scale.&lt;/p&gt;

&lt;p&gt;Traditional relational databases and legacy data historians struggle with the volume, velocity, and query patterns of high-frequency sensor data. Time series databases are built for this workload. They are designed to ingest large volumes of timestamped data at high speed, compress it efficiently for long-term storage, and support the kinds of queries that maintenance and operations teams actually need: trend analysis over time windows, anomaly detection, and correlation across multiple sensor streams.&lt;/p&gt;
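
&lt;p&gt;As a concrete example of those query patterns, here is the shape of a windowed trend query in InfluxDB 3’s SQL dialect; the measurement and column names are illustrative.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hourly vibration averages per asset over 30 days -- the kind of windowed
# trend query maintenance teams run constantly. Names are illustrative.
sql = """
SELECT date_bin(INTERVAL '1 hour', time) AS hour,
       asset_id,
       avg(vibration_mm_s) AS avg_vibration
FROM equipment_sensors
WHERE time >= now() - INTERVAL '30 days'
GROUP BY hour, asset_id
ORDER BY hour
"""
&lt;/code&gt;&lt;/pre&gt;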

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5GIp6lyhNY9PPBrYRlO000/d5336a5398aa3ae4137af83384c737db/Diagram_02.png" alt="Telegraf Agent MRO" /&gt;&lt;/p&gt;

&lt;p&gt;InfluxDB is one of the most widely adopted time series databases in industrial environments. It is built to handle the data patterns that MRO and predictive maintenance generate, and it fits into the maintenance technology stack in several important ways.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Real-time equipment monitoring&lt;/strong&gt;: InfluxDB ingests data from PLCs, SCADA systems, and IIoT sensors via standard industrial protocols like MQTT, OPC UA, and Modbus through its Telegraf agent. This creates a live feed of equipment health data that maintenance teams can use to spot anomalies as they develop (a short sketch of this pattern follows the list).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Historical context for predictive models&lt;/strong&gt;: Effective predictive maintenance depends on having deep historical data to train machine learning models. InfluxDB stores years of sensor data in a compressed columnar format, making it practical and cost-effective to retain the historical depth that ML models need to identify failure patterns.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Bridging OT and IT systems&lt;/strong&gt;: One of the persistent challenges in MRO is that operational technology and information technology often exist in separate silos. InfluxDB integrates with both sides of this divide, connecting industrial data sources at the edge with analytics tools, cloud platforms, and AI/ML pipelines on the IT side.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Edge-to-cloud flexibility&lt;/strong&gt;: Not every facility has the same infrastructure. Some need on-premises data processing for latency or security reasons; others want cloud-based analytics. InfluxDB supports deployment at the edge, in private clouds, or in fully managed cloud environments, allowing organizations to match their data architecture to their operational reality.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
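
&lt;p&gt;To sketch the ingestion pattern from the first bullet: a gateway can publish line protocol over MQTT for a Telegraf &lt;code&gt;mqtt_consumer&lt;/code&gt; input to collect. The broker hostname, topic, and measurement below are placeholders.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A hedged sketch of the OT side of the pipeline: publish InfluxDB line
# protocol over MQTT for Telegraf to pick up. Names are placeholders.
import time
from paho.mqtt import publish

# Line protocol: measurement,tags fields timestamp(ns)
point = f"pump_telemetry,asset=pump-12 vibration=4.2,temp_c=61.3 {time.time_ns()}"
publish.single("factory/sensors", point, hostname="mqtt.plant.local")
&lt;/code&gt;&lt;/pre&gt;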

&lt;p&gt;The practical impact is tangible. &lt;a href="https://www.influxdata.com/resources/how-seadrill-transformed-billions-sensor-data-into-actionable-insights-with-influxdb/"&gt;Seadrill&lt;/a&gt; has reported saving over $1.6 million in a single year by using InfluxDB as its time series database for equipment monitoring. &lt;a href="https://www.influxdata.com/blog/siemens-energy-standardizes-predictive-maintenance-influxdb/"&gt;Siemens Energy uses InfluxDB to monitor 23,000 battery modules across more than 70 sites&lt;/a&gt;, analyzing billions of sensor readings to prevent downtime and ensure quality.&lt;/p&gt;

&lt;p&gt;For operations and maintenance teams evaluating their data infrastructure, the key question is whether their current systems can handle the data volumes that condition-based and predictive maintenance demand. If the answer is no, a time series database is the foundational layer that makes advanced maintenance strategies possible.&lt;/p&gt;

&lt;h2 id="common-mro-challenges"&gt;Common MRO challenges&lt;/h2&gt;

&lt;p&gt;Even well-intentioned MRO programs run into recurring problems.&lt;/p&gt;

&lt;h4 id="fragmented-spending"&gt;Fragmented Spending&lt;/h4&gt;

&lt;p&gt;This is the most widespread issue. When every department or site purchases MRO supplies independently, organizations lose leverage with suppliers and have no visibility into total spend.&lt;/p&gt;

&lt;h4 id="reactive-maintenance-culture"&gt;Reactive Maintenance Culture&lt;/h4&gt;

&lt;p&gt;This culture remains entrenched in many organizations. ABB’s Value of Reliability research found that two-thirds of companies experience unplanned downtime at least once per month, and a full third have not undertaken motor or drive modernization projects in the past two years, even though upgrading obsolete equipment can generate ROI in less than two years.&lt;/p&gt;

&lt;h4 id="poor-data-quality"&gt;Poor Data Quality&lt;/h4&gt;

&lt;p&gt;Poor data quality undermines almost every MRO improvement effort. Incomplete asset records, mislabeled parts, and patchy work-order histories make it difficult to decide what to stock, when to maintain, and where to invest. This problem compounds as organizations try to implement predictive maintenance, which depends entirely on clean, structured, time-stamped data.&lt;/p&gt;

&lt;h4 id="excess-and-obsolete-inventory"&gt;Excess and Obsolete Inventory&lt;/h4&gt;

&lt;p&gt;Excess and obsolete inventory ties up capital and warehouse space. Parts ordered for equipment that has since been retired, or spares stocked based on outdated failure rates, accumulate quietly until someone audits the stockroom.&lt;/p&gt;

&lt;h2 id="how-to-improve-an-mro-strategy"&gt;How to improve an MRO strategy&lt;/h2&gt;

&lt;p&gt;There is no single playbook for MRO improvement, but a few principles apply broadly.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Start with visibility&lt;/strong&gt;. Before you optimize anything, you need a clear picture of what you are spending, where your inventory sits, and how your assets are performing. Consolidating data from procurement, maintenance, and inventory systems is almost always the first step.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Classify assets by criticality&lt;/strong&gt;. Not all equipment deserves the same level of attention. Focus preventive and predictive maintenance resources on the assets whose failure would cause the greatest impact on safety, production, or cost.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Consolidate suppliers and standardize parts&lt;/strong&gt;. Reducing the number of MRO suppliers simplifies procurement, improves negotiating leverage, and makes it easier to manage inventory. Standardizing on common parts across similar equipment reduces the total number of SKUs you need to carry.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Shift from reactive to proactive maintenance&lt;/strong&gt;. This is a long-term cultural change, not a one-time project. Start with the highest-criticality assets, prove the value with condition monitoring and predictive analytics, and then scale. Organizations that make this transition consistently report dramatic reductions in both downtime and total maintenance cost.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Invest in the right data infrastructure&lt;/strong&gt;. Advanced maintenance strategies are only as good as the data infrastructure behind them. This means CMMS/EAM software for work order management, time series databases for high-frequency sensor data, and integration layers that connect these systems so that insights flow from the sensor to the decision-maker without friction.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Measure what matters&lt;/strong&gt;. Track metrics that connect MRO performance to business outcomes: planned vs. unplanned maintenance ratio, spare parts availability, mean time between failures (MTBF), overall equipment effectiveness (OEE), and maintenance cost as a percentage of asset replacement value.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="wrapping-up"&gt;Wrapping up&lt;/h2&gt;

&lt;p&gt;MRO may not be the most glamorous line item in an operating budget, but it is one of the most consequential. The organizations that treat maintenance, repair, and operations as a strategic function consistently outperform those that don’t. As sensor technology gets cheaper, predictive analytics gets smarter, and the data infrastructure to support them becomes more accessible, the gap between reactive and proactive organizations will only widen. The best time to invest in your MRO strategy was five years ago. The second-best time is now.&lt;/p&gt;

&lt;h2 id="mro-faqs"&gt;MRO FAQs&lt;/h2&gt;

&lt;h3&gt;What does MRO stand for?&lt;/h3&gt;

&lt;p&gt;MRO most commonly stands for maintenance, repair, and operations—the activities, supplies, and services that keep equipment and facilities running. In aviation and heavy industry, MRO can also stand for maintenance, repair, and overhaul, where "overhaul" refers to the complete teardown, inspection, and rebuild of a component or system to original specifications. Both meanings describe the same core concept: sustaining operational readiness of physical assets.&lt;/p&gt;

&lt;h3&gt;What is MRO in business?&lt;/h3&gt;

&lt;p&gt;In a business context, MRO refers to all indirect spending related to keeping operations running. This includes everything from preventive maintenance schedules and spare parts to safety equipment, cleaning supplies, and facility consumables. MRO sits outside of direct production costs but has a significant impact on uptime, safety, and total operating expense.&lt;/p&gt;

&lt;h3&gt;What is the difference between MRO inventory and production inventory?&lt;/h3&gt;

&lt;p&gt;Production inventory consists of raw materials and components that become part of the finished product. MRO inventory includes spare parts, tools, consumables, and supplies used to maintain equipment and facilities: items that support production but never appear in the final product. Both require management, but they serve different purposes and are often handled by different teams with different procurement strategies.&lt;/p&gt;

&lt;h3&gt;What is MRO in manufacturing?&lt;/h3&gt;

&lt;p&gt;In manufacturing, MRO covers the indirect materials (lubricants, filters, PPE, tools, electrical components) and maintenance activities (inspections, repairs, preventive maintenance) required to keep production equipment operational. It is one of the largest categories of indirect spending in most manufacturing organizations.&lt;/p&gt;

&lt;h3&gt;What is MRO in aviation?&lt;/h3&gt;

&lt;p&gt;In aviation, MRO stands for maintenance, repair, and overhaul. It is a heavily regulated segment that includes line maintenance, airframe and engine maintenance, component repair, and full overhauls of aircraft systems. Aviation MRO is essential for airworthiness certification and passenger safety, and it is governed by regulatory bodies like the FAA and EASA.&lt;/p&gt;

&lt;h3&gt;What are MRO supplies?&lt;/h3&gt;

&lt;p&gt;MRO supplies are the materials purchased to support maintenance and operational activities. Common examples include spare parts, fasteners, lubricants, hand tools, safety gear, cleaning products, electrical components, and facility consumables like light bulbs and HVAC filters. These items are consumed during the maintenance process rather than incorporated into a finished product.&lt;/p&gt;

&lt;h3&gt;Why is MRO important?&lt;/h3&gt;

&lt;p&gt;MRO directly affects equipment uptime, workplace safety, regulatory compliance, and operating costs. Unplanned downtime alone costs U.S. manufacturers an estimated $50 billion per year. Organizations that manage MRO effectively experience fewer breakdowns, lower total maintenance costs, longer asset lifespans, and better safety records. As maintenance strategies evolve from reactive to predictive, the strategic importance of MRO continues to grow.&lt;/p&gt;

&lt;h3&gt;What is the difference between preventive and predictive maintenance?&lt;/h3&gt;

&lt;p&gt;Preventive maintenance follows a fixed schedule, such as replacing a filter every 90 days regardless of its condition. Predictive maintenance uses real-time data from sensors to forecast when maintenance is actually needed, based on the condition and performance trends of the equipment. Predictive approaches reduce both unnecessary maintenance and unexpected failures, but they require investment in sensors, data infrastructure, and analytics tools.&lt;/p&gt;

&lt;h3&gt;What is a CMMS and how does it relate to MRO?&lt;/h3&gt;

&lt;p&gt;A CMMS (computerized maintenance management system) is software used to schedule, track, and document maintenance activities. It is one of the core tools in an MRO program, helping teams manage work orders, track asset history, plan preventive maintenance schedules, and monitor spare parts inventory. More advanced platforms (often called EAM, or enterprise asset management systems) add lifecycle planning, capital project tracking, and integration with other enterprise systems.&lt;/p&gt;
</description>
      <pubDate>Tue, 31 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/mro-explained-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/mro-explained-influxdb/</guid>
      <category>Developer</category>
      <author>Charles Mahler (InfluxData)</author>
    </item>
    <item>
      <title>Telegraf Enterprise Beta is Now Available: Centralized Control for Telegraf at Scale</title>
      <description>&lt;p&gt;Telegraf is incredibly good at what it does: collecting metrics, logs, and events from just about anywhere and sending them wherever you need. But once Telegraf becomes part of your production telemetry pipeline, spread across environments, teams, regions, and edge locations, the hard part isn’t installing agents; it’s operating them.&lt;/p&gt;

&lt;p&gt;Configs drift. “Temporary” overrides linger. Rolling out changes across hundreds (or thousands) of agents becomes a careful, manual process. And when something breaks, the first question is rarely about the data—it’s about the fleet: which configuration is running where, and is every agent healthy?&lt;/p&gt;

&lt;p&gt;That’s the problem Telegraf Enterprise is built to solve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Today, we’re opening the Telegraf Enterprise beta to the broader Telegraf community so you can help us validate the product where it matters most: at scale.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/8J9tj2g9cNGnqtL94tMOn/adf53d91e1e98a76f8c9461186b1cccf/Screenshot_2026-03-25_at_10.59.07â__AM.png" alt="Telegraf Enterprise SS 1" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;h2 id="what-is-telegraf-enterprise"&gt;What is Telegraf Enterprise?&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Telegraf Enterprise&lt;/strong&gt; is a commercial offering for organizations running Telegraf at scale and needing centralized management, governance, and support. It brings together two key components:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Telegraf Controller&lt;/strong&gt;: A control plane (UI + API) that centralizes Telegraf configuration management and fleet health visibility.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Telegraf Enterprise Support&lt;/strong&gt;: Official support for Telegraf Controller and official Telegraf plugins, designed for teams that need dependable response times and expert guidance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s built for real-world, large-scale agent deployments, where Telegraf isn’t a tool you occasionally touch, but a platform you rely on.&lt;/p&gt;

&lt;h2 id="meet-telegraf-controller-your-telegraf-control-plane"&gt;Meet Telegraf Controller: your Telegraf control plane&lt;/h2&gt;

&lt;p&gt;At the heart of Telegraf Enterprise is &lt;strong&gt;Telegraf Controller&lt;/strong&gt;, which centralizes two things teams struggle with most at scale:&lt;/p&gt;

&lt;h4 id="configuration-management-that-doesnt-collapse-under-growth"&gt;Configuration Management That Doesn’t Collapse Under Growth&lt;/h4&gt;

&lt;p&gt;Telegraf Controller helps you create and manage configurations to support consistency across environments while still allowing necessary variation. Core capabilities include:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Centralized configuration creation and editing&lt;/li&gt;
  &lt;li&gt;Templates and parameterization to reuse configs safely (see the sketch after this list)&lt;/li&gt;
  &lt;li&gt;Label-based organization (so fleets don’t devolve into a long list of “agent-123”)&lt;/li&gt;
  &lt;li&gt;Bulk operations for fleet-wide changes&lt;/li&gt;
  &lt;li&gt;Environment variable and parameter management&lt;/li&gt;
  &lt;li&gt;Plugin metadata visibility to simplify config authoring and maintenance&lt;/li&gt;
&lt;/ul&gt;
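
&lt;p&gt;Controller’s own template syntax isn’t shown here, but the parameterization idea is easy to sketch: one config template, rendered with per-environment values. The snippet below uses Python’s &lt;code&gt;string.Template&lt;/code&gt; purely for illustration.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative only -- not Telegraf Controller's actual template syntax.
from string import Template

config_template = Template("""
[agent]
  interval = "$interval"

[[outputs.influxdb_v2]]
  urls = ["$output_url"]
""")

# One template, many environments: only the parameters differ.
prod = config_template.substitute(interval="10s",
                                  output_url="https://metrics.prod.example.com")
edge = config_template.substitute(interval="60s",
                                  output_url="https://metrics.edge.example.com")
&lt;/code&gt;&lt;/pre&gt;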

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/63My9Gr4T1fkbk4tXziKRL/535ae3a8d927ddfe52e47d3596cd8b79/Screenshot_2026-03-25_at_11.00.14â__AM.png" alt="Telegraf Enterprise SS 2" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h4 id="fleet-wide-health-visibility"&gt;Fleet-Wide Health Visibility&lt;/h4&gt;

&lt;p&gt;Telegraf Controller gives you a single view into the overall status of your agent deployments, so you can understand:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Which agents are reporting as expected&lt;/li&gt;
  &lt;li&gt;Where health issues are clustering&lt;/li&gt;
  &lt;li&gt;What changed recently, and what might be correlated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, you don’t just manage Telegraf. You &lt;strong&gt;operate&lt;/strong&gt; it.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6LcWrqwByO7CtGvf8cDT3C/b2d04ee37b9b14bffec9e77693a716af/Screenshot_2026-03-25_at_11.01.30â__AM.png" alt="Telegraf Enterprise SS 3" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h2 id="designed-to-fit-your-telemetry-stack"&gt;Designed to fit your telemetry stack&lt;/h2&gt;

&lt;p&gt;Telegraf Enterprise is designed to work with the way teams actually deploy Telegraf.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;It does not require InfluxDB&lt;/strong&gt;. You can use the Telegraf Controller regardless of where your telemetry data is going.&lt;/li&gt;
  &lt;li&gt;Configuration delivery follows a &lt;strong&gt;pull-based model&lt;/strong&gt;, where agents fetch configuration over HTTP. This keeps change management predictable and compatible with locked-down environments (a minimal sketch follows this list).&lt;/li&gt;
  &lt;li&gt;It’s built to support &lt;strong&gt;hundreds to thousands of agents&lt;/strong&gt;, with production-grade storage options and a modern UI + API architecture for automation.&lt;/li&gt;
&lt;/ul&gt;
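
&lt;p&gt;The pull model is worth making concrete. Telegraf can load its configuration from a URL (&lt;code&gt;telegraf --config http://…&lt;/code&gt;), so a minimal sketch of the server side is just an HTTP endpoint serving config files. The port and paths below are placeholders, and Telegraf Controller’s real API is of course more than a static file server.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A minimal sketch of pull-based config delivery: serve telegraf.conf from
# the current directory over HTTP. Agents then start with, e.g.:
#   telegraf --config http://config-host:8000/telegraf.conf
# (Telegraf accepts an HTTP URL for --config.)
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
server.serve_forever()
&lt;/code&gt;&lt;/pre&gt;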

&lt;h2 id="why-were-running-this-beta"&gt;Why we’re running this beta&lt;/h2&gt;

&lt;p&gt;This beta is open to any Telegraf user who wants to test-drive Telegraf Controller.&lt;/p&gt;

&lt;p&gt;The focus of the beta is simple:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;Test Telegraf Controller at scale&lt;/strong&gt;: We want to validate how well Telegraf Controller holds up when you connect real fleets—hundreds or thousands of agents—with real operational behaviors.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Gather feedback from the community:&lt;/strong&gt; We’re intentionally inviting community input early, while we’re still shaping the GA experience. What workflows are missing? What’s confusing? What would make this tool indispensable in your environment?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At this stage, your feedback directly influences what Telegraf Enterprise becomes.&lt;/p&gt;

&lt;h2 id="enterprise-support-that-matches-production-expectations"&gt;Enterprise support that matches production expectations&lt;/h2&gt;

&lt;p&gt;Operating telemetry pipelines is a production responsibility, and when Telegraf is part of that pipeline, you need support that understands the stakes.&lt;/p&gt;

&lt;p&gt;Telegraf Enterprise includes support designed for teams that need:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Clear expectations for response and escalation&lt;/li&gt;
  &lt;li&gt;Coverage for Telegraf Controller and official Telegraf plugins&lt;/li&gt;
  &lt;li&gt;Help diagnosing issues and reducing operational risk as fleets grow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is especially valuable when Telegraf is deployed across multiple teams, environments, or customer sites, where operational consistency matters as much as collection capability.&lt;/p&gt;

&lt;h2 id="who-is-telegraf-enterprise-for"&gt;Who is Telegraf Enterprise for?&lt;/h2&gt;

&lt;p&gt;Telegraf Enterprise is built for organizations that manage Telegraf fleets at a meaningful scale, including:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Platform engineering and SRE teams&lt;/li&gt;
  &lt;li&gt;DevOps organizations operating across multi-cloud / hybrid / edge&lt;/li&gt;
  &lt;li&gt;Managed service providers delivering telemetry as a service&lt;/li&gt;
  &lt;li&gt;Compliance-sensitive teams that need standardized configurations and governance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re running a small number of agents and are comfortable managing configs manually, you may not need Telegraf Enterprise today. But if Telegraf is everywhere—and your team is responsible for keeping it reliable—centralized control quickly becomes less “nice to have” and more “how did we operate without this?”&lt;/p&gt;

&lt;h2 id="packaging-free-and-enterprise-options"&gt;Packaging: free and enterprise options&lt;/h2&gt;

&lt;h4 id="telegraf-controller"&gt;Telegraf Controller&lt;/h4&gt;

&lt;p&gt;A free tier is available for teams that want centralized configuration management and visibility with pre-defined limits.&lt;/p&gt;

&lt;h4 id="telegraf-enterprise"&gt;Telegraf Enterprise&lt;/h4&gt;

&lt;p&gt;For teams operating Telegraf as critical infrastructure, &lt;strong&gt;Telegraf Enterprise&lt;/strong&gt; includes the Telegraf Controller packaged with enterprise support.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The key difference&lt;/strong&gt;: Telegraf Enterprise is built for scale and operational reliability, with support and capabilities aligned to production fleet management.&lt;/p&gt;

&lt;h2 id="getting-started-with-telegraf-controller"&gt;Getting started with Telegraf Controller&lt;/h2&gt;

&lt;p&gt;Telegraf Enterprise is designed for teams operating Telegraf as a core part of production telemetry pipelines. If Telegraf is already how you collect metrics, logs, and events across your infrastructure, Telegraf Controller is the missing piece that helps you operate that collection layer like a platform—not a pile of configs.&lt;/p&gt;

&lt;p&gt;To join the beta, &lt;a href="https://influxdata.com/products/telegraf-enterprise"&gt;click here&lt;/a&gt; to opt in. Please share your feedback in-app with the feedback button or in our Slack channel #telegraf-enterprise-beta.&lt;/p&gt;

&lt;p&gt;Join the beta, push it hard, share your use case, and tell us what would make your workflow easier!&lt;/p&gt;
</description>
      <pubDate>Thu, 26 Mar 2026 07:30:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/telegraf-enterprise-beta/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/telegraf-enterprise-beta/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Scott Anderson (InfluxData)</author>
    </item>
    <item>
      <title>Unifying Telemetry in Battery Energy Storage Systems</title>
      <description>&lt;p&gt;&lt;a href="https://www.influxdata.com/solutions/battery-energy-storage-systems/?utm_source=website&amp;amp;utm_medium=unified_telemetry_BESS&amp;amp;utm_content=blog"&gt;Battery energy storage systems (BESS)&lt;/a&gt; play a critical role in modern energy infrastructure. Utilities rely on these systems to balance renewable generation, stabilize grid operations, and respond to changing electricity demand. As deployments scale in size and complexity, operators require continuous insight into battery health, system performance, and grid interaction.
Operators rely on telemetry generated across several operational platforms. Battery management systems monitor cell behavior, power conversion systems, and regulate energy flow, while plant control platforms track facility status. Energy management software and environmental sensors provide additional context about facility conditions.&lt;/p&gt;

&lt;p&gt;In many deployments, however, this information remains scattered across separate monitoring environments. Operators often move between multiple dashboards to understand activity across a single facility. Many BESS operators are now adopting unified telemetry platforms that consolidate operational signals and create a clearer operational view of system behavior.&lt;/p&gt;

&lt;h2 id="the-operational-reality-of-modern-bess-systems"&gt;The operational reality of modern BESS systems&lt;/h2&gt;

&lt;p&gt;A battery energy storage facility is not a single system but a collection of specialized subsystems that manage energy storage, power conversion, and grid interaction. Each subsystem monitors a different aspect of facility performance and generates operational signals that help operators understand how the system behaves.&lt;/p&gt;

&lt;p&gt;Several platforms produce these signals. Battery Management Systems (BMS) track cell-level conditions such as voltage, temperature, and state of charge to protect battery health. Power Conversion Systems (PCS), typically implemented through inverters, regulate how electricity flows between the battery and the grid.&lt;/p&gt;

&lt;p&gt;Plant-level monitoring runs through &lt;a href="https://www.influxdata.com/glossary/SCADA-supervisory-control-and-data-acquisition/"&gt;SCADA platforms&lt;/a&gt;, which provide alarms, system status, and operational controls. Energy Management Systems (EMS) determine when energy should be stored or dispatched based on grid signals and market conditions, while environmental sensors monitor external factors such as ambient temperature.&lt;/p&gt;

&lt;p&gt;Together, these systems create a continuous operational record of facility performance, but the resulting information does not always exist in a shared environment.&lt;/p&gt;

&lt;h2 id="the-fragmented-reality-of-bess-telemetry"&gt;The fragmented reality of BESS telemetry&lt;/h2&gt;

&lt;p&gt;In most battery energy storage deployments, operational data originates from multiple independent platforms, as described above. This fragmentation reflects the modular design and deployment of energy storage facilities. Battery systems, power conversion equipment, and plant control platforms are frequently delivered by different vendors, each with its own software, data models, and monitoring tools.&lt;/p&gt;

&lt;p&gt;Because these platforms monitor individual components rather than the entire facility, data is rarely consolidated automatically. Operators often rely on multiple dashboards to understand activity across a single storage site. Correlating events between subsystems may require switching between tools and manually comparing timestamps or operational signals.&lt;/p&gt;

&lt;p&gt;The result? Operators have access to large volumes of operational information but lack a unified view of the facility as a whole. When events occur across multiple subsystems, understanding how those signals relate to one another requires time and effort.&lt;/p&gt;

&lt;h2 id="operational-cost-of-data-silos"&gt;Operational cost of data silos&lt;/h2&gt;

&lt;p&gt;Even small issues can require significant labor to diagnose. The &lt;a href="https://www.influxdata.com/blog/breaking-data-silos-influxdb-3/#heading0"&gt;data silos&lt;/a&gt; created by à la carte technologies prevent engineers from seeing how signals across the storage system relate. For example, a thermal anomaly—an unexpected rise in battery temperature—may require operators to review battery readings, compare inverter load behavior, and examine environmental conditions. Without a unified view of these signals, determining the cause can take time.&lt;/p&gt;

&lt;p&gt;These delays affect both system reliability and financial performance. If operators cannot quickly determine why system capacity dropped or alarms triggered, dispatch readiness may be affected during critical market windows. Over time, slower investigations and delayed anomaly detection can lead to reduced system availability, higher operational overhead, and missed revenue opportunities.&lt;/p&gt;

&lt;h2 id="what-unified-telemetry-actually-means"&gt;What unified telemetry actually means&lt;/h2&gt;

&lt;p&gt;Unified telemetry consolidates operational signals from across the storage system into a shared data environment. Instead of storing data separately within subsystem platforms, telemetry from across the facility enters a common dataset.&lt;/p&gt;

&lt;p&gt;In this environment, operational signals are stored as time-series data, or measurements organized by timestamp, allowing signals from different subsystems to be synchronized and analyzed together.&lt;/p&gt;

&lt;p&gt;This shared dataset allows engineers to correlate signals that were previously isolated. Battery temperature trends can be examined alongside inverter load behavior, dispatch signals, and environmental conditions to better understand system performance. Instead of switching between monitoring platforms, operators can observe how signals across subsystems evolve together within a unified operational timeline.&lt;/p&gt;

&lt;h2 id="how-unified-telemetry-works"&gt;How unified telemetry works&lt;/h2&gt;

&lt;p&gt;In many deployments, telemetry aggregation begins at the edge of the facility. Edge collectors connect to operational systems such as the BMS, PCS, SCADA platform, EMS, and environmental sensors using industrial protocols such as &lt;a href="https://www.influxdata.com/integration/modbus/?utm_source=website&amp;amp;utm_medium=unified_telemetry_BESS&amp;amp;utm_content=blog"&gt;Modbus&lt;/a&gt;, &lt;a href="https://www.influxdata.com/integration/opcua/?utm_source=website&amp;amp;utm_medium=unified_telemetry_BESS&amp;amp;utm_content=blog"&gt;OPC-UA&lt;/a&gt;, or CANbus. These collectors ingest operational signals and convert them into structured telemetry streams.&lt;/p&gt;
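
&lt;p&gt;As a concrete sketch, here’s what a minimal Python edge collector might look like. It assumes a local Mosquitto broker and the paho-mqtt 2.x client, and &lt;code class="language-markup"&gt;read_bms_temperature()&lt;/code&gt; is a hypothetical stand-in for a vendor-specific protocol read (for example, a Modbus register poll):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Minimal edge-collector sketch (illustrative, not production code).
# Assumes a local MQTT broker and the paho-mqtt 2.x client library.
import json
import time
import paho.mqtt.client as mqtt

def read_bms_temperature():
    """Hypothetical stand-in for a vendor-specific read,
    e.g., a Modbus holding-register poll against the BMS."""
    return 31.7  # degrees Celsius

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("localhost", 1883)
client.loop_start()

while True:
    # Convert the raw reading into a structured, timestamped record
    record = {
        "subsystem": "bms",
        "signal": "cell_temperature_c",
        "value": read_bms_temperature(),
        "ts": time.time_ns(),
    }
    client.publish("site/bess/bms/temperature", json.dumps(record))
    time.sleep(10)&lt;/code&gt;&lt;/pre&gt;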

&lt;p&gt;From there, the data flows through streaming pipelines into centralized platforms. These pipelines handle ingestion, buffering, and transport of high-frequency signals so information from across the facility can be processed as a continuous operational stream.&lt;/p&gt;

&lt;p&gt;Time series databases store and index this telemetry by timestamp, allowing engineers to query system behavior over time. Organizing operational signals this way enables teams to correlate events across subsystems, analyze performance trends, and investigate anomalies.&lt;/p&gt;

&lt;p&gt;Because signals from different systems exist in the same time-aligned dataset, engineers can examine battery performance, inverter activity, dispatch signals, and environmental conditions together. This enables faster incident investigation and supports advanced analysis such as anomaly detection and &lt;a href="https://www.influxdata.com/glossary/predictive-maintenance/"&gt;predictive maintenance&lt;/a&gt;.&lt;/p&gt;
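
&lt;p&gt;As an illustration, here’s a minimal sketch using the influxdb3-python client to pull two subsystems’ signals from the same time-aligned dataset. The connection details and the table and column names (&lt;code class="language-markup"&gt;bms&lt;/code&gt;, &lt;code class="language-markup"&gt;pcs&lt;/code&gt;, and so on) are hypothetical; adjust them to your schema:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Correlate battery temperature with inverter load (illustrative sketch).
# Connection details and table/column names are assumptions.
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(
    host="http://localhost:8181",
    token="MY_TOKEN",
    database="bess_telemetry",
)

# Hourly battery temperature for the last 24 hours
temps = client.query(
    "SELECT date_bin(INTERVAL '1 hour', time) AS hour, avg(cell_temp_c) AS temp_c "
    "FROM bms WHERE time &gt;= now() - INTERVAL '24 hours' GROUP BY 1"
).to_pandas()

# Hourly inverter load over the same window
load = client.query(
    "SELECT date_bin(INTERVAL '1 hour', time) AS hour, avg(load_kw) AS load_kw "
    "FROM pcs WHERE time &gt;= now() - INTERVAL '24 hours' GROUP BY 1"
).to_pandas()

# Because both series share the same hourly bins, one merge yields a
# time-aligned view instead of two separate dashboards
print(temps.merge(load, on="hour").sort_values("hour"))&lt;/code&gt;&lt;/pre&gt;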

&lt;h2 id="operational-impact"&gt;Operational impact&lt;/h2&gt;

&lt;p&gt;Unified telemetry changes how energy storage facilities are operated and how organizations manage risk, reliability, and revenue. When signals from battery systems, power electronics, and plant controls are analyzed together, operators gain a comprehensive view of facility behavior rather than having to reconstruct events across multiple monitoring platforms.&lt;/p&gt;

&lt;p&gt;This visibility allows teams to detect anomalies earlier and respond to operational issues before they escalate. Faster diagnosis reduces downtime and helps maintain system availability during critical dispatch windows. In energy markets, maintaining dispatch readiness helps protect revenue during high-value trading periods.&lt;/p&gt;

&lt;h4 id="juniz-energy-deployment"&gt;ju:niz Energy Deployment&lt;/h4&gt;

&lt;p&gt;ju:niz Energy operates large-scale battery storage systems that provide grid services and trading flexibility in energy markets. Their systems collect thousands of data points per second on battery health, temperature, climate conditions, and system performance.&lt;/p&gt;

&lt;p&gt;To manage this telemetry, ju:niz built a centralized monitoring architecture using Telegraf, Modbus, MQTT, Grafana, Docker, AWS, and InfluxDB. Operational signals from battery systems stream into a centralized time series platform, giving engineers a unified view of system behavior and eliminating the need for legacy Python monitoring scripts.&lt;/p&gt;

&lt;p&gt;This architecture enables the ju:niz team to analyze battery telemetry in real-time, improve alerting accuracy, and support predictive maintenance strategies across their storage infrastructure. To see how ju:niz implemented unified telemetry for its operations, read the full &lt;a href="https://get.influxdata.com/rs/972-GDU-533/images/Customer_Case_Study_Juniz.pdf?version=0"&gt;case study&lt;/a&gt; or watch the &lt;a href="https://www.influxdata.com/resources/how-to-improve-renewable-energy-storage-with-mqtt-modbus-and-influxdb-cloud/"&gt;webinar&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="the-bottom-line"&gt;The bottom line&lt;/h2&gt;

&lt;p&gt;Battery energy storage systems generate telemetry across multiple operational platforms, but when that data remains fragmented, operators struggle to understand how the system behaves as a whole.&lt;/p&gt;

&lt;p&gt;Unified telemetry solves this by bringing operational signals into a shared, time-aligned dataset. As BESS deployments scale, this capability will become foundational for operating energy storage systems reliably, efficiently, and profitably.&lt;/p&gt;

&lt;p&gt;Ready to build a unified telemetry architecture? Get started with a free download of InfluxDB 3 &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=unified_telemetry_BESS&amp;amp;utm_content=blog"&gt;Core OSS&lt;/a&gt; or a trial of InfluxDB 3 &lt;a href="https://www.influxdata.com/products/influxdb-enterprise/?utm_source=website&amp;amp;utm_medium=unified_telemetry_BESS&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Thu, 19 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/unified-telemetry-BESS/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/unified-telemetry-BESS/</guid>
      <category>Developer</category>
      <author>Allyson Boate (InfluxData)</author>
    </item>
    <item>
      <title>A New Scale Tier for Time Series on Amazon Timestream for InfluxDB</title>
      <description>
&lt;p&gt;When we first announced the &lt;a href="https://www.influxdata.com/blog/influxdb3-on-amazon-timestream/?utm_source=website&amp;amp;utm_medium=scaling_amazon_timestream_influxdb&amp;amp;utm_content=blog"&gt;availability&lt;/a&gt; of &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=scaling_amazon_timestream_influxdb&amp;amp;utm_content=blog"&gt;InfluxDB 3 Core&lt;/a&gt; and &lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/?utm_source=website&amp;amp;utm_medium=scaling_amazon_timestream_influxdb&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt; on Amazon Timestream for InfluxDB last year, we set a new standard for managed time series on AWS. We gave developers a simple way to harness high performance at scale while removing the burden of infrastructure management.&lt;/p&gt;

&lt;p&gt;But as our customers have taught us, “at scale” is a moving target. Across Industrial IoT, physical AI, and real-time observability, data is growing in both volume and resolution. When you move from minute-by-minute polling to sub-millisecond, high-fidelity telemetry, the pressure on the underlying database compounds. To stay ahead of that curve, developers need a platform that scales as fast as their workloads.&lt;/p&gt;

&lt;p&gt;Today, we’re delivering that by expanding InfluxDB 3 on Amazon Timestream for InfluxDB to &lt;a href="https://aws.amazon.com/timestream/"&gt;support clusters of up to 15 nodes&lt;/a&gt;. We’re also introducing a seamless migration path from InfluxDB 3 Core to InfluxDB 3 Enterprise, allowing teams to unlock this massive performance tier without friction, a manual architectural overhaul, or any data loss.&lt;/p&gt;

&lt;h2 id="scaling-for-the-mission-critical"&gt;Scaling for the mission-critical&lt;/h2&gt;

&lt;p&gt;At InfluxData, we’re seeing time series expand from infrastructure monitoring to the foundation for autonomous systems. In high-stakes environments like power grid management or autonomous vehicle navigation, increased latency is a significant operational risk rather than just a performance metric.&lt;/p&gt;

&lt;p&gt;Previously, AWS Timestream’s support of InfluxDB 3 was focused on smaller, highly efficient configurations. By expanding to 15 nodes, we are providing major upgrades across three important areas:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Query concurrency&lt;/strong&gt;: More nodes mean more hands on deck to process complex, concurrent queries. Large teams can now run heavy analytical workloads without impacting real-time dashboards or critical alerts.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Massive throughput&lt;/strong&gt;: With a larger cluster, you can ingest millions of data points per second across hundreds of millions of unique series, maintaining real-time query performance.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Workload isolation and optimization&lt;/strong&gt;: These expanded clusters enable true functional isolation between ingestion, queries, and compaction. This allows granular performance tuning optimized for your most demanding workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="architected-for-enterprise-demand"&gt;Architected for enterprise demand&lt;/h2&gt;

&lt;p&gt;This new 15-node option is available for InfluxDB 3 Enterprise and is designed for organizations that require high availability, enhanced security, and the power to maintain high ingestion and real-time query performance across high-resolution, high-velocity datasets. InfluxDB 3 Core will continue to operate in single-node deployments.&lt;/p&gt;

&lt;p&gt;By leveraging AWS infrastructure, you can spin up these expanded clusters in minutes directly from the AWS Console. With our new seamless migration capabilities, you can transition your existing Core workloads to Enterprise clusters with a single click. This ensures that as your data grows (from a few local sensors to a global fleet of devices), your database never becomes the bottleneck, and your team never has to worry about the downtime typically associated with a migration. These larger clusters are available today in all AWS regions where Amazon Timestream for InfluxDB is available, ensuring you can deploy and optimize mission-critical time series infrastructure wherever your data lives.&lt;/p&gt;

&lt;h2 id="the-foundation-for-physical-ai"&gt;The foundation for physical AI&lt;/h2&gt;

&lt;p&gt;Our partnership with AWS is about meeting developers where they build. By integrating with services like AWS Lambda, SageMaker, and Kinesis, we’ve simplified the path from high-volume streams into Physical AI. This is the frontier where intelligence moves from the digital realm into the physical world.&lt;/p&gt;

&lt;p&gt;Time series is the heartbeat of this transition, fueling a two-part lifecycle:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Training&lt;/strong&gt;: Using massive volumes of historical data to establish baselines and “normal” patterns.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Inference&lt;/strong&gt;: Streaming real-time data against those models to trigger automated, deterministic actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What makes our partnership with AWS unique is that we support both sides of this loop. With up to 15 nodes at your disposal, InfluxDB 3 has the headroom to act as a distributed inference engine, running predictive maintenance and anomaly detection against your data. This eliminates the latency tax of moving massive datasets between layers, ensuring that whether you are managing a robotic fleet or a smart grid, your autonomous systems can perceive and react with real-time precision.&lt;/p&gt;

&lt;h2 id="whats-next"&gt;What’s next?&lt;/h2&gt;

&lt;p&gt;The future of time series is about speed, precision, and scale. With today’s announcement, we’re handing you the keys to all three. By removing the barriers between single-node efficiency and enterprise-grade performance, we’re making it easier than ever to evolve your architecture as fast as your data grows.&lt;/p&gt;

&lt;p&gt;We’re excited to see what the community builds with this new level of power. If you’re ready to scale your real-time workloads, head over to the &lt;a href="https://signin.aws.amazon.com/signin?redirect_uri=https%3A%2F%2Fus-east-1.console.aws.amazon.com%2Ftimestream%2Fhome%3Fca-oauth-flow-id%3D3617%26hashArgs%3D%2523welcome%26isauthcode%3Dtrue%26oauthStart%3D1768948312939%26region%3Dus-east-1%26state%3DhashArgsFromTB_us-east-1_89587d800d106091&amp;amp;client_id=arn%3Aaws%3Asignin%3A%3A%3Aconsole%2Fpyramid&amp;amp;forceMobileApp=0&amp;amp;code_challenge=0mEuy-XrhJW82iYjevEt3OqO4t46aGARztfwPAhfPX4&amp;amp;code_challenge_method=SHA-256"&gt;AWS Console&lt;/a&gt; and start building.&lt;/p&gt;
</description>
      <pubDate>Mon, 16 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/scaling-amazon-timestream-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/scaling-amazon-timestream-influxdb/</guid>
      <category>Product</category>
      <category>Developer</category>
      <author>Pat Walsh (InfluxData)</author>
    </item>
    <item>
      <title>What is Industry 4.0? Everything You Need to Know in 2026</title>
      <description>&lt;p&gt;Industry 4.0 is the term used to describe the fourth industrial revolution, a name given to the integration of physical and digital systems, which includes the internet of things (IoT) and artificial intelligence that are transforming a huge number of industries.&lt;/p&gt;

&lt;p&gt;At a high level, its goal is to create an efficient, automated process for creating products or services that can be adapted quickly and efficiently to changing customer needs.&lt;/p&gt;

&lt;p&gt;Industry 4.0 also includes concepts such as cloud computing, big &lt;a href="https://www.influxdata.com/solutions/industrial-iot/?utm_source=website&amp;amp;utm_medium=industry_4_0_update_2026&amp;amp;utm_content=blog"&gt;data analytics&lt;/a&gt;, and machine learning to enable smarter production processes.&lt;/p&gt;

&lt;p&gt;By using sensors and automation technology, manufacturers can collect real-time data on their machines and operations, which can be analyzed to make more informed decisions about how best to manage resources, optimize production lines, and reduce costs.&lt;/p&gt;

&lt;p&gt;Industry 4.0 is leading manufacturers away from the traditional linear, push-based approach to production toward a new data-driven, customer-centric model. This “smart” manufacturing can help businesses remain competitive and stay ahead of the curve in terms of production capabilities, while also contributing to a more sustainable future.&lt;/p&gt;

&lt;h2 id="the-path-to-industry-40"&gt;The path to Industry 4.0&lt;/h2&gt;

&lt;p&gt;Let’s take a look at how we arrived at Industry 4.0 by looking to the past. This additional context will help give you a better understanding of why Industry 4.0 is important and why so many people think it is valuable to adopt these technologies.&lt;/p&gt;

&lt;h4 id="first-industrial-revolution"&gt;First Industrial Revolution&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://www.britannica.com/event/Industrial-Revolution"&gt;First Industrial Revolution&lt;/a&gt;, which took place in the late 18th and early 19th centuries, was characterized by the mechanization of production, the use of steam power, and the development of the factory system.&lt;/p&gt;

&lt;p&gt;This revolution led to significant changes in manufacturing, transportation, and communication, and had a major impact on society and the economy.&lt;/p&gt;

&lt;h4 id="second-industrial-revolution"&gt;Second Industrial Revolution&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://www.history.com/articles/second-industrial-revolution-advances"&gt;Second Industrial Revolution&lt;/a&gt; took place in the late 19th and early 20th centuries. It was characterized by mass production of goods, the use of electricity, and the development of the assembly line.&lt;/p&gt;

&lt;h4 id="third-industrial-revolution"&gt;Third Industrial Revolution&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://www.economist.com/leaders/2012/04/21/the-third-industrial-revolution"&gt;Third Industrial Revolution&lt;/a&gt;, also known as the Digital Revolution, took place in the late 20th and early 21st centuries and was characterized by the adoption of computers and automation in manufacturing and other industries.&lt;/p&gt;

&lt;h4 id="fourth-industrial-revolution"&gt;Fourth Industrial Revolution&lt;/h4&gt;

&lt;p&gt;Industry 4.0, also known as the Fourth Industrial Revolution, is the current trend of automation and data exchange in manufacturing technologies, including developments in artificial intelligence, the &lt;a href="https://www.influxdata.com/glossary/iot-devices/"&gt;internet of things&lt;/a&gt; (IoT), and cyber-physical systems.&lt;/p&gt;

&lt;p&gt;It’s seen as the fourth major revolution in manufacturing, following the mechanization of production in the First Industrial Revolution, the mass production of the Second Industrial Revolution, and the introduction of computers and automation in the Third Industrial Revolution.&lt;/p&gt;

&lt;h2 id="industry-40-key-concepts-and-principles"&gt;Industry 4.0 key concepts and principles&lt;/h2&gt;

&lt;h4 id="interoperability"&gt;Interoperability&lt;/h4&gt;

&lt;p&gt;Interoperability is a fundamental concept in Industry 4.0, emphasizing seamless communication and data exchange among systems, devices, and software platforms within an industrial environment.&lt;/p&gt;

&lt;p&gt;As Industry 4.0 relies heavily on integrating diverse technologies such as IoT, AI, and cloud computing, ensuring these components work effectively together is crucial to realizing the full potential of a connected, intelligent manufacturing ecosystem.&lt;/p&gt;

&lt;p&gt;Interoperability enables businesses to break down silos, streamline processes, and make better-informed decisions, ultimately leading to increased efficiency, productivity, and competitiveness.&lt;/p&gt;

&lt;p&gt;To achieve interoperability, manufacturers must adopt standardized communication protocols, open architectures, and flexible data formats to facilitate a smooth flow of information across the entire production chain.&lt;/p&gt;

&lt;h4 id="virtualization"&gt;Virtualization&lt;/h4&gt;

&lt;p&gt;Virtualization is the creation of virtual representations of physical assets, processes, and systems within the industrial environment.&lt;/p&gt;

&lt;p&gt;By using advanced technologies such as &lt;a href="https://www.influxdata.com/glossary/digital-twins/"&gt;digital twins&lt;/a&gt;, simulation software, and augmented reality, virtualization enables manufacturers to test, analyze, and optimize their operations without impacting the actual production process.&lt;/p&gt;

&lt;p&gt;Virtualization not only allows more efficient planning and decision making but also helps businesses identify potential bottlenecks or issues before they occur, resulting in reduced downtime, lower costs, and enhanced product quality.&lt;/p&gt;

&lt;p&gt;At the same time, it promotes remote monitoring and control of industrial processes, allowing experts to collaborate and troubleshoot issues from any location, which improves overall operational efficiency.&lt;/p&gt;

&lt;h4 id="cyber-physical-systems"&gt;Cyber-Physical Systems&lt;/h4&gt;

&lt;p&gt;Cyber-physical systems (CPS) are a core part of Industry 4.0, representing the seamless integration of computational and physical components. These systems enable real-time communication and data exchange between machines, humans, and digital networks, resulting in smarter, more efficient, and autonomous industrial processes.&lt;/p&gt;

&lt;h4 id="decentralization"&gt;Decentralization&lt;/h4&gt;

&lt;p&gt;Decentralization involves the shift towards distributed decision-making and autonomous control within industrial systems.&lt;/p&gt;

&lt;p&gt;In the context of manufacturing, decentralization empowers machines, devices, and production units to make decisions and perform tasks independently, without centralized supervision or control.&lt;/p&gt;

&lt;p&gt;This approach increases the agility and resilience of manufacturing operations and enables businesses to scale more effectively, as new components or devices can be seamlessly integrated into the existing network.&lt;/p&gt;

&lt;h4 id="modularity"&gt;Modularity&lt;/h4&gt;

&lt;p&gt;Modularity, the ability to adjust production lines, processes, and equipment with minimal effort and downtime, is a key concept in Industry 4.0.&lt;/p&gt;

&lt;p&gt;It emphasizes the importance of designing flexible, scalable, and adaptable systems that can be easily reconfigured or upgraded to meet changing market demands and technological advancements.&lt;/p&gt;

&lt;p&gt;By embracing modularity, manufacturers can rapidly adapt to fluctuations in product demand, introduce new products, or incorporate emerging technologies, ensuring their operations remain agile and competitive.&lt;/p&gt;

&lt;p&gt;Modularity also enables greater customization, as production lines can be adjusted to accommodate unique customer requirements or preferences.&lt;/p&gt;

&lt;h2 id="what-technologies-are-driving-industry-40"&gt;What technologies are driving Industry 4.0?&lt;/h2&gt;

&lt;h4 id="internet-of-things"&gt;Internet of Things&lt;/h4&gt;

&lt;p&gt;IoT is an important part of Industry 4.0, enabling businesses to optimize processes and become more efficient. With this technology, companies can deploy intelligent machines to automate processes and workflows, leading to higher accuracy and productivity.&lt;/p&gt;

&lt;p&gt;IoT technology also makes it possible for machines and databases to communicate, allowing businesses to access real-time data. This improved data collection has enabled insights about productivity and efficiency, streamlining many processes in Industry 4.0.&lt;/p&gt;

&lt;h4 id="cloud-computing"&gt;Cloud Computing&lt;/h4&gt;

&lt;p&gt;Cloud computing enables new ways for organizations to develop agile digital operations. By using cloud computing, companies can reduce the time needed to deploy or upgrade applications and further benefit from scalability.&lt;/p&gt;

&lt;p&gt;With cloud computing, manufacturers now have access to analytics data they did not previously have, enabling them to make informed, real-time decisions.&lt;/p&gt;

&lt;h4 id="edge-computing"&gt;Edge Computing&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/glossary/edge-computing/"&gt;Edge computing&lt;/a&gt; is the process of collecting and analyzing data at the edge of a network, closer to where it is generated. It’s at the opposite end of the spectrum from cloud computing, but it’s just as important for Industry 4.0 workloads.&lt;/p&gt;

&lt;p&gt;This makes it ideal for applications that require real-time analytics, such as autonomous robotic systems and self-driving cars.&lt;/p&gt;

&lt;p&gt;Edge computing also helps reduce network traffic by minimizing the need to send large amounts of data back and forth between devices and centralized data centers.&lt;/p&gt;

&lt;h4 id="g-networking"&gt;5G Networking&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/customer/5g-test-network-and-influxdb/"&gt;5G networks&lt;/a&gt; allow for faster communication and data transfer speeds, a huge factor in making Industry 4.0 viable. This ultimately makes the technology more accessible to businesses of all sizes and enables them to deploy IoT solutions at scale.&lt;/p&gt;

&lt;p&gt;5G can enable companies to increase operational efficiency by supporting real-time decision-making and remote monitoring capabilities.&lt;/p&gt;

&lt;h4 id="ai-and-machine-learning"&gt;AI and Machine Learning&lt;/h4&gt;

&lt;p&gt;AI and machine learning are another key part of making Industry 4.0 possible. Using AI, companies can automate processes, improve decision-making, and better analyze data.&lt;/p&gt;

&lt;p&gt;Many industries &lt;a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai"&gt;are already using AI&lt;/a&gt; to increase efficiency, accelerate innovation, and reduce costs. In manufacturing, for example, AI can be used to optimize production lines, predict maintenance needs, and schedule resources more efficiently.&lt;/p&gt;

&lt;h4 id="cybersecurity"&gt;Cybersecurity&lt;/h4&gt;

&lt;p&gt;Collecting and analyzing more data is great, but it also opens up numerous potential vulnerabilities for businesses. No company wants to be in the news for leaking internal or customer data, or for not being able to function because critical infrastructure has been hacked.&lt;/p&gt;

&lt;p&gt;Industry 4.0 requires sophisticated cybersecurity solutions that protect data at rest and in transit, detect malicious activity before it becomes a problem, and alert users when something is amiss. This can be accomplished through various measures such as encryption, intrusion detection systems, two-factor authentication (2FA), and network segmentation.&lt;/p&gt;

&lt;p&gt;In addition to implementing security solutions, organizations should also develop a comprehensive cybersecurity strategy that covers personnel training and processes for responding to emergency situations. This way, businesses can be more prepared for any potential attacks or data breaches.&lt;/p&gt;

&lt;h4 id="digital-twins"&gt;Digital Twins&lt;/h4&gt;

&lt;p&gt;Digital twins enable engineers to create virtual models of systems and processes that can be used to measure performance, anticipate variation, and even detect defects or dangers before they become issues in the physical world.&lt;/p&gt;

&lt;p&gt;As a result of this technology’s high accuracy, digital twin simulations can substantially reduce design costs, improve operational efficiency and sustainability, enhance product quality, and promote workplace safety.&lt;/p&gt;

&lt;p&gt;Furthermore, companies are leveraging the combination of digital twins’ advanced analytics capabilities and connected devices to optimize factory operations through remote commissioning, proactive maintenance, and streamlined troubleshooting.&lt;/p&gt;

&lt;h4 id="real-time-data-analytics"&gt;Real-Time Data Analytics&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/influxdb-3-ideal-solution-real-time-analytics/"&gt;Real-time analytics&lt;/a&gt; is an essential part of Industry 4.0, enabling businesses to monitor, analyze, and respond to operational and process changes with unprecedented speed and accuracy.&lt;/p&gt;

&lt;p&gt;By utilizing IoT devices, sensors, and advanced analytics models, manufacturers can collect and process data in real time, allowing them to make data-driven decisions and adjustments on the fly.&lt;/p&gt;

&lt;h4 id="d-printing-and-additive-manufacturing"&gt;3D Printing and Additive Manufacturing&lt;/h4&gt;

&lt;p&gt;3D printing and additive manufacturing are quickly becoming essential tools for businesses to maximize efficiency, reduce costs, and create complicated designs with ease.&lt;/p&gt;

&lt;p&gt;For example, factories can print replacement parts on-site without having to call a supplier and wait for them to arrive. This means faster repairs and less downtime overall.&lt;/p&gt;

&lt;p&gt;Additive manufacturing also allows companies to manufacture complex designs that were previously impossible with traditional manufacturing methods.&lt;/p&gt;

&lt;h4 id="robotics"&gt;Robotics&lt;/h4&gt;

&lt;p&gt;In the context of Industry 4.0, robotics goes beyond traditional automation, incorporating advanced capabilities such as AI, machine learning, and sensor integration to create intelligent, adaptive, and versatile machines capable of performing complex tasks with precision and consistency.&lt;/p&gt;

&lt;p&gt;This also includes collaborative robots, or “cobots,” which are designed to work alongside human operators, enhancing their capabilities and ensuring a safer, more ergonomic work environment.&lt;/p&gt;

&lt;p&gt;By using robotics, manufacturers can automate repetitive tasks, reduce human error, and reduce labor costs, while also enabling greater flexibility and customization in production.&lt;/p&gt;

&lt;h2 id="benefits-of-industry-40"&gt;Benefits of Industry 4.0&lt;/h2&gt;

&lt;h5 id="improved-productivity"&gt;1. Improved productivity&lt;/h5&gt;

&lt;p&gt;One of the primary benefits of Industry 4.0 is improved productivity. Key 4.0 technologies, such as data analytics and machine learning, can be used to identify inefficiencies and optimize production processes.&lt;/p&gt;

&lt;p&gt;Similarly, robotics and 3D printing can automate tasks, reducing the need for human labor and increasing manufacturing output.&lt;/p&gt;

&lt;h5 id="increased-efficiency"&gt;2. Increased efficiency&lt;/h5&gt;

&lt;p&gt;By enabling smarter use of resources and more efficient processes, Industry 4.0 contributes significantly to reducing energy consumption, waste generation, and greenhouse gas emissions.&lt;/p&gt;

&lt;p&gt;When companies adopt Industry 4.0 technologies, they can actively contribute to global sustainability goals while simultaneously improving their bottom line.&lt;/p&gt;

&lt;p&gt;Predictive maintenance is a prime example. This proactive approach allows companies to monitor equipment performance in real-time, identify potential issues before they escalate, and schedule maintenance activities based on actual equipment conditions rather than fixed intervals.&lt;/p&gt;

&lt;p&gt;Predictive maintenance minimizes unexpected downtime and costly repairs, extends equipment lifespan, reduces the need for frequent replacements, and reduces associated environmental impact. As an added bonus, equipment that is properly maintained also tends to run more efficiently in terms of power consumption and greenhouse gas emissions.&lt;/p&gt;

&lt;h5 id="improved-quality"&gt;3. Improved quality&lt;/h5&gt;

&lt;p&gt;By identifying errors in collected sensor data, Industry 4.0 can also help improve product quality. Additionally, 3D printing can create prototypes that can be tested for quality before mass production begins.&lt;/p&gt;

&lt;h5 id="reduced-costs"&gt;4. Reduced costs&lt;/h5&gt;

&lt;p&gt;The implementation of Industry 4.0 technologies helps minimize expenses because these technologies can help improve productivity and efficiency, leading to reduced labor costs and waste.&lt;/p&gt;

&lt;h5 id="increased-flexibility"&gt;5. Increased flexibility&lt;/h5&gt;

&lt;p&gt;Industry 4.0 helps to increase flexibility within manufacturing operations. Technologies such as 3D printing and robotics can be used to create customized products quickly and with minimal human labor.&lt;/p&gt;

&lt;p&gt;The use of data analytics also helps companies respond to changes in customer demand, scaling production up or down when needed.&lt;/p&gt;

&lt;h5 id="enhanced-safety"&gt;6. Enhanced safety&lt;/h5&gt;

&lt;p&gt;Thanks to advances such as robotics and machine learning, dangerous tasks can now be automated. This reduces the risk of worker injury and helps create a safer working environment.&lt;/p&gt;

&lt;h5 id="more-resilient-supply-chains"&gt;7. More resilient supply chains&lt;/h5&gt;

&lt;p&gt;Adopting many Industry 4.0 technologies can help businesses strengthen their supply chains. By leveraging data analytics, businesses can monitor the production process in real time and detect small issues before they escalate into larger problems.&lt;/p&gt;

&lt;p&gt;Plus, 3D printing and additive manufacturing can also be used to quickly produce replacement parts or components for machinery with little to no downtime. This helps companies maintain operations without disruption due to supply chain problems.&lt;/p&gt;

&lt;h5 id="improved-customer-experience"&gt;8. Improved customer experience&lt;/h5&gt;

&lt;p&gt;Industry 4.0 can help businesses improve their customer experience by providing insights into customer behaviors and preferences. Through data analysis, companies can identify areas where they need to focus their efforts in order to provide the best possible service or product.&lt;/p&gt;

&lt;p&gt;Data can also help during the manufacturing process to help identify potential defects early, so customers don’t receive a faulty product.&lt;/p&gt;

&lt;h2 id="industry-40-challenges-and-risks"&gt;Industry 4.0 challenges and risks&lt;/h2&gt;

&lt;h5 id="implementation-costs"&gt;1. Implementation costs&lt;/h5&gt;

&lt;p&gt;Implementing Industry 4.0 technologies and practices can be expensive, particularly for smaller businesses. If a business doesn’t have the necessary financial resources to invest in these technologies, it may not see a return on the investment.&lt;/p&gt;

&lt;h5 id="cybersecurity-risks"&gt;2. Cybersecurity risks&lt;/h5&gt;

&lt;p&gt;The integration of advanced technologies and the reliance on connected systems increase the risk of cybersecurity threats. Without robust cybersecurity measures in place, a business may be vulnerable to attacks, which can have serious consequences.&lt;/p&gt;

&lt;h5 id="culture-challenges"&gt;3. Culture challenges&lt;/h5&gt;

&lt;p&gt;Some businesses may be hesitant to adopt new technologies and practices due to concerns about costs and disruptions to their existing operations. If a business isn’t willing to adapt to new technologies and processes, it may struggle to compete with competitors that are more forward-thinking.&lt;/p&gt;

&lt;p&gt;This can also apply to employees who aren’t familiar with new technologies and may be resistant to change, making it important to ensure that employees at all levels of the company understand how and why changes are being made.&lt;/p&gt;

&lt;h2 id="common-industry-40-use-cases"&gt;Common Industry 4.0 use cases&lt;/h2&gt;

&lt;h5 id="smart-manufacturing"&gt;1. Smart manufacturing&lt;/h5&gt;

&lt;p&gt;Smart manufacturing and smart factories are common Industry 4.0 use cases where adopting new technologies can improve productivity, make products more reliable, and keep workers safer.&lt;/p&gt;

&lt;p&gt;Beyond the direct benefits to the company, smart manufacturing can benefit the environment by reducing waste and making production more efficient.&lt;/p&gt;

&lt;h5 id="agriculture"&gt;2. Agriculture&lt;/h5&gt;

&lt;p&gt;The advantages of incorporating Industry 4.0 in agriculture are substantial.&lt;/p&gt;

&lt;p&gt;Precision farming techniques, powered by IoT sensors and data analytics, facilitate the targeted application of fertilizers, pesticides, and irrigation, reducing waste and minimizing environmental impact.&lt;/p&gt;

&lt;p&gt;Robotics and autonomous machinery can also perform repetitive tasks, such as planting, harvesting, and monitoring, improving efficiency and freeing up valuable human resources.&lt;/p&gt;

&lt;p&gt;Advanced data analysis also enables predictive modeling and forecasting, helping farmers make informed decisions on crop selection, planting schedules, and resource allocation.&lt;/p&gt;

&lt;h5 id="healthcare"&gt;3. Healthcare&lt;/h5&gt;

&lt;p&gt;By using IoT devices to collect health data, patients are able to get more personalized and effective healthcare. This can include everything from detecting emergency situations, such as a heart attack, to enabling the detection and mitigation of diseases before they become severe.&lt;/p&gt;

&lt;p&gt;Robotics is also increasingly used during surgery to reduce human error and improve outcomes.&lt;/p&gt;

&lt;h5 id="supply-chain-management"&gt;4. Supply chain management&lt;/h5&gt;

&lt;p&gt;Adopting Industry 4.0 technologies can enhance supply chain management by enabling better visibility, efficiency, and resilience.&lt;/p&gt;

&lt;p&gt;Connecting components such as suppliers, manufacturers, distributors, and retailers enables smoother information exchange, ensuring that all stakeholders have access to accurate and up-to-date data.&lt;/p&gt;

&lt;p&gt;Predictive analytics and machine learning can help forecast demand patterns, optimize inventory levels, and identify potential disruptions, allowing supply chain managers to address issues and minimize risks.&lt;/p&gt;

&lt;h2 id="industry-40-tools"&gt;Industry 4.0 tools&lt;/h2&gt;

&lt;p&gt;In this section, we’ll examine some tools useful for a variety of tasks involved in adopting Industry 4.0 technology.&lt;/p&gt;

&lt;h5 id="data-storage"&gt;1. Data storage&lt;/h5&gt;

&lt;p&gt;Storing Industry 4.0 data at scale requires scalable, efficient solutions that can handle the high volume of data generated by interconnected devices and systems. Here are a few different options for storing your data:&lt;/p&gt;

&lt;h5 id="time-series-databases"&gt;2. Time series databases&lt;/h5&gt;

&lt;p&gt;Time series databases (TSDBs) are specifically designed to store time-stamped data from sensors and IoT devices. They offer high write and query performance, making them ideal for handling the high-frequency data typical of Industry 4.0 use cases. An example of a TSDB is &lt;a href="https://www.influxdata.com/?utm_source=website&amp;amp;utm_medium=industry_4_0_update_2026&amp;amp;utm_content=blog"&gt;InfluxDB&lt;/a&gt;.&lt;/p&gt;
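
&lt;p&gt;As a quick, hedged example, writing a single machine reading with the influxdb3-python client might look like this; the host, token, database name, and measurement layout are all assumptions:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Minimal write sketch using the influxdb3-python client.
# Connection details and the measurement layout are assumptions.
from influxdb_client_3 import InfluxDBClient3, Point

client = InfluxDBClient3(
    host="http://localhost:8181",
    token="MY_TOKEN",
    database="factory",
)

# Tags identify the machine; fields carry the measured values
point = (
    Point("machine_metrics")
    .tag("line", "assembly_3")
    .tag("machine_id", "press_07")
    .field("temperature_c", 68.4)
    .field("vibration_mm_s", 2.1)
)
client.write(point)&lt;/code&gt;&lt;/pre&gt;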

&lt;h5 id="data-historians"&gt;3. Data historians&lt;/h5&gt;

&lt;p&gt;Data historians are specialized databases for storing and retrieving historical process data from industrial systems. They are optimized for handling time series data and offer capabilities like data compression, aggregation, and real-time querying. An example of a data historian is OSI PI.&lt;/p&gt;

&lt;h5 id="columnar-databases"&gt;4. Columnar databases&lt;/h5&gt;

&lt;p&gt;Columnar databases store data in columns rather than rows, a layout well-suited to analytics over large datasets, which is why they are often used as data warehouses. They offer high query performance and strong data compression, making them suitable for storing and analyzing the vast amounts of structured data generated by Industry 4.0 systems.&lt;/p&gt;

&lt;h5 id="communication-protocols"&gt;5. Communication protocols&lt;/h5&gt;

&lt;p&gt;Several communication protocols are well-suited for Industry 4.0 systems, providing efficient and reliable data transfer between interconnected devices, machines, and software platforms. Here are some good options for communication protocols in Industry 4.0:&lt;/p&gt;

&lt;h5 id="mqtt"&gt;6. MQTT&lt;/h5&gt;

&lt;p&gt;MQTT is a lightweight, publish-subscribe messaging protocol designed for low-bandwidth, high-latency, and unreliable networks. Its low overhead and minimal resource requirements make it ideal for IoT devices and Industry 4.0 applications.&lt;/p&gt;

&lt;p&gt;MQTT is widely used to connect sensors, actuators, and other devices to cloud platforms, enabling efficient data exchange and remote monitoring.&lt;/p&gt;
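
&lt;p&gt;To make the publish-subscribe model concrete, here’s a minimal subscriber sketch using the Python paho-mqtt client (2.x API); the broker address and topic are placeholders:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Minimal MQTT subscriber sketch with paho-mqtt 2.x.
# Broker address and topic are placeholders.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Called for every message on subscribed topics
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("factory/+/temperature")  # '+' matches one topic level
client.loop_forever()&lt;/code&gt;&lt;/pre&gt;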

&lt;h5 id="opc-unified-architecture-opc-ua"&gt;7. OPC Unified Architecture (OPC UA)&lt;/h5&gt;

&lt;p&gt;OPC UA is a platform-independent, service-oriented architecture developed specifically for industrial automation and communication. It provides secure and reliable data exchange between devices, machines, and software applications, regardless of the underlying platform or programming language.&lt;/p&gt;

&lt;p&gt;OPC UA supports a wide range of data types and features with built-in security mechanisms, making it a popular choice for Industry 4.0 systems.&lt;/p&gt;
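
&lt;p&gt;As a sketch, reading a single value over OPC UA with the Python asyncua library could look like the following; the endpoint URL and node ID are hypothetical and depend on the server’s address space:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Minimal OPC UA read sketch using the asyncua library.
# Endpoint and node ID are hypothetical.
import asyncio
from asyncua import Client

async def main():
    async with Client(url="opc.tcp://192.168.0.10:4840") as client:
        # Node IDs come from the server's address space/documentation
        node = client.get_node("ns=2;i=1001")
        value = await node.read_value()
        print("spindle speed:", value)

asyncio.run(main())&lt;/code&gt;&lt;/pre&gt;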

&lt;h5 id="advanced-message-queuing-protocol-amqp"&gt;8. Advanced Message Queuing Protocol (AMQP)&lt;/h5&gt;

&lt;p&gt;AMQP is an open standard, application-layer protocol for message-oriented middleware. It supports flexible messaging patterns and offers reliable, secure communication between devices and applications. AMQP is well-suited to scenarios that require complex routing and guaranteed message delivery, making it a good fit for many Industry 4.0 applications.&lt;/p&gt;
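
&lt;p&gt;Here’s a minimal sketch of guaranteed-delivery publishing over AMQP 0-9-1 with the Python pika client, assuming a local RabbitMQ broker; the queue name and payload are placeholders:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Minimal AMQP publish sketch using pika (AMQP 0-9-1, e.g., RabbitMQ).
# Host, queue name, and payload are placeholders.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# A durable queue plus persistent delivery gives at-least-once semantics
channel.queue_declare(queue="machine_telemetry", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="machine_telemetry",
    body=json.dumps({"machine_id": "press_07", "temperature_c": 68.4}),
    properties=pika.BasicProperties(delivery_mode=2),  # persist to disk
)
connection.close()&lt;/code&gt;&lt;/pre&gt;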

&lt;h4 id="data-collection-and-integration"&gt;Data Collection and Integration&lt;/h4&gt;

&lt;p&gt;One of the big challenges for Industry 4.0 is collecting data from a variety of devices that may communicate over different protocols, then sending it to various tools for storage and analysis. Let’s take a look at some options that make collecting and integrating data easier:&lt;/p&gt;

&lt;h5 id="node-red"&gt;1. Node-RED&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://nodered.org/"&gt;Node-RED&lt;/a&gt; is an open-source, flow-based programming tool for wiring together devices, APIs, and online services. It provides a browser-based visual interface for designing and deploying data flows, making it easy to connect and integrate various data sources, such as IoT devices, industrial sensors, and web services.&lt;/p&gt;

&lt;p&gt;With a large library of prebuilt nodes and support for custom nodes, Node-RED allows users to build complex data pipelines and perform data transformations with &lt;a href="https://www.influxdata.com/blog/node-red-dashboard-tutorial/"&gt;minimal coding effort&lt;/a&gt;.&lt;/p&gt;

&lt;h5 id="telegraf"&gt;2. Telegraf&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/time-series-platform/telegraf/?utm_source=website&amp;amp;utm_medium=industry_4_0_update_2026&amp;amp;utm_content=blog"&gt;Telegraf&lt;/a&gt; is an open source, plugin-driven server agent for collecting and reporting metrics from different data sources. Telegraf supports a wide range of input, output, and processing plugins, allowing it to gather and transmit data from various devices, systems, and APIs to different storage platforms.&lt;/p&gt;

&lt;p&gt;Its flexibility and extensibility make it suitable for Industry 4.0 applications, where diverse data sources are common.&lt;/p&gt;

&lt;h5 id="apache-nifi"&gt;3. Apache NiFi&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://nifi.apache.org/"&gt;Apache NiFi&lt;/a&gt; is an open source, web-based data integration tool for designing, deploying, and managing data flows. It offers a visual interface for designing data pipelines and supports a wide range of data sources, processors, and sinks.&lt;/p&gt;

&lt;p&gt;NiFi is particularly well-suited to use cases that require complex data routing, transformation, and enrichment. With built-in security features and support for data provenance, NiFi ensures data integrity and traceability in Industry 4.0 environments.&lt;/p&gt;

&lt;h2 id="industry-40-best-practices"&gt;Industry 4.0 best practices&lt;/h2&gt;

&lt;p&gt;Moving towards Industry 4.0 is a major endeavor for existing businesses and requires all areas of a business to work together. In this section, let’s explore some best practices that can help you avoid major pitfalls that could hurt your business.&lt;/p&gt;

&lt;h5 id="have-a-clear-strategy-and-goals"&gt;1. Have a clear strategy and goals&lt;/h5&gt;

&lt;p&gt;Above all else, you need a clear understanding of how adopting these new technologies will help achieve your business goals. If you can’t actually find concrete ways that this will help your business, don’t blindly invest resources in them. Some potential things to identify:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Specific technologies that will be used&lt;/li&gt;
  &lt;li&gt;Which processes could be automated&lt;/li&gt;
  &lt;li&gt;Metrics to measure success&lt;/li&gt;
  &lt;li&gt;Cybersecurity focus&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As noted earlier, connected systems increase the risk of cybersecurity threats. Implement robust cybersecurity measures to protect against these threats from day one, so you don’t regret it later on.&lt;/p&gt;

&lt;h5 id="collaboration"&gt;2. Collaboration&lt;/h5&gt;

&lt;p&gt;Industry 4.0 technologies often involve integrating systems and processes across different organizations. It’s important to collaborate with suppliers and partners to ensure that these systems and processes are integrated effectively.&lt;/p&gt;

&lt;h5 id="track-results-and-iterate"&gt;3. Track results and iterate&lt;/h5&gt;

&lt;p&gt;Establish metrics before starting so you can measure progress against expected results. Based on progress, you need to be willing and able to change your strategy if necessary.&lt;/p&gt;

&lt;h2 id="faqs"&gt;FAQs&lt;/h2&gt;

&lt;div id="accordion_second"&gt;
    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-1"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;What are the origins of Industry 4.0?&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-1" class="message-body is-collapsible is-active" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Industry 4.0 as a concept dates back to 2006, when the German government laid out a plan to maintain its manufacturing dominance in a paper that looked into the future of manufacturing and how companies would be impacted and need to adapt to emerging technologies. The concept was further refined in 2010 when the German Cabinet laid out their High-Tech Strategy 2020 plan, which defined five priorities that would be used to direct billions of dollars in government investment.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-2"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;How are digital transformation and Industry 4.0 related?&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-2" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                &lt;a href="https://www.influxdata.com/customers/iot-data-platform/"&gt;Digital transformation&lt;/a&gt; and Industry 4.0 are often used interchangeably, but it's crucial to understand their unique characteristics and how they relate to each other. While both concepts involve adopting advanced technologies to improve business operations, Industry 4.0 specifically focuses on the manufacturing sector, whereas digital transformation encompasses a broader range of industries and applications. Digital transformation is the process of integrating digital technologies across a business's customer service, marketing, supply chain management, and internal operations. The goal of digital transformation is to optimize processes, enhance efficiency, and create new business models that drive growth and competitiveness. This transformation is achieved through the implementation of technologies such as cloud computing, data analytics, artificial intelligence, and IoT. Industry 4.0, on the other hand, is a subset of digital transformation that targets the manufacturing industry. It is often referred to as the Fourth Industrial Revolution, representing a new era of intelligent, connected, and autonomous manufacturing systems. Industry 4.0 leverages technologies like IoT, advanced analytics, robotics, and additive manufacturing to optimize production processes, improve product quality, and increase overall efficiency. Despite their differences, digital transformation and Industry 4.0 are closely related, as both aim to drive innovation and create value through the adoption of advanced technologies. In fact, Industry 4.0 can be considered a specific application of digital transformation within the manufacturing sector. As companies embark on their digital transformation journeys, embracing Industry 4.0 principles can provide a solid foundation for growth and success in manufacturing.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-3"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;What is IT/OT convergence?&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-3" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Businesses have traditionally been siloed between information technology (IT) and operational technology (OT). But in recent years, these worlds have started to merge in a process commonly referred to as IT/OT convergence. Better collaboration between IT and OT can add tremendous value to any business by providing greater visibility across the organization, improved data analysis capabilities, fewer manual processes, and a faster response to customer needs. By leveraging both sets of technologies, businesses can gain unprecedented control over their operations. IT/OT convergence involves integrating hardware, software, and networks traditionally used in OT with those used in IT. This integration synchronizes the two disconnected systems, allowing them to exchange data and information. For example, an IT system can enable operators to access real-time operational data from OT systems, such as sensors and actuators.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-4"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;What is Industry 5.0?&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-4" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Industry 5.0 is a term used to describe the next phase of the Fourth Industrial Revolution, characterized by the integration of advanced technologies such as AI, IoT, and &lt;a href="https://www.ibm.com/think/topics/quantum-computing"&gt;quantum computing&lt;/a&gt; into manufacturing and other industries. There isn't a universally accepted definition of Industry 5.0, and the concept is still evolving. However, it's generally seen as a continuation of the trend towards increased automation and data exchange that began with Industry 4.0, with a focus on even more advanced technologies and their integration across sectors. One key difference between Industry 4.0 and Industry 5.0 is the focus on sustainability and social responsibility. Industry 5.0 is expected to involve the development of technologies that are more environmentally friendly and that promote social equity. This could include using renewable energy sources and developing technologies to reduce waste and pollution. Overall, the main difference between Industry 4.0 and Industry 5.0 is the level of technological advancement. Industry 5.0 involves the integration of even more advanced technologies, such as quantum computing, which have the potential to significantly impact and transform various industries.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;
&lt;/div&gt;
</description>
      <pubDate>Fri, 13 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/industry-4-0-update-2026/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/industry-4-0-update-2026/</guid>
      <category>IoT</category>
      <category>Developer</category>
      <author>Company (InfluxData)</author>
    </item>
    <item>
      <title>When Your Plant Talks Back: Conversational AI with InfluxDB 3</title>
      <description>&lt;p&gt;No one wants to stare at a plant and guess if it needs water. It’s much easier if the plant can say, “I’m thirsty.” A few years ago, we built &lt;a href="https://www.influxdata.com/blog/prototyping-iot-with-influxdb-cloud-2-0/?utm_source=website&amp;amp;utm_medium=plant_buddy_influxdb_3&amp;amp;utm_content=blog"&gt;Plant Buddy using InfluxDB Cloud 2.0&lt;/a&gt;. The linked article is still a great guide for cloud-first IoT prototyping as it shows how quickly you can connect devices, store time series data, and build dashboards in the cloud with the previous version of InfluxDB.&lt;/p&gt;

&lt;p&gt;But this time, the goal was different. Instead of sending soil moisture data to the cloud, the entire system runs locally using the latest InfluxDB 3 Core, similar to a modern industrial setup, with an LLM providing a natural conversational interface.&lt;/p&gt;

&lt;h2 id="the-architecture-the-factory-at-home"&gt;The architecture: the “factory” at home&lt;/h2&gt;

&lt;p&gt;In real factories, raw PLC data is captured at the edge, often using MQTT and a local database. That same architecture now powers Plant Buddy v3 with the following setup.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Edge Device (ESP32 / Arduino)&lt;/strong&gt;: Works like a small PLC. It reads soil moisture and publishes the plant’s state to the network without knowing anything about the database.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Soil Moisture Sensor (Analog)&lt;/strong&gt;: Outputs an analog signal based on soil moisture. The microcontroller converts it to digital using its built-in ADC.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Message Bus (Mosquitto MQTT)&lt;/strong&gt;: Handles publish/subscribe communication. The Arduino publishes data to the broker (running locally), and Telegraf subscribes to forward data to the database.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Database (InfluxDB 3 Core)&lt;/strong&gt;: Runs locally in Docker as a high-performance time series database storing all sensor readings.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;User Interface (Claude + MCP)&lt;/strong&gt;: Enables natural language queries. Instead of writing SQL, questions about plant health can be asked conversationally.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1ZSbIHFEYUbPMC1AdqrrST/ea99e0486c676472a7f68eec9b8b7d7e/Screenshot_2026-02-19_at_9.59.35â__AM.png" alt="Plant Buddy architecture" /&gt;&lt;/p&gt;

&lt;h4 id="collecting--sending-data-from-the-edge"&gt;1. Collecting &amp;amp; Sending Data from the Edge&lt;/h4&gt;

&lt;p&gt;To make this scalable, I treat the sensor data like an industrial payload. It’s not just a number; it’s a structured JSON object containing the ID, raw metrics, and a pre-calculated status flag.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Arduino Payload&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-xml"&gt;{ 
"id": "pothos_01",    // Device identifier (like a PLC tag) 
"raw": 715,  		// Raw ADC value (0-1023) 
"pct": 19,  		// Calculated moisture percentage 
"stat": "DRY_ALERT"   // Pre-computed status 
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Why compute status at the edge?&lt;/strong&gt; In factories, PLCs make local decisions (e.g., stop motor, trigger alarm). Here, the Arduino determines “DRY_ALERT” so the database can trigger alerts without recalculating thresholds.&lt;/p&gt;

&lt;h4 id="the-ingest-pipeline"&gt;2. The Ingest Pipeline&lt;/h4&gt;

&lt;p&gt;Below are two approaches to send plant data to InfluxDB. In this project, I went with MQTT and Telegraf, which are more standard for an industrial setup.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5McEkD3dooB2Ii4nfJQ6D1/2d370c54ba97a41a460a66ec05c07af1/Screenshot_2026-02-19_at_10.02.34â__AM.png" alt="Plant Buddy Ingest Pipeline" /&gt;&lt;/p&gt;

&lt;p&gt;Telegraf acts as the gateway, subscribing to the MQTT broker and translating the JSON into InfluxDB Line Protocol. This configuration is identical to what you’d see in a manufacturing plant monitoring vibration sensors.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-toml"&gt;# telegraf.conf - Complete Working Example
[agent]
  interval = "10s"
  flush_interval = "10s"

[[inputs.mqtt_consumer]]
  servers = ["tcp://127.0.0.1:1883"]
  topics = ["home/livingroom/plant/moisture"]
  data_format = "json"

  # Tags become indexed dimensions (fast filtering)
  tag_keys = ["id", "stat"]

  # Numeric fields ("raw", "pct") are parsed as values automatically;
  # json_string_fields is only needed for string-valued JSON fields

[[outputs.influxdb_v2]]
  urls = ["http://127.0.0.1:8181"]
  token = "$INFLUX_TOKEN"
  organization = "my-org"
  bucket = "plant_data"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If Telegraf runs in Docker, use &lt;code class="language-markup"&gt;host.docker.internal:8181&lt;/code&gt; to reach the database.&lt;/p&gt;

&lt;h4 id="time-series-database-influxdb-3-docker-container"&gt;3. Time Series Database: InfluxDB 3 (Docker Container)&lt;/h4&gt;

&lt;p&gt;InfluxDB 3 Core runs locally in Docker as the time series database. It stores soil moisture readings and enables real-time analytics, all without depending on external cloud connectivity.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Create persistent storage 
mkdir -p ~/influxdb3-data

# Run InfluxDB 3 Core with proper configuration
docker run --rm -p 8181:8181 \
  -v $PWD/data:/var/lib/influxdb3/data \
  -v $PWD/plugins:/var/lib/influxdb3/plugins \
  influxdb:3-core influxdb3 serve \
    --node-id=my-node-0 \
    --object-store=file \
    --data-dir=/var/lib/influxdb3/data \
    --plugin-dir=/var/lib/influxdb3/plugins&lt;/code&gt;&lt;/pre&gt;
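&lt;p&gt;Once the container is up, it’s worth a quick smoke test of the write and query path before wiring in Telegraf. This is an optional sketch using the influxdb3-python client; the table name and token are assumptions for illustration.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# smoke_test.py - verify the database accepts writes and queries (sketch)
from influxdb_client_3 import InfluxDBClient3  # pip install influxdb3-python

client = InfluxDBClient3(
    host="http://127.0.0.1:8181",
    token="YOUR_RESOURCE_TOKEN",  # assumption: created via the influxdb3 CLI
    database="plant_data",
)

# Write one reading in line protocol, then read it back with SQL
client.write("plant_moisture,id=pothos_01,stat=OK raw=512i,pct=50i")
print(client.query("SELECT * FROM plant_moisture ORDER BY time DESC LIMIT 5"))&lt;/code&gt;&lt;/pre&gt;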

&lt;h4 id="the-ai-interface-influxdb-mcp--claude"&gt;4. The “AI” Interface: InfluxDB MCP &amp;amp; Claude&lt;/h4&gt;

&lt;p&gt;Instead of writing SQL queries or building dashboards, the system connects an LLM to InfluxDB through the Model Context Protocol (MCP). I’ve covered how to connect InfluxDB 3 to MCP in a separate blog post.&lt;/p&gt;

&lt;p&gt;Now the question isn’t:
&lt;strong&gt;“What’s the SQL query for average soil moisture over the last 24 hours?”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It becomes:
&lt;strong&gt;“Has the plant been dry today?”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The LLM generates the correct SQL under the hood. If needed, the generated query can be inspected. This makes time series analytics accessible through conversation.&lt;/p&gt;
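&lt;p&gt;For reference, the SQL behind the question above looks roughly like the query in this sketch; the measurement and column names are assumptions that depend on how Telegraf names things in your pipeline.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# The sort of query the LLM generates under the hood (names are assumptions)
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(
    host="http://127.0.0.1:8181", token="YOUR_RESOURCE_TOKEN", database="plant_data"
)

sql = """
SELECT avg(pct) AS avg_moisture_pct
FROM mqtt_consumer                         -- Telegraf's default measurement name
WHERE "id" = 'pothos_01'
  AND time &amp;gt;= now() - INTERVAL '24 hours'
"""
print(client.query(sql))&lt;/code&gt;&lt;/pre&gt;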

&lt;p&gt;&lt;code class="language-markup"&gt;claude_desktop_config.json&lt;/code&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;{
  "mcpServers": {
    "influxdb": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "--interactive",
        "--add-host=host.docker.internal:host-gateway",
        "--env",
        "INFLUX_DB_PRODUCT_TYPE",
        "--env",
        "INFLUX_DB_INSTANCE_URL",
        "--env",
        "INFLUX_DB_TOKEN",
        "influxdata/influxdb3-mcp-server"
      ],
      "env": {
        "INFLUX_DB_PRODUCT_TYPE": "core",
        "INFLUX_DB_INSTANCE_URL": "http://host.docker.internal:8181",
        "INFLUX_DB_TOKEN": "YOUR_RESOURCE_TOKEN"
      }
    }
  }
}&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="the-result"&gt;The Result:&lt;/h4&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5ic88rDutPS2omn2Z6tD1k/908b17ccb43b429d80c7dfa134de9dd2/Screenshot_2026-02-19_at_10.08.18â__AM.png" alt="Plant Buddy result" /&gt;&lt;/p&gt;

&lt;h2 id="whats-next"&gt;What’s next&lt;/h2&gt;

&lt;p&gt;In the next post, we will upgrade this Plant Buddy project to do more than passively monitor. New features will include:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;A water pump, motor, and small display&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Automatic watering&lt;/strong&gt; when the plant enters &lt;code class="language-markup"&gt;DRY_ALERT&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;An extended system with &lt;strong&gt;light and temperature sensors&lt;/strong&gt; to determine how placement of the potted plant affects its health, especially during trips when no one is home.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Try building one yourself with &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=plant_buddy_influxdb_3&amp;amp;utm_content=blog"&gt;InfluxDB 3&lt;/a&gt;! We would love to hear your questions and comments in our &lt;a href="https://community.influxdata.com"&gt;community forum&lt;/a&gt;, &lt;a href="https://join.slack.com/t/influxcommunity/shared_invite/zt-3hevuqtxs-3d1sSfGbbQgMw2Fj66rZsA"&gt;Slack&lt;/a&gt;, or Discord.&lt;/p&gt;
</description>
      <pubDate>Tue, 10 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/plant-buddy-influxdb-3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/plant-buddy-influxdb-3/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>From Reactive to Predictive: Preserving BESS Uptime at Scale</title>
      <description>&lt;p&gt;Battery Energy Storage Systems (BESS) operate as revenue-generating grid assets that capture surplus electricity, deploy power during demand spikes, and support frequency control. By shifting energy across time, they stabilize grid conditions, enable renewable integration, and execute market dispatch commitments. When systems respond as designed, stored capacity becomes a flexible, monetizable supply.&lt;/p&gt;

&lt;p&gt;But BESS performance depends on precision and availability. When deviations in temperature, voltage, or current go undetected, instability can propagate across battery modules and supporting systems. Dispatch commitments fail, contractual penalties follow, and safety exposure increases.&lt;/p&gt;

&lt;p&gt;In large-scale deployments, uptime becomes a financial and operational control variable rather than a maintenance metric. Preserving availability requires more than reacting to alarms after limits are breached. As fleets expand and system complexity grows, reactive monitoring reaches its ceiling.&lt;/p&gt;

&lt;h2 id="what-is-a-bess"&gt;What is a BESS?&lt;/h2&gt;

&lt;p&gt;A Battery Energy Storage System (BESS) is a grid-connected battery infrastructure that stores electricity when supply exceeds demand and deploys it when demand rises. By shifting energy across time, these systems help balance generation and consumption while supporting market commitments and frequency control. Their value lies not only in storing energy, but in responding precisely when grid conditions change.&lt;/p&gt;

&lt;p&gt;Electrical supply and demand must remain balanced at all times. When surplus power enters the grid, a BESS absorbs that energy and holds it until demand increases, at which point stored electricity is released back into the network. This coordinated charge-and-discharge cycle enables controlled energy movement that stabilizes supply, supports renewable energy sources, and maintains consistent grid performance.&lt;/p&gt;

&lt;p&gt;Storage systems adjust output within seconds to correct short-term imbalances. Rapid response smooths fluctuations from wind and solar generation and helps maintain grid stability. As more renewable energy comes online and demand patterns shift, reliance on storage systems increases. In this environment, availability and response speed directly influence reliability and financial performance.&lt;/p&gt;

&lt;h4 id="availability-as-an-operational-variable"&gt;Availability as an Operational Variable&lt;/h4&gt;

&lt;p&gt;The value of a BESS depends on its availability. When a system goes offline, dispatch capacity contracts immediately, and stored energy cannot be delivered as planned. Market commitments may go unmet, and replacement capacity must be sourced elsewhere, resulting in lost revenue, potential penalties, and increased operational expenses.&lt;/p&gt;

&lt;p&gt;In large-scale deployments, availability becomes more complex to manage. Thousands of battery modules operate simultaneously, each producing continuous temperature, voltage, and current data. These modules function as a coordinated system, in which small issues in one area can influence overall performance. As fleet size grows, operational oversight becomes more demanding.&lt;/p&gt;

&lt;p&gt;Uptime is more than a maintenance metric. It directly affects revenue performance, capacity payments, and grid commitments. Even small disruptions can reduce dispatch capability before a full outage occurs. Preserving availability requires visibility that scales with system complexity.&lt;/p&gt;

&lt;h2 id="the-limits-of-reactive-monitoring"&gt;The limits of reactive monitoring&lt;/h2&gt;

&lt;p&gt;Operational failures in BESS environments rarely begin as sudden outages. They often start as gradual shifts in temperature, voltage, or current that move systems toward instability while remaining within acceptable limits. These early changes can appear normal when viewed in isolation.&lt;/p&gt;

&lt;p&gt;Most monitoring systems rely on predefined thresholds to detect abnormal conditions. An alert is triggered only after a value crosses a set boundary, confirming that a limit has already been breached. By the time an alarm activates, the underlying condition may have been developing for hours or days. The opportunity for intervention narrows.&lt;/p&gt;

&lt;p&gt;Telemetry is often distributed across battery management systems, inverter controls, and environmental monitoring platforms, creating &lt;a href="https://www.influxdata.com/blog/breaking-data-silos-influxdb-3/"&gt;data silos&lt;/a&gt; across operational layers. Each system captures a portion of operational behavior, but signals are reviewed separately and correlated manually. This separation makes it difficult to see how conditions evolve across modules. Engineers spend valuable time assembling context rather than acting on it.&lt;/p&gt;

&lt;p&gt;As deviations compound, risk increases. Capacity can drop offline, dispatch commitments may fail, and safety exposure rises. Reactive monitoring preserves awareness of failure, but does not preserve control.&lt;/p&gt;

&lt;h4 id="thermal-runway"&gt;Thermal Runway&lt;/h4&gt;

&lt;p&gt;Thermal runaway is one example of how small battery deviations can escalate when not addressed early. A gradual rise in temperature can accelerate internal reactions and generate additional heat. Without timely correction, this cycle can intensify and spread to neighboring cells.&lt;/p&gt;

&lt;p&gt;What begins as minor drift can trigger protective shutdown mechanisms designed to prevent damage. While necessary for safety, shutdown interrupts dispatch commitments and reduces available capacity. Lost availability affects revenue performance and may introduce regulatory and safety exposure. The longer that instability goes undetected, the greater the operational impact.&lt;/p&gt;

&lt;h2 id="predictive-monitoring-extends-control"&gt;Predictive monitoring extends control&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/glossary/predictive-maintenance/"&gt;Predictive monitoring&lt;/a&gt; evaluates how operational signals change over time rather than reacting only after limits are breached. Temperature, voltage, and current readings are analyzed as evolving trends across battery modules, allowing engineers to see how conditions develop instead of viewing each signal in isolation. The value lies not only in collecting data, but in understanding how system behavior shifts as signals change together.&lt;/p&gt;

&lt;p&gt;In large BESS deployments, thousands of modules generate high-frequency telemetry that reflects thermal and electrical conditions. When these signals are reviewed independently or only against static thresholds, gradual drift can appear routine. Evaluated within a shared time context, emerging patterns become visible across modules and clarify where intervention is required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/what-is-time-series-data/"&gt;Time series data&lt;/a&gt; reflects current operating conditions, while historical data preserves baseline behavior and long-term performance trends. Comparing live readings against historical baselines distinguishes normal variation from early signs of degradation. By combining immediate visibility with long-term context, operators can intervene before instability propagates.&lt;/p&gt;

&lt;h4 id="real-time-analysis-with-influxdb"&gt;Real-time Analysis with InfluxDB&lt;/h4&gt;

&lt;p&gt;InfluxDB is purpose-built for time-series workloads that require high ingestion rates, scalable retention, and fast analytical queries. It captures continuous telemetry from distributed battery systems and organizes it using &lt;a href="https://www.influxdata.com/glossary/database-indexing/"&gt;time-based indexing&lt;/a&gt; and &lt;a href="https://www.influxdata.com/glossary/column-database/"&gt;columnar storage&lt;/a&gt; structures optimized for time-stamped data. Its value lies not only in storing operational signals, but in preserving query efficiency as data volume increases.&lt;/p&gt;

&lt;p&gt;As BESS fleets expand, ingestion and query demand rise simultaneously. Temperature, voltage, and current streams must be written at scale while remaining immediately available for investigation. InfluxDB applies compression and retention policies that balance long-term historical context with storage growth. This design maintains visibility at scale without slowing down dashboards or investigative workflows.&lt;/p&gt;

&lt;p&gt;Real-time analysis and historical comparison occur within the same execution path. Engineers can evaluate gradual drift and investigate emerging instability without exporting data to separate systems. Downsampling strategies preserve long-term trend visibility while keeping high-resolution data available for recent events. This unified architecture reduces operational overhead and preserves intervention windows under load.&lt;/p&gt;
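&lt;p&gt;To make that concrete, here is one way an on-the-fly downsampled view might look through the Python client; the database, table, and column names are assumptions, and the same query path serves longer historical windows for baseline comparison.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Downsample the last hour into 5-minute buckets in a single query (sketch)
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host="http://127.0.0.1:8181",
                         token="YOUR_TOKEN", database="bess")  # assumed names

sql = """
SELECT date_bin(INTERVAL '5 minutes', time) AS bucket,
       avg(temp_c) AS avg_temp,
       max(temp_c) AS max_temp
FROM battery_telemetry                 -- assumed table name
WHERE rack_id = 'rack-42'
  AND time &amp;gt;= now() - INTERVAL '1 hour'
GROUP BY bucket
ORDER BY bucket
"""
print(client.query(sql))&lt;/code&gt;&lt;/pre&gt;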

&lt;h2 id="predictive-monitoring-in-action"&gt;Predictive monitoring in action&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/siemens-energy-standardizes-predictive-maintenance-influxdb/"&gt;Siemens Energy&lt;/a&gt; uses InfluxDB to standardize predictive maintenance across distributed energy and battery storage operations. High-frequency sensor telemetry from production systems and battery deployments is ingested into a unified time-series platform that preserves both real-time visibility and long-term historical context. Its value lies not only in collecting large volumes of operational data, but in maintaining consistent access as systems expand across sites and regions.&lt;/p&gt;

&lt;p&gt;Across more than 70 global locations and approximately 23,000 battery modules, continuous temperature, voltage, and performance signals are captured and stored within the same environment. Time-based indexing and scalable retention policies ensure that high-resolution data remains accessible for immediate analysis while preserving long-term degradation trends. This coordinated data architecture enables engineers to evaluate system behavior across modules rather than reviewing signals in isolation.&lt;/p&gt;

&lt;h2 id="the-verdict"&gt;The verdict&lt;/h2&gt;

&lt;p&gt;BESS assets operate within narrow operational and financial tolerances where availability directly influences revenue, safety, and grid reliability. Reactive monitoring confirms when limits are crossed, but predictive monitoring preserves visibility into how conditions evolve before capacity is affected. As fleets expand and telemetry volume increases, infrastructure must ingest high-frequency signals, retain historical context, and return results without latency. When time-series architecture aligns with the structure of operational data, predictive maintenance scales with system complexity rather than breaking under it, preserving uptime across large BESS environments.&lt;/p&gt;

&lt;p&gt;Ready to move from reactive monitoring to predictive control? Get started with a free download of InfluxDB 3 &lt;a href="https://www.influxdata.com/products/influxdb/?utm_source=website&amp;amp;utm_medium=preserving_bess_uptime&amp;amp;utm_content=blog"&gt;Core OSS&lt;/a&gt; or a trial of InfluxDB 3 &lt;a href="https://www.influxdata.com/products/influxdb3-enterprise/?utm_source=website&amp;amp;utm_medium=preserving_bess_uptime&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Thu, 05 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/preserving-bess-uptime/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/preserving-bess-uptime/</guid>
      <category>Developer</category>
      <author>Allyson Boate (InfluxData)</author>
    </item>
    <item>
      <title>A Practical Guide to SCADA Security</title>
      <description>&lt;p&gt;Critical infrastructure is under siege. The systems that control our power grids, water treatment plants, and oil pipelines weren’t designed for a connected world. This post covers what security measures teams need to understand and how &lt;a href="https://www.influxdata.com/what-is-time-series-data/?utm_source=website&amp;amp;utm_medium=scada_security_guide&amp;amp;utm_content=blog"&gt;time series&lt;/a&gt; monitoring can help turn SCADA’s weaknesses into a security advantage.&lt;/p&gt;

&lt;h2 id="the-stakes-for-scada-security-have-never-been-higher"&gt;The stakes for SCADA security have never been higher&lt;/h2&gt;

&lt;p&gt;Somewhere right now, a programmable logic controller is opening a valve, adjusting a turbine’s speed, or regulating the chlorine levels in a city’s drinking water. These actions are orchestrated by Supervisory Control and Data Acquisition (SCADA) systems. They run power grids, water treatment facilities, oil and gas pipelines, manufacturing plants, and transportation networks.&lt;/p&gt;

&lt;p&gt;For decades, these systems operated in relative obscurity. They sat on isolated networks, spoke proprietary protocols, and were managed by operational technology (OT) engineers who rarely crossed paths with the IT security team.&lt;/p&gt;

&lt;p&gt;The convergence of IT and OT networks, driven by the demand for remote access, operational analytics, and cost efficiency, has dragged &lt;a href="https://www.influxdata.com/glossary/SCADA-supervisory-control-and-data-acquisition/"&gt;SCADA&lt;/a&gt; systems into a threat landscape they were never built to survive. The results have been dramatic. In 2015 and 2016, coordinated cyberattacks took down portions of Ukraine’s power grid, leaving hundreds of thousands without electricity. In 2021, the Colonial Pipeline ransomware attack shut down fuel distribution across the U.S. East Coast, triggering panic buying and fuel shortages.&lt;/p&gt;

&lt;p&gt;These aren’t theoretical risks. They’re documented events, and they only represent the incidents that became public. The reality is that SCADA systems are being probed, scanned, and targeted every day, and many operators lack the visibility to even know it’s happening.&lt;/p&gt;

&lt;h2 id="scada-security-challenges"&gt;SCADA security challenges&lt;/h2&gt;

&lt;p&gt;Securing SCADA and industrial control systems is fundamentally different from securing a corporate IT environment. The assumptions, priorities, and constraints are almost inverted.&lt;/p&gt;

&lt;h4 id="availability-over-confidentiality"&gt;Availability Over Confidentiality&lt;/h4&gt;

&lt;p&gt;In IT security, the classic triad is confidentiality, integrity, and availability, usually prioritized in roughly that order. In OT, the priorities flip. A power plant cannot tolerate downtime. A water treatment facility cannot go offline for a patch cycle. The consequences of a disrupted industrial process aren’t a lost spreadsheet; they’re potential physical harm, environmental damage, or loss of life. This means that many standard IT security practices, such as aggressive patching, frequent reboots, and network scanning, can be dangerous or even impossible in OT environments.&lt;/p&gt;

&lt;h4 id="legacy-systems-and-long-lifecycles"&gt;Legacy Systems and Long Lifecycles&lt;/h4&gt;

&lt;p&gt;SCADA components often have operational lifecycles of 20 to 30 years. It’s not uncommon to find PLCs running firmware from the early 2000s, human-machine interfaces (HMIs) on Windows XP, or historians on unsupported database platforms. These systems were engineered for reliability and determinism, not security. Replacing them is expensive and operationally risky, so they persist despite the vulnerabilities.&lt;/p&gt;

&lt;h4 id="protocols-without-security"&gt;Protocols Without Security&lt;/h4&gt;

&lt;p&gt;Modbus, DNP3, and &lt;a href="https://www.influxdata.com/glossary/opc-ua/"&gt;OPC&lt;/a&gt; Classic are the workhorses of industrial communication, but they were designed in an era when network isolation was considered sufficient protection. Modbus, for instance, has no authentication, no encryption, and no way to verify the identity of a device sending commands. These protocols are deeply embedded in operational infrastructure and cannot be easily replaced.&lt;/p&gt;

&lt;h4 id="the-air-gap-myth"&gt;The Air Gap Myth&lt;/h4&gt;

&lt;p&gt;Many organizations still believe their OT networks are air-gapped. In practice, true air gaps are rare. Remote access solutions, vendor support connections, shared file servers, USB drives, and even cellular modems on RTUs create pathways between networks.&lt;/p&gt;

&lt;h2 id="key-strategies-for-scada-security"&gt;Key strategies for SCADA security&lt;/h2&gt;

&lt;p&gt;Effective SCADA security is layered, OT-aware, and built around the operational realities of industrial environments. There is no single solution, but a combination of strategies dramatically reduces risk.&lt;/p&gt;

&lt;h4 id="network-segmentation"&gt;Network Segmentation&lt;/h4&gt;

&lt;p&gt;The foundation of SCADA security is proper network architecture. At a minimum, there should be a demilitarized zone (DMZ) between the corporate IT network and the OT network, with no direct traffic flowing between them. Within the OT network, further segmentation between supervisory systems, control systems, and field devices helps limit lateral movement.&lt;/p&gt;

&lt;h4 id="asset-inventory-and-visibility"&gt;Asset Inventory and Visibility&lt;/h4&gt;

&lt;p&gt;You cannot protect what you don’t know exists. Many organizations lack a complete, accurate inventory of their OT assets, including &lt;a href="https://www.influxdata.com/resources/overcoming-iiot-data-challenges-data-injection-from-plcs-to-influxdb/"&gt;PLCs&lt;/a&gt;, RTUs, HMIs, &lt;a href="https://www.influxdata.com/glossary/data-historian/"&gt;historians&lt;/a&gt;, network switches, and communication links. Passive network discovery tools designed for OT environments can build and maintain this inventory without disrupting operations.&lt;/p&gt;

&lt;h4 id="access-control-and-authentication"&gt;Access Control and Authentication&lt;/h4&gt;

&lt;p&gt;Every access point into the OT environment should require strong authentication, ideally multi-factor. Least-privilege principles should govern who can access what, and remote access should be tightly controlled, monitored, and time-limited. Shared accounts should be eliminated wherever possible.&lt;/p&gt;

&lt;h4 id="ot-aware-patch-management"&gt;OT-Aware Patch Management&lt;/h4&gt;

&lt;p&gt;Patching in OT requires a risk-based approach. Not every vulnerability needs an immediate patch, and not every system can be patched without operational impact. Organizations need a process that evaluates vulnerability severity in the context of their specific environment, tests patches in a staging environment where possible, and schedules maintenance windows that align with operational needs.&lt;/p&gt;

&lt;h4 id="deep-packet-inspection-for-industrial-protocols"&gt;Deep Packet Inspection for Industrial Protocols&lt;/h4&gt;

&lt;p&gt;Traditional firewalls see Modbus traffic as TCP on port 502 and nothing more. OT-aware firewalls and intrusion detection systems can parse the actual protocol content, inspecting function codes and register addresses to enforce policies.&lt;/p&gt;

&lt;h4 id="incident-response-planning"&gt;Incident Response Planning&lt;/h4&gt;

&lt;p&gt;OT incident response is not IT incident response: the playbook must account for the physical consequences of containment actions. Isolating a network segment might stop an attacker, but could also trip a safety system or halt a process. Response plans need to be developed collaboratively between security teams, OT engineers, and plant operations.&lt;/p&gt;

&lt;h2 id="continuous-monitoring-for-scada-security"&gt;Continuous monitoring for SCADA security&lt;/h2&gt;

&lt;p&gt;All of the strategies above are essential, but there’s a fundamental truth about SCADA security that defenders can exploit: &lt;strong&gt;industrial processes are inherently predictable&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A temperature sensor in a chemical reactor reports a value every second. A PLC cycles through its logic on a fixed schedule. A pump runs at a consistent speed. Network traffic between a SCADA server and its RTUs follows regular, repeatable patterns. This predictability means that anomalies like equipment failure, operator error, or a cyberattack create detectable deviations from established baselines.&lt;/p&gt;

&lt;p&gt;This is where time series data becomes a security team’s most powerful tool.&lt;/p&gt;

&lt;h4 id="baselining-normal-behavior"&gt;Baselining Normal Behavior&lt;/h4&gt;

&lt;p&gt;By collecting and storing high-resolution time series data from sensors, PLCs, network flows, and protocol logs, you can build a detailed behavioral profile of “normal” for every asset and process in your environment. What does normal Modbus traffic look like between the SCADA server and PLC-07? What’s the typical temperature range for reactor vessel 3 during a batch run? How often does the engineering workstation initiate write commands?&lt;/p&gt;

&lt;p&gt;With enough historical data, these baselines become remarkably precise, and deviations become immediately apparent.&lt;/p&gt;
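&lt;p&gt;A baseline doesn’t need to start sophisticated. A sketch like the following, with hypothetical signal names and toy values, captures per-signal normal ranges from history and scores new readings against them.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Per-signal baselines from history, then score new readings (sketch)
import pandas as pd

# Hypothetical history: one row per observation of a named signal
history = pd.DataFrame({
    "signal": ["reactor3_temp"] * 4 + ["plc07_writes_per_min"] * 4,
    "value":  [71.2, 70.8, 71.5, 71.1, 2, 3, 2, 2],
})

baseline = history.groupby("signal")["value"].agg(["mean", "std"])

def is_anomalous(signal, value, sigmas=4.0):
    m, s = baseline.loc[signal, "mean"], baseline.loc[signal, "std"]
    return abs(value - m) &amp;gt; sigmas * s

print(is_anomalous("plc07_writes_per_min", 40))  # True: a burst of writes&lt;/code&gt;&lt;/pre&gt;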

&lt;h4 id="detecting-process-manipulation"&gt;Detecting Process Manipulation&lt;/h4&gt;

&lt;p&gt;An attacker who gains access to a SCADA system may try to subtly alter process parameters, such as changing a setpoint, opening a valve, or adjusting a chemical dosing rate. If you’re monitoring time series data from those processes, you can detect changes that fall outside historical norms.&lt;/p&gt;

&lt;h4 id="spotting-anomalous-network-behavior"&gt;Spotting Anomalous Network Behavior&lt;/h4&gt;

&lt;p&gt;Industrial network traffic is highly structured. By logging protocol-level metadata, you can detect unusual patterns. A “write multiple registers” command from an IP address that has only ever issued read commands is suspicious. A burst of DNP3 unsolicited responses at an unusual time deserves investigation. These signals are only visible if you’re capturing and analyzing this data.&lt;/p&gt;
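&lt;p&gt;Catching that kind of first-time behavior can be as simple as tracking which (source, function code) pairs have been observed before. A minimal sketch, with hypothetical log fields, seeded from a baseline period:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Flag protocol commands never before seen from a given source (sketch)
# In practice, seed `seen` from weeks of historical protocol metadata.
seen = {("10.0.8.14", "read_holding_registers")}

def check(event):
    # event: dict with hypothetical fields from protocol-level logging
    key = (event["src_ip"], event["function_code"])
    if key not in seen:
        seen.add(key)
        return f"ALERT: first {event['function_code']} from {event['src_ip']}"
    return None

print(check({"src_ip": "10.0.8.14", "function_code": "write_multiple_registers"}))&lt;/code&gt;&lt;/pre&gt;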

&lt;h4 id="correlating-across-it-and-ot"&gt;Correlating Across IT and OT&lt;/h4&gt;

&lt;p&gt;The most sophisticated attacks traverse the IT/OT boundary. Detecting them requires correlating events across both domains on a unified timeline. For example, a failed VPN login attempt at 1:47 AM, followed by a successful login at 1:52 AM, followed by an unusual engineering workstation session at 1:55 AM, followed by a PLC configuration change at 2:03 AM. While each of these events in isolation might not trigger an alert, together, on a single timeline, the pattern is unmistakable. Time series data makes this correlation possible.&lt;/p&gt;
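&lt;p&gt;Mechanically, that correlation is just a merge of per-system event streams onto one ordered timeline, which the sketch below illustrates with the toy events from the example above.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Merge IT and OT event streams into a single ordered timeline (sketch)
from heapq import merge

it_events = [
    ("01:47", "vpn", "failed login"),
    ("01:52", "vpn", "successful login"),
    ("01:55", "workstation", "engineering session opened"),
]
ot_events = [
    ("02:03", "plc-12", "configuration change"),
]

# Each stream is already time-sorted, so merge() yields one sorted timeline
for ts, source, event in merge(it_events, ot_events):
    print(ts, source, event)&lt;/code&gt;&lt;/pre&gt;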

&lt;h2 id="why-a-time-series-database-beats-a-siem-or-relational-database-for-ot-security-data"&gt;Why a time series database beats a SIEM or relational database for OT security data&lt;/h2&gt;

&lt;p&gt;If you’re convinced that this kind of monitoring is critical for SCADA security, the next question is where to store and analyze all this data. The three common options are a traditional relational database, a Security Information and Event Management (SIEM) platform, or a time series database like InfluxDB. For OT security data, the &lt;a href="https://www.influxdata.com/time-series-database/?utm_source=website&amp;amp;utm_medium=scada_security_guide&amp;amp;utm_content=blog"&gt;time series database&lt;/a&gt; wins decisively. Here’s why.&lt;/p&gt;

&lt;h4 id="data-volume"&gt;Data Volume&lt;/h4&gt;

&lt;p&gt;A single SCADA environment can generate enormous volumes of data. Consider a modest facility with 500 sensors reporting every second, 20 PLCs, a network tap capturing protocol metadata, and authentication logs from access points. That’s easily millions of data points per day, and larger environments generate orders of magnitude more.&lt;/p&gt;

&lt;p&gt;Relational databases like PostgreSQL or MySQL were designed for transactional workloads: inserts, updates, deletes, and joins across normalized tables. They handle time series data poorly at scale. Write throughput degrades as tables grow, and time-based queries over millions of rows become expensive without careful indexing and partitioning, which creates operational complexity.&lt;/p&gt;

&lt;p&gt;SIEMs are built for log ingestion, but they’re optimized for text-based event logs, not numerical telemetry. Ingesting raw sensor data at one-second intervals into a SIEM is technically possible, but economically painful, as SIEM licensing is typically based on data volume, and the cost of ingesting OT data can be prohibitive. Many organizations end up sampling or aggregating data before it reaches the SIEM, losing the granularity needed for effective &lt;a href="https://www.influxdata.com/blog/IOT-anomaly-detection-primer-influxdb/"&gt;anomaly detection&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;InfluxDB and other time series databases are built for this workload. They use storage engines optimized for high-volume writes of timestamped data and compressed, columnar storage that keeps disk usage manageable even at scale. InfluxDB can handle hundreds of thousands of writes per second on modest hardware.&lt;/p&gt;

&lt;h4 id="query-performance"&gt;Query Performance&lt;/h4&gt;

&lt;p&gt;OT security analysis is fundamentally time-focused. You need to answer questions like: “What was the average pressure in vessel 4 between 2:00 and 2:15 AM?” or “Show me all Modbus write commands to PLC-12 in the last 24 hours alongside the corresponding sensor readings.” or “Alert me if the rate of change of this temperature exceeds the 99th percentile of its 30-day historical distribution.”&lt;/p&gt;

&lt;p&gt;In a relational database, these queries require careful SQL with window functions, CTEs, and often materialized views to perform well. The query language wasn’t designed for time series operations, and performance tuning is an ongoing burden.&lt;/p&gt;

&lt;p&gt;SIEMs offer search languages that handle event correlation well but are awkward for continuous numerical analysis. Calculating rolling averages, derivatives, or statistical distributions over sensor data in a SIEM is possible but cumbersome.&lt;/p&gt;

&lt;p&gt;Time series databases provide native query primitives for exactly these operations. InfluxDB includes built-in functions for windowed aggregation, moving averages, derivatives, percentiles, and histogram analysis. A query that would require 30 lines of carefully optimized SQL can often be expressed in a few lines with InfluxDB. This matters not just for convenience but for enabling security analysts and OT engineers to explore data and build detection logic without being database specialists.&lt;/p&gt;
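&lt;p&gt;For instance, the first question above (“What was the average pressure in vessel 4 between 2:00 and 2:15 AM?”) stays short in SQL against InfluxDB. The sketch below uses assumed table, tag, and timestamp values:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Windowed aggregate over a specific incident window (assumed names)
from influxdb_client_3 import InfluxDBClient3  # pip install influxdb3-python

client = InfluxDBClient3(host="http://localhost:8181",
                         token="YOUR_TOKEN", database="ot_telemetry")

sql = """
SELECT date_bin(INTERVAL '1 minute', time) AS minute,
       avg(pressure) AS avg_pressure
FROM vessel_sensors
WHERE vessel_id = '4'
  AND time BETWEEN '2026-03-01T02:00:00Z' AND '2026-03-01T02:15:00Z'
GROUP BY minute
ORDER BY minute
"""
print(client.query(sql))&lt;/code&gt;&lt;/pre&gt;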

&lt;h4 id="data-retention"&gt;Data Retention&lt;/h4&gt;

&lt;p&gt;OT security data has a natural tiered value structure. The last 24 hours of raw sensor data are extremely valuable for investigating an active incident. The last 30 days at full resolution are important for anomaly detection baselines. Data from six months ago is useful for trend analysis, but doesn’t need high granularity. Data from a year ago might only need hourly averages for compliance purposes.&lt;/p&gt;

&lt;p&gt;Relational databases require you to manage this lifecycle manually by writing ETL jobs to downsample old data, archive tables, and manage storage. SIEMs typically offer hot/warm/cold storage tiers, but with limited control over how data is aggregated as it ages.&lt;/p&gt;

&lt;p&gt;InfluxDB has retention policies and downsampling built into the database itself. You can define policies that automatically downsample data from one-second to one-minute resolution after 30 days, then to five-minute resolution after 90 days, and delete raw data after a year. This happens transparently, without external tooling, and keeps storage costs predictable while preserving long-term visibility.&lt;/p&gt;

&lt;h2 id="moving-forward"&gt;Moving forward&lt;/h2&gt;

&lt;p&gt;SCADA security is not a problem that can be solved with a single product, a one-time assessment, or a policy document. It requires sustained commitment to understanding your environment, monitoring it continuously, and building the organizational capacity to detect and respond to threats.&lt;/p&gt;

&lt;p&gt;The good news is that the same characteristic that makes SCADA systems vulnerable, their reliance on predictable, deterministic processes, is also what makes them uniquely defensible through data-driven monitoring. Industrial processes generate time series data that reveals anomalies clearly when you have the right tools to capture and analyze it.&lt;/p&gt;

&lt;p&gt;A time series database like &lt;a href="https://www.influxdata.com/products/influxdb-overview/?utm_source=website&amp;amp;utm_medium=scada_security_guide&amp;amp;utm_content=blog"&gt;InfluxDB&lt;/a&gt;, paired with a well-designed collection pipeline and visualization layer, enables security teams to see their OT environment with a level of clarity that was previously impractical. Not as a replacement for network segmentation, access control, and the other foundational security measures, but as the monitoring layer that ties everything together and ensures that when something goes wrong, you know about it in seconds rather than weeks.&lt;/p&gt;
</description>
      <pubDate>Tue, 03 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/scada-security-guide/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/scada-security-guide/</guid>
      <category>Developer</category>
      <author>Charles Mahler (InfluxData)</author>
    </item>
    <item>
      <title>The "Now" Problem: Why BESS Operations Demand Last Value Caching</title>
      <description>&lt;p&gt;Battery Energy Storage Systems (BESS) represent one of the most unforgiving environments for real-time data. Unlike a passive asset, a battery is a complex electrochemical system where safety and revenue are determined by split-second decisions. In this context, “average” latency can become a serious problem. Performance depends entirely on one key question:&lt;/p&gt;

&lt;h2 id="what-is-happening-right-now"&gt;“What is happening right now?”&lt;/h2&gt;

&lt;p&gt;For grid operators, Energy Management Systems (EMS), and trading desks, this is the most critical question. To answer it, operations teams rely on dashboards that answer:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Safety &amp;amp; Health&lt;/strong&gt;: What is the current State of Health (SoH) of my BESS operations? Is the site healthy, or are there active thermal alarms?&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Bottlenecks&lt;/strong&gt;: What is limiting performance right now? (Is it a Power Conversion System [PCS] derate, a specific rack, or a container-level issue?)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Revenue&lt;/strong&gt;: What is the precise State of Charge (SoC) available for immediate dispatch?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="the-challenge-the-latest-value-bottleneck"&gt;The challenge: the “latest value” bottleneck&lt;/h2&gt;

&lt;p&gt;“Current state” dashboards create a punishing workload for standard time series databases. A single utility-scale site might generate 50,000+ distinct signals (high cardinality) from cells, inverters, and meters. To display a “Live View,” the database must repeatedly scan recent data on disk to find the most recent timestamp for every single one of those signals.&lt;/p&gt;

&lt;p&gt;At the site level, this is difficult. &lt;strong&gt;At fleet scale with more assets, more concurrent users, and millions of streams, this “scan-for-latest” pattern becomes a crippling bottleneck.&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id="the-solution-last-value-cache"&gt;The solution: Last Value Cache&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 solves this architectural conflict with its built-in &lt;strong&gt;Last Value Cache (LVC)&lt;/strong&gt;. Instead of scanning historical data to compute the current state, LVC automatically caches the most recent values (or the last N values) in memory for your critical signals. This ensures that “current state” queries remain sub-millisecond (&amp;lt; 10ms) and consistent, regardless of write throughput or fleet size, bridging the gap between historical analysis and real-time situational awareness.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3P8QsCW6bSfmliLYxMmNVP/5b074db94e9b2f58b57a9f18c65922cb/Image-2026-02-23_16_33_24.png" alt="BESS LVC solution" /&gt;&lt;/p&gt;

&lt;h2 id="how-to-use-influxdbs-last-value-cache-lvc-in-memory-for-bess-operations"&gt;How to use InfluxDB’s Last Value Cache (LVC) in memory for BESS operations&lt;/h2&gt;

&lt;h4 id="define-your-hot-signals"&gt;1. Define Your “Hot” Signals&lt;/h4&gt;

&lt;p&gt;Don’t cache everything. Pick the specific high-leverage fields that power your “Current State” dashboards and safety alerts, for example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Safety&lt;/strong&gt;: Cell Temperature (&lt;code class="language-markup"&gt;temp_c&lt;/code&gt;), Voltage (&lt;code class="language-markup"&gt;volts&lt;/code&gt;), Alarm Severity (&lt;code class="language-markup"&gt;alarm_level&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Performance&lt;/strong&gt;: State of Charge (&lt;code class="language-markup"&gt;soc&lt;/code&gt;), State of Health (&lt;code class="language-markup"&gt;soh&lt;/code&gt;), Inverter Mode (&lt;code class="language-markup"&gt;inv_state&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Ops&lt;/strong&gt;: Comms Heartbeat (&lt;code class="language-markup"&gt;last_seen&lt;/code&gt;), Charge/Discharge Limits (&lt;code class="language-markup"&gt;p_limit_kw&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="design-your-keys"&gt;2. Design Your Keys&lt;/h4&gt;

&lt;p&gt;Choose the columns that define how operators slice the system. These will become your cache keys.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Best Practice&lt;/strong&gt;: Match your dashboard filters. If your dashboard filters by &lt;code class="language-markup"&gt;site_id → container_id → rack_id&lt;/code&gt;, those are your keys.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cardinality Note&lt;/strong&gt;: Keep keys efficient. While InfluxDB 3 handles high cardinality exceptionally well, unnecessary keys (like a unique &lt;code class="language-markup"&gt;transaction_id&lt;/code&gt; per second) waste memory. Stick to asset identifiers.&lt;/p&gt;

&lt;h4 id="shape-the-cache-behavior"&gt;3. Shape the Cache Behavior&lt;/h4&gt;

&lt;p&gt;Configure the cache to match your visualization needs:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;count&lt;/code&gt;:
    &lt;ul&gt;
      &lt;li&gt;Set to &lt;strong&gt;1&lt;/strong&gt; for Gauges, Status Lights, and “Single Value” tiles.&lt;/li&gt;
      &lt;li&gt;Set to &lt;strong&gt;3–10&lt;/strong&gt; for “Sparklines” (mini-charts) where operators need to see the immediate trend (e.g., “Is voltage diving or stable?”).&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;ttl&lt;/code&gt; (&lt;strong&gt;time-to-live&lt;/strong&gt;): Define when data becomes “stale.” If a sensor stops reporting, how long should the dashboard show the last value before switching to “Offline/Unknown”? (e.g., &lt;code class="language-markup"&gt;30s&lt;/code&gt; for safety, &lt;code class="language-markup"&gt;1h&lt;/code&gt; for capacity).&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="create-the-cache"&gt;4. Create the Cache&lt;/h4&gt;

&lt;p&gt;Create the Last Value Cache using the UI explorer, the HTTP API, or the CLI.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create last_cache \
  --database bess_db \
  --table bess_telemetry \
  --token AUTH_TOKEN \
  --key-columns site_id,rack_id \
  --value-columns soc,temp_max,alarm_state \
  --count 5 \
  --ttl 30s \
  bess_ops_lvc&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Key arguments:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Database name: bess_db&lt;/li&gt;
  &lt;li&gt;Table name: bess_telemetry&lt;/li&gt;
  &lt;li&gt;Cache name: bess_ops_lvc&lt;/li&gt;
  &lt;li&gt;Key columns: site_id, rack_id (columns used as cache keys)&lt;/li&gt;
  &lt;li&gt;Value columns: soc, temp_max, alarm_state (field values to cache)&lt;/li&gt;
  &lt;li&gt;Count: 5 (the number of values to cache per unique key-column combination, range 1-10)&lt;/li&gt;
  &lt;li&gt;TTL: 30s (how long until cached data is considered stale)&lt;/li&gt;
  &lt;li&gt;Token: InfluxDB 3 authentication token&lt;/li&gt;

&lt;h4 id="the-warm-cache-advantage"&gt;5. The “Warm Cache” Advantage&lt;/h4&gt;

&lt;p&gt;Unlike a standard cache that starts empty, LVC in InfluxDB 3 is “warm” by default.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;On creation&lt;/strong&gt;: It instantly backfills from existing data on disk.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;On restart&lt;/strong&gt;: It automatically reloads the state.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;: Ops teams never see “blank” dashboards after a maintenance window. The system is ready the moment it comes back online.&lt;/p&gt;

&lt;h4 id="querying-the-cache"&gt;6. Querying the Cache&lt;/h4&gt;

&lt;p&gt;Use standard SQL with the &lt;code class="language-markup"&gt;last_cache()&lt;/code&gt; function, which replaces a complex analytical query with a simple in-memory lookup. A representative query (the filter is illustrative):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;-- Read the cached latest values instead of scanning recent data
SELECT site_id, rack_id, soc, temp_max, alarm_state, time
FROM last_cache('bess_telemetry', 'bess_ops_lvc')
WHERE site_id = 'site-07';&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="architecture-built-for-scale-using-influxdb-3-enterprise"&gt;7. Architecture: Built for Scale Using InfluxDB 3 Enterprise&lt;/h4&gt;

&lt;p&gt;Last Value Cache can help separate heavy “writing” from “reading” workloads:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Dedicated Ingest Nodes&lt;/strong&gt;: Handle the massive flood of 1Hz sensor data.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Dedicated Query Nodes&lt;/strong&gt;: Host the LVC in memory to serve dashboards instantly.&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create last_cache \
  --database bess_db \
  --table bess_telemetry \
  --token AUTH_TOKEN \
  --node-spec "nodes:query-01,query-02" \
  --key-columns site_id,rack_id \
  --value-columns soc,temp_max,alarm_state \
  --count 5 \
  --ttl 30s \
  bess_ops_lvc&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;The benefit&lt;/strong&gt;: Heavy write loads (e.g., a fleet-wide firmware update logging millions of events) will never slow down the control room’s live view.&lt;/p&gt;

&lt;h4 id="the-value-of-lvc"&gt;The value of LVC&lt;/h4&gt;

&lt;p&gt;In BESS operations, latency isn’t just a delay; it’s a risk. InfluxDB 3’s Last Value Cache eliminates that risk by serving the “current state” of your entire fleet instantly from memory, removing the need for complex external caching.&lt;/p&gt;

&lt;p&gt;When you’re ready to start building, &lt;a href="https://www.influxdata.com/products/influxdb3-enterprise/?utm_source=website&amp;amp;utm_medium=bess_last_value_caching&amp;amp;utm_content=blog"&gt;download InfluxDB 3 Enterprise&lt;/a&gt;, or &lt;a href="https://www.influxdata.com/contact-sales-enterprise/?utm_source=website&amp;amp;utm_medium=bess_last_value_caching&amp;amp;utm_content=blog"&gt;contact us&lt;/a&gt; to talk about running a proof of concept.&lt;/p&gt;
</description>
      <pubDate>Thu, 26 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/bess-last-value-caching/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/bess-last-value-caching/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>What Is Predictive Analytics? A Complete Guide for 2026</title>
      <description>&lt;p&gt;In simple terms, predictive analytics is a form of analytics that tries to predict future events, trends, or behaviors based on historical and present data. You can achieve this goal in different ways, each involving trade-offs between accuracy and cost.&lt;/p&gt;

&lt;h2 id="why-is-predictive-analytics-important"&gt;Why is predictive analytics important?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/predictive-analytics-pipelines-real-world-ai-predictive-maintenance-time-series-data/"&gt;Predictive analytics&lt;/a&gt; enables organizations to be more efficient and accurate in how they plan for the future. The end result of a properly implemented predictive analytics system will depend on the industry, but at a high level, here are some common benefits:&lt;/p&gt;

&lt;h4 id="improved-strategic-decision-making"&gt;Improved Strategic Decision-Making&lt;/h4&gt;

&lt;p&gt;Predictive analytics provides insight into future trends, so business leaders can make better decisions faster instead of simply reacting to events.&lt;/p&gt;

&lt;h4 id="increased-operational-efficiency"&gt;Increased Operational Efficiency&lt;/h4&gt;

&lt;p&gt;Using predictive analytics can help businesses improve their profit margins and efficiency by predicting equipment failures and reducing downtime.&lt;/p&gt;

&lt;h4 id="improved-risk-management"&gt;Improved Risk Management&lt;/h4&gt;

&lt;p&gt;By looking at historical data where things went wrong, a business can reduce its risk by identifying patterns that correlate with negative outcomes and avoiding them proactively, such as a bad investment in the finance industry.&lt;/p&gt;

&lt;h4 id="happier-customers"&gt;Happier customers&lt;/h4&gt;

&lt;p&gt;Predicting potential churn and proactively reaching out to customers, or keeping items in stock through more accurate inventory forecasts, both help enhance the customer experience.&lt;/p&gt;

&lt;h2 id="how-does-predictive-analytics-work"&gt;How does predictive analytics work?&lt;/h2&gt;

&lt;p&gt;The end goal of predictive analytics is to make accurate predictions based on historical data. Here is a general outline of the process for building a predictive analytics system:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Determine the goal for the project&lt;/strong&gt;.
The first step is to identify the problem or opportunity you are trying to address via predictive analytics. Define your goals and success metrics upfront.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Organize and collect data&lt;/strong&gt;.
The next step will be gathering the data to build your predictive analytics model, as well as the pipeline that will send fresh data to your model for generating predictions. 
This will typically be a combination of public data similar to your own, third-party data relevant to your use case, and your own unique business data for fine-tuning your model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Process data&lt;/strong&gt;.
Once you have your data, one of the biggest challenges is often processing and cleaning it so it’s ready for your model. This can involve removing invalid data, filling in missing data, or transforming data into a standard format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Develop a predictive analytics model&lt;/strong&gt;.
Now that your data has been collected and cleaned, you are ready to actually develop your predictive model. The model you use will depend on your business requirements, including accuracy requirements and the type of modeling you will be doing.&lt;/p&gt;

&lt;p&gt;A predictive model can be used for trend detection, classification, clustering, and more. You can create these models using statistical methods or modern machine learning techniques.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Validate results&lt;/strong&gt;.
Creating your model is just the first step; once the model is running, you will need to validate the results to confirm it works as expected. 
This generally involves testing against a separate dataset for accuracy, as well as running the model against live production data and evaluating the results based on the output. 
If the results aren’t as good as desired, you may need to return to the previous steps and modify factors like how data is processed and the type of model used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Deploy to production&lt;/strong&gt;.
If your predictive analytics model produces accurate, valuable results, you can now deploy it to production, where people will actually use the results. The system may need a human to confirm the action, or it may be fully automated, taking action solely based on the model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Update and improve the model over time&lt;/strong&gt;.
Predictive analytics isn’t a one-time deal. You will want to constantly feed your model recent data so it stays up to date and can be aware of potential changes that need to be integrated. 
Typical tasks would involve retraining the model, adjusting parameters, or providing it with additional data to improve accuracy. The entire system can also be fine-tuned over time to be more efficient and affordable.&lt;/p&gt;

&lt;h2 id="predictive-analytics-use-cases"&gt;Predictive analytics use cases&lt;/h2&gt;

&lt;p&gt;Predictive analytics is useful across almost every industry, but let’s take a look at a few specific examples where it is particularly valuable. 
An ideal use case for predictive analytics is any situation where data is relatively easy to collect and having more accurate predictions will generate a significant business impact, such as revenue or cost reduction.&lt;/p&gt;

&lt;h4 id="manufacturing"&gt;Manufacturing&lt;/h4&gt;

&lt;p&gt;In the &lt;a href="https://www.influxdata.com/resources/advanced-manufacturing-monitoring-using-ctrlxos-with-influxdb/"&gt;manufacturing&lt;/a&gt; sector, predictive analytics can be used to predict and prevent machinery malfunctions before they occur. This reduces maintenance costs and improves factory efficiency, resulting in higher profit margins.&lt;/p&gt;

&lt;h4 id="healthcare"&gt;Healthcare&lt;/h4&gt;

&lt;p&gt;Governments and businesses both use predictive analytics to improve the healthcare industry. Governments create predictive models to anticipate and prevent the spread of diseases and to guide investments in healthcare programs. 
Hospitals can use predictive models to look at patient medical records to create personalized treatment plans.&lt;/p&gt;

&lt;h4 id="marketing"&gt;Marketing&lt;/h4&gt;

&lt;p&gt;Predictive analytics can be used for marketing purposes to predict trends in consumer demand, improve customer engagement to prevent churn, and improve sales by recommending products customers might like based on their past purchases compared to those of similar customers.&lt;/p&gt;

&lt;h4 id="supply-chain-management"&gt;Supply Chain Management&lt;/h4&gt;

&lt;p&gt;Predictive analytics can help with supply chain management by forecasting changes in product supply and demand driven by factors such as time of year or location. It can also be used to optimize logistics and manage risk.&lt;/p&gt;

&lt;h4 id="finance"&gt;Finance&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://sigmatechnology.com/articles/predictive-analytics-for-finance-insights-and-case-studies/"&gt;finance&lt;/a&gt; industry uses predictive analytics in a number of ways, ranging from predicting stock prices to detecting fraudulent transactions. Banks can use predictive analytics to assess loan applicants’ risk by comparing historical data with the applicant’s personal history.&lt;/p&gt;

&lt;h2 id="predictive-analytics-challenges"&gt;Predictive analytics challenges&lt;/h2&gt;

&lt;p&gt;While predictive analytics can offer many business benefits, implementing it can be challenging, especially if a company lacks in-house expertise or infrastructure. Here are some of the key roadblocks to consider when getting started.&lt;/p&gt;

&lt;h4 id="data-quality"&gt;Data Quality&lt;/h4&gt;
&lt;p&gt;To make accurate predictions, you will need a large volume of high-quality data relevant to your predictive analytics use case. This means you need to have a way to collect data and store it in a long-term format that is easy to access for teams creating predictive analytics models.&lt;/p&gt;

&lt;h4 id="integration-with-legacy-systems"&gt;Integration with Legacy Systems&lt;/h4&gt;
&lt;p&gt;Many established businesses will have systems that may not be seamlessly integrated. This means engineering effort will be required to ensure that data is not siloed and that the predictive analytics team can access the systems and data they require.&lt;/p&gt;

&lt;h4 id="accuracy-of-results"&gt;Accuracy of Results&lt;/h4&gt;
&lt;p&gt;The biggest challenge with predictive analytics is creating a model that produces results accurate enough to justify the investment in building it and to drive business value.&lt;/p&gt;

&lt;p&gt;This will require not only the initial creation of the model but also constant updates with new data to keep it accurate as conditions change.&lt;/p&gt;

&lt;h4 id="hiring-talent"&gt;Hiring Talent&lt;/h4&gt;
&lt;p&gt;Solving all of the above problems requires highly skilled employees. These skills are in demand across many industries, making it difficult to attract and retain the workers needed to implement a predictive analytics system.&lt;/p&gt;

&lt;h4 id="security"&gt;Security&lt;/h4&gt;
&lt;p&gt;Another challenge with predictive analytics is ensuring that all the new data collected and stored is secure. This data can contain sensitive information about customers or about your business, so security must be a top priority.&lt;/p&gt;

&lt;h2 id="predictive-analytics-techniques"&gt;Predictive analytics techniques&lt;/h2&gt;

&lt;p&gt;There are a number of models available for generating insights via predictive analytics. The right model for your organization depends on the data you are working with, as well as factors such as the cost to develop the model and your accuracy requirements. Let’s take a look at some of the most common predictive analytics techniques and models.&lt;/p&gt;

&lt;h4 id="machine-learningai-models"&gt;Machine Learning/AI Models&lt;/h4&gt;

&lt;p&gt;In the past, classical statistical models dominated predictive analytics and forecasting because of their ease of interpretation, lower computational costs, and accuracy. 
However, in recent years, ML/AI-based models have begun to surpass traditional forecasting methods in accuracy. They also offer the benefit of being easier to generalize across different predictions and of requiring less fine-tuning by highly trained statisticians.&lt;/p&gt;

&lt;h4 id="time-series-models"&gt;Time Series Models&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/time-series-influxdb-vector-database/"&gt;Time series models&lt;/a&gt; are used to analyze temporal data and forecast future values. They are particularly useful when data shows sequential patterns or seasonality, such as stock prices, weather patterns, or sales data.&lt;/p&gt;

&lt;p&gt;Time series models are ideal for data that has seasonal variations and time-based dependencies, making them useful for forecasting.&lt;/p&gt;

&lt;p&gt;Some downsides of time series models are that they can struggle when the data isn’t at regular intervals and may assume past trends will continue, which can make them inaccurate at predicting drastic changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/python-ARIMA-tutorial-influxDB/"&gt;ARIMA&lt;/a&gt; and exponential smoothing are examples of time series models. An easy way to start testing these models for predictive analytics is to use a library like Python Statsmodels.&lt;/p&gt;

&lt;h4 id="regression-models"&gt;Regression Models&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/guide-regression-analysis-time-series-data/"&gt;Regression models&lt;/a&gt; predict a continuous outcome variable based on one or more predictor variables. They are widely used in predictive analytics, from predicting house prices to estimating stock returns.&lt;/p&gt;

&lt;p&gt;Regression models are useful for providing results that are easy to interpret and for identifying clear relationships between variables. Some downsides of regression models are that they do require a decent level of statistics knowledge and can struggle with non-linear relationships and datasets with many variables.&lt;/p&gt;

&lt;p&gt;Linear and logistic regression are examples of regression models. You can get started with regression models using the Python scikit-learn library.&lt;/p&gt;
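
&lt;p&gt;For example, here’s a minimal linear regression sketch with scikit-learn; the house sizes, ages, and prices are made up purely for illustration:&lt;/p&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-python"&gt;import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: features are [size in m^2, age in years]
X = np.array([[50, 10], [80, 5], [120, 2], [150, 20]])
y = np.array([150_000, 240_000, 380_000, 390_000])  # prices

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # easy-to-interpret coefficients
print(model.predict([[100, 8]]))      # estimate the price of a new house&lt;/code&gt;&lt;/pre&gt;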

&lt;h4 id="decision-tree-models"&gt;Decision Tree Models&lt;/h4&gt;

&lt;p&gt;Decision tree models make predictions by learning simple decision rules from the data. They can be used for both regression and classification problems.&lt;/p&gt;

&lt;p&gt;Decision tree models produce results that are easier to understand than those from more complex machine learning models. A challenge is that they can easily be over- or underfit and can be sensitive to small changes in the data.&lt;/p&gt;
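
&lt;p&gt;The sketch below, using scikit-learn’s built-in Iris dataset, shows how a shallow tree yields rules you can read directly; the depth limit is one simple guard against overfitting:&lt;/p&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-python"&gt;from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a deliberately shallow tree; limiting depth curbs overfitting
X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Unlike most ML models, the learned decision rules are human-readable
print(export_text(tree, feature_names=["sl", "sw", "pl", "pw"]))&lt;/code&gt;&lt;/pre&gt;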

&lt;h4 id="gradient-boosting-model"&gt;Gradient Boosting Model&lt;/h4&gt;

&lt;p&gt;Gradient boosting involves creating an ensemble of prediction models, typically from decision tree models. This method can be extremely accurate and has been used in recent years to win many machine learning competitions.&lt;/p&gt;

&lt;p&gt;Gradient boosting is good at providing accurate predictions for data with non-linear relationships between variables and datasets with high dimensionality.&lt;/p&gt;

&lt;p&gt;One weakness is that gradient boosting models can overfit when they aren’t tuned properly, and they are more of a black box compared to traditional statistical models. XGBoost and LightGBM are libraries that can be used to create gradient boosting models.&lt;/p&gt;
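
&lt;p&gt;Here’s a minimal sketch using scikit-learn’s gradient boosting implementation on synthetic data; XGBoost and LightGBM expose a similar fit/predict interface, and the hyperparameters shown are placeholders:&lt;/p&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-python"&gt;from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=20, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Trees are fitted sequentially; learning_rate and n_estimators are the
# main knobs for trading accuracy against overfitting
gbr = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                                random_state=0).fit(X_tr, y_tr)
print(gbr.score(X_te, y_te))  # R^2 on held-out data&lt;/code&gt;&lt;/pre&gt;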

&lt;h4 id="random-forest-models"&gt;Random Forest Models&lt;/h4&gt;

&lt;p&gt;Random forests are similar to gradient boosting in that they are ensemble models that use decision trees to make predictions. The main difference is in how the trees are trained: gradient boosting models train their trees sequentially, so that each new tree corrects the errors of the previous ones.&lt;/p&gt;

&lt;p&gt;In comparison, random forest decision trees make predictions independently, and then the final prediction is created by aggregating those predictions. This makes the results easier to interpret because each decision tree’s prediction can be analyzed. You can test out random forest models on your data using a library like scikit-learn.&lt;/p&gt;
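
&lt;p&gt;A minimal random forest sketch with scikit-learn on synthetic data is shown below; note how the final output comes from aggregating the individual trees:&lt;/p&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-python"&gt;from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# 100 trees each make an independent prediction; the forest aggregates them
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.predict(X[:3]))        # aggregated class predictions
print(rf.predict_proba(X[:3]))  # averaged per-tree class probabilities&lt;/code&gt;&lt;/pre&gt;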

&lt;h4 id="clustering-models"&gt;Clustering Models&lt;/h4&gt;

&lt;p&gt;Clustering models, such as k-means clustering, can be used to group data points. While this is generally used for data analysis, these clusters can also serve as input features for predictive models like the ones mentioned above.&lt;/p&gt;

&lt;p&gt;Cluster modeling can help identify hidden patterns or relationships in your data, but it requires a way to measure how similar data points are, and for algorithms like k-means the number of clusters must be chosen ahead of time.&lt;/p&gt;
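
&lt;p&gt;Here’s a minimal k-means sketch with scikit-learn; the synthetic blobs and the choice of three clusters are assumptions for illustration:&lt;/p&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-python"&gt;from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with 3 groups; k (n_clusters) must be chosen up front
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

labels = km.labels_         # cluster assignment for each point
print(km.cluster_centers_)  # cluster labels can feed a downstream model&lt;/code&gt;&lt;/pre&gt;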

&lt;h2 id="future-trends-in-predictive-analytics"&gt;Future trends in predictive analytics&lt;/h2&gt;

&lt;p&gt;The predictive analytics landscape is changing rapidly as technology advances and impacts all industries. Here are a few trends to look out for in the future:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Increased demand for real-time data&lt;/strong&gt;. To get the most accurate results, models need to be updated as frequently as possible so they aren’t out of sync with reality. This means that real-time data and systems that support it will become increasingly important.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Prescriptive analytics&lt;/strong&gt;. The term prescriptive analytics refers to the next step beyond predictive analytics. This involves taking action based on a predicted outcome before it occurs to try to influence the outcome.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Synthetic data&lt;/strong&gt;. Data is the key to making accurate predictions. The problem is that many businesses haven’t collected the data they need. A number of tools have been created to generate “synthetic” data, which can help get a predictive analytics system off the ground using artificial data that mimics the use case.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Further adoption of machine learning and AI&lt;/strong&gt;. While most businesses still rely on traditional methods for prediction, cutting-edge practitioners are increasingly adopting ML/AI because of its accuracy.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Easier-to-use predictive analytics tools&lt;/strong&gt;. Currently, implementing and using predictive analytics requires specialized skills, yet domain knowledge is just as important for making accurate predictions.&lt;/p&gt;

    &lt;p&gt;Future tools will focus on usability and enabling non-technical users to make predictions based on their data. This will make implementation more affordable and drive more business value.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="best-practices"&gt;Best practices&lt;/h2&gt;

&lt;p&gt;Here are some helpful tips for using predictive analytics.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Have a well-defined objective&lt;/strong&gt;. Predictive analytics only generates value when it influences a decision, so the “why” should come before the model. Without a goal, you’ll optimize things that make no difference. Clearly state what you want to predict, where you will apply the prediction, and what action you will take.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Focus more on feature engineering than model complexity&lt;/strong&gt;. Features convert raw data into signals the model can learn from, and this step often matters more to success than the algorithm used. Design domain-aware features such as rolling averages, lagged values, and behavioral features like frequency and recency (see the sketch after this list).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Measure models based on business impact&lt;/strong&gt;. Conventional measures such as accuracy can be misleading, particularly on skewed problems, and a technically accurate model can still be expensive or harmful to act on. Use metrics that reflect the actual trade-offs, such as precision and recall for fraud detection, or average forecast error for demand forecasting.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Start simple and add complexity only when performance demands it&lt;/strong&gt;. Complex models may be appealing; however, they are more difficult to maintain, debug, and explain. This matters in production situations, where stability and interpretability are paramount. Start with baselines and simple models, and add complexity only when it measurably improves performance.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Provide quality, time-accurate data&lt;/strong&gt;. Predictive models learn patterns from past records, and poor-quality or poorly ordered records lead to misleading results. Problems such as missing values, data leakage, or irregular timestamps can inflate model performance during testing while the model fails in production.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
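
&lt;p&gt;To make the feature engineering tip above concrete, here’s a minimal pandas sketch; the sales numbers and column names are hypothetical:&lt;/p&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-python"&gt;import pandas as pd

# Hypothetical daily sales (placeholder data)
df = pd.DataFrame({
    "date": pd.date_range("2026-01-01", periods=8, freq="D"),
    "sales": [5, 7, 6, 9, 12, 11, 14, 13],
}).set_index("date")

# Domain-aware features: a rolling average and a lagged value
df["sales_7d_avg"] = df["sales"].rolling(7, min_periods=1).mean()
df["sales_lag_1"] = df["sales"].shift(1)
print(df)&lt;/code&gt;&lt;/pre&gt;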

&lt;h2 id="common-pitfalls-to-avoid-in-predictive-analytics-projects"&gt;Common pitfalls to avoid in predictive analytics projects&lt;/h2&gt;

&lt;h4 id="overfitting-the-model"&gt;Overfitting the Model&lt;/h4&gt;

&lt;p&gt;Overfitting occurs when a model fits noise rather than general patterns, usually because of too much complexity or too little data. This matters because such models perform well on training data but fail on new data.&lt;/p&gt;

&lt;p&gt;For example, a deep neural network trained on a small sample of customers might work flawlessly at explaining the past but would not help predict what customers will purchase in the future, whereas a simpler model would generalize better.&lt;/p&gt;

&lt;h4 id="data-leakage"&gt;Data Leakage&lt;/h4&gt;

&lt;p&gt;Data leakage occurs when information from the future accidentally influences the model during training. This happens when features contain data that cannot be known at prediction time, producing unrealistically high test performance that fails in practice.&lt;/p&gt;

&lt;p&gt;One example is using an account closed date or an order completion status as an input to a churn or demand prediction model, which makes the model seem very accurate but leaves it unusable in practice.&lt;/p&gt;
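
&lt;p&gt;A simple guard, sketched below with hypothetical column names, is to drop any field that is only populated after the outcome is known before training:&lt;/p&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-python"&gt;import pandas as pd

# Hypothetical churn dataset; "account_closed_date" is only filled in
# after the outcome is known, so it leaks the label into the features
df = pd.DataFrame({
    "signup_month": [1, 2, 3, 4, 5, 6],
    "logins_last_30d": [10, 3, 8, 0, 6, 1],
    "account_closed_date": [None, "2026-03-01", None, "2026-05-02", None, None],
    "churned": [0, 1, 0, 1, 0, 0],
})

# Keep only features that would be knowable at prediction time
X = df.drop(columns=["account_closed_date", "churned"])
y = df["churned"]&lt;/code&gt;&lt;/pre&gt;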

&lt;h4 id="using-the-wrong-evaluation-metrics"&gt;Using the Wrong Evaluation Metrics&lt;/h4&gt;

&lt;p&gt;Accuracy alone can be a poor way to measure model performance, especially for use cases where positives are rare and costly when missed. Consider fraud detection: a model that simply classifies all transactions as legitimate would be very accurate (since over 99% of transactions are legitimate), yet it still misses every case of fraud. For use cases like this, teams need metrics that track actual business impact when evaluating their models.&lt;/p&gt;
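
&lt;p&gt;The numbers are easy to verify with scikit-learn; in this synthetic sketch, accuracy looks excellent while recall exposes the failure:&lt;/p&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-python"&gt;import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# 1,000 transactions, 1% fraud; a "model" that never flags anything
y_true = np.zeros(1000, dtype=int)
y_true[:10] = 1                     # 10 fraudulent transactions
y_pred = np.zeros(1000, dtype=int)  # classifier predicts all-legitimate

print(accuracy_score(y_true, y_pred))  # 0.99 -- looks great
print(recall_score(y_true, y_pred))    # 0.0 -- misses every fraud case&lt;/code&gt;&lt;/pre&gt;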

&lt;h4 id="ignoring-changes-in-data-patterns"&gt;Ignoring Changes in Data Patterns&lt;/h4&gt;

&lt;p&gt;Predictive models assume that future data will behave like past data; in reality, systems keep evolving. This is particularly problematic in areas such as retail or finance, where seasonality, promotions, and user behavior shift frequently, so models need to be monitored and retrained as patterns change.&lt;/p&gt;

&lt;h2 id="faqs"&gt;FAQs&lt;/h2&gt;

&lt;div id="accordion_second"&gt;
    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-1"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;Predictive Analytics vs Predictive Maintenance&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-1" class="message-body is-collapsible is-active" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Predictive analytics is a broad field that uses statistical algorithms, machine learning, and data to anticipate future events across many domains. It identifies patterns in historical and current data to predict future trends, behaviors, and activities. Predictive analytics is used across industries such as finance, healthcare, and marketing to inform decision-making and develop proactive strategies. Predictive maintenance, on the other hand, is a specific application of predictive analytics in maintenance and asset management. It uses predictive analytics techniques to anticipate when equipment might fail or require maintenance. By analyzing data from sensors, logs, and historical maintenance records, predictive maintenance models can forecast equipment failures before they happen. The goal is to perform maintenance in time to prevent failures, improving efficiency and reducing downtime. In short, predictive maintenance is a subset of the broader predictive analytics ecosystem.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-2"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;Traditional Statistical Models vs Machine Learning and AI Models for Predictive Analytics&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-2" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                More traditional techniques, such as regression models and decision trees, have been used for decades in predictive analytics. This is due to their simplicity, lower computational requirements, and ability to show the relationship between specific variables and the impact of changing them on business outcomes. In recent years, AI/ML techniques like neural networks and gradient boosting have grown in popularity for predictive analytics use cases. The primary reason is that ML techniques can perform better with higher-dimensional data, where relationships among numerous variables are harder to define. These AI/ML models can learn from data without explicit tuning and can uncover relationships between variables that aren't obvious, resulting in higher accuracy. Some downsides of AI/ML for predictive analytics are that they tend to require more hardware for computation and are harder to interpret in terms of how they produce results, in some ways acting as black boxes.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;
&lt;/div&gt;
</description>
      <pubDate>Tue, 24 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/predictive-analytics-guide-2026/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/predictive-analytics-guide-2026/</guid>
      <category>Getting Started</category>
      <category>Developer</category>
      <author>Company (InfluxData)</author>
    </item>
    <item>
      <title>Node-RED Dashboard Tutorial</title>
      <description>&lt;p&gt;If you’re already familiar with &lt;a href="https://www.influxdata.com/blog/iot-easy-node-red-influxdb/"&gt;Node-RED&lt;/a&gt;, you know how useful it is for automation. This post is a guide to getting started with the Node-RED dashboard 2.0. We’ll cover how to install the Node-RED dashboard 2.0 nodes and provide examples of how to create a graphical user interface (GUI) for your automation.&lt;/p&gt;

&lt;h2 id="what-is-node-red"&gt;What is Node-RED?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://nodered.org/"&gt;Node-RED&lt;/a&gt; is a programming tool built on Node.js that lets you create automated workflows with minimal code. It wires different nodes together, with each node performing a specific function that links them to create a flow that carries out an automation task (e.g., switching off the light in a room or closing a door).&lt;/p&gt;

&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;We assume you’ve already installed and set up Node-RED. If that isn’t the case, here are some guides that can help you get there in a few different ways:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://nodered.org/docs/getting-started/raspberrypi"&gt;Run Node-RED on Raspberry PI&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://randomnerdtutorials.com/getting-started-node-red-raspberry-pi/"&gt;Getting Started with Node-RED on Raspberry Pi&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://nodered.org/docs/getting-started/local"&gt;Running Node-RED Locally&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="node-red-dashboard-installation"&gt;Node-RED dashboard installation&lt;/h2&gt;

&lt;p&gt;The first step in the process is installing the dashboard module in Node-RED.&lt;/p&gt;

&lt;p&gt;In the browser’s Node-RED window, click on the &lt;strong&gt;Menu&lt;/strong&gt; icon at the top right corner of the page, find &lt;strong&gt;Manage Palette&lt;/strong&gt; on the menu items, click &lt;strong&gt;Install&lt;/strong&gt;, and search for &lt;code class="language-markup"&gt;@flowfuse/node-red-dashboard&lt;/code&gt;. Install it and make sure the module reads &lt;strong&gt;@flowfuse/node-red-dashboard&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1Hk7SrlPAjmL8wC7zA40Pa/665ae128d3617f53fd3a1662f8f3e5f5/Node_red_1.png" alt="Node Red 1" /&gt;&lt;/p&gt;

&lt;p&gt;After successful installation, all the Dashboard 2.0 nodes will appear in the palette section. Each dashboard node provides a widget that you can display in a user interface (UI) (e.g., a gauge, chart, or button).&lt;/p&gt;

&lt;h2 id="creating-a-user-interface"&gt;Creating a user interface&lt;/h2&gt;

&lt;p&gt;In this section, we’ll walk through how to create a dashboard UI using the Dashboard 2.0 nodes on Node-RED. But first, let’s understand the dashboard hierarchy.&lt;/p&gt;

&lt;h4 id="dashboard-hierarchy"&gt;Dashboard Hierarchy&lt;/h4&gt;

&lt;p&gt;Dashboard 2.0 uses this hierarchical structure:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Base&lt;/strong&gt;: Defines the base URL for your dashboard. The default is /dashboard.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Page&lt;/strong&gt;: Individual pages that users can navigate to.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Theme&lt;/strong&gt;: Offers different options for styling the dashboard.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Group&lt;/strong&gt;: Collections of widgets within a page.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Widget&lt;/strong&gt;: Individual UI elements, like buttons, text, charts, and others.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We use pages and groups to arrange the UI in the Node-RED Dashboard 2.0. Pages are the main navigation sections that hold different groups, and groups divide a page into several sections. Within each group, you organize your widgets (buttons, text, charts, etc.).&lt;/p&gt;

&lt;h2 id="creating-your-first-widget"&gt;Creating your first widget&lt;/h2&gt;

&lt;p&gt;Now let’s create your first widget.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Grab a button node&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Look at your palette on the left side and find the button node (it’s in the dashboard 2 section). Click and drag it to your workspace (the big grid area in the middle of your screen).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Open the configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Double-click on the button you just placed. A configuration window will pop up with a bunch of options.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2PvqWP8G2jK0ohaYL6fKh6/85ceb83ec7758be4f3d0105021dcc925/NR_2.png" alt="NR 2" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Create a group&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;See &lt;strong&gt;Group&lt;/strong&gt; near the top? Click the pencil icon next to the dropdown. This edits the default group Node-RED created when you added the widget. Give it a name like “My Controls.” If you want to create a completely new group instead, click the plus sign next to the pencil icon.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1u10hF9modcL7DKo4njkjb/83b114b00cd2d836625ffcf7b1393866/NR_3.png" alt="NR 3" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Create a page&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, in that same group window, you’ll see a field that says &lt;strong&gt;Page&lt;/strong&gt;. Click the pencil icon next to it and give the page a name, maybe “Dashboard,” for your first one. Then click &lt;strong&gt;Add&lt;/strong&gt; (or &lt;strong&gt;Update&lt;/strong&gt; if you’re editing the default, as in my case).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6nGJkbHPoosBuPvGvpqb92/ae280d129c9d3304ca70439dbfcabfae/NR_4.png" alt="NR 4" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Finish up&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You’ll be back at the group window. Click &lt;strong&gt;Add&lt;/strong&gt; (or &lt;strong&gt;Update&lt;/strong&gt;) there as well, and you should be returned to the button configuration window. Click &lt;strong&gt;Done&lt;/strong&gt;, and everything should be set.&lt;/p&gt;

&lt;p&gt;Now we have successfully created our first page and group. The next step will be displaying something on the user interface.&lt;/p&gt;

&lt;h2 id="display-a-widget-on-the-ui"&gt;Display a widget on the UI&lt;/h2&gt;

&lt;p&gt;In this example, we want to create a button that displays text when clicked on.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Add a text node from your palette and place it near your button.&lt;/li&gt;
  &lt;li&gt;Wire them together. You can do this by dragging from the button’s right port to the text node’s left port.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3FwpfXSQRz6GJwAgS8iEpU/ede789b4db8464019b0d5151cef6b231/NR_5.png" alt="NR 5" /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Double-click the text node and set the &lt;strong&gt;Group&lt;/strong&gt; to match your button’s group (in my case, “My Controls”). Also, give a name and a label, then click &lt;strong&gt;Done&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6skwulFLfLIcIBEVqdbz5G/452c539b825ad4b88ea376b34d7daa96/NR_6.png" alt="NR 6" /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Next, double-click the button, scroll to &lt;strong&gt;Payload&lt;/strong&gt;, type “Welcome to Dashboard 2.0!”, enter a name and a label like “Click Me,” and click &lt;strong&gt;Done&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;Now click the red &lt;strong&gt;Deploy&lt;/strong&gt; button in the top right corner.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3Xz3OftjpYJ7yBIH0spfa0/384d8a9acea4f35f56216caa79ec5c8c/NR_7.png" alt="NR 7" /&gt;&lt;/p&gt;

&lt;p&gt;To view the dashboard UI, follow this URL: &lt;strong&gt;http://localhost:1880/dashboard&lt;/strong&gt; or &lt;strong&gt;http://Your_RPi_IP_address:1880/dashboard&lt;/strong&gt;, where Your_RPi_IP_address is the address of the Raspberry Pi machine you’re using, 1880 is the port where Node-RED is exposed, and /dashboard displays the dashboard user interface.&lt;/p&gt;

&lt;p&gt;If everything works correctly, you’ll see the same window shown below.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/38YMP6KccaYsNuKWmHQKrL/488b0a9d1366c1aa8393d4d492e7fec4/NR_8.png" alt="NR 8" /&gt;&lt;/p&gt;

&lt;p&gt;When you click the button, the “Welcome to Dashboard 2.0” text will appear as shown in the image below.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6ajhqQpArpTyhOstyjuR7U/484fbd9e7f66bef4a1b1a59ceb5c17db/NR_9.png" alt="NR 9" /&gt;&lt;/p&gt;

&lt;p&gt;You might see a light theme background. We’ll get to how to change themes later in the post.&lt;/p&gt;

&lt;h2 id="more-examples-of-widgets"&gt;More examples of widgets&lt;/h2&gt;

&lt;p&gt;Let’s create a simple dashboard example with two tabs and UI elements (widgets) in each.&lt;/p&gt;

&lt;h4 id="creating-additional-pages"&gt;Creating Additional Pages&lt;/h4&gt;

&lt;p&gt;Open the Dashboard 2.0 sidebar (right side of your editor) and click the &lt;strong&gt;+ Page&lt;/strong&gt; button at the top.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6ySE2mN8LlxueXAWBj63TN/1ba40be315933251dda9a2130b294369/NR_10.png" alt="NR 10" /&gt;&lt;/p&gt;

&lt;p&gt;This will create a new page and pop up a window with options to edit the page’s properties (such as name, path, and others). Remember, you can also create pages when configuring a widget (as we did earlier).&lt;/p&gt;

&lt;h2 id="building-an-interactive-data-visualization"&gt;Building an interactive data visualization&lt;/h2&gt;

&lt;p&gt;Now, we’ll create a slider that controls both a gauge and a chart in real-time. This is the setup:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Drag a slider, gauge, and chart node into your workspace.
&lt;strong&gt;Step 2:&lt;/strong&gt; Double-click the slider, set min to 0, max to 100, and assign to a new group on your second page, which you just created (mine is “Monitoring Dashboard”).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/TnzXMlz6PJKg9uKWbTzQB/7d0b08a5a0205421c8ecfee3d4e3df1e/NR_11.png" alt="NR 11" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Next, double-click the gauge. Assign it to the same group, set the range to 0-100, and add color segments (green 0-30, yellow 30-70, red 70-100).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/70sDgKnPHNwFm4rXnwcqJW/30dac2bdc5faafec5b54f4a916459cb8/NR_12.png" alt="NR 12" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Next, configure the chart: assign it to the same group and set Type to “Line.”&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/OGKPTHO8ZP9xCvJmdZmRR/5197b48b02bfebc85473f51ef7b100aa/NR_13.png" alt="NR 13" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; After this, you need to wire the gauge and chart to the slider, as shown in the image below.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5cuUQySAXoXAAsGgbusG9n/dbad57c8d2fd66739419306b0ff0d7fa/Screenshot_2026-02-18_at_1.06.21â__AM.png" alt="NR 14" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Deploy and go to your dashboard UI (&lt;strong&gt;http://localhost:1880/dashboard&lt;/strong&gt; or &lt;strong&gt;http://Your_RPi_IP_address:1880/dashboard&lt;/strong&gt;).
&lt;strong&gt;Step 7:&lt;/strong&gt; In your dashboard UI, navigate to the next page. Click the hamburger icon in the top left corner, and you will see a list of your pages. Select your second page (mine is “Monitoring Dashboard”).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4FQZQuKkk7qWwU83iVzMDG/88f9a3e773ab36eeeb727d32a0bfc9b1/NR_14.png" alt="NR 14/2" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; On your second page, notice the initial state of your dashboard.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2MaKTPvxzcfryDshVRmMR9/1a856b4321821f56825f83690c5d9b9c/NR_15.png" alt="NR 15" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 9:&lt;/strong&gt; Move the slider on your dashboard and watch everything update instantly (the gauge changes color and the chart plots the values).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4xzX407VuoK6ahonnh0dS7/4b21bbca2f5b5b5ba22d0a1abfc62c49/NR_16.png" alt="NR 16" /&gt;&lt;/p&gt;

&lt;h2 id="connecting-to-real-weather-data"&gt;Connecting to real weather data&lt;/h2&gt;

&lt;p&gt;Now, let’s enhance our example by connecting to a real data source. In this example, we’ll fetch real weather data and display it on our dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Go to &lt;strong&gt;Manage Palette&lt;/strong&gt; in your &lt;strong&gt;Menu&lt;/strong&gt;, then search for and install &lt;code class="language-markup"&gt;node-red-node-openweathermap&lt;/code&gt; (like we did with @flowfuse/node-red-dashboard in the previous example).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Get an API Key from http://openweathermap.org (New keys take up to two hours to activate, so be patient if you’re just signing up).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Drag &lt;strong&gt;openweathermap&lt;/strong&gt; node to your workspace, double-click it, paste your API key and enter your city and country (e.g., “New York City, US”), and then click &lt;strong&gt;Done&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/13EmvDO0POWTT0cHNS3aXF/e1b1122067660cc2df5e851f910d930d/NR_17.png" alt="NR 17" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Next, drag an &lt;strong&gt;inject&lt;/strong&gt; node to your workspace, double-click it, and set &lt;strong&gt;Repeat&lt;/strong&gt; to “interval” (e.g., every 10 minutes). Also, check “Inject once after” so it runs on startup. Then wire the &lt;strong&gt;inject&lt;/strong&gt; node to your &lt;strong&gt;openweathermap&lt;/strong&gt; node.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/27d2Ljk4kAuUMRCyGzN7nv/9e307ac565fc1293fc7c9f22ca29b232/NR_18.png" alt="NR 18" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Drag a &lt;strong&gt;debug&lt;/strong&gt; node, wire it to the &lt;strong&gt;openweathermap&lt;/strong&gt; node, and &lt;strong&gt;Deploy&lt;/strong&gt;. Open the &lt;strong&gt;Debug&lt;/strong&gt; panel (in the right sidebar, bug icon). You’ll see the weather data structure showing what’s available.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1lSaUl4BslgxhxEEAlNeWB/98a339bb4cfdc518259e2df11d635362/NR_19.png" alt="NR 19" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Next, add three function nodes to extract data:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Double-click the first function node, name it “Get Temperature,” and add the code below:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-javascript"&gt;msg.payload = msg.payload.tempc; 
return msg;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4zn26Af28J5AqGKEFSfgxY/e97e11f202952c7d2e02b43e6fad0472/NR_20.png" alt="NR 20" /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Name the second “Get Humidity” and add:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-javascript"&gt;msg.payload = msg.payload.humidity; 
return msg;&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
  &lt;li&gt;Name the last function “Get Conditions” and add:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-javascript"&gt;msg.payload = msg.payload.description;
return msg;&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
  &lt;li&gt;Wire all three functions from the &lt;strong&gt;openweathermap&lt;/strong&gt; node.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Next, add display widgets:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Add two &lt;strong&gt;gauge&lt;/strong&gt; nodes for temperature (set unit as “°C”, range -10 to 40) and humidity (% as unit, range 0-100).&lt;/li&gt;
  &lt;li&gt;Add one &lt;strong&gt;text&lt;/strong&gt; node for the weather conditions.&lt;/li&gt;
  &lt;li&gt;Assign all to the same group (preferably a new group and page) and wire each function to its corresponding display widget.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6l9J6Yp69hbfXM4JjXlOEq/ece07849ccc75a8e5aef240af5835735/NR_21.png" alt="NR 21" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; Finally, deploy and visit your dashboard UI.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3Hf47PDNUor8qqH9cSHRnE/fa717fc26ff025fd1c50a5d392aaa4b7/NR_22.png" alt="NR 22" /&gt;&lt;/p&gt;

&lt;p&gt;Now you have a live weather dashboard updating every 10 minutes!&lt;/p&gt;

&lt;h2 id="dashboard-theme-and-styling"&gt;Dashboard theme and styling&lt;/h2&gt;

&lt;p&gt;By default, the Node-RED dashboard comes with a light theme, but you can also customize it. To edit the theme on your UI, open the Dashboard 2.0 sidebar, click on the Theme section, and you can:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Customize themes and colors&lt;/li&gt;
  &lt;li&gt;Set custom fonts&lt;/li&gt;
  &lt;li&gt;Adjust spacing and sizing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6Z3yPbMMtfPMO3KaaAlMXO/a7e206cc780611432cb984128abf8280/NR_23.png" alt="NR 23" /&gt;&lt;/p&gt;

&lt;p&gt;After you’re done, don’t forget to deploy so you can view your changes.&lt;/p&gt;

&lt;h2 id="common-issues-and-fixes"&gt;Common issues and fixes&lt;/h2&gt;

&lt;p&gt;While following this tutorial, here are some common hurdles you might encounter and how to fix them:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;If your dashboard fails to open, check the URL. Try http://localhost:1880/dashboard; otherwise, use your Pi’s IP address instead.&lt;/li&gt;
  &lt;li&gt;If your widgets are not appearing on your dashboard, ensure that they’re connected to a group and a page before deploying.&lt;/li&gt;
  &lt;li&gt;If OpenWeatherMap returns “Invalid API key,” just wait a bit (activation can take up to two hours).&lt;/li&gt;
  &lt;li&gt;Clicking buttons but getting no response? Ensure your nodes are properly wired together.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id="wrapping-up-node-red-dashboard"&gt;Wrapping up Node-RED dashboard&lt;/h2&gt;

&lt;p&gt;The aim of this guide was to provide a basic understanding of how the Node-RED dashboard 2.0 works on a Raspberry Pi. With all the examples we covered in this post, you should have an idea of what to build next with the Node-RED dashboard on your journey to an automated life.&lt;/p&gt;

&lt;h2 id="node-red-dashboard-faqs"&gt;Node-RED Dashboard FAQs&lt;/h2&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;div id="accordion_second"&gt;
    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-1"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What is the difference between Node-RED Dashboard 1.0 and Dashboard 2.0?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-1" class="message-body is-collapsible is-active" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Dashboard 2.0, published by FlowFuse under the package &lt;code&gt;@flowfuse/node-red-dashboard&lt;/code&gt;, is a ground-up rewrite of the original Node-RED dashboard. It introduces a new hierarchy (Base, Page, Theme, Group, Widget), improved theming and styling controls, and a more modern component set. If you're starting a new project, Dashboard 2.0 is the recommended choice, as it is actively maintained and better suited for complex, multi-page UIs.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-2"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;Can I run the Node-RED dashboard without a Raspberry Pi?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-2" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Yes. Node-RED runs on any system that supports Node.js, including Windows, macOS, and Linux. The dashboard is accessed through a browser at &lt;code&gt;http://localhost:1880/dashboard&lt;/code&gt; regardless of which machine Node-RED is installed on. Raspberry Pi is a popular choice for home automation projects, but it is not required to use the Node-RED dashboard.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-3"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;Why are my Node-RED dashboard widgets not showing up?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-3" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                The most common reason widgets don't appear is that they haven't been assigned to both a Group and a Page before deploying. Every widget in Dashboard 2.0 must be linked to a group, which must itself be linked to a page. Double-click the widget node to verify those assignments, then click the red Deploy button and refresh your dashboard URL.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-4"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;How do I display live data from an external API in the Node-RED dashboard?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-4" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Use an inject node set to a repeating interval to trigger the flow on a schedule, then connect it to a data-source node (such as &lt;code&gt;node-red-node-openweathermap&lt;/code&gt; for weather data). Wire one or more function nodes after the data source to extract the specific fields you need from the payload, then connect each function node to a corresponding display widget — such as a gauge or text node — on your dashboard. Deploy the flow and the dashboard will update automatically at each interval.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-5"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;How do I add multiple pages to my Node-RED dashboard?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-5" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Open the Dashboard 2.0 sidebar on the right side of the Node-RED editor and click the &lt;strong&gt;+ Page&lt;/strong&gt; button to create a new page. You can also create a page directly when configuring any widget node — click the pencil icon next to the Page field to edit or create one inline. Once pages are created, users can navigate between them via the hamburger menu in the top-left corner of the dashboard UI.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-6"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;How do I change the theme or colors of my Node-RED dashboard?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-6" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                In the Dashboard 2.0 sidebar, navigate to the Theme section. From there you can customize colors, set custom fonts, and adjust spacing and sizing. The default is a light theme, but dark and custom themes are supported. After making changes, click Deploy to apply them to your live dashboard.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-7"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What types of widgets are available in Node-RED Dashboard 2.0?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-7" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Dashboard 2.0 includes a range of UI widgets covering inputs (buttons, sliders), displays (gauges, charts, text), and layout controls (groups, pages). Gauges support color-coded segments, charts support line and other types, and sliders can drive real-time updates to other widgets. All widgets appear in the palette under the "dashboard 2" section after installing &lt;code&gt;@flowfuse/node-red-dashboard&lt;/code&gt;.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

&lt;/div&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
</description>
      <pubDate>Wed, 18 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/node-red-dashboard-tutorial/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/node-red-dashboard-tutorial/</guid>
      <category>Developer</category>
      <author>Community (InfluxData)</author>
    </item>
    <item>
      <title>From Legacy Data Historians to a Modern, Open Industrial Data Stack</title>
      <description>&lt;p&gt;We recently sat down with founder and principal consultant at recultiv8, &lt;a href="https://za.linkedin.com/in/coenraadpretorius"&gt;Coenraad Pretorius&lt;/a&gt;, who drew on his years of data engineering experience in the manufacturing and energy sectors to share key industrial IoT insights. In this article, I list the top takeaways; you can also watch the full webinar recording &lt;a href="https://www.influxdata.com/resources/modernizing-industrial-data-stacks-energy-optimization-with-recultiv8-influxdb"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="the-challenge-with-traditional-data-historians"&gt;The challenge with traditional data historians&lt;/h2&gt;

&lt;p&gt;Industrial systems generate large volumes of time series data from machines, sensors, and control systems. Historically, this data has been managed using proprietary data historian platforms.&lt;/p&gt;

&lt;p&gt;These systems often lead to the following challenges:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Complexity&lt;/strong&gt;: Traditional stacks involve many tightly coupled components: SCADA systems, OPC servers, historians, data extraction tools, and analytics layers. Each layer requires specialized skills, making systems difficult to debug, extend, or modernize.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;High cost&lt;/strong&gt;: Per-tag licensing, annual maintenance fees, and specialized training significantly increase the total cost of ownership, particularly as systems scale.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Slow time to insight&lt;/strong&gt;: Extracting and analyzing data often takes days or weeks, delaying decisions and limiting optimization opportunities.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The analytics gap&lt;/strong&gt;: Traditional historians prioritize &lt;strong&gt;data storage&lt;/strong&gt;, not &lt;strong&gt;data analysis&lt;/strong&gt;. Common pain points include proprietary query languages, reliance on Excel exports, overloaded BI integrations, and additional licensing for advanced features. As a result, time to action is measured in days or weeks rather than hours, which is an unacceptable delay for modern industrial operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="data-historian-technical-architecture"&gt;Data Historian Technical Architecture&lt;/h4&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6Atyss4Y4ewXA83dyvy5tP/210bc838fdb9f5c416b8c1af0603f021/Screenshot_2026-02-11_at_9.41.38â__PM.png" alt="Data Historian Traditional Architecture" /&gt;&lt;/p&gt;

&lt;h2 id="a-modern-open-architecture-edge--cloud"&gt;A modern, open architecture: edge + cloud&lt;/h2&gt;

&lt;p&gt;To address these limitations, Coenraad presented a modern architecture built around InfluxDB 3, open source tooling, and cloud analytics. The core idea is a &lt;strong&gt;clear separation of responsibilities&lt;/strong&gt; that leads to improved performance, security, cost efficiency, and scalability while keeping systems simpler and easier to operate.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Edge systems&lt;/strong&gt; handle real-time ingestion, short-term storage, and operational dashboards close to the data source.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Cloud systems&lt;/strong&gt; handle long-term storage, historical analysis, and advanced analytics without impacting operational performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="modern-iiot-technical-architecture"&gt;Modern IIoT Technical Architecture&lt;/h4&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3LQnfjtGdArcpRRAPgZnel/a49068e5d20e85b5f92e3f927231f93b/Screenshot_2026-02-11_at_9.56.11â__PM.png" alt="Modern Stack Overview" /&gt;&lt;/p&gt;

&lt;h2 id="example-from-coenraads-case-study"&gt;Example from Coenraad’s case study&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Typical deployment setup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Four OPC UA servers&lt;/li&gt;
  &lt;li&gt;10k+ tags&lt;/li&gt;
  &lt;li&gt;Windows-based servers&lt;/li&gt;
  &lt;li&gt;Telegraf running as Windows service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Configuration approach&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Split config files (agent, inputs, outputs)&lt;/li&gt;
  &lt;li&gt;Custom Starlark processor for schema management&lt;/li&gt;
  &lt;li&gt;Environment variables for cloud credentials&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;: Rapid implementation of the modern data stack using an open source solution resulted in a once-off saving of $70k, plus recurring annual savings.&lt;/p&gt;

&lt;h2 id="why-this-approach-works"&gt;Why this approach works&lt;/h2&gt;

&lt;p&gt;This modern stack delivers several practical benefits:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Simpler systems&lt;/strong&gt; built with tools like SQL and Python that most developers already know.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Faster dashboards&lt;/strong&gt; that move from multi-second load times to near-instant response, as detailed in this &lt;a href="https://h3xagn.com/blazingly-fast-dashboards-with-influxdb"&gt;blog post&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Lower costs&lt;/strong&gt; from replacing proprietary licensing with open source and consumption-based services.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Flexible data pipelines&lt;/strong&gt; use Telegraf to ingest data from industrial protocols such as OPC UA, MQTT, and Modbus into InfluxDB Core with optional streaming to the cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="recap"&gt;Recap&lt;/h2&gt;

&lt;p&gt;The difference is fairly cut and dried: traditional data historians often limit agility and slow down insights, while modern industrial data stacks focus on speed, openness, and maintainability by separating edge operations from cloud analytics and using familiar, developer-friendly tools. For industrial and IIoT teams, modernizing the data pipeline is now foundational. To learn more, read the Teréga &lt;a href="https://www.influxdata.com/blog/terega-replaced-legacy-data-historian-with-influxdb-aws-io-base/"&gt;case study&lt;/a&gt; and connect with our community in the InfluxDB forums.&lt;/p&gt;
</description>
      <pubDate>Thu, 12 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/modern-industrial-stack-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/modern-industrial-stack-influxdb/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>How Network Operations Teams Use InfluxDB to Solve Network Monitoring Gaps</title>
      <description>&lt;p&gt;Organizations are starting to question whether the value they get from traditional Network Monitoring Systems (NMS) justifies the budget they’ve locked into them.&lt;/p&gt;

&lt;p&gt;On the technical side, network operations teams are dealing with more complexity than ever. Environments are dynamic, traffic patterns shift quickly, and the cost of outages keeps rising. Meanwhile, many traditional platforms haven’t kept pace. Their data pipelines and discovery workflows lag behind how modern networks actually behave. At the same time, pricing and licensing changes are making NMS and Network Performance Management (NPM) solutions even more costly. SolarWinds is a clear example: after its &lt;a href="https://www.securityweek.com/solarwinds-taken-private-in-4-4-billion-turn-river-capital-acquisition/"&gt;acquisition by Turn/River&lt;/a&gt; and shift to a subscription-based licensing model, &lt;a href="https://www.reddit.com/r/Solarwinds/"&gt;users have reported a price increase of over 100%&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This is exactly where one of our largest enterprise customers found themselves, anonymous here due to regulatory requirements. They found that their NMS had blind spots that no amount of tuning could fix. Rather than continue pouring budget into SolarWinds to chase diminishing returns, they reallocated spending to implement a network monitoring solution built around InfluxDB. It closed the gaps immediately, restored the visibility they needed for day-to-day reliability, and gave the organization room to decide what comes next.&lt;/p&gt;

&lt;p&gt;Below are a few of the main network monitoring challenges this team faced, why their NMS couldn’t address them, and how they used their InfluxDB-centric solution to close their network monitoring gaps.&lt;/p&gt;

&lt;h2 id="network-spike-detection"&gt;Network spike detection&lt;/h2&gt;

&lt;p&gt;The operations team kept seeing Virtual Fabric Drops (VFDs) and intermittent link flaps on a 400 Mbps data center interconnect, but nothing in their NMS showed utilization anywhere near the levels that should trigger them. In fact, the NMS suggested the link never exceeded ~365 Mbps.&lt;/p&gt;

&lt;p&gt;The underlying issue was short, high-intensity traffic spikes that the NMS could not capture. With a five-minute polling interval, each window was averaged into a single utilization value. Spikes that lasted only a few seconds never aligned with the polling timestamps and were smoothed into what looked like normal traffic.&lt;/p&gt;
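
&lt;p&gt;The smoothing effect is easy to reproduce; in this pandas sketch with made-up numbers, a ten-second burst at 400 Mbps all but vanishes in a five-minute average:&lt;/p&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-python"&gt;import pandas as pd

# One 5-minute window of 1-second samples: ~200 Mbps baseline with a
# 10-second burst at 400 Mbps (hypothetical numbers)
idx = pd.date_range("2026-01-01 00:00:00", periods=300, freq="s")
mbps = pd.Series(200.0, index=idx)
mbps.iloc[100:110] = 400.0

print(mbps.max())                    # 400.0 -- obvious at 1s resolution
print(mbps.resample("5min").mean())  # ~206.7 -- the burst is smoothed away&lt;/code&gt;&lt;/pre&gt;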

&lt;p&gt;The team identified the real pattern only after collecting 1-second metrics from their Arista switches with Telegraf and storing them in InfluxDB. At that resolution, the spikes were obvious and lined up exactly with the VFD events. Their Cisco switches, limited to 30-second polling under SolarWinds, simply couldn’t provide the granularity needed to reveal this behavior.&lt;/p&gt;

&lt;h2 id="cpu-monitoring-granularity"&gt;CPU monitoring granularity&lt;/h2&gt;

&lt;p&gt;The operations team was seeing intermittent performance issues on a Palo Alto firewall, but nothing in their monitoring system indicated CPU saturation. Throughput and latency symptoms suggested load problems, yet the reported CPU utilization stayed around 50%, well below any alarm thresholds.&lt;/p&gt;

&lt;p&gt;The underlying issue was the way the NMS collected and reported CPU metrics. The firewall has separate data-plane and control-plane CPUs, and the platform’s default behavior was to average them. In the incident in question, the data-plane CPU was at 99% while the control-plane CPU sat at 2%, and the averaged value masked the data-plane saturation entirely. As a result, the primary indicator of forwarding stress never surfaced.&lt;/p&gt;

&lt;p&gt;When the team pulled per-CPU metrics into InfluxDB using Telegraf, the data-plane spikes were immediately visible and aligned with the observed performance degradation. From there, they set independent alerts for each CPU so data-plane saturation would be detected directly. While the NMS could have been customized to approximate this view, InfluxDB provided the necessary granularity by default, making the issue straightforward to diagnose and monitor going forward.&lt;/p&gt;

&lt;h2 id="dynamic-vip-monitoring"&gt;Dynamic VIP monitoring&lt;/h2&gt;

&lt;p&gt;The team noticed that Virtual IP (VIP) metrics were incomplete or out of date, and some newly created services weren’t showing up in their monitoring at all. The gaps appeared random, but they pointed to a visibility issue rather than an application problem.&lt;/p&gt;

&lt;p&gt;The root cause was straightforward. Their NMS couldn’t automatically discover or track new VIPs as they were created, moved, or retired. Each VIP had to be added manually, and anything not configured manually wasn’t monitored. In a dynamic environment, that meant missing data and inconsistent coverage.&lt;/p&gt;

&lt;p&gt;Once the team switched to an InfluxDB-centric approach, the issue went away. Telegraf pulled VIP information directly from their AVI load balancer, and each VIP, along with its metrics, was written to InfluxDB as soon as it became available. Monitoring kept pace with the environment without any manual steps. This was especially useful in deployments where VIPs changed frequently, reducing overhead and ensuring complete, up-to-date visibility across the entire set of VIPs.&lt;/p&gt;

&lt;h2 id="how-influxdb-addresses-nms-observability-gaps"&gt;How InfluxDB+ addresses NMS observability gaps&lt;/h2&gt;

&lt;p&gt;Most NMS platforms miss the same categories of data: short-lived spikes, per-component metrics, dynamic objects like VIPs, and anything outside their predefined device models. An InfluxDB-centric stack fills those gaps without replacing your existing tools.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/eewc7XkMkow9hgd0Luv0N/a0a75c11e4f3d96eb983ce1a81d4607b/Network_Infrastructure_-_Light.png" alt="Network Infrastructure" /&gt;&lt;/p&gt;

&lt;h4 id="key-components-of-the-stack"&gt;Key Components of the Stack&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Telegraf&lt;/strong&gt; — Collects high-resolution metrics from devices across the network.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;InfluxDB 3 Enterprise&lt;/strong&gt; — Ingests telemetry at scale and provides fast queries for both recent and historical data.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Grafana&lt;/strong&gt; — Visualizes the data and supports operational dashboards and alerting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://docs.influxdata.com/telegraf/v1/"&gt;Telegraf&lt;/a&gt; acts as the universal collector. It pulls metrics, every second or faster, from routers, switches, firewalls, load balancers, storage systems, and virtual infrastructure using SNMP, gNMI, and vendor APIs. It captures interface counters, per-CPU usage, packet drops, latency, queue depth, and other operational signals. Telegraf streams all of this telemetry—thousands of series from across the environment—directly into InfluxDB at full fidelity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/?utm_source=website&amp;amp;utm_medium=solve_mns_gaps_influxdb&amp;amp;utm_content=blog"&gt;InfluxDB 3&lt;/a&gt; is the core of the stack. It ingests high-resolution telemetry at scale and provides fast access to the recent data needed for dashboards, alerts, and operational workflows. At the same time, it retains full-fidelity history at low cost, giving teams a single place to analyze both real-time conditions and long-horizon trends. The processing engine supports real-time evaluations and, when paired with tools like Grafana, it delivers continuous, high-resolution visibility across the entire environment.&lt;/p&gt;

&lt;h2 id="future-proofing-your-network-monitoring-stack"&gt;Future proofing your network monitoring stack&lt;/h2&gt;

&lt;p&gt;If there’s one lesson from this customer’s experience, it’s that network monitoring is shifting fast. Networks are more distributed, more dynamic, and far more dependent on real-time signals than traditional NMS platforms were built to handle. Polling cycles, rigid device models, and closed data pipelines simply can’t deliver the visibility modern operations teams need.&lt;/p&gt;

&lt;p&gt;InfluxDB 3 + Telegraf gives operations teams a way to work past those constraints. New devices, protocols, and metrics can be onboarded immediately, without waiting for vendor updates. And because the platform stores full-fidelity telemetry inexpensively, teams keep both the real-time signals they need for operations and the long-term history required for deeper analysis.&lt;/p&gt;

&lt;p&gt;That combination of real-time visibility into high-resolution telemetry and cost-effective retention supports the broader remit of modern network operations teams. They are responsible not only for day-to-day reliability but also for the long-term work that depends on complete data: capacity planning, drift detection, anomaly identification, and cross-system correlation.&lt;/p&gt;

&lt;p&gt;In short, if you are running into similar visibility gaps or preparing for a more complex environment, you have options. InfluxDB can fill specific weak spots, operate alongside your existing NMS as a high-resolution telemetry layer, or replace the legacy platform entirely. Unlike traditional NMS tools, it doesn’t lock you into a fixed model or licensing scheme. The stack scales with your network instead of constraining it.&lt;/p&gt;
</description>
      <pubDate>Thu, 05 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/solve-mns-gaps-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/solve-mns-gaps-influxdb/</guid>
      <category>Developer</category>
      <author>Mike Devy, Patrick Oliver (InfluxData)</author>
    </item>
    <item>
      <title>How to Use Pandas Time Index: A Tutorial with Examples</title>
      <description>&lt;p&gt;Time series data is everywhere in modern analytics, from stock prices and sensor readings to web traffic and financial transactions. When working with temporal data in Python, pandas provides powerful tools for handling time-based indexing through its DatetimeIndex functionality.&lt;/p&gt;

&lt;p&gt;This tutorial will guide you through creating, manipulating, and extracting insights from &lt;a href="https://www.influxdata.com/blog/getting-started-with-influxdb-and-pandas/"&gt;pandas&lt;/a&gt; time indexes with practical examples.&lt;/p&gt;

&lt;h2 id="what-is-a-pandas-datetimeindex"&gt;What is a pandas DatetimeIndex?&lt;/h2&gt;

&lt;p&gt;A DatetimeIndex is a specialized index type in pandas designed specifically for time series data. Unlike regular numeric indexes, DatetimeIndex understands temporal relationships, enabling powerful time-based operations like resampling, filtering by &lt;a href="https://www.influxdata.com/blog/pandas-datetime-tutorial/"&gt;date ranges&lt;/a&gt;, and extracting time components.&lt;/p&gt;

&lt;p&gt;The DatetimeIndex serves as the backbone for time series analysis in pandas, providing a rich set of functionality that makes working with temporal data intuitive and efficient.&lt;/p&gt;

&lt;p&gt;When you have data points that are naturally ordered by time—such as stock prices recorded every minute, temperature readings from sensors, or website traffic metrics—DatetimeIndex becomes indispensable.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;import pandas as pd
import numpy as np
from datetime import datetime, timedelta

# Create a simple DatetimeIndex
dates = pd.date_range('2024-01-01', periods=10, freq='D')
print(dates)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The example above creates a DatetimeIndex with 10 consecutive days starting from January 1, 2024. The beauty of DatetimeIndex is its ability to automatically understand and handle various time-related operations that would be cumbersome with regular indexes.&lt;/p&gt;

&lt;h4 id="why-use-datetimeindex"&gt;Why Use DatetimeIndex?&lt;/h4&gt;

&lt;p&gt;Traditional numeric indexes treat each row as an independent entity, but time series data has inherent relationships between consecutive points. DatetimeIndex recognizes these relationships and provides specialized methods for:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Temporal filtering&lt;/strong&gt;: Easily select data from specific periods&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Resampling&lt;/strong&gt;: Convert data from one frequency to another (e.g., daily to monthly)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Time-based grouping&lt;/strong&gt;: Group data by time periods automatically&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Missing data handling&lt;/strong&gt;: Identify and handle gaps in time series&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Time zone management&lt;/strong&gt;: Handle data across different time zones seamlessly&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="setting-up-your-environment"&gt;Setting up your environment&lt;/h2&gt;

&lt;p&gt;Before diving into examples, ensure you have the necessary libraries installed. While pandas comes with robust datetime functionality out of the box, you’ll want additional libraries for comprehensive time series analysis:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;import pandas as pd
import numpy as np
from datetime import datetime, timedelta
import matplotlib.pyplot as plt&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For production environments dealing with large-scale time series data, consider installing additional packages:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;pip install pandas numpy matplotlib pytz&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code class="language-markup"&gt;pytz&lt;/code&gt; library is particularly useful for time-zone-aware operations, while &lt;code class="language-markup"&gt;matplotlib&lt;/code&gt; helps visualize time series patterns. If you’re working with financial data, &lt;code class="language-markup"&gt;pandas-datareader&lt;/code&gt; can fetch real-time market data with proper DatetimeIndex formatting.&lt;/p&gt;

&lt;h2 id="creating-a-datetimeindex"&gt;Creating a DatetimeIndex&lt;/h2&gt;

&lt;p&gt;Creating a DatetimeIndex is the first step in time series analysis. Pandas offers multiple approaches depending on your data source and requirements.&lt;/p&gt;

&lt;h4 id="method-1-using-pddaterange"&gt;Method 1: Using pd.date_range()&lt;/h4&gt;

&lt;p&gt;The most common and flexible way to create a DatetimeIndex is using &lt;code class="language-markup"&gt;pd.date_range()&lt;/code&gt;. This method is particularly useful when you need to generate regular time intervals:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Daily frequency for 30 days
daily_index = pd.date_range('2024-01-01', periods=30, freq='D')

# Hourly frequency for 24 hours
hourly_index = pd.date_range('2024-01-01', periods=24, freq='H')

# Monthly frequency for 12 months
monthly_index = pd.date_range('2024-01-01', periods=12, freq='M')

# Business days only (excludes weekends)
business_index = pd.date_range('2024-01-01', periods=20, freq='B')

# Custom frequency - every 15 minutes
custom_index = pd.date_range('2024-01-01 09:00', periods=32, freq='15T')&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The frequency parameter (&lt;code class="language-markup"&gt;freq&lt;/code&gt;) accepts various aliases: ‘D’ for daily, ‘H’ for hourly, ‘T’ or ‘min’ for minutes, ‘S’ for seconds, ‘B’ for business days, ‘W’ for weekly, ‘M’ for month-end, ‘MS’ for month-start, ‘Q’ for quarter-end, and ‘A’ for year-end.&lt;/p&gt;

&lt;h4 id="method-2-converting-existing-columns"&gt;Method 2: Converting Existing Columns&lt;/h4&gt;

&lt;p&gt;In real-world scenarios, you’ll often work with datasets that have date information stored as strings or other formats. Converting these to DatetimeIndex is crucial for time series analysis:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Sample data with date strings
data = {
   'date': ['2024-01-01', '2024-01-02', '2024-01-03', '2024-01-04'],
   'value': [100, 105, 98, 110],
   'category': ['A', 'B', 'A', 'B']
}
df = pd.DataFrame(data)

# Convert date column to datetime and set as index
df['date'] = pd.to_datetime(df['date'])
df.set_index('date', inplace=True)
print(df.index)

# Alternative: Convert and set index in one step
df = pd.DataFrame(data)
df = df.set_index(pd.to_datetime(df['date']))
df.drop('date', axis=1, inplace=True)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;When dealing with non-standard date formats, &lt;code class="language-markup"&gt;pd.to_datetime()&lt;/code&gt; offers additional parameters:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Handle different date formats
dates_various = ['01/15/2024', '2024-02-16', '17-Mar-2024']
df_various = pd.DataFrame({'dates': dates_various, 'values': [1, 2, 3]})

# Parse mixed formats element-by-element (format='mixed' is the
# pandas 2.0+ replacement for the deprecated infer_datetime_format)
df_various['dates'] = pd.to_datetime(df_various['dates'], format='mixed')

# Or specify the format explicitly for better performance
dates_specific = ['01/15/2024', '01/16/2024', '01/17/2024']
df_specific = pd.DataFrame({'dates': dates_specific, 'values': [1, 2, 3]})
df_specific['dates'] = pd.to_datetime(df_specific['dates'], format='%m/%d/%Y')&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="method-3-direct-datetimeindex-creation"&gt;Method 3: Direct DatetimeIndex Creation&lt;/h4&gt;

&lt;p&gt;For maximum control over the DatetimeIndex creation process, you can instantiate it directly:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Create from a list of datetime objects
dates = [datetime(2024, 1, 1), datetime(2024, 1, 2), datetime(2024, 1, 3)]
dt_index = pd.DatetimeIndex(dates)

# Create from strings
string_dates = ['2024-01-01', '2024-01-02', '2024-01-03']
dt_index = pd.DatetimeIndex(string_dates)

# Create with timezone information
tz_dates = pd.DatetimeIndex(['2024-01-01', '2024-01-02'], tz='UTC')

# Create with specific name
named_index = pd.DatetimeIndex(string_dates, name='timestamp')&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="datetimeindex-syntax-and-usage"&gt;DatetimeIndex syntax and usage&lt;/h2&gt;

&lt;p&gt;The DatetimeIndex constructor provides extensive customization options for handling various time series scenarios:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;pd.DatetimeIndex(
   data=None,          # Array-like of datetime objects
   freq=None,          # Frequency string
   tz=None,            # Timezone
   normalize=False,    # Normalize to midnight
   closed=None,        # Whether interval is closed
   ambiguous='raise',  # How to handle ambiguous times
   dayfirst=False,     # Interpret first value as day
   yearfirst=False,    # Interpret first value as year
   dtype=None,         # Data type
   copy=False,         # Copy input data
   name=None           # Name for the index
)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Understanding these parameters is crucial for handling edge cases in time series data:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;freq: Specifies the frequency of the time series. Common values include 'D' (daily), 'H' (hourly), 'T' (minutely)
tz: Time zone information, essential for global applications
normalize: When True, normalizes times to midnight, useful for daily aggregations
ambiguous: Handles ambiguous times during daylight saving transitions
dayfirst/yearfirst: Controls date parsing when format is ambiguous

# Example with various parameters
complex_index = pd.DatetimeIndex(
   ['2024-01-01 14:30', '2024-01-02 14:30', '2024-01-03 14:30'],
   tz='America/New_York',
   freq='D',
   name='trading_times'
)&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="key-attributes-of-pandas-datetimeindex"&gt;Key attributes of pandas DatetimeIndex&lt;/h2&gt;

&lt;p&gt;DatetimeIndex provides numerous attributes for accessing time components, making it easy to extract meaningful information from temporal data:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Create sample data
dates = pd.date_range('2024-01-01', periods=100, freq='D')
df = pd.DataFrame({'value': np.random.randn(100)}, index=dates)

# Access various time components
print("Year:", df.index.year.unique())
print("Month:", df.index.month.unique())
print("Day:", df.index.day[:5])
print("Day of week:", df.index.dayofweek[:5])
print("Day name:", df.index.day_name()[:5])
print("Quarter:", df.index.quarter.unique())&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The comprehensive list of available attributes includes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Temporal components&lt;/strong&gt;: year, month, day, hour, minute, second, microsecond&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Week-related&lt;/strong&gt;: week, dayofweek, dayofyear, weekday&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Period indicators&lt;/strong&gt;: quarter, is_month_start, is_month_end, is_quarter_start, is_quarter_end&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Special properties&lt;/strong&gt;: is_leap_year, days_in_month, freqstr&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These attributes enable sophisticated time-based analysis without complex date manipulation:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Advanced attribute usage
df['is_weekend'] = df.index.dayofweek.isin([5, 6])
df['is_month_end'] = df.index.is_month_end
df['days_in_month'] = df.index.days_in_month
df['week_number'] = df.index.isocalendar().week

# Analyze patterns
weekend_mean = df[df['is_weekend']]['value'].mean()
weekday_mean = df[~df['is_weekend']]['value'].mean()
print(f"Weekend vs Weekday difference: {weekend_mean - weekday_mean:.3f}")&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="extracting-time-components"&gt;Extracting time components&lt;/h2&gt;

&lt;h4 id="extract-year-from-datetimeindex"&gt;Extract Year from DatetimeIndex&lt;/h4&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Create sample time series data
dates = pd.date_range('2020-01-01', '2024-12-31', freq='M')
df = pd.DataFrame({'sales': np.random.randint(1000, 5000, len(dates))}, index=dates)

# Extract year
df['year'] = df.index.year
yearly_sales = df.groupby('year')['sales'].sum()
print(yearly_sales)&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="extract-month-from-datetimeindex"&gt;Extract Month from DatetimeIndex&lt;/h4&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Extract month number and name
df['month_num'] = df.index.month
df['month_name'] = df.index.month_name()

# Analyze monthly patterns
monthly_avg = df.groupby('month_name')['sales'].mean()
print(monthly_avg.sort_values(ascending=False))&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="extract-day-hour-and-minute-components"&gt;Extract Day, Hour, and Minute Components&lt;/h4&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Create hourly data
hourly_dates = pd.date_range('2024-01-01', periods=168, freq='H')  # One week
df_hourly = pd.DataFrame({'temperature': np.random.normal(20, 5, len(hourly_dates))},
                       index=hourly_dates)

# Extract time components
df_hourly['day'] = df_hourly.index.day
df_hourly['hour'] = df_hourly.index.hour
df_hourly['minute'] = df_hourly.index.minute

# Find peak temperature hours
hourly_avg = df_hourly.groupby('hour')['temperature'].mean()
print(f"Peak temperature hour: {hourly_avg.idxmax()}")&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="advanced-datetimeindex-operations"&gt;Advanced DatetimeIndex operations&lt;/h2&gt;

&lt;h4 id="find-first-and-last-day-of-month"&gt;Find First and Last Day of Month&lt;/h4&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Check if date is first day of month
df['is_month_start'] = df.index.is_month_start

# Check if date is last day of month
df['is_month_end'] = df.index.is_month_end

# Filter for month-end data
month_end_data = df[df['is_month_end']]
print(month_end_data.head())&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="find-start-and-end-of-year"&gt;Find Start and End of Year&lt;/h4&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Check if date is first day of year
df['is_year_start'] = df.index.is_year_start

# Check if date is last day of year
df['is_year_end'] = df.index.is_year_end

# Get year-end values
year_end_sales = df[df['is_year_end']]['sales']
print(year_end_sales)&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="identify-leap-years"&gt;Identify Leap Years&lt;/h4&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Check if year is leap year
df['is_leap_year'] = df.index.is_leap_year

# Count leap year occurrences
leap_year_count = df['is_leap_year'].sum()
print(f"Number of leap year entries: {leap_year_count}")&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="working-with-day-of-week"&gt;Working with Day of Week&lt;/h4&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Get day of week (0=Monday, 6=Sunday)
df['day_of_week'] = df.index.dayofweek
df['day_name'] = df.index.day_name()

# Analyze weekday vs weekend patterns
df['is_weekend'] = df['day_of_week'].isin([5, 6])
weekend_avg = df[df['is_weekend']]['sales'].mean()
weekday_avg = df[~df['is_weekend']]['sales'].mean()

print(f"Weekend average: {weekend_avg:.2f}")
print(f"Weekday average: {weekday_avg:.2f}")&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="rounding-dates-in-datetimeindex"&gt;Rounding Dates in DatetimeIndex&lt;/h4&gt;

&lt;p&gt;DatetimeIndex supports rounding operations for aggregating data:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Create minute-level data
minute_dates = pd.date_range('2024-01-01 09:00', periods=120, freq='T')
df_minutes = pd.DataFrame({'price': np.random.normal(100, 2, len(minute_dates))},
                        index=minute_dates)

# Round to different frequencies
df_minutes['hour_rounded'] = df_minutes.index.round('H')
df_minutes['15min_rounded'] = df_minutes.index.round('15T')

# Aggregate by rounded time
hourly_avg = df_minutes.groupby('hour_rounded')['price'].mean()
print(hourly_avg)&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="time-series-filtering-and-slicing"&gt;Time Series Filtering and Slicing&lt;/h4&gt;

&lt;p&gt;DatetimeIndex enables intuitive time-based filtering:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Create sample data
dates = pd.date_range('2024-01-01', '2024-12-31', freq='D')
df = pd.DataFrame({'value': np.random.randn(len(dates))}, index=dates)

# Filter by year (use .loc for label-based row selection;
# plain df['2024'] no longer selects rows in pandas 2.0+)
data_2024 = df.loc['2024']

# Filter by month
january_data = df.loc['2024-01']

# Filter by date range
q1_data = df.loc['2024-01':'2024-03']

# Filter using boolean indexing
recent_data = df[df.index &amp;gt;= '2024-06-01']&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="resampling-with-datetimeindex"&gt;Resampling with DatetimeIndex&lt;/h4&gt;

&lt;p&gt;One of the most powerful features is resampling:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Daily data resampled to weekly
weekly_data = df.resample('W').mean()

# Daily data resampled to monthly
monthly_data = df.resample('M').agg({
   'value': ['mean', 'std', 'min', 'max']
})

print(monthly_data.head())&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="working-with-time-zones"&gt;Working with Time Zones&lt;/h4&gt;

&lt;p&gt;DatetimeIndex supports time-zone-aware operations:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Create timezone-aware index
utc_dates = pd.date_range('2024-01-01', periods=10, freq='D', tz='UTC')
df_tz = pd.DataFrame({'value': range(10)}, index=utc_dates)

# Convert to different timezone
df_tz_ny = df_tz.tz_convert('America/New_York')
print(df_tz_ny.index)&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="integration-with-influxdb"&gt;Integration with InfluxDB&lt;/h2&gt;

&lt;p&gt;When working with time series databases like &lt;a href="https://www.influxdata.com/products/influxdb3/"&gt;InfluxDB&lt;/a&gt;, DatetimeIndex becomes even more valuable for data preparation and analysis. &lt;a href="https://docs.influxdata.com/influxdb3/cloud-dedicated/reference/client-libraries/v3/python/"&gt;InfluxDB 3.0’s Python client&lt;/a&gt; integrates seamlessly with Pandas DataFrames that use DatetimeIndex:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Example of preparing data for InfluxDB
def prepare_for_influxdb(df):
   """Prepare DataFrame with DatetimeIndex for InfluxDB insertion"""
   # Ensure index is timezone-aware
   if df.index.tz is None:
       df.index = df.index.tz_localize('UTC')

   # Add timestamp column for InfluxDB
   df['timestamp'] = df.index

   return df

# Usage example with sensor data
sensor_dates = pd.date_range('2024-01-01', periods=1000, freq='5T')
sensor_df = pd.DataFrame({
   'temperature': np.random.normal(22, 3, 1000),
   'humidity': np.random.normal(45, 10, 1000),
   'sensor_id': 'sensor_001'
}, index=sensor_dates)

prepared_df = prepare_for_influxdb(sensor_df)

# The prepared DataFrame can now be written to InfluxDB
# with proper timestamp handling and timezone awareness&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;InfluxDB’s strength in handling high-cardinality time series data complements pandas’ analytical capabilities. You can query data from InfluxDB, perform complex analysis using DatetimeIndex operations, and write results back to the database:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Example workflow with InfluxDB integration
def analyze_sensor_data(df):
   """Analyze sensor data using DatetimeIndex features"""
   # Resample to hourly averages
   hourly_avg = df.resample('H').mean()

   # Identify daily patterns
   hourly_avg['hour'] = hourly_avg.index.hour
   daily_pattern = hourly_avg.groupby('hour')[['temperature', 'humidity']].mean()

   # Find anomalies (values beyond 2 standard deviations)
   temp_std = df['temperature'].std()
   temp_mean = df['temperature'].mean()
   df['temp_anomaly'] = abs(df['temperature'] - temp_mean) &amp;gt; 2 * temp_std

   return hourly_avg, daily_pattern, df

# This analysis leverages DatetimeIndex for efficient time-based operations
# that would be complex with traditional indexing approaches&lt;/code&gt;&lt;/pre&gt;
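
&lt;p&gt;To make that round trip concrete, below is a minimal sketch using the &lt;code class="language-markup"&gt;influxdb3-python&lt;/code&gt; client (&lt;code class="language-markup"&gt;pip install influxdb3-python&lt;/code&gt;). The host, token, database, and &lt;code class="language-markup"&gt;sensor_data&lt;/code&gt; measurement are placeholders to replace with your own:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from influxdb_client_3 import InfluxDBClient3

# Placeholder connection details -- substitute your own
client = InfluxDBClient3(
   host='https://your-influxdb-host',
   token='YOUR_DATABASE_TOKEN',
   database='sensors'
)

# Query recent data; results arrive as a PyArrow table
table = client.query(
   "SELECT * FROM sensor_data WHERE time &amp;gt;= now() - INTERVAL '1 day'",
   language='sql'
)
df = table.to_pandas().set_index('time')

# Aggregate to hourly averages using the DatetimeIndex
hourly_avg = df.resample('H').mean(numeric_only=True)

# Write the hourly aggregates back; the DatetimeIndex supplies timestamps
client.write(hourly_avg, data_frame_measurement_name='sensor_hourly')&lt;/code&gt;&lt;/pre&gt;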

&lt;h2 id="best-practices-and-performance-tips"&gt;Best practices and performance tips&lt;/h2&gt;

&lt;p&gt;Effective use of DatetimeIndex requires understanding performance implications and following established best practices.&lt;/p&gt;

&lt;h4 id="choosing-appropriate-frequency"&gt;1. Choosing Appropriate Frequency&lt;/h4&gt;

&lt;p&gt;Select the right frequency for your data to optimize memory usage and query performance:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# For high-frequency data, consider the trade-off between granularity and performance
# Minute-level data for a year: 525,600 rows
minute_data = pd.date_range('2024-01-01', '2024-12-31 23:59', freq='T')

# Daily data for a year: 366 rows (much more manageable)
daily_data = pd.date_range('2024-01-01', '2024-12-31', freq='D')

# Choose based on your analysis needs&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="time-zone-awareness"&gt;2. Time Zone Awareness&lt;/h4&gt;

&lt;p&gt;Always be explicit about time zones in production systems to avoid confusion and errors:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Good: Explicit timezone
utc_index = pd.date_range('2024-01-01', periods=100, freq='D', tz='UTC')

# Better: Convert to local timezone when needed
local_index = utc_index.tz_convert('America/New_York')

# Best: Document timezone assumptions in your code
def create_trading_hours_index(start_date, periods):
   """Create DatetimeIndex for US trading hours (9:30 AM - 4:00 PM ET)"""
   return pd.date_range(
       start=start_date + ' 09:30:00',
       periods=periods,
       freq='B',  # Business days only
       tz='America/New_York'
   )&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="efficient-filtering"&gt;3. Efficient Filtering&lt;/h4&gt;

&lt;p&gt;Use partial string indexing with &lt;code class="language-markup"&gt;.loc&lt;/code&gt; for date ranges when possible, as it’s more readable and often faster than manual boolean masks:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Efficient: String-based filtering
q1_data = df['2024-01':'2024-03']
january_data = df['2024-01']

# Less efficient: Boolean indexing for simple date ranges
q1_data_bool = df[(df.index "= '2024-01-01') &amp;amp; (df.index "= '2024-03-31')]&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="memory-optimization"&gt;4. Memory Optimization&lt;/h4&gt;

&lt;p&gt;Consider using categorical data types for repeated time components:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Memory-efficient approach for repeated analysis
df['month_name'] = df.index.month_name().astype('category')
df['day_name'] = df.index.day_name().astype('category')

# This reduces memory usage when you have many repeated values&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="vectorized-operations"&gt;5. Vectorized Operations&lt;/h4&gt;

&lt;p&gt;Leverage vectorized operations instead of loops for better performance:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Efficient: Vectorized operations
df['is_business_day'] = df.index.dayofweek " 5
df['quarter_start'] = df.index.is_quarter_start

# Inefficient: Loop-based approach
# for i, date in enumerate(df.index):
#     df.loc[date, 'is_business_day'] = date.dayofweek " 5&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="common-pitfalls-and-solutions"&gt;Common pitfalls and solutions&lt;/h2&gt;

&lt;p&gt;Understanding common challenges with DatetimeIndex helps avoid frustrating debugging sessions and ensures robust time series analysis.&lt;/p&gt;

&lt;h4 id="handling-missing-dates"&gt;Handling Missing Dates&lt;/h4&gt;

&lt;p&gt;Time series data often has gaps due to system downtime, weekends, holidays, or irregular data collection. DatetimeIndex provides elegant solutions:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Create data with missing dates
irregular_dates = ['2024-01-01', '2024-01-03', '2024-01-05']
df_irregular = pd.DataFrame({'value': [1, 2, 3]},
                          index=pd.to_datetime(irregular_dates))

# Reindex to fill missing dates
full_range = pd.date_range('2024-01-01', '2024-01-05', freq='D')
df_complete = df_irregular.reindex(full_range)
print(df_complete)

# Fill missing values with different strategies
df_forward_fill = df_complete.ffill()  # Forward fill (fillna(method=...) is deprecated)
df_interpolated = df_complete.interpolate()  # Linear interpolation
df_zero_fill = df_complete.fillna(0)  # Fill with zeros&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="dealing-with-different-date-formats"&gt;Dealing with Different Date Formats&lt;/h4&gt;

&lt;p&gt;Real-world data often comes in various date formats. Robust parsing is essential:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Mixed date formats
mixed_dates = ['01/15/2024', '2024-01-16', '17-Jan-2024']
standardized = pd.to_datetime(mixed_dates, format='mixed')  # pandas 2.0+ parses each element individually
print(standardized)

# Handle parsing errors gracefully
problematic_dates = ['01/15/2024', 'invalid_date', '2024-01-17']
safe_dates = pd.to_datetime(problematic_dates, errors='coerce')
print(safe_dates)  # Invalid dates become NaT (Not a Time)

# Custom parsing for specific formats
custom_format_dates = ['15-Jan-2024 14:30', '16-Jan-2024 15:45']
parsed_custom = pd.to_datetime(custom_format_dates, format='%d-%b-%Y %H:%M')&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="time-zone-conversion-issues"&gt;Time Zone Conversion Issues&lt;/h4&gt;

&lt;p&gt;Time zone handling can be tricky, especially with daylight saving time transitions:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# Create timezone-naive data
naive_dates = pd.date_range('2024-03-10', periods=5, freq='D')
df_naive = pd.DataFrame({'value': range(5)}, index=naive_dates)

# Localize to a specific timezone
df_localized = df_naive.tz_localize('US/Eastern')

# Handle ambiguous times during DST transitions: clocks fall back on
# 2024-11-03 in US/Eastern, so the naive time 01:00 occurs twice
dst_dates = pd.date_range('2024-11-03 00:00', periods=4, freq='H')

# Localizing this range directly would raise an AmbiguousTimeError
# for 01:00. Resolve it explicitly: 'NaT' marks ambiguous times as
# missing (a boolean array can pick DST/standard per element instead)
safe_dst = dst_dates.tz_localize('US/Eastern', ambiguous='NaT')&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="performance-issues-with-large-datasets"&gt;Performance Issues with Large Datasets&lt;/h4&gt;

&lt;p&gt;Large time series datasets require careful memory and performance management:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;# For very large datasets, consider chunking
def process_large_timeseries(file_path, chunk_size=10000):
   """Process large time series data in chunks"""
   results = []

   for chunk in pd.read_csv(file_path, chunksize=chunk_size,
                           parse_dates=['timestamp'], index_col='timestamp'):
       # Process each chunk
       processed_chunk = chunk.resample('H').mean()
       results.append(processed_chunk)

   return pd.concat(results)

# Use efficient data types
def optimize_dtypes(df):
   """Optimize DataFrame data types for memory efficiency"""
   for col in df.select_dtypes(include=['float64']).columns:
       df[col] = df[col].astype('float32')

   for col in df.select_dtypes(include=['int64']).columns:
       df[col] = df[col].astype('int32')

   return df&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;

&lt;p&gt;Pandas DatetimeIndex is an essential tool for time series analysis, providing intuitive methods for handling temporal data.&lt;/p&gt;

&lt;p&gt;From basic operations like extracting time components to advanced features like resampling and time zone handling, DatetimeIndex enables efficient time-based data manipulation that would be cumbersome or impossible with traditional indexing approaches.&lt;/p&gt;

&lt;p&gt;The power of DatetimeIndex lies not just in its individual features, but in how they work together to create a comprehensive time series analysis ecosystem.&lt;/p&gt;

&lt;p&gt;Whether you’re analyzing financial market data to identify trading patterns, processing IoT sensor readings to detect anomalies, or examining web analytics to understand user behavior trends, DatetimeIndex provides a foundation for sophisticated temporal analysis.&lt;/p&gt;

&lt;p&gt;As time series data continues to grow in volume and importance across industries, mastering DatetimeIndex becomes increasingly valuable. The techniques covered in this tutorial provide a solid foundation, but the real learning comes from applying these concepts to your specific use cases.&lt;/p&gt;

&lt;p&gt;For large-scale time series applications, consider pairing pandas with specialized time series databases like InfluxDB to handle high-volume, &lt;a href="https://www.influxdata.com/blog/optimizing-influxdb-performance-for-high-velocity-data/"&gt;high-velocity temporal data&lt;/a&gt; efficiently. InfluxDB’s optimized storage and query engine, combined with pandas’ analytical capabilities, creates a powerful platform for time series analysis at any scale.&lt;/p&gt;

&lt;p&gt;The examples in this tutorial provide a comprehensive starting point for working with time-indexed data in pandas.&lt;/p&gt;

&lt;p&gt;Practice these techniques with your own datasets, experiment with different frequency settings, and explore the extensive documentation to become proficient in &lt;a href="https://www.influxdata.com/time-series-forecasting-methods/"&gt;time series analysis with Python&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Remember that effective time series analysis is as much about understanding your data’s temporal patterns as it is about mastering the technical tools to analyze them.&lt;/p&gt;
</description>
      <pubDate>Thu, 05 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/pandas-time-index-tutorial/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/pandas-time-index-tutorial/</guid>
      <category>Developer</category>
      <author>Company (InfluxData)</author>
    </item>
    <item>
      <title>Exponential Smoothing: A Guide to Getting Started</title>
      <description>&lt;p&gt;Exponential smoothing is a time series forecasting method that uses an exponentially weighted average of past observations to predict future values. In other words, it assigns greater weight to recent observations than to older ones, allowing the forecast to adapt to changing data trends.&lt;/p&gt;

&lt;p&gt;In this post, we’ll look at the basics of exponential smoothing, including how it works, its types, and how to implement it in Python.&lt;/p&gt;

&lt;h2 id="what-is-exponential-smoothing"&gt;What is exponential smoothing?&lt;/h2&gt;

&lt;p&gt;Exponential smoothing forecasts &lt;a href="https://www.influxdata.com/what-is-time-series-data/"&gt;time series data&lt;/a&gt; by smoothing out fluctuations in the data. The technique was first introduced by Robert Goodell Brown in 1956 and then further developed by Charles Holt in 1957. It has since become one of the most widely used methods for forecasting.&lt;/p&gt;

&lt;p&gt;The basic idea behind exponential smoothing is to give more weight to recent observations by assigning weights to each observation that decrease exponentially with age. Weights are then used to calculate a weighted moving average of the data, which is used to forecast the next period.&lt;/p&gt;

&lt;p&gt;Exponential smoothing assumes that future values of a time series are a function of its past values. The method works well when the time series has a trend and/or a seasonal component, but it can also be used for stationary data (i.e., without trend or seasonality).&lt;/p&gt;

&lt;h2 id="types-of-exponential-smoothing"&gt;Types of exponential smoothing&lt;/h2&gt;

&lt;p&gt;There are several types of exponential smoothing methods, each with its own formulae and assumptions. The most commonly used methods include:&lt;/p&gt;

&lt;h4 id="simple-exponential-smoothing"&gt;1. Simple Exponential Smoothing&lt;/h4&gt;

&lt;p&gt;Simple exponential smoothing (SES), also known as single exponential smoothing, is the simplest form of the technique. It assumes that the time series has no trend or seasonality.&lt;/p&gt;

&lt;p&gt;The forecast for the next period is based on the weighted average of the previous observation and the forecast for the current period.&lt;/p&gt;

&lt;p&gt;The formula for simple exponential smoothing is:&lt;/p&gt;

&lt;p&gt;s&lt;sub&gt;t&lt;/sub&gt; = αx&lt;sub&gt;t&lt;/sub&gt; + (1 – α)s&lt;sub&gt;t-1&lt;/sub&gt;&lt;/p&gt;

&lt;p&gt;where&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;s&lt;sub&gt;t&lt;/sub&gt; is the smoothed value at time t,&lt;/li&gt;
  &lt;li&gt;x&lt;sub&gt;t&lt;/sub&gt; is the observed value at time t,&lt;/li&gt;
  &lt;li&gt;s&lt;sub&gt;t-1&lt;/sub&gt; is the previous smoothed value, and&lt;/li&gt;
  &lt;li&gt;α is the smoothing parameter, between 0 and 1.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The smoothing parameter α controls the weight given to the current observation and the previous forecast.&lt;/p&gt;

&lt;p&gt;A high value of α gives more weight to the current observation, while a low value of α gives more weight to the previous forecast.&lt;/p&gt;
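
&lt;p&gt;To see the recursion in action, here is a minimal, self-contained sketch in plain Python (the data and the α of 0.5 are purely illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;def simple_exp_smoothing(values, alpha):
    """Apply the SES recursion: s_t = alpha * x_t + (1 - alpha) * s_(t-1)."""
    smoothed = [values[0]]  # initialize with the first observation
    for x in values[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

data = [10, 12, 13, 12, 15, 16]
# Each smoothed value blends the newest observation with the running
# estimate; a larger alpha tracks the raw data more closely
print(simple_exp_smoothing(data, alpha=0.5))&lt;/code&gt;&lt;/pre&gt;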

&lt;h4 id="holts-linear-exponential-smoothing"&gt;2. Holt’s Linear Exponential Smoothing&lt;/h4&gt;

&lt;p&gt;Holt’s linear exponential smoothing, also known as double exponential smoothing, is used to forecast time series data that has a linear trend but no seasonal pattern. This method uses two smoothing parameters: α for the level (the intercept) and β for the trend.&lt;/p&gt;

&lt;p&gt;The formulas for double exponential smoothing are:&lt;/p&gt;

&lt;p&gt;s&lt;sub&gt;t&lt;/sub&gt; = αx&lt;sub&gt;t&lt;/sub&gt; + (1 – α)(s&lt;sub&gt;t-1&lt;/sub&gt; + b&lt;sub&gt;t-1&lt;/sub&gt;)&lt;/p&gt;
&lt;p&gt;b&lt;sub&gt;t&lt;/sub&gt; = β(s&lt;sub&gt;t&lt;/sub&gt; – s&lt;sub&gt;t-1&lt;/sub&gt;) + (1 – β)b&lt;sub&gt;t-1&lt;/sub&gt;&lt;/p&gt;

&lt;p&gt;where&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;b&lt;sub&gt;t&lt;/sub&gt; is the slope and best estimate of the trend at time t,&lt;/li&gt;
  &lt;li&gt;α is the smoothing parameter for the level (0 &amp;lt; α &amp;lt; 1), and&lt;/li&gt;
  &lt;li&gt;β is the smoothing parameter for the trend (0 &amp;lt; β &amp;lt; 1).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Holt’s method is more accurate than SES for time series data with a trend, but it doesn’t work well for time series data with a seasonal component.&lt;/p&gt;
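
&lt;p&gt;In Python, Holt’s method is available through the statsmodels &lt;code class="language-markup"&gt;Holt&lt;/code&gt; class. Here is a short sketch on synthetic trended data (the series and parameter values are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
import pandas as pd
from statsmodels.tsa.api import Holt

# Synthetic monthly series with a linear upward trend plus noise
index = pd.date_range('2021-01-01', periods=36, freq='M')
series = pd.Series(50 + 2 * np.arange(36) + np.random.normal(0, 1, 36), index=index)

# optimized=True lets statsmodels estimate alpha and beta from the data
fit = Holt(series).fit(optimized=True)
print(fit.forecast(6))  # the fitted trend continues into the forecast&lt;/code&gt;&lt;/pre&gt;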

&lt;h4 id="holt-winters-exponential-smoothing"&gt;3. Holt-Winters’ Exponential Smoothing&lt;/h4&gt;

&lt;p&gt;Holt-Winters’ exponential smoothing, also referred to as triple exponential smoothing, is used to forecast time series data with both a trend and a seasonal component. It uses three smoothing parameters: α for the level (the intercept), β for the trend, and γ for the seasonal component.&lt;/p&gt;

&lt;p&gt;The triple exponential smoothing formulas are given by:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/67b5pCQEjFeXrfJTLoj4DY/cc8ea1c3ac3eb77ac2270c8d9c5b3fef/image4.png" alt="image4" /&gt;&lt;/p&gt;

&lt;p&gt;where:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;s&lt;sub&gt;t&lt;/sub&gt; = smoothed statistic; a weighted average of the current observation Y&lt;sub&gt;t&lt;/sub&gt; and the previous smoothed statistic&lt;/li&gt;
  &lt;li&gt;s&lt;sub&gt;t-1&lt;/sub&gt; = previous smoothed statistic&lt;/li&gt;
  &lt;li&gt;α = smoothing factor of data (0 &amp;lt; α &amp;lt; 1)&lt;/li&gt;
  &lt;li&gt;t = time period&lt;/li&gt;
  &lt;li&gt;b&lt;sub&gt;t&lt;/sub&gt; = best estimate of a trend at time t&lt;/li&gt;
  &lt;li&gt;β = trend smoothing factor (0 &amp;lt; β &amp;lt; 1)&lt;/li&gt;
  &lt;li&gt;c&lt;sub&gt;t&lt;/sub&gt; = seasonal component at time t&lt;/li&gt;
  &lt;li&gt;γ = seasonal smoothing parameter (0 &amp;lt; γ &amp;lt; 1)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Holt-Winters’ method is the most accurate of the three methods, but it’s also the most complex, requiring more data and computation than the other methods.&lt;/p&gt;
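
&lt;p&gt;The hands-on example later in this post uses SES; for data with both trend and seasonality, a sketch of Holt-Winters’ via the statsmodels &lt;code class="language-markup"&gt;ExponentialSmoothing&lt;/code&gt; class looks like this (synthetic data and illustrative settings):&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
import pandas as pd
from statsmodels.tsa.api import ExponentialSmoothing

# Synthetic monthly series with a trend and a yearly seasonal cycle
index = pd.date_range('2020-01-01', periods=48, freq='M')
values = 100 + np.arange(48) + 10 * np.sin(2 * np.pi * np.arange(48) / 12)
series = pd.Series(values, index=index)

# Additive trend and seasonality with a 12-month season; fit()
# optimizes alpha, beta, and gamma automatically
hw_fit = ExponentialSmoothing(series, trend='add', seasonal='add',
                              seasonal_periods=12).fit()
print(hw_fit.forecast(12))  # forecast one full seasonal cycle ahead&lt;/code&gt;&lt;/pre&gt;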

&lt;h2 id="when-to-use-exponential-smoothing"&gt;When to use exponential smoothing&lt;/h2&gt;

&lt;p&gt;Exponential smoothing is most useful for time series data that has a consistent trend, seasonality, and random fluctuations.&lt;/p&gt;

&lt;p&gt;It’s particularly useful for short to medium-term forecasting of business metrics such as sales, revenue, and customer traffic. It’s also useful for monitoring and predicting seasonal changes in industries such as tourism, agriculture, and energy.&lt;/p&gt;

&lt;p&gt;Here are some other common situations where exponential smoothing can be useful:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Time series forecasting&lt;/strong&gt; — One of the most common applications of exponential smoothing is in time series forecasting. If you have historical data for a particular variable over time, such as sales or website traffic, you can use exponential smoothing to forecast the future values of that variable.&lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Inventory management&lt;/strong&gt;  — Exponential smoothing can be used to forecast demand for products or services, which can be helpful in inventory management. By forecasting demand, businesses can make sure they have enough inventory on hand to meet customer needs without overstocking.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Finance&lt;/strong&gt; — It can be used in finance to forecast stock prices, interest rates, and other financial variables. This can be helpful for investors who are trying to make informed decisions about buying and selling stocks or other financial instruments.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Marketing&lt;/strong&gt; — It’s also used to forecast the effectiveness of marketing campaigns. By tracking past campaign results and using exponential smoothing to forecast future performance, marketers can optimize their campaigns to achieve the best possible results.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="why-is-exponential-smoothing-popular"&gt;Why is exponential smoothing popular?&lt;/h2&gt;

&lt;p&gt;Exponential smoothing is among the most widely used &lt;a href="https://www.influxdata.com/time-series-forecasting-methods/"&gt;time series forecasting methods&lt;/a&gt; due to its simplicity and effectiveness.&lt;/p&gt;

&lt;p&gt;Unlike other methods, exponential smoothing can adapt to changes in data trends. It also provides accurate predictions by assigning different weights to different time periods based on their importance.&lt;/p&gt;

&lt;p&gt;Exponential smoothing is computationally efficient, making it ideal for large datasets. Additionally, it’s widely used in business forecasting, providing accurate, reliable forecasts across a range of applications, including demand, sales, and financial forecasting.&lt;/p&gt;

&lt;h2 id="how-to-perform-exponential-smoothing"&gt;How to perform exponential smoothing&lt;/h2&gt;

&lt;p&gt;Performing exponential smoothing begins with understanding your data’s structure.&lt;/p&gt;

&lt;p&gt;First, examine your time series to identify whether it contains trends, seasonality, or neither. This analysis will guide you in selecting the right exponential smoothing method: SES works best for stationary data, Holt’s method handles data with trends, and Holt-Winters’ method addresses both trends and seasonal patterns.&lt;/p&gt;

&lt;p&gt;Once you’ve chosen your method, the implementation is straightforward. Modern statistical software and libraries can automatically optimize the smoothing parameters (α, β, and γ) by finding values that minimize forecast errors.&lt;/p&gt;

&lt;p&gt;After fitting the model to your historical data, you’ll have a trained forecasting tool ready to generate predictions for future time periods.&lt;/p&gt;

&lt;p&gt;Now, let’s put this into practice by implementing exponential smoothing using Python, one of the most popular tools for time series analysis.&lt;/p&gt;

&lt;h2 id="exponential-smoothing-in-python"&gt;Exponential smoothing in Python&lt;/h2&gt;

&lt;p&gt;Python has several libraries for exponential smoothing, including &lt;a href="https://pandas.pydata.org/"&gt;Pandas&lt;/a&gt;, &lt;a href="https://www.statsmodels.org/"&gt;Statsmodels&lt;/a&gt;, and &lt;a href="http://facebook.github.io/prophet/docs/quick_start.html"&gt;Prophet&lt;/a&gt;. These libraries provide various functions and methods for implementing the different types of exponential smoothing.&lt;/p&gt;

&lt;h4 id="the-dataset"&gt;The Dataset&lt;/h4&gt;

&lt;p&gt;For this example, we’ll use the AirPassengers dataset, a time series dataset that contains the monthly number of airline passengers from 1949 to 1960. You can download the dataset from &lt;a href="https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv"&gt;this link&lt;/a&gt;.&lt;/p&gt;

&lt;h4 id="setting-up-the-environment"&gt;Setting Up the Environment&lt;/h4&gt;

&lt;p&gt;To get started, we need to set up our environment. We’ll be using Python 3, so make sure it’s installed. Alternatively, you can use &lt;a href="https://colab.research.google.com/"&gt;Google Colab&lt;/a&gt; and go straight to importing the libraries.&lt;/p&gt;

&lt;p&gt;Next, install these libraries using pip:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;pip install pandas matplotlib statsmodels&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once you’ve installed the necessary libraries, you can import them into your Python script:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.api import SimpleExpSmoothing&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="loading-the-data"&gt;Loading the Data&lt;/h4&gt;

&lt;p&gt;After setting up the environment, we can load the AirPassengers dataset into a pandas DataFrame using the read_csv function:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;data = pd.read_csv('airline-passengers.csv', parse_dates=['Month'], index_col='Month')&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We can then inspect the first few rows of the DataFrame using the head function:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;print(data.head())&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will output:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2X41cTWsMZNTFnbrtyOXkH/682ec3f8d07e9358daa8cddf66f42464/output.png" alt="output" /&gt;&lt;/p&gt;

&lt;h4 id="visualizing-the-data"&gt;Visualizing the Data&lt;/h4&gt;

&lt;p&gt;Before we apply simple exponential smoothing to the data, let’s visualize it to get a better understanding of its properties. We can use the &lt;strong&gt;plot&lt;/strong&gt; function of pandas to create a line plot of the data:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;plt.plot(data)
plt.xlabel('Year')
plt.ylabel('Number of Passengers')
plt.show()&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will produce a plot of the number of passengers over time:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5AatnC33yN99xVDsGExTKJ/f0173855174542b58947f08fd17a635f/plot_of_the_number_of_passengers_over_time.png" alt="plot of the number of passengers over time" /&gt;&lt;/p&gt;

&lt;p&gt;We can see that the number of passengers appears to be increasing, with some seasonality as well.&lt;/p&gt;

&lt;h4 id="performing-ses"&gt;Performing SES&lt;/h4&gt;

&lt;p&gt;Now that we’ve loaded and visualized the data, we can perform simple exponential smoothing using the &lt;strong&gt;SimpleExpSmoothing&lt;/strong&gt; function from the &lt;strong&gt;statsmodels&lt;/strong&gt; library. Then, we’ll create an instance of the &lt;strong&gt;SimpleExpSmoothing&lt;/strong&gt; class, passing in the data as an argument, and then fit the model to the data using the &lt;strong&gt;fit&lt;/strong&gt; method:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;model = SimpleExpSmoothing(data)
model_fit = model.fit(optimized=True)
print(model_fit.summary())&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will calculate the smoothing parameters and fit the model to the data. The &lt;code class="language-markup"&gt;optimized=True&lt;/code&gt; parameter allows the model to find the best smoothing parameter (α) for your data.&lt;/p&gt;

&lt;h4 id="making-predictions"&gt;Making Predictions&lt;/h4&gt;

&lt;p&gt;Finally, we can use the &lt;strong&gt;forecast&lt;/strong&gt; method to predict future values of the time series, where the argument specifies the number of periods to forecast.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;forecast = model_fit.forecast(6)
print(forecast)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will produce a forecast for the next six months:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2dnsSkxm1P2vIf4uVmeCaD/24e19b9c94581bba7d451b9a4ab461b8/forecast_for_the_next_six_months.png" alt="forecast for the next six months" /&gt;&lt;/p&gt;

&lt;p&gt;Because SES models neither trend nor seasonality, the forecast is flat: approximately 432 airline passengers for each of the next six months.&lt;/p&gt;

&lt;h2 id="wrapping-up"&gt;Wrapping up&lt;/h2&gt;

&lt;p&gt;Exponential smoothing is a powerful method for time series forecasting that allows for accurate predictions of future values based on past observations. It’s a simple, efficient method that can be used for a wide range of time series data.&lt;/p&gt;

&lt;p&gt;In this post, we provided a beginner’s guide to exponential smoothing, explaining the basics of the method, its types, and how to use it for forecasting. However, there are more advanced techniques and approaches to discover beyond the scope of this post.&lt;/p&gt;

&lt;p&gt;If you’re interested in learning more, be sure to check out this &lt;a href="https://www.influxdata.com/blog/python-time-series-forecasting-tutorial/"&gt;Python time series forecasting tutorial&lt;/a&gt;. It’s a great resource that can help you deepen your understanding of this topic and take your forecasting skills to the next level with &lt;a href="https://www.influxdata.com/get-influxdb/"&gt;InfluxDB&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="exponential-smoothing-faqs"&gt;Exponential smoothing FAQs&lt;/h2&gt;

&lt;div id="accordion_second"&gt;
    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-1"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What is the smoothing parameter α in exponential smoothing and how do I choose it?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-1" class="message-body is-collapsible is-active" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                The smoothing parameter α (alpha) controls how much weight is given to the most recent observation versus older data. A value close to 1 makes the forecast highly responsive to the latest data point, while a value close to 0 places more emphasis on the historical average. In practice, you don't need to set α manually — libraries like Python's statsmodels can automatically optimize α by minimizing forecast error when you use the &lt;code&gt;optimized=True&lt;/code&gt; parameter.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-2"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What is the difference between simple, double, and triple exponential smoothing?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-2" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Simple exponential smoothing (SES) works best for stationary data with no trend or seasonality, using a single smoothing parameter α. Double exponential smoothing (Holt's method) adds a second parameter β to handle data with a linear trend. Triple exponential smoothing (Holt-Winters' method) adds a third parameter γ to also capture seasonal patterns, making it the most accurate of the three but also the most computationally demanding.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-3"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;How do I know which exponential smoothing method to use for my data?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-3" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Start by plotting your time series and examining its structure. If the data is relatively flat with no clear upward/downward drift or repeating seasonal cycles, use Simple Exponential Smoothing. If there's a consistent trend but no seasonality, use Holt's linear method. If both a trend and seasonal pattern are visible — such as sales data that peaks every holiday quarter — use Holt-Winters' method. The key is to visually inspect your data before selecting a model.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-4"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;Is exponential smoothing the same as a moving average?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-4" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                They are related but not the same. A simple moving average gives equal weight to all observations in the window and completely ignores data outside it. Exponential smoothing, on the other hand, uses all historical data but assigns weights that decrease exponentially with age — meaning the most recent observations have the greatest influence. This makes exponential smoothing more adaptive to recent changes than a standard moving average.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-5"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What Python libraries can I use for exponential smoothing?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-5" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                The most commonly used library is &lt;code&gt;statsmodels&lt;/code&gt;, which provides &lt;code&gt;SimpleExpSmoothing&lt;/code&gt; for SES and &lt;code&gt;ExponentialSmoothing&lt;/code&gt; for Holt's and Holt-Winters' methods. You'll also typically use &lt;code&gt;pandas&lt;/code&gt; for loading and preparing your time series data and &lt;code&gt;matplotlib&lt;/code&gt; for visualizing it. All three can be installed via pip and work well together for end-to-end forecasting workflows in Python.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-6"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;What are the limitations of exponential smoothing?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-6" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Exponential smoothing is best suited for short to medium-term forecasting and assumes that future patterns will resemble past ones. It struggles with abrupt structural changes in the data, such as sudden market shifts or one-time events, because the model adapts gradually. It also doesn't capture complex non-linear relationships between multiple variables. For longer-horizon forecasts or multivariate scenarios, models like ARIMA, SARIMA, or machine learning-based approaches may be more appropriate.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-7"&gt;
            &lt;div class="message-header"&gt;
                &lt;h3&gt;How accurate are exponential smoothing forecasts?&lt;/h3&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-7" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Forecast accuracy depends on how well the chosen method matches your data's structure. When the right variant is applied — SES for stationary data, Holt's for trended data, Holt-Winters' for seasonal data — exponential smoothing consistently delivers strong results for short to medium-term horizons. Accuracy also improves when smoothing parameters are automatically optimized to minimize error on your historical data, as supported by statsmodels' &lt;code&gt;optimized=True&lt;/code&gt; option.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

&lt;/div&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by Israel Oyetunji. &lt;a href="https://twitter.com/israelmitolu"&gt;Israel&lt;/a&gt; is a frontend developer with a knack for creating engaging UI and interactive experiences. He has proven experience developing consumer-focused websites using HTML, CSS, JavaScript, ReactJS, SASS, and relevant technologies. He loves writing about tech and creating how-to tutorials for developers.&lt;/em&gt;&lt;/p&gt;
</description>
      <pubDate>Tue, 03 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/exponential-smoothing-beginners-guide/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/exponential-smoothing-beginners-guide/</guid>
      <category>Developer</category>
      <author>Community (InfluxData)</author>
    </item>
    <item>
      <title>Building with the InfluxDB 3 MCP Server &amp; Claude</title>
<description>&lt;p&gt;The InfluxDB 3 &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/mcp-server/"&gt;Model Context Protocol (MCP) server&lt;/a&gt; lets you manage and query InfluxDB 3 (Core, Enterprise, Dedicated, Serverless, Clustered) using natural language through popular LLM tools like Claude Desktop, ChatGPT Desktop, and other MCP-compatible agents.&lt;/p&gt;

&lt;p&gt;The setup is straightforward. In this article, we will focus on &lt;strong&gt;setting up InfluxDB 3 Enterprise&lt;/strong&gt; using Docker with &lt;strong&gt;Claude Desktop&lt;/strong&gt;.&lt;/p&gt;

&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;Install InfluxDB 3 Enterprise using Docker (if you’re a new user, try out our &lt;a href="https://www.influxdata.com/lp/influxdb-database/?utm_source=website&amp;amp;utm_medium=influxdb_3_mcp_server_claude&amp;amp;utm_content=blog"&gt;free trial&lt;/a&gt;) on your machine by running the installer script:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;curl -O https://www.influxdata.com/d/install_influxdb3.sh &amp;amp;&amp;amp; sh install_influxdb3.sh enterprise&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;The InfluxDB 3 Explorer UI also makes it easier to manage InfluxDB operations, so we recommend &lt;a href="https://docs.influxdata.com/influxdb3/explorer/install/#installation-methods"&gt;installing&lt;/a&gt; it (using Docker) during initial setup or afterwards.&lt;/p&gt;

&lt;h2 id="create-an-influxdb-3-token-for-mcp-server"&gt;1. Create an InfluxDB 3 token for MCP server&lt;/h2&gt;

&lt;p&gt;The easiest way to create a scoped token is within &lt;strong&gt;InfluxDB 3 Explorer&lt;/strong&gt; UI.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Open Explorer at http://localhost:8888.&lt;/li&gt;
  &lt;li&gt;Go to &lt;strong&gt;Manage Tokens&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;Create a &lt;strong&gt;Database (resource) token&lt;/strong&gt; with &lt;strong&gt;read&lt;/strong&gt; (and optional write) &lt;strong&gt;permissions&lt;/strong&gt; for the databases you want your LLM to access.&lt;/li&gt;
  &lt;li&gt;Copy the token string and store it securely; the MCP server will use it as &lt;code class="language-markup"&gt;INFLUX_DB_TOKEN&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Alternatively, you can run the following command inside a Docker container to create the token.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;docker exec -it YOUR_CONTAINER_ID influxdb3 create token \
  --permission "db:DATABASE1,DATABASE2:read,write" \
  --name "Read-write on DATABASE1, DATABASE2" \
  --token YOUR_ADMIN_TOKEN \
  --expiry 1y&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Tip&lt;/strong&gt;: Use resource tokens with the minimum required permissions and an expiration date, rather than providing a full admin token to the LLM MCP.&lt;/p&gt;

&lt;h2 id="configure-the-claude-desktop-mcp-server-docker-for-influxdb-3-enterprise"&gt;2. Configure the Claude Desktop MCP server (Docker) for InfluxDB 3 Enterprise&lt;/h2&gt;

&lt;p&gt;The InfluxDB 3 MCP server runs as a separate service and can be started using either &lt;a href="https://nodejs.org/en/download/current"&gt;Node.js&lt;/a&gt; or Docker. We will use Docker, as it’s already running InfluxDB 3 and Explorer UI.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Open Claude Desktop.&lt;/li&gt;
  &lt;li&gt;Navigate to Settings → Developers → Edit Config.&lt;/li&gt;
  &lt;li&gt;Open the Claude Desktop configuration file, add the following to the existing file, save, and restart Claude Desktop.&lt;/li&gt;
&lt;/ol&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-json"&gt;{
  "mcpServers": {
    "influxdb": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "--interactive",
        "--add-host=host.docker.internal:host-gateway",
        "--env",
        "INFLUX_DB_PRODUCT_TYPE",
        "--env",
        "INFLUX_DB_INSTANCE_URL",
        "--env",
        "INFLUX_DB_TOKEN",
        "influxdata/influxdb3-mcp-server"
      ],
      "env": {
        "INFLUX_DB_PRODUCT_TYPE": "enterprise",
        "INFLUX_DB_INSTANCE_URL": "http://host.docker.internal:8181",
        "INFLUX_DB_TOKEN": "YOUR_RESOURCE_TOKEN"
      }
    }
  }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/W8sa9IGq6VeCO5bsc8FTl/976bc915980cadadc4b9411e6c9d2522/Claude_desktop_1.jpg" alt="Claude desktop 1" /&gt;&lt;/p&gt;

&lt;h2 id="use-claude-with-influxdb-via-mcp"&gt;3. Use Claude with InfluxDB via MCP&lt;/h2&gt;

&lt;p&gt;Once Claude Desktop restarts, verify that it can access the InfluxDB 3 MCP server by chatting with it.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3copMcWux1iw9eHUigdSao/7de96f342fc2143eabb9cc76e74efd85/Claude_desktop_2.jpg" alt="Claude desktop 2" /&gt;&lt;/p&gt;

&lt;p&gt;Finally, you can interact with the database however you’d like using natural language, from performing operations to pulling analytics. Try the following prompts:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;“List all the databases and permissions you have access to.”&lt;/li&gt;
  &lt;li&gt;“Show me the schema for the &lt;code class="language-markup"&gt;sensor_data&lt;/code&gt; table.”&lt;/li&gt;
  &lt;li&gt;“Analyze the bitcoin price in the sample data over the last 30 days.” You can also see the actual SQL query that gets executed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5tIqhG4xebUYR6CL1bZnVx/08e82f4f590945ea5c77fecce7234ee7/Claude_desktop_3.jpg" alt="Claude desktop 3" /&gt;&lt;/p&gt;

&lt;h2 id="connecting-other-llms"&gt;Connecting other LLMs&lt;/h2&gt;

&lt;p&gt;In this article, we used Claude Desktop, but the InfluxDB 3 MCP server itself is generic: any LLM agent that supports the Model Context Protocol, such as ChatGPT Desktop, can be used. In a follow-up article, we’ll cover how to run the MCP server and an LLM locally using other tools. We would love to hear your comments and questions on our community &lt;a href="https://community.influxdata.com"&gt;website&lt;/a&gt;, &lt;a href="https://www.influxdata.com/slack"&gt;Slack&lt;/a&gt;, or &lt;a href="https://discord.gg/YFFJvkfb"&gt;Discord&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Fri, 30 Jan 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/influxdb-3-mcp-server-claude/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/influxdb-3-mcp-server-claude/</guid>
      <category>Developer</category>
      <author>Suyash Joshi (InfluxData)</author>
    </item>
    <item>
      <title>Getting Started with InfluxDB and Pandas: A Beginner's Guide</title>
      <description>&lt;p&gt;InfluxData prides itself on prioritizing developer happiness. A key ingredient to that formula is providing client libraries that let users interact with the database in their chosen language and library. Data analysis is the task most broadly associated with Python use cases, accounting for 58% of Python tasks, so it makes sense that &lt;a href="https://www.jetbrains.com/research/python-developers-survey-2018/"&gt;Pandas is the second most popular library for Python users&lt;/a&gt;. The InfluxDB 3 Python client library supports Pandas DataFrames, making it easy for data scientists to use InfluxDB.&lt;/p&gt;

&lt;p&gt;In this tutorial, we’ll learn how to query our InfluxDB instance and return the data as a DataFrame. We’ll also explore some data science resources included in the Client &lt;a href="https://github.com/InfluxCommunity/influxdb3-python"&gt;repo&lt;/a&gt;. To learn about how to get started with the InfluxDB 3 Python client library, please take a look at this &lt;a href="https://www.youtube.com/watch?v=tpdONTm1GC8"&gt;video&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1d5h3Jhciel0jmTuowdh41/b40b010caa9dc6b93377d04e6a41e734/pandas-influxdb.jpg" alt="Pandas Query: Getting Started with InfluxDB and Pandas | InfluxData" /&gt;&lt;/p&gt;
&lt;p style="text-align: center;"&gt;Me eagerly consuming Pandas and InfluxDB Documentation. Photo by Sid Balachandran on Unsplash.&lt;/p&gt;

&lt;h2 id="data-science-resources"&gt;Data science resources&lt;/h2&gt;

&lt;p&gt;A variety of data science resources have been included in the InfluxDB Python client repo to help you take advantage of the Pandas functionality of the client. I encourage you to take a look at the &lt;a href="https://github.com/InfluxCommunity/influxdb3-python/tree/main/Examples"&gt;example notebooks&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="dependencies"&gt;Dependencies&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;pyarrow (installed automatically with influxdb3-python)&lt;/li&gt;
  &lt;li&gt;pandas&lt;/li&gt;
  &lt;li&gt;certifi (if you are using Windows)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="installations"&gt;Installations&lt;/h2&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;pip install influxdb3-python pandas certifi&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="import-dependencies"&gt;Import Dependencies&lt;/h2&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;
import influxdb_client_3 as influxDBclient3
import pandas as pd
Import certifi #if you are on Windows
from influxdb_client_3 import flight_client_options&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="initialization"&gt;Initialization&lt;/h2&gt;

&lt;h4 id="direct-initialization"&gt;Direct Initialization&lt;/h4&gt;

&lt;p&gt;Take note that the “&lt;strong&gt;database&lt;/strong&gt;” argument in the function is the bucket name if you are using InfluxDB Cloud:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;client = InfluxDBClient3(token="your-token",
                         host="your-host",
                         database="your-database or your bucket name")&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="for-windows-users"&gt;For Windows Users&lt;/h4&gt;

&lt;p&gt;Include certifi within the “&lt;strong&gt;flight_client_options&lt;/strong&gt;” argument within the client initialization to fix certificate issues:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;with open(certifi.where(), "r") as fh:
    cert = fh.read()
client = InfluxDBClient3(token="your-token",
                         host="your-host",
                         database="your-database or your bucket name”,
flight_client_options=flight_client_options(tls_root_certs=cert)&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="prepare-a-pandas-dataframe"&gt;Prepare a pandas Dataframe&lt;/h2&gt;

&lt;p&gt;Let’s use simple weather data:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;
# Example weather data
df = pd.DataFrame({
    "timestamp": pd.date_range("2025-09-01", periods=4, freq="h", tz="UTC"),
    "city": ["Lagos", "Illinois", "Chicago", "Abuja"],
    "temperature": [30.5, 15, 16, 32],
    "humidity": [20, 10, 10, 19]
})

# ensure timestamp dtype is datetime64[ns] and (optionally) timezone-aware
df['timestamp'] = pd.to_datetime(df['timestamp'], utc=True)
print(df.head())&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4DCmuV3jkjxRNPXnwnV2PL/3208f398b107a6054eacb0847c981e38/Screenshot_2026-01-16_at_11.25.42Ã___AM.png" alt="Pandas table 1" /&gt;&lt;/p&gt;
&lt;p style="text-align: center;"&gt; Weather timestamp data.&lt;/p&gt;

&lt;h2 id="write-the-pandas-dataframe-to-influxdb"&gt;Write the pandas Dataframe to InfluxDB&lt;/h2&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;
client._write_api.write(
    bucket="my_bucket",
    record=df,
    data_frame_measurement_name="weather",
    data_frame_tag_columns=["city", "temperature"],
    data_frame_timestamp_column="timestamp"
)
print("DataFrame written to bucket=new-test-bucket, measurement=weather")&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Below is confirmation in the InfluxDB Cloud UI that the Pandas DataFrame was successfully written to the bucket.
&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/23svEC9jVURIZ1AZR6Yjun/acc25ac179927a0f58a8649fa1d767f6/Screenshot_2026-01-16_at_10.04.51â__AM.png" alt="Pandas DataFrame" /&gt;&lt;/p&gt;
&lt;p style="text-align: center;"&gt;   Pandas Dataframe written to InfluxDB.&lt;/p&gt;

&lt;h2 id="query-influxdb-and-return-a-pandas-dataframe"&gt;Query InfluxDB and return a Pandas DataFrame&lt;/h2&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;query = "SELECT * FROM weather"
table = client.query(query=query, language="influxql")
result_df = table.to_pandas()

print("Loading from InfluxDB:")
print(result_df.head())&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1KQGAVtQYLnxaKnDpq5m6n/5a4e222723596e843fd31b04e729c534/Screenshot_2026-01-16_at_9.59.00_AM.png" alt="Pandas table" /&gt;
                                  Returning a Pandas DataFrame from InfluxDB&lt;/p&gt;

&lt;h2 id="start-building-with-influxdb-and-pandas"&gt;Start building with InfluxDB and Pandas&lt;/h2&gt;

&lt;p&gt;InfluxDB makes it easy to integrate with your existing &lt;a href="https://www.influxdata.com/time-series-analysis-methods/"&gt;data analysis&lt;/a&gt; tools and frameworks, such as Pandas, to get insights from your time series data. Under the hood, you get the benefits of Apache Arrow for fast data transfers into Pandas DataFrames without any performance hits. Whether you’re building dashboards, machine learning models, or just exploring your metrics, combining InfluxDB 3 with Pandas gives you the best of both worlds in terms of performance and developer experience. As always, if you run into hurdles, please share them on our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=getting_started_with_influxdb_and_pandas&amp;amp;utm_content=blog"&gt;community site&lt;/a&gt; or &lt;a href="https://influxdata.com/slack/?utm_source=website&amp;amp;utm_medium=getting_started_with_influxdb_and_pandas&amp;amp;utm_content=blog"&gt;Slack&lt;/a&gt; channel. We’d love to get your feedback and help with any problems you run into.&lt;/p&gt;
</description>
      <pubDate>Tue, 27 Jan 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/getting-started-with-influxdb-and-pandas/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/getting-started-with-influxdb-and-pandas/</guid>
      <category>Developer</category>
      <category>Getting Started</category>
      <author>Community (InfluxData)</author>
    </item>
    <item>
      <title>From Monitoring Signals to Observability Maturity</title>
      <description>&lt;p&gt;Efficient monitoring delivers fast results: alerts fire within seconds, dashboards refresh continuously, and teams know the moment something changes.&lt;/p&gt;

&lt;p&gt;Understanding arrives later. An alert may show that a value shifted, but it does not explain why it shifted, how far the impact will spread, or which components truly matter. Teams see the signal, not the system behavior behind it.&lt;/p&gt;

&lt;p&gt;This gap defines the limit of traditional monitoring. Detection has improved, but explanation has not kept pace. As environments grow more interconnected, reporting change without context leaves teams reacting instead of understanding. Mature monitoring must explain behavior and impact, not just surface signals.&lt;/p&gt;

&lt;h2 id="when-context-falls-apart"&gt;When context falls apart&lt;/h2&gt;

&lt;p&gt;Without context, change is difficult to interpret. An alert may confirm that something happened, but it rarely explains which dependencies influenced the behavior, how far the impact might spread, or which components are truly affected. Dashboards emphasize individual services or resources, while alerts trigger based on local thresholds instead of system-wide impact. Teams see the signal, but not the behavior behind it.&lt;/p&gt;

&lt;p&gt;In most environments, the missing context lives elsewhere. Dependency information is scattered across configuration files, infrastructure tools, and service catalogs. Ownership and escalation paths live in runbooks. Historical relationships are reconstructed during incidents through manual analysis or ad hoc queries. This separation creates &lt;a href="https://www.influxdata.com/blog/breaking-data-silos-influxdb-3/#heading0"&gt;data silos&lt;/a&gt;, forcing teams to stitch together metrics, metadata, and system structure to gain a comprehensive view of what is actually happening.&lt;/p&gt;

&lt;h4 id="the-cost-of-fragmented-visibility"&gt;The Cost of Fragmented Visibility&lt;/h4&gt;

&lt;p&gt;As environments scale, fragmentation becomes expensive. Root cause analysis slows as engineers trace upstream and downstream impact across multiple tools. &lt;a href="https://www.influxdata.com/blog/preventing-alert-storms-influxdb3/"&gt;Alert fatigue&lt;/a&gt; increases when signals cannot be evaluated against real dependencies. Mean time to resolution grows as teams spend more effort assembling context than resolving the issue. Even when an &lt;a href="https://www.influxdata.com/glossary/anomaly-detection/"&gt;anomaly&lt;/a&gt; is detected quickly, understanding why it occurred and what it affects often arrives too late to prevent broader disruption.&lt;/p&gt;

&lt;p&gt;The impact extends beyond incident response. Capacity planning becomes less reliable when demand shifts propagate through systems that teams cannot easily trace. SLO and SLA tracking lose precision when alerts lack impact awareness. Automation remains cautious or brittle because signals do not consistently reflect the true system state. What begins as a context gap turns into operational overhead, engineering toil, and inconsistent customer experience. Closing the gap between detection and understanding requires monitoring to evolve beyond reporting change and toward explaining system behavior and impact.&lt;/p&gt;

&lt;h2 id="from-signals-to-system-understanding"&gt;From signals to system understanding&lt;/h2&gt;

&lt;p&gt;When monitoring evolves beyond reporting signals, it gives teams the context needed to understand system behavior and impact. &lt;a href="https://www.influxdata.com/what-is-observability/"&gt;Observability&lt;/a&gt; maturity shifts monitoring from answering when something changed to explaining why it changed and what it affects. Signals no longer arrive as isolated data points; they are interpreted within the system that produced them.&lt;/p&gt;

&lt;p&gt;With this context, teams can assess impact as soon as a signal appears. A latency spike is not just a breached threshold. It shows how activity in one component influences others, which dependencies are involved, and whether the change represents localized noise or broader risk. This perspective supports faster, more proportional responses and reduces unnecessary remediation.&lt;/p&gt;

&lt;h4 id="when-signals-gain-meaning"&gt;When Signals Gain Meaning&lt;/h4&gt;

&lt;p&gt;As monitoring practices mature, investigations become more focused and efficient. Teams spend less time assembling dashboards or reconciling data across tools. Signals are evaluated alongside related components, making root cause analysis more transparent and reducing the effort required to identify contributing factors. Mean time to resolution (MTTR) improves because understanding arrives earlier in the response cycle.&lt;/p&gt;

&lt;p&gt;Observability maturity also strengthens day-to-day operations. Historical telemetry reveals patterns that inform capacity planning, SLO and SLA management, and reliability goals. Alerting becomes more effective when signals are evaluated in context rather than isolation, helping reduce alert fatigue. Automation becomes safer to trust because actions reflect a clearer view of system state and impact.&lt;/p&gt;

&lt;p&gt;In this model, monitoring supports confident decision-making. Teams move away from reactive firefighting and toward proactive operations, using telemetry not only to detect change, but to understand how systems behave as environments grow more interconnected.&lt;/p&gt;

&lt;h2 id="how-observability-maturity-becomes-possible"&gt;How observability maturity becomes possible&lt;/h2&gt;

&lt;p&gt;Observability maturity depends on a platform that can ingest, store, and analyze telemetry within a unified execution environment. Metrics, events, and time series data must flow through the same data paths so teams can correlate change across systems rather than reconstructing context through downstream tooling or manual analysis.&lt;/p&gt;

&lt;p&gt;By unifying ingestion and querying for metrics, events, and telemetry, InfluxDB 3 provides a time series–first observability platform that supports infrastructure, applications, &lt;a href="https://www.influxdata.com/glossary/edge-computing/"&gt;edge deployments&lt;/a&gt;, and industrial systems through a single data model.&lt;/p&gt;

&lt;h2 id="a-unified-telemetry-foundation"&gt;A Unified Telemetry Foundation&lt;/h2&gt;

&lt;p&gt;Modern environments generate large volumes of time-stamped data with rapidly changing dimensions. Supporting observability maturity requires handling &lt;a href="https://www.influxdata.com/glossary/cardinality/"&gt;high-cardinality&lt;/a&gt; time series data without degrading ingest performance or query latency.&lt;/p&gt;

&lt;p&gt;InfluxDB 3 is built on a columnar analytics stack using &lt;a href="https://www.influxdata.com/glossary/apache-arrow/"&gt;Apache Arrow&lt;/a&gt; for in-memory execution and &lt;a href="https://www.influxdata.com/glossary/apache-parquet/"&gt;Parquet&lt;/a&gt; for durable, compressed storage. Telemetry flows through a single ingest path and is stored in a format optimized for analytical access, allowing recent signals and long-term history to be queried through the same interface. This design lets teams analyze live behavior, compare it to historical baselines, and identify trends without maintaining parallel storage systems or export pipelines.&lt;/p&gt;

&lt;h4 id="scale-without-fragmentation"&gt;Scale Without Fragmentation&lt;/h4&gt;

&lt;p&gt;As telemetry volume increases, many organizations separate ingestion, storage, and analysis into different systems. While this can address isolated scaling concerns, it fragments execution paths and makes correlation harder over time. Signals, metadata, and historical context drift into separate layers, increasing query complexity and slowing investigation.&lt;/p&gt;

&lt;p&gt;InfluxDB 3 avoids fragmentation by keeping telemetry, metadata, and related observations within a single execution environment. Queries are planned and executed through a unified SQL engine built on &lt;a href="https://www.influxdata.com/glossary/apache-datafusion/"&gt;DataFusion&lt;/a&gt;, allowing joins, filters, and aggregations to run across live and historical data without external synchronization or &lt;a href="https://www.influxdata.com/glossary/etl/"&gt;ETL&lt;/a&gt;. This preserves consistency as environments grow and keeps analysis close to the data.&lt;/p&gt;
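
&lt;p&gt;As a minimal sketch of what this looks like in practice, the query below runs one SQL statement over both recent and historical rows through the InfluxDB 3 Python client; the host, token, database, table, and column names are illustrative assumptions:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from influxdb_client_3 import InfluxDBClient3

# One client, one query path for live and historical telemetry
client = InfluxDBClient3(host="your-host", token="your-token", database="telemetry")

# Compare the last hour's average latency with a 30-day baseline in a
# single DataFusion-executed statement; no export or ETL step required
sql = """
SELECT
  avg(CASE WHEN time &gt;= now() - INTERVAL '1 hour' THEN latency_ms END) AS last_hour,
  avg(latency_ms) AS last_30_days
FROM http_requests
WHERE time &gt;= now() - INTERVAL '30 days'
"""
print(client.query(sql, language="sql").to_pandas())&lt;/code&gt;&lt;/pre&gt;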

&lt;h4 id="open-integration-and-interoperability"&gt;Open Integration and Interoperability&lt;/h4&gt;

&lt;p&gt;Observability maturity builds on existing tools rather than replacing them overnight. Telemetry must move easily between collectors, visualization layers, automation systems, and analytics workflows. Open interfaces make this possible without forcing teams into proprietary paths.&lt;/p&gt;

&lt;p&gt;InfluxDB 3 provides open &lt;a href="https://www.influxdata.com/solutions/apis/?utm_source=website&amp;amp;utm_medium=observability_maturity_influxdb_3&amp;amp;utm_content=blog"&gt;APIs&lt;/a&gt; and a broad integration ecosystem, allowing telemetry to flow freely between systems while maintaining a shared source of truth. Data is stored in scalable object storage using columnar formats, supporting long retention and elastic growth without changing query behavior or operational workflows.&lt;/p&gt;

&lt;h4 id="analysis-close-to-the-data"&gt;Analysis Close to the Data&lt;/h4&gt;

&lt;p&gt;As observability practices advance, analysis increasingly runs alongside ingestion and storage. Executing queries where data lives reduces latency and avoids inconsistencies introduced by exporting telemetry to downstream systems.&lt;/p&gt;

&lt;p&gt;By executing analytics within the same Arrow-based environment that stores the data, InfluxDB 3 supports correlation, pattern analysis, and advanced workflows without adding architectural layers. Aligning ingestion, storage, and analysis in a single platform provides the technical foundation for monitoring practices to mature into observability at scale.&lt;/p&gt;

&lt;h2 id="from-monitoring-signals-to-monitoring-nirvana"&gt;From monitoring signals to monitoring nirvana&lt;/h2&gt;

&lt;p&gt;Detecting change is no longer the challenge. Interpreting what that change means across an interconnected system is. As environments grow more complex, observability maturity depends on the ability to connect telemetry with context and history so teams can understand behavior, impact, and progression rather than reacting to isolated signals.&lt;/p&gt;

&lt;p&gt;InfluxDB 3 makes this possible by bringing ingestion, storage, and analysis together in a single platform. With telemetry flowing through one execution path, teams maintain consistent context as systems scale. This reduces investigative friction, shortens time to insight, and gives teams the confidence to operate and automate in dynamic environments.&lt;/p&gt;

&lt;h2 id="get-started"&gt;Get started&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/lp/influxdb-signup/?utm_source=website&amp;amp;utm_medium=observability_maturity_influxdb_3&amp;amp;utm_content=blog"&gt;Try InfluxDB for free&lt;/a&gt;: Launch a fully-managed instance and see how modern monitoring works in your environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.influxdata.com/?utm_source=website&amp;amp;utm_medium=observability_maturity_influxdb_3&amp;amp;utm_content=blog"&gt;Explore documentation&lt;/a&gt;: Access guides, integrations, and examples to help you connect systems and build monitoring pipelines.&lt;/p&gt;
</description>
      <pubDate>Thu, 22 Jan 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/observability-maturity-influxdb-3/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/observability-maturity-influxdb-3/</guid>
      <category>Developer</category>
      <author>Allyson Boate (InfluxData)</author>
    </item>
    <item>
      <title>A New Way to Debug Query Performance in Cloud Dedicated</title>
      <description>&lt;p&gt;I’d like to share a new &lt;code class="language-markup"&gt;influxctl&lt;/code&gt; ease-of-use feature in &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=new_query_tuning_feature_influxdb&amp;amp;utm_content=blog"&gt;v2.12.0&lt;/a&gt; that makes it easier to optimize important queries or debug slow ones. &lt;code class="language-markup"&gt;influxctl&lt;/code&gt; has had the capability to send queries and display the results in JSON or tabular formats for some time. (Note: this CLI utility is specific to Cloud Dedicated and Clustered, as are many of the specifics in this post.) In Clustered, you can monitor querier pods’ logs, and in both Dedicated and Clustered, &lt;a href="https://docs.influxdata.com/influxdb3/cloud-dedicated/admin/query-system-data/#query-logs"&gt;metrics on individual queries’ performance can be found in the system tables&lt;/a&gt;. Both of those options offer a lot of data—enough that it can be hard to digest quickly. Additionally, associating a single execution of a query to its log entry is tedious. A new feature, the &lt;code class="language-markup"&gt;--perf-debug&lt;/code&gt; flag for the &lt;code class="language-markup"&gt;influxctl&lt;/code&gt; query command (&lt;a href="https://docs.influxdata.com/influxdb3/cloud-dedicated/reference/release-notes/influxctl/#2120"&gt;release notes&lt;/a&gt;), accelerates the experimentation cycle by providing real-time feedback, allowing you to stay in the context of your shell as you tweak your query.&lt;/p&gt;

&lt;h2 id="sample-output"&gt;Sample output&lt;/h2&gt;

&lt;p&gt;The new flag, &lt;code class="language-markup"&gt;--perf-debug&lt;/code&gt;, will execute a query, collect and discard the results, and emit execution metrics instead. When &lt;code class="language-markup"&gt;--format&lt;/code&gt; is omitted, output defaults to a tabular format with units dynamically chosen for human readability. In the second execution below, &lt;code class="language-markup"&gt;--format json&lt;/code&gt; is specified to emit a data format appropriate for programmatic consumption: in a nod to the querier log, it uses keys with shorter variable names, delimits words with underscores, and sports consistent units (bytes, seconds as a float).&lt;/p&gt;

&lt;p&gt;In the tabular format, you can also see a demarcation between client and server metrics.&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;$ influxctl query --perf-debug --token REDACTED --database reidtest3 --language influxql "SELECT SUM(i), non_negative_difference(SUM(i)) as diff_i FROM data WHERE time &amp;gt; '2025-11-07T01:20:00Z' AND time " '2025-11-07T03:00:00Z' AND runid = '540cd752bb6411f0a23e30894adea878' GROUP BY time(5m)"
+--------------------------+----------+
| Metric                   | Value    |
+--------------------------+----------+
| Client Duration          | 1.222 s  |
| Output Rows              | 20       |
| Output Size              | 647 B    |
+--------------------------+----------+
| Compute Duration         | 37.2 ms  |
| Execution Duration       | 243.8 ms |
| Ingester Latency Data    | 0        |
| Ingester Latency Plan    | 0        |
| Ingester Partition Count | 0        |
| Ingester Response        | 0 B      |
| Ingester Response Rows   | 0        |
| Max Memory               | 70 KiB   |
| Parquet Files            | 1        |
| Partitions               | 1        |
| Planning Duration        | 9.6 ms   |
| Queue Duration           | 286.6 µs |
+--------------------------+----------+

$ influxctl query --perf-debug --format json --token REDACTED --database reidtest3 --language influxql "SELECT SUM(i), non_negative_difference(SUM(i)) as diff_i FROM data WHERE time &gt; '2025-11-07T01:20:00Z' AND time &lt; '2025-11-07T03:00:00Z' AND runid = '540cd752bb6411f0a23e30894adea878' GROUP BY time(5m)"
{
  "client_duration_secs": 1.101,
  "compute_duration_secs": 0.037,
  "execution_duration_secs": 0.247,
  "ingester_latency_data": 0,
  "ingester_latency_plan": 0,
  "ingester_partition_count": 0,
  "ingester_response_bytes": 0,
  "ingester_response_rows": 0,
  "max_memory_bytes": 71744,
  "output_bytes": 647,
  "output_rows": 20,
  "parquet_files": 1,
  "partitions": 1,
  "planning_duration_secs": 0.009,
  "queue_duration_secs": 0
}&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="notes"&gt;Notes&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Client duration&lt;/strong&gt; includes the time to open the connection to the server. In the example, you can see a big delta between that and the server’s total duration. When I ran this command, my client and database server were not colocated. Additionally, &lt;code class="language-markup"&gt;influxctl&lt;/code&gt; may not be tuned for optimal connection latency. Your native client probably caches connections and might not suffer this latency. When tuning your query, it’s more important to look at the durations recorded by the server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Output size&lt;/strong&gt; is the size in Arrow format in memory, after gzip inflation (if client and server agree on compression), so this metric does not report the bytes transferred. The network bytes transferred might be more useful, so that’s a potential future enhancement. However, the current value still works as a relative measure for comparing different queries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ingester metrics&lt;/strong&gt; are zeroed out if the ingester has no partitions with unpersisted data matching the query. In Serverless, Dedicated, and Clustered, queries always consult ingesters, so a 0 in ingester latency doesn’t mean the ingesters were skipped; it means they had no matching unpersisted data to return.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parquet files&lt;/strong&gt; indicates how many files were traversed for the query. However, if the query was optimized by a ProgressiveEvalExec plan (typically simple sorted LIMIT queries without aggregations; verify with &lt;a href="https://docs.influxdata.com/influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/analyze-query-plan/"&gt;EXPLAIN ANALYZE&lt;/a&gt;), this value may not be useful: it is calculated during planning, so it reflects the potential number of files to be accessed for the time range, not the actual number accessed before reaching the LIMIT. For most queries, this metric is a handy indicator. The query log also contains a related metric, &lt;code class="language-markup"&gt;deduplicated_parquet_files&lt;/code&gt;, which tells us how many of the files had overlapping time ranges, requiring the querier to merge/sort/deduplicate data. It’s normal to have a few such files at the leading edge, but this operation becomes a serious bottleneck if too much data needs to be deduplicated (managing this problem is the main responsibility of the compactor).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Query durations vary&lt;/strong&gt;, so execute a query several times (and at different times of day) to get a sense of the variation, as in the sketch below.&lt;/p&gt;
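
&lt;p&gt;Here is a minimal sketch (in Python, with a placeholder token, database, and query mirroring the sample above) that shells out to &lt;code class="language-markup"&gt;influxctl&lt;/code&gt; repeatedly and summarizes the &lt;code class="language-markup"&gt;execution_duration_secs&lt;/code&gt; values from the JSON output:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;import json
import statistics
import subprocess

# Placeholder token, database, and query; adjust for your environment
cmd = [
    "influxctl", "query", "--perf-debug", "--format", "json",
    "--token", "REDACTED", "--database", "reidtest3", "--language", "influxql",
    "SELECT SUM(i) FROM data WHERE time &gt; now() - 1h GROUP BY time(5m)",
]

# Run the same query several times and collect the server-side durations
durations = []
for _ in range(5):
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    durations.append(json.loads(out)["execution_duration_secs"])

print(f"min={min(durations):.3f}s "
      f"median={statistics.median(durations):.3f}s "
      f"max={max(durations):.3f}s")&lt;/code&gt;&lt;/pre&gt;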

&lt;h2 id="potential-sources-of-latency-or-variability"&gt;Potential sources of latency or variability&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cache warmup&lt;/strong&gt; (shows up in &lt;strong&gt;Execution Duration&lt;/strong&gt;): The first few times a query in a particular time frame for a table is executed, the duration may be significantly higher due to parquet cache misses. Queries fan out to multiple queriers in a round-robin fashion, and each querier has an independent parquet cache, so expect a few cache misses as each querier may have to incur the delay of retrieving parquet files from the object store. Due to multiple load balancer pods and other clients executing queries, how many executions it will take to warm every querier’s cache is indeterminate. If the queries are on the “leading” edge of the data, be aware that persistence of new data or compaction may also periodically cause cache misses. Large L2 file compactions lead to a greater disruption, while latency from typical small incremental persists may be imperceptible.&lt;/p&gt;

&lt;p&gt;Corollary: &lt;strong&gt;cache eviction&lt;/strong&gt;. Other queries executing may cause cache eviction to make room for their data. Given a high rate of queries covering a lot of data (many series and/or a wide time frame), it’s possible to thrash the cache. In this case, &lt;code class="language-markup"&gt;influxctl&lt;/code&gt; can’t provide much context about other queries running at the same time (exception: a non-zero &lt;strong&gt;Queue Duration&lt;/strong&gt; does indicate maximum execution concurrency was reached). You may still need to review the query log or observability dashboards. Some query loads are cyclical, and so is the work of the compactor, depending on ingest and partitioning rates; therefore, you may get better performance in the afternoon than in the morning. When the CPU is maxed out, it tends to increase all recorded server latencies.&lt;/p&gt;

&lt;p&gt;Variation in &lt;strong&gt;data density&lt;/strong&gt; or &lt;strong&gt;volume&lt;/strong&gt; will affect all queries to some degree, but will impact computationally intensive queries the most. This shows up in &lt;strong&gt;Execution Duration&lt;/strong&gt;. Monitor the &lt;strong&gt;Parquet Files&lt;/strong&gt; or the &lt;strong&gt;Output Rows&lt;/strong&gt; metrics as possible proxies for this. Be aware that changing a tag value in the &lt;code class="language-markup"&gt;WHERE&lt;/code&gt; clause or the time constraints may affect latency, depending on the underlying data, since not all writers may write at the same frequency. When tuning aggregate queries, you may occasionally want to add a &lt;code class="language-markup"&gt;COUNT()&lt;/code&gt; field and drop the &lt;code class="language-markup"&gt;--perf-debug&lt;/code&gt; flag to see how many records are contributing. For some queries (&lt;code class="language-markup"&gt;SELECT DISTINCT&lt;/code&gt;, for example), tag cardinality and time range can greatly impact performance.&lt;/p&gt;

&lt;p&gt;See &lt;a href="https://docs.influxdata.com/influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/optimize-queries/"&gt;documentation&lt;/a&gt; for more general information on query optimization.&lt;/p&gt;

&lt;h2 id="other-things-to-try"&gt;Other things to try&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Check if &lt;strong&gt;Planning Duration&lt;/strong&gt; is substantially higher than &lt;strong&gt;Execution Duration&lt;/strong&gt;. This can be caused by high numbers of tables or partitions, which may be excessive or intended. Custom partitioning can help reduce execution latency, but can increase planning latency—find the right balance for your workload.&lt;/li&gt;
  &lt;li&gt;Check if &lt;strong&gt;Ingester Latency&lt;/strong&gt; or &lt;strong&gt;Response&lt;/strong&gt; is abnormally high/large. It may indicate a need for, or a problem &lt;em&gt;with&lt;/em&gt;, custom partitioning, resulting in excessive delay in persisting partitions.&lt;/li&gt;
  &lt;li&gt;If &lt;strong&gt;Parquet Files&lt;/strong&gt; is abnormally large, check that the query has a time constraint and that it’s reasonable. Also, check observability dashboards to see whether the compactor is keeping up, or look for &lt;a href="https://docs.influxdata.com/influxdb3/clustered/admin/query-system-data/#view-systemcompactor-schema"&gt;skipped partitions&lt;/a&gt;. If custom partitioning on a tag is in use, make sure the query specifies a value for that tag in the &lt;code class="language-markup"&gt;WHERE&lt;/code&gt; clause (also note that regexes that don’t equate to simple equality checks on that tag will prevent partition pruning).&lt;/li&gt;
  &lt;li&gt;How much does increasing or decreasing the time range of the query change the execution metrics?&lt;/li&gt;
  &lt;li&gt;Compare similar queries against different tables, schemas, or partitioning schemes.&lt;/li&gt;
  &lt;li&gt;Compare different means of achieving the same result (SQL &lt;code class="language-markup"&gt;ORDER BY time DESC LIMIT 1&lt;/code&gt; vs. InfluxQL &lt;code class="language-markup"&gt;LAST&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can learn a lot through experimentation and finding correlations beyond those suggested here. We hope this minor feature makes it a little easier!&lt;/p&gt;
</description>
      <pubDate>Tue, 20 Jan 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/new-query-tuning-feature-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/new-query-tuning-feature-influxdb/</guid>
      <category>Developer</category>
      <author>Reid Kaufmann (InfluxData)</author>
    </item>
  </channel>
</rss>
