Modernizing the Tech Stack for a Modern Utility Grid: Scottish Power Energy Networks Journey with Capula and InfluxDB
Session date: Jun 04, 2024 08:00am (Pacific Time)
Join Scottish Power Energy Networks, Capula, and InfluxData for this webinar. In this session, we’ll discuss the importance of modernizing the data backbone of electricity networks, a critical enabler in the development of the future energy system.
SP Energy Networks (SPEN) delivers electricity to 3.5M homes and businesses. They’ve recently kick-started a $6.8B overhaul of the grid to incorporate renewable energy and low-carbon technologies, such as wind, solar, heat pumps, and battery storage. To support this transition to a future-ready energy network, SPEN has pledged to deploy monitoring devices and increase data collection frequencies at their 14,000 substations, in addition to rolling out monitors through additional RIIO-ED2 initiatives.
In response, SPEN recognizes that data analysis is becoming increasingly critical in electricity network operations to manage the integration of intermittent generation and other low-carbon technologies effectively, allowing them to respond to voltage fluctuations and maintain grid stability. With SPEN requiring a new approach to visualizing grid performance and data operations, they turned to Capula and InfluxDB.
In this session, you’ll learn:
- Different data scenarios to mimic real-life business use cases
- Indicators of data performance for integrity, quality, scalability, and speed
- The advantages of using a data platform built for scale
- SPEN’s plans to meet energy challenges in clean, secure ways
Watch the Webinar
Watch the webinar “Modernizing the Tech Stack for a Modern Utility Grid: Scottish Power Energy Networks Journey with Capula and InfluxDB” by filling out the form and clicking on the Watch Webinar button on the right. This will open the recording.
Here is an unedited transcript of the webinar “Modernizing the Tech Stack for a Modern Utility Grid: Scottish Power Energy Networks Journey with Capula and InfluxDB.” This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors. Speakers:
- Jessica Wachtel: Developer Marketing Writer, InfluxData
- Rebecca Eccles: Data Analyst, SPEN
- Jim Allen: Future Energy Strategy & Business Development, Capula
- Iwona Kandpal: Digital Solutions Owner, Capula
- Neil White: Digital Sales Lead, Capula
Jessica Wachtel:
Hello, everybody, and welcome. We’re going to get started in just a minute or so. We want to make sure that everybody has a chance to join. So welcome, welcome. My name is Jess and I live in Manhattan, New York. If you want to tell us where you’re from, we’d love to get to know you a little bit so you can just share that in the chat. And the chat button is located on the bottom tab, kind of in the middle, towards the left side. So, for everybody just joining now, we’re going to get started in about a minute. Just want to make sure everyone has a chance to join. So, if you want to grab a tea or water or anything else, you do have a minute before we get started.
I also do want to let you know that the webinar will be recorded, so if you have to jump off or take care of something in the middle, you’re welcome to come back and you won’t have lost that information. And we’ll be sending the recording and slides out to everybody within about 24 hours of this webinar. So, by this time tomorrow, you’ll have all this information in your inbox. Yes, just a reminder, again, this training is being recorded. The recording and slides will be available after the training.
Please be sure to check out our events page on influxdata.com to learn about other upcoming trainings and other events. We have product webinars and webinars with fellow community members who share their InfluxDB expertise. Don’t forget to check out the community forums and community Slack workspace. These are both fantastic resources for people brand new to InfluxDB and those who are looking for tips and tricks. There are amazing community members and Influxers who are ready to answer your questions. Don’t be shy. If you have a question, somebody else probably has a question. So be the change you want to see in the world and go ahead and ask it. Please post any questions you may have in the Q and A, which you can find at the bottom of your Zoom screen, and we’ll answer at the end. We are going to prioritize questions in the Q and A tab, so if you are asking, send them to the Q and A, not the chat. After the webinar, we’ll get to as many as we can. So yeah, we’ll give it about another minute or so for everyone to join and then we’ll get started.
Okay, so it is my pleasure to introduce to you Modernizing the Tech Stack for a Future Utility Grid: Scottish Power Energy Network’s journey with Capula and InfluxDB. And here to present, we have Rebecca Eccles at SP Energy Networks, Iwona Kandpal at Capula, Jim Allen at Capula, and Neil White at Capula. And now I’m going to hand it off to Rebecca to get us started.
Rebecca Eccles:
Hi, everyone. So, if we just move on to the next slide, I will begin a story about who we are: who are SP Energy Networks? We’re known as SPEN in the industry, and we are responsible for keeping the lights on for 3.5 million homes and businesses across the central area of Scotland, and also northern Wales and a little bit of England, as you can see on the map there. We had a review of our data historian systems back in 2020. Could we have the next slide, please? Yep.
And in 2020, we realized that as our future requirements were taken into account, we were seeing some really dramatic increases in our data storage requirements. Even just within five or ten years from where we were, we were going to exceed the capacity of some of the storage solutions that we were using at the time. We currently store analog data and digital data separately, and these two graphs represent how those were expected to change up to the current time and over the next few years. The systems that we have are nearing capacity. They’re very dated. We initially set them up over 20 years ago, and we haven’t properly modernized them along the way. The database world has obviously moved on significantly in that time, and we wanted to explore what options were out there before we started moving forward into purchasing something new. So, we’re very fortunate: we received some Network Innovation Allowance funding, and this is funding that comes from our RIIO funding licenses. It’s the way that we’re paid, depending on our performance. There are some funds available that allow us to test smaller technical, commercial, or non-operational projects and different technologies. And if it’s related to our network, we can use some of that funding to run pilot schemes and test whether we can have a financial benefit in the future.
So, we started to think about what our future requirements would need to look like. We decided we would prefer to have one big historian solution, with one big changeover coming up in the next few years: we would bring together both the digital and analog data into one place, which would allow us to have easy comparison of data types as we look back in time and make predictions for the future.
So, in preparation for that, we needed to think about what we want to ask for when we go out to buy something. We wanted to build our knowledge of our technical requirements. We pulled together a functional matrix from what we think the users need, and we evaluated a range of options, one of which was InfluxDB. We’ve also assessed our data storage requirements as best as we can at this present time, and we are going to pull together all of this knowledge to go out to tender in the near future. So, this graph represents a big step change across all the different voltages of data storage that we require, everything from low voltage, medium voltage, and high voltage going across our networks. And when we add together all the different types of data streams that we’ll have coming in, we think we need about 5 million different data streams coming forwards. We’ve been working with 100,000 up until this point in time. So, by 2028, it’s a really massive, significant change.
We have another graph that shows, similarly, the total volume of data rather than the source streams of data. And again, it’s increasing so vastly that we are at a point in time where we have an opportunity to make a change and establish what we need going forward. So, we reached out to Capula, an OT systems integrator. We have a good relationship with them because they are familiar with our current systems and our current data solutions, and it was they who introduced us to the InfluxDB database and its stack. We started with one smaller proof of concept, and that was successful, which then allowed us to move forward into a bigger proof of concept, which was better connected into our systems. And again, this was funded by the innovation allowance I mentioned previously.
So, I’ll hand over to Jim here, and he will take you through Capula’s side of things.
Jim Allen:
Yeah. Hi. It’s good to be on this webinar. So, just to introduce you to Capula: as Rebecca explained, we’re a systems integrator. We’re part of the EDF group and the Dalkia group, and so we’re very much focused on the energy sector, providing control system solutions and digital and data solutions for the critical national infrastructure that supports energy, nuclear, and civil defense. So as an organization, our focus is on being an advanced systems integrator focused on critical infrastructure and delivering services around systems integration, advanced panel build, digitalization, industrial cybersecurity, managed services, and consulting.
So, you’ll recognize that, as in most companies, we have pillars of success built around performance, people, customers, operational excellence, and safety. That’s a very high-level view of what that looks like. As I’ve said, we’ve focused on critical national infrastructure in the UK, but we do work internationally as well. In the context of that, it’s providing turnkey support for nuclear generation; we’ve been working in the nuclear sector for over 50 years in the UK, providing support to new build, construction, operation, waste management, and decommissioning works, in power generation and energy networks simultaneously. So, we’re very much focused on delivering services and support to the sectors associated with traditional power generation and distribution and transmission operators, such as Scottish Power Energy Networks. And we also focus now on supporting the implementation of new renewables and low-carbon technologies and their integration into the existing infrastructure.
So that’s a little bit about Capula, and that moves us on to what we see as the challenge, particularly around delivering digital solutions to support the net zero carbon targets that have been set by the UK government. And we see that energy market reform is a significant and critical driver to deliver the required investment to support the delivery of net zero carbon. In the context of that, the UK could have in the region of 380 gigawatts of distributed energy resources connected to the grid by 2050. And that represents tens of millions of assets, potentially, in terms of data points, and billions of potential device-to-grid connection transactions.
So, as the future energy system decarbonizes, we recognize that demand for electricity will probably double over the next 30 years. So, in that context, what are investors looking for, what’s the market design, and what are the challenges around that? Well, investors are looking for the lowest possible investment costs, coupled with the right signals around energy contracts, making sure they’re investable and stable. And it’s likely that the reliability of renewable energy generation, as we can see now in the UK with offshore wind, for example, is a dominant factor.
Really important, however, is the market design. From a UK perspective, we have a regulatory body called Ofgem that regulates the market, and they need to ensure that the market is capable of delivering the new sources of capital required. The new design of the energy market must encourage investor confidence while also maintaining the dispatch efficiency of today’s infrastructure, which is really important. And ultimately, this is all around connectivity. Increasing interconnectivity creates inter-system dependencies, but that also means that conflicts can have ripple effects on market prices and so on. So, the efficient coordination of a diverse mix of distributed energy resources is key to the success of the market.
So, we recognize that electricity is the core of the energy system. In the context of that, there are three strategic pillars that we recognize around data, technology, and digital. That’s all around innovation and trialing new business models, enabling what the future energy system could look like in the context of providing system flexibility, providing more resilience, providing certainty of supply, and also enabling additional connections associated with things like battery storage, wind, solar, and other technologies, hydrogen, potentially heat. So, it’s really essential that we innovate and trial different types of data solutions that enable the market, the operators within that market, and businesses within that market, including the supply chain, to really deliver value, not only for their shareholders but also for customers alike.
So, in that context, what’s the energy market response in the UK? It’s certainly around innovation, and there’s a tremendous amount of innovation taking place currently, but it’s also around integration and interoperability. This is critical in delivering the UK’s net zero carbon energy system by the 2050 target. So, we’re really looking for an open market architecture for data sharing among all participants in the energy value chain, coordinating efforts to maximize the value of energy flexibility. Digitalization and data are key to operations, future asset management, and also anticipating the investment, allowing increased connectivity for low-carbon renewable technologies. And as I’ve mentioned, connectivity, but also integration, is key in relation to these inter-system dependencies, which require advanced integration to unlock resources to better manage more complex, dynamic, and more interactive energy systems.
So, in the context of the work that we’re doing and have done with Scottish Power and Rebecca over the last few months and years, this is really a reflection of their low voltage network specifically, where we see heavy demand, and also the resultant stresses on the network from connections of low carbon generation and other technologies, which will bring real issues around reverse power flows, voltage instability, power quality generally, harmonic distortion, and ultimately network faults that could result in customers going off supply, but also in investment challenges associated with delivering the capacity required for new connections of renewable energy generation, for example. So, it’s really important that we recognize that data, and how we use that data, is key in the context of an increasingly stressed network as demand grows, and this is across the full voltage range, as renewable energy and low carbon technologies are connected into it.
So, this extra demand and increased amounts of microgeneration will mean high penetration of these technologies, which, as I mentioned before, causes thermal, voltage, power quality, and fault level issues on the network, with the potential to cause disruption to customer supplies and to slow down the connection of these technologies into the network. So significant investment in traditional reinforcement is required to resolve these problems. But we believe that smart digital grid management extends beyond good practice, and providing alternative opportunities driven by data really has the potential to defer expensive traditional grid reinforcement through better utilization of existing grid capacity. And this innovation project with Scottish Power is really seeking to use this data in new ways to drive insights through analytics, in order to better manage the stress and costs, enabling real-time grid asset utilization whilst preventing grid stresses such as thermal overloading and power quality issues, but also being open for business for connections of renewable energy and distributed energy resources onto the network, and ultimately establishing Scottish Power in a position where they’re ready for the transition associated with achieving net zero carbon by 2050.
So that’s my rapid overview of why we’re here and what it is that we’re trying to help Scottish Power deliver in the context of data solutions, and I’ll hand it over to Iwona to talk you through the project detail. Thanks, Iwona.
Iwona Kandpal:
Thank you, Jim, that sounds good. So, as you would have heard, there were quite a few business challenges that we needed to resolve using data. So, when SPEN contacted Capula back in 2021 to help them build a technology solution, we were pretty excited to take on this project, and we decided to deploy our consultancy services to address it. To be able to capture the whole breadth of the organizational context and their requirements, we used our Pentagon approach, which is a layered consultancy process comprised of some key elements to ensure right-first-time delivery of technology solutions and their alignment to the business strategy and objectives.
So, we started this project by extracting and analyzing vital information from various dimensions of the business. We analyzed what the as-is versus to-be state looks like for SPEN specifically, and we helped them design the roadmap to address that gap. During the project, we were also collecting use cases and user stories to be able to really analyze and address organizational capability, but also their habits, and to better understand how confident they already feel using data. We then proceeded to stage two, which focused more on the design of potential solutions, scanning the market for potential solutions, and then applying targeted technology to enrich the OT and IT convergence. Following those early stages, we then have the plan, deliver, and support stage, where we prioritize ideas and solutions that can help us achieve sustainable and positive impact. We also plan to deliver at scale, and we help customers understand the changes and impact this is likely going to have in their organization, and we introduce measures to control, monitor, and then feed back and continuously improve those solutions. That’s it. Next slide then, please, Jim.
So, following the initial requirements gathering phase, we started to scan the market for a suitable, cloud-based database for time series data that could be fit for the SPEN use cases. And we liked Influx from the start, and I can certainly highlight a few reasons why. Well, we like it because it’s able to handle diverse data types. Obviously, it excels at storing high volumes of raw data. And specifically, if you consider that there’s going to be quite a lot of data coming in from field devices and assets, whether that’s through IoT or other technologies, this data isn’t really going to have one specific format. So, we like InfluxDB because it gives us the liberty to ingest a lot of data without a specific data model right from the start. We like the fact that it was quite fast and advanced when it comes to both ingress and egress, which is crucial for an organization such as SPEN to be able to manipulate, address, and analyze the energy sensor data. We also like that it lets us manually cleanse the data; there are some specific processes that organizations like SPEN would like to apply for themselves, as opposed to being tied to a ready off-the-shelf solution. We also like the fact that InfluxDB enables us to blend diverse data types to provide them with one data pool, like you heard from Rebecca before.
So, for example, with InfluxDB, you could blend field data, asset data, but also operational data coming in from various planning systems, even supply chain data. So InfluxDB can facilitate easier access and integration of those data types, enhancing the decision-making process. We also like the fact that system interoperability was quite easy to achieve, and the architecture supports real-time data ingestion and processing, which is ideal for applications that demand a high rate of data collection. Also, the scalability and cost efficiency of the database were very much suitable for the SPEN use case scenario. So, we know that InfluxDB has made many years’ worth of effort to optimize for write-intensive workloads, and we found that to be working quite well during our tests. So next slide, please.
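To make the “no fixed data model” point concrete, here is a minimal sketch using the open-source v1.x Python client (the version family Capula tested); the host, database, measurement, and field names are hypothetical, not SPEN’s.

```python
# A minimal sketch of InfluxDB's schema-on-write flexibility, using the
# v1.x Python client. Host, database, and all names are hypothetical.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="spen_poc")
client.create_database("spen_poc")  # no table/schema definition needed up front

# Two points with different field sets land in the same measurement;
# the schema is inferred from what is written.
client.write_points([
    {"measurement": "lv_monitor",
     "tags": {"substation": "S001", "phase": "L1"},
     "fields": {"voltage": 239.7, "current": 41.2}},
    {"measurement": "lv_monitor",
     "tags": {"substation": "S002"},
     "fields": {"thd_percent": 2.8}},  # an entirely different field
])
```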
To address the organization’s search for a new time series historian, Capula designed a two-phase proof of concept project, whereby one phase comprised more of a learning phase and the other focused on performance as well as functional capability and testing.
So, we had our early discovery of the characteristics of the InfluxDB architecture, components, and ecosystem, as well as its feasibility to integrate within SPEN’s current enterprise architecture. For that, obviously, we had to consider the existing ecosystem and the range of applications that they are currently using.
We then continued with functional benchmarking of InfluxDB, and this part included system performance tests, data integration and ingestion, monitoring the speed at which queries are executed, and also data quality. We looked at the speed of ingestion. We also analyzed the impact of cardinality, which I’ll mention later on in our presentation. And then the final stage of our report was the business benefits summary, as well as some key considerations for adoption at scale.
Okay, so in phase one, what we tried to develop was a small, very much light-touch capability proof of concept. Before we progressed to building a fully fledged application, we wanted to learn much more about Influx as a technology, learn what it can give us from the start, and then proceed to our proof of concept. So, the first part focused on Telegraf buffering and demonstrating that no data loss occurs in the event of a loss of connection. That test concluded successfully: when our virtual machines were disconnected, the data was stored in memory by the Telegraf clients, and when the connection was reestablished, the data was passed on. For some instances, you may want to consider a message broker, should that be a requirement for your organization.
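The mechanism under test was Telegraf’s built-in metric buffering, but the same buffer-and-flush pattern can be sketched in a few lines of Python, purely as an illustration of the idea (not Capula’s actual setup; connection details are hypothetical):

```python
# Buffer-and-flush pattern (illustrative only): keep points in memory on
# a failed write, drain the buffer once connectivity is restored.
from influxdb import InfluxDBClient
from requests.exceptions import ConnectionError as InfluxConnectionError

client = InfluxDBClient(host="localhost", port=8086, database="spen_poc")
pending = []  # in-memory buffer, akin to Telegraf's metric buffer

def write_with_buffer(points):
    pending.extend(points)
    try:
        client.write_points(pending)  # try to flush everything held so far
        pending.clear()               # success: nothing was lost
    except InfluxConnectionError:
        pass  # connection down: keep the buffer, retry on the next call
```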
We then proceeded to build a second demonstration, and this one focused on visualizing alarm data from a CSV data set, which was provided to us by SPEN. We know that the advantage of using a time series database like InfluxDB is that you can use the data aggregation functions, the built-in calculations, and the ability to query data points for any given timeframe. And we demonstrated that all of that performs to the specification set by the use cases.
We did, however, find that some of the data types that we had in our source data didn’t really match the InfluxDB built-in visualization functions, because those values were stored as strings. But for this particular use case, we didn’t see that as a blocker, because we could tackle it quite easily at our end. So, we still successfully built the demonstration and blended different types of data there for SPEN.
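As a hedged illustration of the kind of time-boxed aggregation this demo relied on, an InfluxQL query against a 1.x database might look like the following; the measurement, field, and time window are ours, not SPEN’s.

```python
# Hourly mean voltage per substation over a fixed day, via InfluxQL on
# the v1.x Python client. Names and the time window are illustrative.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="spen_poc")
result = client.query(
    "SELECT MEAN(voltage) FROM lv_monitor "
    "WHERE time >= '2024-01-01T00:00:00Z' AND time < '2024-01-02T00:00:00Z' "
    "GROUP BY time(1h), substation"
)
for point in result.get_points():
    print(point["time"], point["mean"])
```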
The third test was to demonstrate the capability of sending data from an OPC DA client, which was our dummy SCADA, and then ingesting that into InfluxDB. And so Capula demonstrated the ability to ingest real-time data using OPC DA to UA protocols. We used Matrikon OPC Server to simulate that SCADA for us, and then, at the other end, a Python script, which ingested the data into InfluxDB.
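A rough sketch of that final hop, assuming the FreeOpcUa “opcua” client library on the OPC UA side; the endpoint, node ID, and measurement names are hypothetical, and the real script may have looked quite different.

```python
# Poll an OPC UA node and forward readings to InfluxDB (illustrative).
import time
from opcua import Client as OpcUaClient  # FreeOpcUa client library
from influxdb import InfluxDBClient

opc = OpcUaClient("opc.tcp://localhost:4840")     # hypothetical endpoint
influx = InfluxDBClient(host="localhost", port=8086, database="spen_poc")
opc.connect()
try:
    node = opc.get_node("ns=2;s=Feeder1.Voltage")  # hypothetical node ID
    while True:
        reading = node.get_value()
        influx.write_points([{
            "measurement": "scada_telemetry",
            "tags": {"feeder": "Feeder1"},
            "fields": {"voltage": float(reading)},
        }])
        time.sleep(1)                              # poll once per second
finally:
    opc.disconnect()
```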
And then our final tasks were to prove that, once the correct queries were built using the script editor, we could send the data and export it into CSV, and we used InfluxDB’s own exporting capability to do that. We also built two preconfigured dashboards to showcase the ability to perform aggregations and calculations on real-time time series data.
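Alongside the UI’s own export capability, the same result can be scripted; a small sketch with hypothetical names and paths:

```python
# Query recent points and dump them to CSV (illustrative names/paths).
import csv
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="spen_poc")
points = list(client.query(
    "SELECT voltage, current FROM lv_monitor WHERE time > now() - 1d"
).get_points())

if points:  # guard against an empty result
    with open("lv_monitor_export.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=points[0].keys())
        writer.writeheader()
        writer.writerows(points)
```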
So, one of the learnings here that I can share with you is that it’s definitely worth spending some time to take a deep dive into the algorithm that builds the automated dashboards, which is a functionality within Influx. We found that after we understood it, we could update our processes in order to reduce the complexity of the scripting that had to be performed.
Another learning, as I mentioned, is the difference between your data source and the built-in dashboarding capability of InfluxDB. I think that essentially fed the functional requirements document for the application that we are going to be potentially building or expanding on in the future. But in phase one, we pretty much wanted to test what we could have out of the box in InfluxDB.
So, if we move on to phase two, we also spent some time understanding the characteristics of the InfluxDB architecture, data model, query language, and also reliability. For example, we spent some time understanding how, within a cluster, metanodes must communicate with all the other metanodes. That is a dependency that needs to be factored in when administering the database to avoid invalid query results.
We also learned that the tag set values, which are represented as strings, cannot be updated later on, which essentially means that we need to update some of the processes at the step before. There is also the schema-free design: whilst it was very handy and useful to us at the start, we took the learning that we had to stick to the resulting schema design right from the start, so that when we execute queries, we can get valid results each time. So, we documented all our findings in the first part of our report, and then we proceeded to testing the performance of the database. And we tested two things, really: first cardinality, and then query performance.
So, if you’re working on sensor or asset data in the energy vertical like we are, you probably already know that the data set you’re working on is classed as high cardinality. For those who are not familiar with this term, cardinality in data refers to the distinctiveness and uniqueness of values within a data set. So high cardinality occurs when your data set contains a large number of unique values per particular attribute or field.
So, in the case of how InfluxDB stores data, you must know that when you have unique tags and fields, that leads to a greater number of series within InfluxDB. Each of those series consumes memory and storage, and so it could potentially put a strain on your system resources unless you know how to tackle the challenge. During our tests, we found the magic number for cardinality to be around 3 million, which is actually quite high. Also consider that the version of InfluxDB we were testing was version 1.7, and you probably already have access to version 3.
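For readers who want to check where their own database stands, InfluxDB 1.x (from 1.4 onward) can report a series cardinality estimate directly; a minimal sketch with a hypothetical database name:

```python
# Ask a 1.x database for its estimated series cardinality via InfluxQL.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="spen_poc")
for row in client.query("SHOW SERIES CARDINALITY").get_points():
    print(row)  # estimate rows; exact column layout varies by version
```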
So, we know version 3 comes with quite a huge change in how data is stored within InfluxDB, but we haven’t tested it yet. What we have done for the 1.7 version is consider an ETL process and data transfer approach that would essentially mitigate some of those ingestion bottlenecks, for example by using multiple write agents that would be load balanced and buffered to optimize the ingestion of data into Influx.
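A full load-balanced multi-agent pipeline is beyond a snippet, but the client-side half of the idea, chunking large writes into batches, can at least be sketched with the v1.x client’s batch_size parameter (sizes and names are illustrative):

```python
# Chunk a large write into batches on the client side (illustrative).
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="spen_poc")
points = [{"measurement": "lv_monitor",
           "tags": {"substation": f"S{i % 500:03d}"},
           "fields": {"voltage": 240.0}} for i in range(100_000)]
client.write_points(points, batch_size=5000)  # 20 HTTP writes, not one
```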
Then, for query performance, we tested varying numbers of concurrent users. We used Apache JMeter, a popular performance testing tool, to assess the query performance of InfluxDB.
In this scenario, we simulated some of the real-world use cases provided to us by SPEN to make sure that we addressed that. And when it comes to performance tests as well, we took quite a lot of learnings from that. But overall, we were very impressed with the speed of response from InfluxDB, and those tests also provided a lot of input to testing the stability of the system.
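JMeter was the tool actually used here; purely to illustrate the shape of such a test, the same idea can be sketched in Python with a thread pool standing in for concurrent users (query and connection details are hypothetical):

```python
# Time one query under N concurrent "users" (illustrative stand-in for
# a JMeter test plan; not the plan Capula ran).
import time
from concurrent.futures import ThreadPoolExecutor
from influxdb import InfluxDBClient

QUERY = "SELECT MEAN(voltage) FROM lv_monitor WHERE time > now() - 1h"

def timed_query(_):
    client = InfluxDBClient(host="localhost", port=8086, database="spen_poc")
    start = time.perf_counter()
    client.query(QUERY)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:   # 20 concurrent users
    latencies = list(pool.map(timed_query, range(200)))
print(f"avg {sum(latencies)/len(latencies):.3f}s  max {max(latencies):.3f}s")
```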
So again, we took quite a lot of learnings here, which we’re happy to share. Obviously, it’s good to remember that when your cardinality increases, the number of unique series within InfluxDB also rises. That will then have an impact on the speed of data ingestion. Each of those unique series represents a distinct data stream, which then matters once again when you proceed to the next step, which is typically going to be some sort of data cleansing or data manipulation process. We also noticed that your data schema design will have an impact on your cardinality, as will the processes that you apply to the data set.
So how you do what you do is going to have an impact on how the resources of the database, such as memory, are consumed. And for a company such as SPEN, that really did matter, because, as you have heard from Rebecca, the data requirements and data volume requirements are going to be quite high in the future. So, it was important for us to test this early on so that we could, together with SPEN, forecast how that future system might look, both architecture-wise and cost-wise. Okay, next slide, please.
The final two parts of our report focused on testing some of the functional requirements, which were mainly regarding back-end data management tasks: data cleansing, manipulation, and so on.
To be more specific, we tested functionalities such as extracting the data from InfluxDB or exporting it into Excel. We developed a real-time observability tool that is meant for data administrators. We tested some of the specific data administration capabilities, which were based on the use cases provided to us by SPEN and their internal stakeholders. We also tested system integrations, such as Azure Synapse and a number of other custom-built applications that SPEN has developed in-house. We just wanted to make sure that the data journeys and data flows happen in the way that the organization demands. And then we tested multiple Excel integrations; once again, this is how SPEN users like to use their data, and where they feel quite confident.
So, we wanted to make sure that once we migrate, for example, large volumes of data to InfluxDB, their processes would still remain unchanged. Following on from that, we also rounded up a lot of information regarding the application ecosystem and the APIs that are already compatible with InfluxDB. So, we discussed visualization methods. We discussed methods for message brokers and data processing, monitoring, web frameworks, and more.
So, this was all to almost paint a future picture of what the application, built on InfluxDB, could look like. And as mentioned before, reliability and stability of the system was one of the key requirements for SPEN’s future time series historian, and we know InfluxDB and its distributed architecture are up to the task and perform quite well. So, we know that within InfluxDB the data is spread across multiple nodes, effectively creating a fault-tolerant ecosystem whereby, if any one of those nodes were to become unavailable, access to the database would be maintained. And this guarantee was very important for SPEN’s use case.
So, with the other tests that we did on InfluxDB, we proved that it could handle vast quantities of data without any performance issues, major bottlenecks, or anything that would potentially be disastrous. With all of that, we were really pleased with the results. We also liked the fact that users within InfluxDB can design their own data retention policies.
So essentially, you decide, and can automate, how much data is archived and historized, as opposed to data that you prefer to have readily and very quickly available. Overall, that makes for a very cost-effective solution.
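As a hedged example of what user-defined retention looks like in practice on a 1.x database (names and durations are illustrative; the ten-year archive nods to the ten-year storage horizon Rebecca mentions later):

```python
# Two user-defined retention policies: hot data kept for 30 days by
# default, plus a ten-year archive held alongside it (illustrative).
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="spen_poc")
client.create_retention_policy("raw_30d", "30d", "1",
                               database="spen_poc", default=True)
client.create_retention_policy("archive_10y", "3650d", "1",
                               database="spen_poc")
```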
And then, just to quickly mention before we move on to cybersecurity, we also looked at the future roadmap for Influx to make sure that SPEN and similar organizations are ready for it, and that they can develop and update their InfluxDB as and when new versions are released.
So obviously, with version 3 coming, we’re very excited to test that one. Okay, so the last point from me was cybersecurity, which is also one of the key focuses for organizations such as SPEN. We were very pleased to see that InfluxDB is already built on industry standards such as ISO 27001 and 27019. And it was very helpful that you could choose services from InfluxDB such as frequent vulnerability scans, third-party pen tests, continuous security monitoring, and standard encryption. And again, it just helps organizations with capacity, I guess, and sometimes capability.
What was also very useful is that you could choose to have services from InfluxDB which, for example, provide threat modeling, code review, software composition analysis, and other open-source-specific code vulnerability tests. And once again, these were quite helpful, because you might hear from many organizations the assumption that when you’re using open source software, you don’t need to enhance your cybersecurity.
So, we were very pleased to see that InfluxDB is already safe and secure when it comes to the cloud environment. You probably already know it’s backed by AWS, which again made it easier for us to assess the cybersecurity processes and procedures, to make sure that they meet the demands of our client, and to look for any gaps along the way.
Capula provides its own comprehensive cybersecurity portfolio, which is based around the NIST framework, and essentially we were able to plug any potential gaps there for SPEN, to make sure that their system is fully secure and resilient.
Okay, and I think that concludes my slides, and I’ll hand over to Rebecca for a few final thoughts.
Rebecca Eccles:
Hi there. Yes, so, obviously, Capula has undertaken a huge amount of work for us over the last couple of years, and we have our InfluxDB test environment, which is still up and running. We are continuing to use it for this current period, where we’re storing the data that’s above the capacity of our other systems.
We are required to run a formal tendering process, just because of the size of company that we are, and that’ll be kicking off in the next year or two. Until then, we’ll be using Influx as a stop-gap, and we perhaps could be using InfluxDB in the future as well, depending on how our tendering process goes. And yeah, we’re very pleased with the work that’s been undertaken by Capula on our behalf, and we’re going to use the knowledge that they’ve gathered for us to inform our future data historian requirements.
So, I think that concludes the slides for now, and I’ll pass over to Jess, who has the Q and A.
Jessica Wachtel:
Hello, thank you so much for that great and informative webinar. So, I do have a few questions for you. The first one is: what advice would you have for other grid companies looking to manage LV data?
Rebecca Eccles:
Is that for me? I mean, I can start.
Jessica Wachtel:
That’s for everyone. So, whoever has that information, we’d love to hear it.
Rebecca Eccles:
Yeah, I mean, it really is a big, massive step change in the volume of data that we’re going to need to store, and we store our data for ten years. So, it’s going to become really important that we think about different pricing structures: with these new ways of storing data in databases, are we paying based on the volume of data we’re storing, or on the ingestion and querying of the data? And if we’re paying on the ingestion rather than the volume, it really will help with things like the LV data.
Iwona Kandpal:
Thank you.
Jessica Wachtel:
Who’s using–
Jim Allen:
Sorry, if I can add to that: I think from our perspective at Capula, really, it’s around the asset management systems and looking at how you can provide enhanced capabilities in your businesses associated with, you know, simulation, modeling, predictive analytics, visualization, contextualization, and this real-time data ingestion. So, these are all opportunities and challenges that we face. And obviously then there’s the leveraging of machine learning and artificial intelligence as well.
So, I think it’s all really focused on ensuring that we get involved and drive innovation in this area specifically. And one of the biggest challenges, I guess, is that we’ve got a lot of old assets with lots of data associated with them. So, actually, data migration is a particular challenge, and also the quality of asset data, its integrity, and its standardization.
So, I think harnessing your business skill set and your capability to really start looking at your data quality, integrity, and standardization is the key foundation to be able to move forward, and then looking at the opportunities around blending this data with other external resources and other connected devices, which is really where this project is ultimately leading.
So, I think the sooner that we start trialing new tools and technologies such as InfluxDB and applying those to the real world, the better.
Iwona Kandpal:
Thank you.
Jessica Wachtel:
Okay, I have another one who is using the systems that you’ve built and is it internal or external, and how are the changes you’ve made benefiting them?
Rebecca Eccles:
It’s internal users that are using our systems, our test systems: electrical engineers, data analysts, and project managers.
Iwona Kandpal:
Yeah, so we were focusing mainly on building the tasks for the back end, if you will. So, the use cases that we were working on first were prioritized, and they were mainly for database users. So, that’s back-end tasks, and then the front ends, which are also for users internal to SPEN.
Like Rebecca said, I think there’s still some work to be done, potentially, but during the proofs of concept that we were running, it was mainly database management, data cleansing, and visualization, just to essentially prepare the engine room, if you will, for the future application.
Jessica Wachtel:
Thank you. And do you have any plans for using artificial intelligence and machine learning and if so, where and how?
Rebecca Eccles:
We do. We have some AI ideas, but we need to start by pulling together suitable databases. You have to start with getting the data sensible, and then you can start looking at trends and predictions from there.
Jessica Wachtel:
That factors a little bit into the next question, which is: how do you ensure data quality is factored into the analytics performed using the real-time data that’s ingested?
Rebecca Eccles:
Yeah, we’re trying as much as possible to collect accurate, real-time data from the new data monitors that are being put on the network. We’re adding 14,000 data monitors over the next couple of years, and that’ll give us a huge amount more information for managing our network.
Iwona Kandpal:
Thank you.
Jessica Wachtel:
Do you have any plans for using enhanced sensor data to improve responsiveness to adverse weather events, like localized wind speed monitoring?
Rebecca Eccles:
We already look at weather data through some of our older, traditional systems, and we get accurate data through an API that we connect into. It’s not so much looking at the future; it’s really just looking at storm events, so we can analyze the last storm and predict what might happen in the next one.
Iwona Kandpal:
Thank you.
Jessica Wachtel:
Do InfluxDB queries give you all the information you need, or do you need to do any other processing before or after ingesting to InfluxDB?
Iwona Kandpal:
Yeah.
Rebecca Eccles:
So, you can do anything you like, really, can’t you? Yeah.
Iwona Kandpal:
It depends on what you’re looking for. For the proof of concept, we had some specific queries we wanted to build, so, for example, retrieve such and such a time box of data for such and such an asset. For that, yeah, it was sufficient. But once again, be mindful that these were the tasks of people who would be using the database, and in a sense, they need access to that raw data so that they can execute certain queries, and there will be different types of users requesting different things.
So, if your question was around whether we got all the answers that we needed from InfluxDB: yes, it was very much fit for purpose. Rebecca, you can share your thoughts as well.
Rebecca Eccles:
I think so. I mean, it’s so adaptable, you can make it do anything you need it to. We’re pulling the data into some of our traditional spreadsheets at the moment in a very clunky manner, but it will modernize as we go forward. It does what we need it to do. We’ve got the raw data coming through from our monitors, through InfluxDB, and into our systems, to our project managers and the electrical engineers. Everybody can see what they might need to find.
Jessica Wachtel:
Thank you.
Rebecca Eccles:
Our biggest problem at the moment is that we’re now running three data historians, because we have the two original ones, where we hold our analog and digital data separately, plus we now have our low voltage monitor data going into InfluxDB. So, we really need to streamline everything going forward. But it’s not a problem with InfluxDB at all.
Jessica Wachtel:
Awesome. That’s great to hear. What are the pros and cons of build versus buy?
Iwona Kandpal:
That’s a big question. I think if you’re looking at building your own solutions, you need to ensure organizational fit. So, for example, for someone like SPEN, where they’ve got a relatively high number of users who are comfortable using raw data, then build is probably better because they’ll have a number of custom requirements.
So, for example, it’s an organization that has already invested lots of time and resources into building its own applications to suit various demands as they came up over the years. The type of organization that’s confident using raw data, where multiple users are quite excited to try new things, is the sort of organization that’s going to be happier with a build rather than a buy solution. If you were to impose a solution that’s ready off the shelf, then you’ll often come across challenges whereby people may say: okay, but this is a little bit too rigid and too disciplined in how we can use specific functions and the database itself.
Rebecca Eccles:
Part of our view on that is that you end up buying things you don’t need or want because it’s all packaged together in one piece.
Iwona Kandpal:
Indeed.
Rebecca Eccles:
So, if you only want a small proportion of what the bought solution offers you, you’re overpaying for the bit you do need.
Iwona Kandpal:
Yeah.
Neil White:
If I could just add on to that as well, to a certain extent reiterating what Rebecca’s saying there: I would always advise customers to start with, what’s their use case? What’s the challenge? What’s the problem they’re trying to solve? What KPIs are they trying to measure as the output of that? And if an off-the-shelf product is being bent out of shape to try and fit those requirements, or only contains a subset of the functionality, then it is potentially a better fit to actually do a design and build that’s more tailored and fitted to specific requirements. So don’t necessarily focus directly on the technology. Start with a use case and work backward, and make sure you’re choosing the technology for the right reasons in the first instance.
Jessica Wachtel:
Okay, cool. Thank you. And when considering new technology, what are the first topics for consideration?
Rebecca Eccles:
User requirements.
Jessica Wachtel:
Okay.
Rebecca Eccles:
Yeah. Just going to reiterate what Neil said there, but we need to think of what our users really need. You know, I’m a data analyst; I’m not the whole picture of what we have in the company. We need to think about what the engineers really need as well as the data people.
Iwona Kandpal:
Yeah.
Neil White:
And, you know, sorry, just to add on again: when you’re looking at user requirements, you need to work backward and consider your technology at the right time. Rebecca mentioned it there: you’ve got to think about your people and your process. It’s not just about the technology. Everything has to come together as a package, really, to answer that exam question, which is your use case, your benefit realization. And people and process play a massive role as well in terms of making sure you’ve got the right technology that’s going to fit the organization, not just for the short term, but for many years into the future as well.
Jessica Wachtel:
Perfect. Okay. Well, thank you guys so much for this wonderful webinar. It doesn’t look like we have any more questions in the Q and A. So, at this time I am going to end. But just remember, everybody, that we do have other webinars. So, check out our events page, and then this webinar will be sent to you within the next 24 hours, so you will have permanent access to this information. Thanks so much for joining. Thanks to our presenters, and I wish everybody a lovely day.
Rebecca Eccles:
Thank you. Bye.
Jim Allen
Future Energy Systems Strategy and Business Development, Capula
An advocate for decentralised and digital energy systems, Jim is a specialist in OT systems integration, automation and control, digital and data solutions, supporting major customers in Power Transmission, Distribution and Renewable energy markets. Renowned for innovation and knowledge of the digital evolution of our future energy system, Jim has 25 years’ experience in utility network engineering, programme and operations management, and strategic business development.
Iwona Kandpal
Digital Solutions Owner, Capula
Iwona Kandpal is an experienced Business Analyst specializing in Industrial Digital Technology, with a demonstrated history of working with UK-based manufacturers, helping them achieve sustainable growth, efficient asset utilisation, and process improvements through the adoption of Industry 4.0 solutions.
Neil White
Digital Sales Lead, Capula
Neil White is currently working in the capacity of Business Manager at independent system integrator Capula. He has previously held the portfolios of Sales Lead – Services and Support, Operations Manager – Service and Support, Team Lead, and Business Development Manager – Utilities at the firm. Neil is an alumnus of the prestigious Leeds Metropolitan University.