How Worldsensing Uses InfluxDB to Make Cities Smart and Safe
In this webinar, Albert Zaragoza from industrial IoT company Worldsensing will share how he and his team built an end-to-end IoT solution for cities, spanning traffic flow management, smart parking, emergency & security response, and critical infrastructure monitoring. He will focus this talk on how the team integrated a real-time platform featuring data science and a wide variety of data sources, including InfluxDB, a time series database used to store a city’s critical IoT time series data.
Watch the Webinar
Watch the webinar “How Worldsensing Uses InfluxDB to Make Cities Smart and Safe” by filling out the form and clicking on the download button on the right. This will open the recording.
[et_pb_toggle _builder_version="3.17.6" title="Transcript" title_font_size="26" border_width_all="0px" border_width_bottom="1px" module_class="transcript-toggle" closed_toggle_background_color="rgba(255,255,255,0)"]
Here is an unedited transcript of the webinar “How Worldsensing Uses InfluxDB to Make Cities Smart and Safe”. This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
Speakers:
- Chris Churilo: Director Product Marketing, InfluxData
- Albert Zaragoza: Head of Engineering, Worldsensing
- Daniel Lázaro: Software Developer, Worldsensing
- Fuad Mimoun: Software Developer, Worldsensing
Chris Churilo 00:00:00.185 All right. It is three minutes after the hour and that is when we get started. And first of all, I’d like to welcome everybody for joining us today. We’re going to have a really wonderful session. I already got a sneak peek at the slides and the guys at Worldsensing really go into quite a bit of detail especially with Kapacitor and that tricky TICKscript. So if you do have any questions, please feel free to post them in either the chat or the Q&A panel. You can find the icons in your zoom application on the bottom of your screen. And we are recording the session so after I do an edit I will post them and you’ll get an email, an automated email in the morning with the links, or if you’re keen to listen to it today, it’s the same link that you used for registration. So it’s all really accessible. And if for some reason the email doesn’t come or the link doesn’t work, you guys have my email so please feel free to reach out. It has happened on occasion and I sure was glad that I had my email available to you guys so you could get that addressed very quickly. So without further ado, I am going to hand it off to the guys and you guys can start with the directions, and then take it away.
Albert Zaragoza 00:01:16.779 Thanks, Chris, and hello everyone. We are Worldsensing. Basically we are going to be presenting to you a little bit about how we use the TICK Stack to transform operations for our customers. My name is Albert Zaragoza and I’m the Head of Engineering here at Worldsensing, and I’ll let Fuad and Danny introduce themselves.
Fuad Mimoun 00:01:39.200 Yeah. I’m Fuad and I’m a software developer at Worldsensing. And welcome to the webinar.
Daniel Lázaro 00:01:47.804 Hi, yeah. I’m Danny. I’m also a Software developer here, and we will be explaining to you the more technical part of the Webinar.
Albert Zaragoza 00:01:57.123 Great. So let’s get started. Basically today we have this agenda. We’re going to present a little bit about Worldsensing and what we do. We’ll introduce you to some of our products. We’ll introduce you to One Mind, our software stack for building solutions for our customers. And then we are going to deep-dive into how we use it at Worldsensing, and then into some more advanced use cases where we look at generic time series data, alerts, aggregations, and anomaly detection. Basically in all of them we’re using the TICK Stack, no? And then we’ll present our conclusions at the end, with our learnings, and I hope that helps you guys. So let’s get started. What do we do at Worldsensing? We transform operations in cities, mines, infrastructure, and construction sites, which is what we call achieving operational intelligence, right? So I’m going to dive a little bit into what that means because it’s not that easy, right? Basically, as public and private entities digitize their operations, they also need to rely on a data strategy to make sure all the necessary information is available, okay? So the key, as you can see here, is really to break the silos. Over the years most of these organizations operated in silos and didn’t share any information, and that’s happening at really every level, right? From smart cities to construction sites to private entities. Any company has this issue, really. And basically what we provide is efficiency and tools to minimize the time they take to react to issues and events, okay? So what are cities and infrastructure operators trying to do to reinvent themselves?
Albert Zaragoza 00:04:03.362 So here we present the stack of how you achieve this connected operational intelligence, right? You start obviously by digitizing your assets and monitoring them. That’s something that in IoT has already been done and is quite common. To then improve your operations you have to move into predicting, acting, and ultimately engaging, be it with your citizens, workers, whoever is involved in operating the systems. They have to have more data to be able to react faster, okay? So basically this is a new way of thinking and operating, and really not all industries and governments are ready yet and prepared to jump onto these new data strategies, because it requires really complex architectures and people are not prepared yet, right? So basically that’s where we come in. So how is Worldsensing really helping these cities and construction sites become smart, or smarter, right? As you can see here, we’re presenting a little bit of the whole stack that we work on. At Worldsensing we build end-to-end solutions, okay? So we understand the pain points and challenges that are present at each level of IoT. Over the years, over 10 years now, we have been building sensors, right? You can see there that we build parking sensors, infrastructure sensors, and traffic sensors.
Albert Zaragoza 00:05:51.015 Therefore, we really understand the data of our customers. We understand the source of truth for all these systems. Then if we move up the stack, we have a software team that builds the systems you can see there. We build incident systems. We build prediction systems. We build anomaly detection systems on top of that data, to be able to help our customers scale and understand their data better. And as you can see, we’ve gone even higher to build business applications that actually integrate data from third-party sensors. You can see there in grey: CCTV, weather stations, anything, and from other systems. So we can integrate really any API or any data, in any format. It could be a spreadsheet, an Excel file, a well-built API, access to a database. We actually integrate all of this data together with the data that comes from our sensors to build the solutions for cities and for infrastructures. So what we’ve done so far: we’ve been able to monitor data from mobility, parking, security, and traffic in over 60 cities now. We are supporting decision making at six major construction sites as well. And as you can see there, we are collecting lots of data, all around the world. So although we are a Spanish company based in Barcelona, we are really operating and serving customers all around the world, okay? So here you can see a little bit of our background. So today obviously we will cover our software stack more deeply, and in more detail the TICK Stack for real-time analysis of our data. But let’s first dive into understanding some of our sensors. I think it’s important to understand the source of truth of the data, where the data comes from into our system.
Albert Zaragoza 00:08:06.724 So basically our customers are traditional customers that were managing their infrastructures, be it bridges, tube stations, dams, or mines, with cabled sensors, okay? So we came with a solution that is actually wireless, long range, and low power. Basically we revolutionized this industry by presenting these sensors that you can see here. What you can see there is a data logger that connects to any third-party sensor and then helps these customers communicate and move this data all the way up the stack, so they can evaluate if they are having any issues, any infrastructural issues in their operations. So very briefly, if you want to know our [intel?] system architecture: you have the third-party sensors that are measuring different analog signals on infrastructure sites. You have our nodes that connect directly to those sensors. And then that’s where we actually communicate and send this over radio, so we normally use LoRa or Sigfox for this type of communication. We build our own gateways as well, so we put our software stack on a gateway, which then communicates to the internet, to the cloud. And then in the cloud we have our monitoring software and the network management for these sensors, okay? So basically that’s the whole end-to-end there, and that’s how we help these companies monitor their infrastructures over time, okay?
Albert Zaragoza 00:09:53.571 Here you can see very briefly a typical installation with our sensors. That’s a mining site. As you can see here, there are sensors that are deep underneath the earth, 200, 300 meters down. They monitor the status of the site. And then you can see here we have placed several gateways to then communicate all this data to the cloud. And then when you are outside the mines, in the open air, you can see that you can place many sensors with one gateway and then deliver that data quite quickly to the cloud as well. So this is one of our solutions. These solutions are, for example, monitoring things like the Ponte Vecchio in Florence, Italy. We have placed several of these sensors to monitor anything that can go wrong with this bridge. We are in places like the Crossrail project in London, in the UK, as well. We are in lots of mine tailings dams. These sites are quite dangerous from an environmental point of view, so for them, understanding in real time what’s happening is really important. And obviously rail networks and tracks as well. Then we also build [fast park?] sensors. Very briefly we’ll cover this one. We build these sensors here that you can see, which are also wireless and low power, and we help cities digitize and understand the parking usage in the city. Once you digitize the signal, you can then start real-time monitoring. You can help cities act instantly on enforcement, for payments or because the car is not allowed to be there. You can start predicting this parking behavior as well and then help engage with the citizens through apps and other systems.
Albert Zaragoza 00:12:02.927 So all of our products have an API as well, to be able to build these applications on top. We can now build a whole end-to-end [inaudible] you’re seeing here. And then the way it works is exactly the same as with the other sensors. So I hope that’s clear for you. And lastly, we also build Bitcarrier sensors. It’s a sensor that tracks traffic flow in cities. We are tracking WiFi and Bluetooth signals from any car, and that’s basically showing cities what the traffic flow is in real time, right? So once you understand the data, obviously you monitor in real time, you act. And then once you’ve acted a few times, you are able to predict and to engage with the citizens, okay? So this is our end-to-end knowledge and how we’ve been able to build our solutions on top of that, okay? So I will pass the word to my colleague Danny now. He’s going to introduce you to One Mind, our software tool.
Daniel Lázaro 00:13:07.693 Okay. Well, yeah. So now I will explain One Mind, which is the last of the products that we’ll present, and this is the one that we’ll discuss for the rest of the webinar. Basically, in a city we have lots of sensors taking information from several places, and the point of One Mind is to unify all this information so it can be shown to the operators in the control center of the city, so that they can know what is going on in their city, act upon it, find out if something is not going okay, and try to fix it. So this is what this product does. I’ll explain just a tiny bit of the architecture, as it will be useful for understanding the rest of the use cases. Just a very simplified view: we have two types of data. One type we consider the core verticals of One Mind, and the other is considered custom data, which is custom for each project but not a core vertical for us. There are connectors that connect to each of the systems that generate or monitor this data. The connectors insert the data into One Mind. So there are services for each of the core verticals, and connectors can also insert the custom data into what is called the Custom Object Service. All this data can then be visualized together. And as I mentioned, we have a microservice architecture; everything is running on Docker as fairly independent modules.
Daniel Lázaro 00:15:01.513 And now we’ll go through a short demo so you can better understand what we’re talking about. Okay. So this is the main view. As you see, the main thing in the UI is the map, because most of the information is geo-localized, and this is the main thing the operators want to see: where are things happening? And then here we see the different layers that we can add to our map. This is some custom information; for example, I can see where the traffic panels are located in the city. And, sorry, these are environment sensors, like air quality. These are the panels. So we have a customization for it. In this city they have air quality controls, so they have panels, and they have cameras. And this is not something that we do a lot with; we take this information and visualize it. And this is customized for each installation. Another city may have other types of information that they want to display on the map. And then there are the core verticals, where for example we take the information on travel times and traffic flow. As we said, we can take this from our system Bitcarrier, and here we can show more detailed information, like how it evolves over time, and some computed KPIs and metrics. Because as I said, this is a core vertical for us and we know what to show about this information, what is useful for the operator, right? And we do not only have this map visualization; we also have a specialized view for each of the core verticals, okay? So here you can see more things. Yeah. The traffic one. The parking one.
Daniel Lázaro 00:17:02.856 And you can see more information here about each of these verticals, okay? So this is a basic introduction to what One Mind is and what it does. It does more things, and we will see a few of them in the next minutes. So now we’ll see a bit of how we got to using the TICK Stack for this software, for One Mind. Well, we started using it for the normal things, I’d say, for monitoring. That’s the most common use case for the TICK Stack, and we do use it for that. We usually deploy in Google Cloud. As I said, we use Docker and Docker Compose, and we are now moving to Kubernetes to do more scalable deployments. And we use it for monitoring the resource usage of our machines and our containers, and we visualize the information with Chronograf. So that’s pretty standard. By the way, we also started to do some other things, maybe not so standard, not so common. Basically using it for some prototypes or for innovation projects where we needed to just input some data, and we put it into Influx, and that was enough. And then we also wanted to visualize the data with Chronograf or Grafana; it was very easy to build a dashboard and see the data we were getting. And then we also started implementing some simple processes, processing the data or triggering some alerts with Kapacitor. So that’s cool. Now, let’s try and see if we can also use this stack for parts of the core software, for some use cases in One Mind. So now my colleague, Fuad, will explain some of them.
Fuad Mimoun 00:18:52.113 Perfect. Let’s start with the generic time series data, okay? As my colleague Danny has explained, there are two kinds of services: the core verticals and the custom data. The core verticals are fully independent microservices, end-to-end applications. Each one of them has functionality related to a specific domain, like the parking or the city traffic service. So at the end, they have specific aggregations and visualizations, as we have seen. To make these aggregations and visualizations possible, the APIs of these microservices are fixed, with some mandatory fields that allow us to make this happen. So the challenge from our clients was that they wanted to see more data, more basic data, apart from these verticals, no? For that, we have the Custom Object Service that allows us to show simple, static data on the map, and relate a popup with this kind of data, as you can see in the slide. So for example, it can be a variable message sign. We can locate this data on the map and then the client can define which fields they want to see in the pop-up, for example. And with this, they only have the last value, or the current value, like the message, or if they have environmental stations, the last value of the carbon dioxide, no?
Fuad Mimoun 00:20:50.977 So let’s explain it a little bit from the technical point of view. How does this service work? First we need to define a connector between the sensor or the sub-system and the Custom Object Service. For us, the connector is like a translator that allows these sub-systems to talk with our platform. In this connector, the client needs to define the JSON schema for the data type that we want to insert in the platform, like geometry fields, or temperature fields, or strings. Once the JSON schema is registered, then we can start to insert data matching this registered schema. And as I said, this data is only the current or the last value. It’s not historical, okay? After that, once our service saves the data in our Postgres database, we can build the visualization with this data, for example the locations or the pop-up. The thing is that we also want to show time series data in these kinds of popups, for generic data. So for that, we have integrated InfluxDB into our system. As you can see, the architecture is very similar, but now we store all the data in Influx. So we have the evolution of, for example, temperature over time, and with this, we can now visualize charts in our popups.
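To make the flow concrete, here is a minimal Python sketch of the register-schema-then-insert idea: a reading is only accepted if it matches the registered schema, and only the last value per sensor is kept. The field names and schema shape are hypothetical illustrations, not the actual Custom Object Service API.

```python
# Hypothetical sketch of a Custom Object Service connector. Field names,
# schema shape, and readings are invented for illustration only.

REGISTERED_SCHEMA = {
    "sensor_id": str,      # identifier of the third-party device
    "lat": float,          # geometry: latitude
    "lon": float,          # geometry: longitude
    "temperature": float,  # last measured value shown in the pop-up
}

def validate(reading: dict, schema: dict) -> bool:
    """Accept a reading only if it matches the registered schema exactly."""
    return (set(reading) == set(schema)
            and all(isinstance(reading[k], t) for k, t in schema.items()))

store = {}  # keyed by sensor_id: only the *last* value is kept, not history

def insert(reading: dict) -> bool:
    """Insert a reading, overwriting the previous value for that sensor."""
    if not validate(reading, REGISTERED_SCHEMA):
        return False
    store[reading["sensor_id"]] = reading
    return True

insert({"sensor_id": "env-01", "lat": 41.39, "lon": 2.17, "temperature": 21.5})
insert({"sensor_id": "env-01", "lat": 41.39, "lon": 2.17, "temperature": 22.1})
print(store["env-01"]["temperature"])  # only the current value survives: 22.1
```

Storing each accepted reading in InfluxDB as well, instead of only overwriting the latest value, is what adds the historical dimension described next.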
Fuad Mimoun 00:23:04.385 So saving this historical data allowed us to create new services and features, like the alerts that we’re going to explain in the next slides, okay? Let’s go. So we have this service that allows us to trigger alerts based on simple thresholds. The main idea is to automatically detect alerts using this data and then send an alert to the end user. For example, we can trigger alerts when we detect traffic jams in the city, when the speed values on the roads are beyond some critical value. Or we can also detect when the actual values of a tiltmeter sensor exceed some critical value, and that means, for example, that something is wrong with a building, no? The inclination of the building exceeds some critical value. So let’s explain how we built it. At the beginning of this service, we were thinking of implementing it using some kind of business rules framework or [inaudible], but we discarded that because of the complexity. In the end, we only needed to implement simple threshold rules. So for that, to trigger the alerts we have used Kapacitor TICKscripts to implement these simple rules in [inaudible], which allows us to detect when some field is over a threshold during some period of time, for example. For this implementation, we used the task templates from Kapacitor, which can be used to make the thresholds configurable.
Fuad Mimoun 00:25:10.615 We’re going to see an example of the code of these templates in the next slide. So once we have detected an alert, we send it to a queue. For that, we have also implemented a user-defined function in Kapacitor to send the data to a queue, and then store this data in our Postgres database, so we can then consume this data and visualize it in our platform. So let’s see this example. In this slide, you can see an example of the alert task template on the left, and on the right, how we are defining the value of the threshold. So the threshold rule is configurable and we can change it depending on the use case. That’s it, and in the next section, my colleague Danny is going to explain another cool use case, yeah. Okay. I’ll go on with the third case.
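As a rough sketch of the rule this kind of template implements: fire when a field stays above a threshold for some period, then push the alert onto a queue. The field name, threshold, window length, and the in-process queue below are illustrative stand-ins, not the actual TICKscript or UDF.

```python
# Illustrative sketch of a threshold-alert rule; in production this logic
# lives in a Kapacitor task template, and a UDF forwards alerts to a
# message queue. All names and values here are assumptions.
from queue import Queue

alert_queue = Queue()  # stand-in for the real message queue

def check_threshold(points, field, threshold, min_consecutive):
    """Fire one alert when `field` exceeds `threshold` for
    `min_consecutive` consecutive points (i.e. for a period of time)."""
    run = 0
    for p in points:
        run = run + 1 if p[field] > threshold else 0
        if run == min_consecutive:
            alert_queue.put({"field": field, "value": p[field],
                             "msg": f"{field} over {threshold}"})

# e.g. tiltmeter readings in degrees; alert if > 2.0 for 3 samples in a row
readings = [{"tilt": 1.9}, {"tilt": 2.2}, {"tilt": 2.4}, {"tilt": 2.6}]
check_threshold(readings, "tilt", 2.0, 3)
print(alert_queue.qsize())  # → 1
```

Making `field`, `threshold`, and the window configurable per deployment is exactly what Kapacitor task templates provide, which is why they fit this use case so well.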
Fuad Mimoun 00:26:26.740 Well, one of our core verticals is parking, of course. We also make parking sensors, so we wanted to integrate parking data into One Mind. So this is what we have: we have some KPIs and then we want to aggregate data. The basic KPIs or metrics that are used in parking are: the occupancy percentage, that is, how many of the parking spots are occupied, as a percentage; the turnover, that is, how many cars park in a place over a period of time, usually per hour; and the session time, that is, how long each car remains parked in a parking spot. Okay. So of course, we are interested in having the average of these. We can have the average in a spot, in a sector (that is, a street or a part of a street, which is meaningful from a parking perspective), by district, or in the whole city. So we want to aggregate at these different levels and we also want to aggregate by time. We want to aggregate hourly to see how it evolves over time, and we also want to aggregate daily to have a measure of how it has been performing during the last day, okay? And then we can see if the situation is normal or something has changed.
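The three KPIs above can be sketched on toy data; all numbers and field names here are invented for illustration, not real sector data.

```python
# Toy-data sketch of the three parking KPIs described above:
# occupancy %, turnover (arrivals per spot per hour), mean session time.
sector = [  # one reading per spot: is it occupied right now?
    {"spot": "a", "occupied": True},
    {"spot": "b", "occupied": False},
    {"spot": "c", "occupied": True},
    {"spot": "d", "occupied": True},
]
# occupancy percentage: occupied spots over total spots
occupancy_pct = 100 * sum(s["occupied"] for s in sector) / len(sector)

# turnover: arrivals observed in the sector during one hour, per spot
arrivals_last_hour = 6
turnover = arrivals_last_hour / len(sector)

# session time: mean duration of completed parking sessions, in minutes
session_minutes = [35, 50, 95]
avg_session = sum(session_minutes) / len(session_minutes)

print(occupancy_pct, turnover, avg_session)  # 75.0 1.5 60.0
```

The same three formulas then just get applied at each aggregation level (spot, sector, district, city) and over each time bucket (hourly, daily).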
Fuad Mimoun 00:27:53.782 So this is what we want to compute. And we said, okay, let’s try to do that with the TICK Stack. We started storing the historical data in Influx. Great, no problem here. We can store both the spot-level occupations, occupied or free, and also the sector aggregation: how many places, how many spots are free or occupied at some point. That’s great. Some of the aggregations, for example aggregating by sector, we can compute at query time, when the user requests it, also using Influx with InfluxQL. And it’s great, because aggregating by time, like we see in the query, you just say, “Group by time, one hour,” and you have the average for the hour. While implementing this in Postgres or any other relational database is maybe not hell, but as close to it as it gets [laughter]. And so we said, okay. Now, we also want to have some of the things pre-computed, like the session times. We can have these already computed and then just do the last level of aggregation when the user does the query. So we thought: let’s try to do this with Kapacitor. Well, not so okay. Kapacitor is great for some things, for example for the alerts case that we presented just now. It was perfect. The ability to do templates was great for that case. But for this one, well, not so good. First thing, we wanted to do some relatively complex aggregations and it was difficult to implement in TICKscript. You can see that this script on the left is what we got for computing session time. It didn’t quite work yet. I think it kind of worked with this and another script of a similar size working together, and it kind of worked. But maybe sometimes it didn’t. And that takes us to the second thing that we didn’t really like.
Fuad Mimoun 00:30:02.281 It’s a bit harder to debug your TICKscripts when you have a long and complex script with a lot of logic. It’s not so easy to debug, to know what is going on, and to test, also to test it with a set of known data. Well, yeah, it becomes kind of hard. So in the end, we thought that it would be better to just replace it with some cron jobs in Python that run some InfluxQL queries, because you can see that the script boils down to the queries on the right. In just three queries, we compute the time that each occupation lasts with the last function of InfluxQL. Then in the second query, we filter the ones where occupied is set to FALSE; that means that is when a car left, so we know that the duration that we computed before is the time that the car was parked. And then we do the average of that, and we have it. And we can also do it hourly and daily, and you can just do a SELECT ... INTO another measurement. So you can have these aggregations very easily, using the power of Influx, from our Python script. So in the end, we went with that. Also because we didn’t really need to compute this in real time, because as I said, for the session time and turnover, you usually want the hourly average, or even the daily average. So you don’t need to do that in real time. That is a use case where Kapacitor would be good, for example the alerts case, where you want to react as soon as something happens. In this case we said, “Okay. Maybe this is not a case for Kapacitor.”
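The three-query pipeline described above can be sketched in plain Python; spot names and timestamps are invented, and this is only an illustration of the logic the cron job delegates to InfluxQL, not the actual queries.

```python
# Sketch of the session-time pipeline: (1) compute how long each occupation
# state lasted, (2) keep transitions to occupied=FALSE (a car just left),
# (3) average the resulting durations. Toy data, invented values.
events = [  # (minute, spot, occupied) state-change events
    (0, "a", True), (45, "a", False),   # a car parked 45 min on spot a
    (10, "b", True), (40, "b", False),  # a car parked 30 min on spot b
]

# 1) duration of each state per spot (what the first query computes)
last_seen = {}
durations = []  # (end_minute, spot, state_that_just_ended, duration)
for t, spot, occupied in sorted(events):
    if spot in last_seen:
        prev_t, prev_state = last_seen[spot]
        durations.append((t, spot, prev_state, t - prev_t))
    last_seen[spot] = (t, occupied)

# 2) keep durations where the state that ended was "occupied": sessions
sessions = [d for _, _, was_occupied, d in durations if was_occupied]

# 3) mean session time (the final SELECT mean(...) INTO ... step)
mean_session = sum(sessions) / len(sessions)
print(sorted(sessions), mean_session)  # [30, 45] 37.5
```

Grouping step 3 by hour or day then gives the hourly and daily averages, which is exactly the part InfluxQL's time grouping makes trivial.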
Fuad Mimoun 00:31:50.503 So we go to the final case: anomaly detection. That sounds cool, and it is quite cool actually [laughter]. Well, we wanted to also give the operator the information of: is this working normally? So we can show, as we see here, just an indicator. Here a percentage close to 100% is a 100% match of what is expected, so everything is okay. If it goes into the red area it means, okay, the speed on this street is very slow, so maybe you have to do something. And also, of course, you can even automate this with the alerts system that we mentioned before. You can just put a threshold on this score and pop up an alert to the operator when we know that something is not going as expected. So how is this done? Well, we prepare a forecast of the metric that we want to measure; in this case it’s the speed on the street. So we forecast this. It’s based on historical data, meaning if today is Tuesday, we take into account all past Tuesdays at this time of the day. We compute a forecast and then we compare the data that is arriving in real time to the forecast. Okay. We also saw some examples using Kapacitor for doing anomaly detection; basically the Holt-Winters prediction that is shown in the tutorials. So we said, “Okay. Let’s try to use that.”
Fuad Mimoun 00:33:25.621 So first step, implementing the forecast. Okay. We tried doing it with Kapacitor [as we had seen?] with Holt-Winters. And we go, “Okay. That’s good.” But there’s a difference: Holt-Winters is basically based, I believe, on the recent past values, while we are based more on the historical data. Like I said, values for the same day of the week, same hour. So we didn’t really need to implement the forecast in real time, because we can do it in advance for the next day, because we already have all the historical information that is relevant for that. And on the other hand, it was also a bit complex. So the same as we said before: with Kapacitor, for doing complex things, at some point you become a bit limited. The same happened here. Putting things in TICKscript was not easy, and putting things [inside?] a UDF, we also thought, was maybe going to be a bit too large, because we wanted to use Pandas and scikit-learn. So in the end we also just did a cron job. We store the real-time speed data in Influx, and then we have a cron job that queries InfluxDB, performs a forecast, and stores it again in InfluxDB. Okay.
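The forecast step might be sketched like this (all numbers invented): the prediction for a slot is derived from past values of the same weekday and hour, rather than from recent points the way Holt-Winters works.

```python
# Illustrative sketch of the precomputed forecast: predict speed for a
# weekday/hour slot as the mean of past observations for that same slot.
# Data and the mean-based model are assumptions, not the real algorithm.
from collections import defaultdict
from statistics import mean

history = [  # (weekday, hour, avg_speed_kmh) from past weeks
    ("tue", 9, 32), ("tue", 9, 28), ("tue", 9, 30),
    ("tue", 10, 45), ("tue", 10, 47),
]

buckets = defaultdict(list)
for day, hour, speed in history:
    buckets[(day, hour)].append(speed)

# one predicted value per (weekday, hour) slot, computed in advance
forecast = {slot: mean(vals) for slot, vals in buckets.items()}
print(forecast[("tue", 9)], forecast[("tue", 10)])  # 30 46
```

Because the forecast only depends on history, a nightly cron job can compute tomorrow's predictions in advance and write them back to InfluxDB, as described above.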
Fuad Mimoun 00:34:51.350 So the next step is comparing the current values that arrive in real time to the predicted ones. Okay. So here we have the database with the predicted speed data. We can query that at any time. And then we have the real-time speed data that is arriving and being inserted into Influx. So we configure Kapacitor to read the stream, and we wanted to join that with the information that we already had. But there was a problem: in Kapacitor, you cannot join a stream of data and a batch query. You can join two streams or two batch queries, but not one of each. And we had this case. There was an option to implement both as batch queries, but we discarded that because we wanted to really receive the current data in real time. Maybe we can iterate on that and at some point try to do it as a batch query, but for now we went with this option. And then the comparison itself is completed inside a UDF. This means that the UDF does a query to Influx to retrieve the relevant forecast data and then compares it to the stream. Well, that works. As I said, maybe it’s not the best option, because we have to do a query inside the UDF. But it’s working and it allows us to compute this anomaly score data.
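The comparison inside the UDF could look something like this sketch. The scoring formula is our own assumption, chosen only to show how a live value and a forecast can be reduced to a 0-100 "as expected" score like the indicator in the demo.

```python
# Hypothetical anomaly score: how closely does the live value match the
# forecast, as a percentage clipped at 0? Near 100 means "as expected".
# This formula is an illustration, not the one used in the talk.
def anomaly_score(actual: float, predicted: float) -> float:
    if predicted == 0:
        return 0.0
    return max(0.0, 100.0 * (1 - abs(actual - predicted) / predicted))

print(anomaly_score(30.0, 30.0))  # 100.0: traffic flowing as usual
print(anomaly_score(12.0, 30.0))  # 40.0: much slower, likely an anomaly
```

Feeding this score back through the alerts service (a simple threshold on the score) is how the anomaly detector can also notify operators automatically.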
Fuad Mimoun 00:36:24.266 And then finally, once we have computed this, of course, we store it in InfluxDB, which is something that can be done easily from Kapacitor. And then, as we explained in the case of alerts, we also put it into a queue so it can arrive to the rest of our system and be processed, in this case by the Traffic vertical service, okay? And this is one of the good things in Kapacitor that we mentioned: you can easily implement a UDF and put values into the queue, so you can actually build whatever output you want for the system very easily. So that’s the whole picture. We see the real-time speed data; it’s used both for the forecast algorithm and to join it with the predictions and compute the anomaly score. And then it goes out of Kapacitor into the rest of the system. So this is the last case that we wanted to present. Now, some very short conclusions.
Fuad Mimoun 00:37:26.439 Well, we have tried InfluxDB and Kapacitor in several application-level use cases, with some pros and some cons. Pros: InfluxDB is great for time series queries. Really, it’s very easy not only to store the data but to query and aggregate by time. It really makes things way easier than trying to write SQL queries. Also, Kapacitor is very good for some things, like simple triggers. In the case of alerts, it’s very obvious that implementing it in Kapacitor saves a lot of development time. You just put in the simple expression and you have it working. And also for doing simple processing on a stream of data: you put it there, it connects to the stream, and it works. Also, the possibility of having your own UDFs, the user-defined functions, provides a lot of flexibility, as we said, for sending the data to another system, like in our case through the queue, or for doing some computations that you cannot easily put inside the TICKscript. Then the cons. Well, we tried to use Kapacitor for a lot of things and maybe at some of them we failed. Maybe we tried to do too much with Kapacitor, and we see that it’s hard to test and debug, especially when you try to do some long and complex scripts. So we wouldn’t recommend doing that. If you have to do a complex task, maybe it’s better to do it in another way. For example, one thing that we saw: in Kapacitor you can easily put in a batch query and schedule it to run every five minutes or whatever. Well, if you want to use it as a scheduler, maybe it’s better to just put the scheduler outside of Kapacitor, where you can build something and test it well. You can still use all the power of InfluxDB for the time series queries, and it will probably be easier to develop. So, yeah, those are our conclusions, and kind of how it went. So that’s it. Thanks for your attention. And now, time for questions.
Chris Churilo 00:39:37.092 Wow, that was a lot of information in a relatively short period of time [laughter]. So my question really quickly is, so why did you choose a time series database and how did you find InfluxDB?
Fuad Mimoun 00:39:57.501 Well, as I said, actually we started thinking about time series databases a long time ago, but I think at that time InfluxDB didn't yet exist or wasn't very well known. So we went with relational databases and worked with those, and at some point we just started using InfluxDB. I think we probably started with the monitoring, which is what it's best known for. And after a while using it for monitoring we thought, "Okay. Maybe we can use this for other things too." And then we started playing with it and we came to this.
Albert Zaragoza 00:40:33.034 Yeah. I think, as you can see, there's a lot of real-time data analysis in our platform for many different use cases. And I don't think we found another tool that gave us the flexibility that the TICK Stack gave us. Therefore we started developing, like Danny said, with monitoring, and as soon as we understood the tool and could use it for other things, we just continued with it. So right now, as you can see, we have some issues every now and then with Kapacitor, and we've managed to solve those problems by building our own Python scripts and setting them up. But overall, the TICK Stack gives us the flexibility to build everything we need for real-time analysis.
Chris Churilo 00:41:22.393 I think it’s a theme that we hear quite often, that it’s not just limited to one particular use case. It’s a development tool so you can use it in all kinds of different ways. And that’s why-whoops, sorry?
Albert Zaragoza 00:41:34.597 We thought about maybe, during this talk, also covering how we're using it for our metrics and our analytics. And as Danny said, our infrastructure is quite interesting as well: we have many different microservices. So we thought about adding that to this, but probably we can discuss it in another webinar, because we are really using it for that quite a lot and it's really useful.
Chris Churilo 00:41:59.827 Yeah. And actually I think the purpose of these kinds of webinars that we're doing is to share with our community members the varied ways that people are using it, to maybe spark some inspiration on how you could use it within your infrastructure. So we have a question from Arum, and Arum says, "Hi. Nice use cases. One question, if speed reduces below threshold for a couple of minutes and then it comes back to normal, can we avoid unnecessary alerts in Kapacitor?"
Daniel Lázaro 00:42:33.844 Yeah. That's something that we still haven't figured out. Actually, we are still working on the alerts functionality. The order in which we presented them is not exactly the order in which we developed them. And the alerts functionality is something we are still working on, so this is something that we still haven't implemented. We still have to work more on that. So actually, if someone has the answer it would be interesting [laughter] also to know, because we will really need to do that very shortly.
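One way to approach Arum's question with standard Kapacitor features is to require the condition to hold for some duration before alerting, and to emit only on state changes. A hedged sketch, with illustrative measurement, field, and threshold values (not from the webinar):

```
// Suppress short-lived dips: stateDuration() tracks how many minutes
// the speed has been below the threshold (it resets to -1 when the
// condition is false), so the alert only fires after a sustained dip,
// and stateChangesOnly() avoids repeated alerts while it stays low.
stream
    |from()
        .measurement('speed')
        .groupBy('road_segment')
    |stateDuration(lambda: "value" < 20.0)
        .unit(1m)
        .as('below_for')
    |alert()
        .crit(lambda: "below_for" >= 5)
        .stateChangesOnly()
        .log('/var/log/speed_alerts.log')
```

Kapacitor's alert node also offers a `.flapping()` option for damping alerts that oscillate around a threshold.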
Chris Churilo 00:43:09.206 But it is a really good question, Arum. I think you've been really thinking about it. And he says, "Okay. Thanks team." So as the team mentioned, if you guys have the answer to that, that would be great too. And if anyone else has any other questions, let me just check the Q&A and the chat. Feel free to write your questions in there, or if you want to raise your hand I can unmute you and we can have a conversation that way as well. I'm more than happy to do that. So have you guys started playing with Flux at all or started to look at that?
Daniel Lázaro 00:43:43.902 Yeah. Actually, well, we went to InfluxDays London this summer. Yeah, we liked the presentation there, and we started playing a bit with it. We see it's very powerful, but at that point it still wasn't in a final release. I'm not sure if it's already [crosstalk] a production version?
Chris Churilo 00:44:08.712 No, no. I mean, it’s still in a kind of a preview version. But it continues to get stronger and stronger.
Daniel Lázaro 00:44:17.964 Yeah. We played with it and we intend to keep an eye on it because it looks very promising. And maybe some of the shortcomings that we saw in Kapacitor for some of these use cases, we think that maybe Flux can also be up to those tasks. So, yeah, we'll definitely keep an eye on it.
Chris Churilo 00:44:40.065 Yeah. I think the shortcomings that you presented are definitely things that we’re trying to address with Flux. I think we kind of hit a wall with Kapacitor and I think people like yourselves are starting to use it in ways that we never imagined. So now it’s time to be able to take it to the next level.
Daniel Lázaro 00:44:58.330 Yeah, yeah. That was our impression. We said, "Okay. Maybe this thing that we're trying to do is not what Kapacitor was built for, because it looks so flexible that you kind of want to use it for everything." But at some point you have to say, okay, maybe this is too much. Let's wait for Flux, where all of these cases seem to be considered. Maybe that will be a better tool for those cases, because we kind of knew this is the place to stop. Kapacitor was not built for exactly this, so maybe not [laughter].
Chris Churilo 00:45:30.905 Yeah, yeah, definitely, definitely. So as you were building out your solutions, how did you learn about how to use the TICK Stack in such unique ways? Was it just a lot of trial and error? Did you reach out to the community? I think we probably have a lot of people on the call that might be pretty new and so it would be kind of good advice to share with them.
Daniel Lázaro 00:45:54.256 Well, I’d say-
Albert Zaragoza 00:45:56.001 I think in the team, as we mentioned, we started with the monitoring stack. And there was someone on our team who had used the TICK Stack for monitoring beforehand, therefore he kind of pushed the tool. And then we started learning. But actually we haven't really reached out that much to the community. And going to the InfluxDays recently, basically that's where we wanted to kind of grow and start getting to know more people that are using the TICK Stack. And here in Spain, we don't know that many companies using it, therefore it's great to start building this community. But basically, yeah, it's been by trial and error, and trying ourselves. And as Danny mentioned before, we try to explore. We have a use case, we use the tools, and sometimes it works, sometimes it doesn't. But that's mainly how we got into using the stack, really.
Chris Churilo 00:47:00.107 And you guys are actually starting a time series Barcelona meet-up, right? We’re going to kick that off in the new year. So if anybody on the call would like to join, I’m sure the guys would love to have you come and participate, present, discuss.
Albert Zaragoza 00:47:14.698 Exactly. We are very excited because, with the TICK Stack, I mean, there are a lot of people building, at least in the IoT world, a lot of people building real-time applications. And I think knowing this stack will be very useful to everyone. So it's great to have a community and a meetup here in Barcelona, and it would be great to also have some great talks from you guys, from InfluxData, if possible, here.
Chris Churilo 00:47:42.274 Looking at what you guys built out, I mean, I have a really great appreciation for what you guys are doing. These are things that I guess a lot of us take for granted. We just assume that things in the mines are safe, or the traffic analysis is just somehow managed magically. So definitely, looking at this I’m realizing, wow, there’s a lot more to it than I think a lot of us realize. And I sure appreciate that you guys-companies like you are doing things like that to make the world a lot easier for all of us.
Albert Zaragoza 00:48:14.618 Hopefully, yeah. We are trying. We are trying [laughter]. It’s not that easy to solve the traffic problems, right? As you know, there in San Francisco it’s also not easy. But, yeah, we’re trying at least to give the tools and the visibility, right? So that’s what we do, no? We try to give data as soon as possible to the people that can make a difference, no? And can help operate the cities and do changes.
Chris Churilo 00:48:39.970 Yeah. And I think a lot of cities just say, "Oh. Well, let's just not allow cars. Just no cars at all." Which seems so black and white, and it just doesn't allow for a better way to manage traffic, or manage any of the things that we're trying to manage with data. So maybe we can just talk a little bit about-? So what precision level of data are you collecting? How much data are you collecting? What do your retention policies look like? Maybe describe it a little bit to the audience. And I'm going to guess that it's going to vary depending on each application of the TICK Stack that you have.
Albert Zaragoza 00:49:22.071 Exactly. Exactly. It varies a lot. One of the things very particular to our architecture and infrastructure is that most of our deployments are done in-house, in the data centers of each city or company that we deploy for. Basically there are many different legal reasons, and other ones, why the cities are interested in having the data as close as possible to themselves, and also in making use of the data centers that they've built over time. Therefore, it varies a lot. So for example, when you see the Loadsensing use case, the ones where we're monitoring infrastructure like bridges and mines, etc., the amount of data that we're gathering from there is not too much in the end. It's important to have real time, but the amount of data is very small. In the end you want to know, as Fuad was saying before, if the yield of the bridge is falling or not. But that amount of data is not massive. So in those installations we are dealing with very reasonable amounts of data. But then when we're talking about cities like Bogotá, for example, then that amount of data scales quite [inaudible]. So, yeah, it really depends. So it gets to terabytes in some of the installations, and in the other ones it's really not that great-
Daniel Lázaro 00:50:55.014 Yeah. And about the use of Influx, well, we usually start developing with the default settings, and then after we have a bit more insight on the case, on how the installation will be, we try to adjust those things. I can say there's no magic number, right?
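Moving off the default settings as an installation's data volume becomes clear typically means defining retention policies and downsampling per database. An illustrative InfluxQL sketch (database, policy, and measurement names are hypothetical, not from the talk):

```sql
-- Keep raw points for 30 days and make that the default policy.
CREATE RETENTION POLICY "raw_30d" ON "traffic" DURATION 30d REPLICATION 1 DEFAULT

-- A long-lived policy for downsampled data (104 weeks ~= 2 years).
CREATE RETENTION POLICY "agg_2y" ON "traffic" DURATION 104w REPLICATION 1

-- Continuously roll raw speed readings up into 5-minute means,
-- preserving all tags, so detail ages out while trends remain.
CREATE CONTINUOUS QUERY "cq_speed_5m" ON "traffic" BEGIN
  SELECT mean("value") AS "value"
  INTO "traffic"."agg_2y"."speed_5m"
  FROM "speed"
  GROUP BY time(5m), *
END
```

As noted, there's no magic number: the raw retention window is sized per deployment, from small structural-monitoring installations up to city-scale ones.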
Chris Churilo 00:51:14.938 Yeah. No, I think this is another common thing that I hear. But the great thing, just for audience, is that with these different use cases you’re not stuck with a tool that can only be used one way. And it’s important to understand that it really is going to be very flexible. The amount of data you collect, how precise you want it to be, when you want to get rid of it. These are all things that you should consider when you’re building out your solutions because you can make those changes.
Albert Zaragoza 00:51:43.567 Exactly. And as we were saying, in our particular case, where we have in-house scenarios in which we have to deploy the tools inside a data center, it's very useful to have a tool like TICK that can also help us with the metrics and what's really happening inside the infrastructure, not just delivering the use cases and the data that the customer needs for their operations. So, yeah, it's really useful to have one tool that can serve you in many ways.
Chris Churilo 00:52:16.488 Very nice. Well, it looks like our audience is a bit shy but that usually means questions are going to come via email [laughter] later on. So before we end today’s session, what advice would you give to our audience if they’re just starting out?
Daniel Lázaro 00:52:36.235 Well, I'd say do like we did and start trying it for yourselves, and see if the use cases match. And also the thing that we mentioned: if you see that it doesn't match, then maybe don't try to shoehorn it. Accept that there are some places where the TICK Stack will be great and there are some places where maybe it's not the right tool. But just try. Think about it and try it.
Albert Zaragoza 00:53:05.563 Definitely. I mean, here I can see you have a lot of webinars and use cases. I think these types of talks are great for people to kind of understand how other companies are using it, and maybe find a similar use case, or something that someone has done that actually is similar to the needs they have. So I think that would be the first key point: try the tool, and also try and learn from the community, see if you find any use cases that can help.
Chris Churilo 00:53:38.762 Yeah. I think those are great pieces of advice. And I liked what you were touching on there: use the right tool for the right job. Sometimes InfluxDB is not going to be the right tool, and that's okay. If it makes more sense to put it in a Postgres database because it's contextual data, then do it. It'll definitely save you time and a lot of effort. On the flip side, also don't shove stuff into a relational database that really should be in a time series database. Awesome. All right. Good. Well, thanks team. I thought that was really great. And I will do just a quick edit of the video. You probably heard my phone ringing in the middle there, so I'll edit that out, and then post it to the website so we can share it. And if anybody on the webinar today has any other questions for the team at Worldsensing, just send me an email. I will forward it to them. This is common, so don't be shy about that. And then we'll also make sure that we come up with the written version of this so you can take a little bit deeper look at what they're doing. And if you happen to be in Barcelona, please look up the time series meetup there, because we really want to make sure we can build out a really strong community in that area. Because there are probably a lot of people that could benefit from these kinds of things.
Albert Zaragoza 00:55:00.033 Sure. We are aiming at doing the first one in January, so it’s always a good time to come to Barcelona [laughter].
Chris Churilo 00:55:07.522 As long as you can avoid Mobile World Congress, yes.
Albert Zaragoza 00:55:11.173 Yeah, exactly. Exactly. It’s done, it’s past already.
Chris Churilo 00:55:17.004 So thank you so much. Wonderful, wonderful presentation. And I will definitely chat with you guys later. And everyone else, I hope to see you at our trainings on Thursdays or our Webinars on Tuesdays. So thanks and have a fantastic day. And make something really amazing with the TICK Stack.
Albert Zaragoza 00:55:34.142 Thanks, Chris. Bye.
Daniel Lázaro 00:55:35.485 Thank you.
Fuad Mimoun 00:55:35.428 Bye.
Chris Churilo 00:55:36.452 Bye-bye.
[/et_pb_toggle]