RESTful API – How to Consume, Extract, Store and Visualize Data with InfluxDB and Grafana
Session date: Sep 14, 2021 08:00am (Pacific Time)
Nowadays, every single modern application, system or solution exposes a RESTful API. On one hand, this is absolutely great, and it has led to where we are today, with hundreds of other solutions or applications that can leverage these APIs, extend them, or even build on top of them.
On the other hand, we have difficulty monitoring these new and modern systems, applications or solutions.
In this session, we will learn how to query the data first using Swagger, when available, extract and parse the data that’s useful for us, store it in InfluxDB, and finally how to create beautiful and meaningful dashboards to have everything on a single pane of glass.
Watch the Webinar
Watch the webinar “RESTful API – How to Consume, Extract, Store and Visualize Data with InfluxDB and Grafana” by filling out the form and clicking on the Watch Webinar button on the right. This will open the recording.
[et_pb_toggle _builder_version="3.17.6" title="Transcript" title_font_size="26" border_width_all="0px" border_width_bottom="1px" module_class="transcript-toggle" closed_toggle_background_color="rgba(255,255,255,0)"]
Here is an unedited transcript of the webinar “RESTful API – How to Consume, Extract, Store and Visualize Data with InfluxDB and Grafana”. This is provided for those who prefer to read than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
Speakers:
- Caitlin Croft: Customer Marketing Manager, InfluxData
- Jorge de la Cruz: Systems Engineer, Veeam Software
Caitlin Croft: 00:00:00.000 Hello, everyone. And welcome to today’s webinar. My name is Caitlin Croft. I’m very excited to have Jorge de la Cruz who’s based in London. He’s one of our amazing InfluxAces, and he also works at Veeam. Very excited to have him here talking about how he uses InfluxDB and Grafana. Once again, this session is being recorded. And please feel free to post any questions you may have in the Q&A box or the chat down at the bottom of your Zoom screen. All right, Jorge, are you ready to go?
Jorge de la Cruz: 00:00:34.896 Yeah, I’m always ready.
Caitlin Croft: 00:00:36.924 Well, I’ll hand things off to you.
Jorge de la Cruz: 00:00:40.647 Perfect. Thank you. Thank you so much, Caitlin. So yeah, thank you for being here today. I hope that it’s not too late or too early wherever you’re consuming this session today. I’m just here to share how I solved the problem of having different RESTful APIs. Especially here where I work at Veeam, there are more and more product announcements. And all of them, of course, include a RESTful API, meaning that sometimes the monitoring, the observability of those applications, does not [inaudible] natively. But of course, thanks to that RESTful API which is exposed, you can always consume it, extract it, store it, and of course, yes, visualize that RESTful API. So today’s slides are really about how I solved that problem. I have been doing this for a few years already, probably like three, four years, and it’s been really good, really successful just for myself in terms of knowledge and understanding how all of this is done, and as well for the community, in this case the Veeam community or Veeam work community. I mean, I’ve been touching so many other APIs. So yeah, let’s go there. You can find me on Twitter, @jorgedelacruz, or you can find blog posts around InfluxDB, Grafana, VMware, Veeam on jorgedelacruz.es if you speak Spanish or jorgedelacruz.uk for English-based content.
Jorge de la Cruz: 00:02:22.121 Okay. So the agenda for today: it will be an introduction and quick overview of RESTful APIs, super, super basic. Then it will be the components and diagram of the how-to - how all of this is done - so hopefully you can understand with a diagram what we are trying to achieve over here. Then we are going to move into a deep dive on Swagger, and I will explain later on what Swagger is. Then a deep dive on Bash shell and jq. And finally, we’re going to take a deep dive as well on InfluxDB 1.x queries and using Grafana as a dashboarding system. So as Caitlin has mentioned, if you have any questions during any slide, just please put them over there, and we’ll try our best to answer them, probably at the end. We’ll allow like 15, 20 minutes. I don’t know. It depends. Let’s see how it goes. So, introduction and quick overview on RESTful APIs. And I’m aware that you might be already an expert on this, and this might be too basic for you. So hopefully, bear with me until we pass this introduction. Next slide, please - from zero to time series.
Jorge de la Cruz: 00:03:40.329 What is an API? It’s an acronym for Application Programming Interface. I think I’ve seen this analogy somewhere in a post or another video. I don’t even remember. It was probably in an AWS presentation. So think of an API as a waitress. You have a menu, and you select some food, and then the waitress takes that order for your food to the kitchen, which prepares the food, and the waitress delivers the order to your table. So in the end, you never speak with the kitchen directly. You just use that API in the middle to go from your request to the response. That’s probably a really good analogy. So bear in mind, when you’re thinking about an API and what you can do with it, that you request something from the API, the API queries the system or the application, and it gives you a response. So a world without an API - well, imagine when travelling, certainly when we start travelling again: trying to arrange some holidays, without an API you would need to open one website, and then even to make the payment for that holiday that you’re booking, you would need to go to another website. And imagine that you’re trying to arrange some taxis as well at the same time, some hotels - it would be so difficult for you. You would need to visit maybe 20 or 30 different websites to arrange a quick trip somewhere, even locally within your country. That would be a world without an API. That would be a world where I would not want to live, because of course it would be so complex, so difficult, opening so many websites to do basic things that we do today like comparing prices, or comparing the best time, or paying - which, once again, is done through an API.
Jorge de la Cruz: 00:05:41.784 So how does it look, a world with an API? Well, a world with an API it is like this. So you then visit some websites like, I don’t know, booking.com - it depends on the country of course. I think here in Europe we tend to use a lot of booking.com. I remember in Spain a website called atrapalo.com where you just go to the website and then you can just search for flights, search for the, well, most flights, search for the transport as well. You can just book train tickets. You can of course make the payment without going anywhere else, just directly there. It auto-connects to your bank and does anything. Maybe you got Multi-Factor Authentication, it will prompt something on your phone. But once again, you do not need to open millions of websites. Just one website, that website is talking with so many different APIs, and you don’t need to worry about that. So that’s a world with an API, and that’s a much better world of course. So APIs are everywhere in every single website that you visit. More and more, you are just consuming kind of a result of a system which is querying all their applications or other systems through an API.
Jorge de la Cruz: 00:07:02.171 What is a REST API? That would be representational state transfer, REST, and the [inaudible] will be an architectural style based on web standards and the HTTP protocol. We’re going to see more of that in the next slides, so you understand a bit more about the responses. Everything is a resource, so you just need to query those resources. And then you can use CRUD over HTTP methods in a server-client model, as we have seen before with that example of the waitress taking the order back and forth. So the HTTP methods - the most popular protocol used for RESTful implementations is HTTP. I’m pretty sure every modern application that you are using today which exposes an API is going to expose it using HTTP, or HTTPS probably. And you uniquely address data using a uniform and minimal set of commands.
Jorge de la Cruz: 00:08:05.313 So usually you are going to have these four commands. You might have, maybe, a few others, but these are the most common. So with GET, of course, you just read the data or the resources. With POST, you will create data or resources, depending on the application. DELETE will delete data or resources within the application. And PUT will update the data or the resources. So over the years, what I’ve experienced myself is that GET gave me everything I needed, really. So I use a lot of GET just to consume information. In the case of [inaudible] like Veeam, for example, just to consume: the latest job - has it been successful? How many VMs am I protecting, and so on and so forth; the storage that I’m consuming. So GET, to me, is one of the most important, just to obtain that information and do something else with it. But, of course, if you’re talking about other exercises like orchestration, or you have built, as a part of your automation systems, some deletion of data when it’s [inaudible] or [inaudible], you can use things like DELETE and so on.
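Those four verbs map onto CRUD one-to-one. As a minimal sketch of that mapping (the endpoint shown in the comments is made up purely for illustration, not a real Veeam URL), a small shell helper makes it explicit:

```shell
#!/bin/sh
# Map a CRUD operation name to its HTTP method.
# Illustrative cURL calls against a hypothetical API (do not run as-is):
#   curl -X GET    https://api.example.com/v1/repositories        # read
#   curl -X POST   https://api.example.com/v1/repositories        # create
#   curl -X PUT    https://api.example.com/v1/repositories/r1     # update
#   curl -X DELETE https://api.example.com/v1/repositories/r1     # delete
crud_to_http() {
  case "$1" in
    create) echo "POST" ;;
    read)   echo "GET" ;;
    update) echo "PUT" ;;
    delete) echo "DELETE" ;;
    *)      echo "UNKNOWN"; return 1 ;;
  esac
}
```

As the talk notes, a monitoring script usually only ever needs the `read` row of this table.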
Jorge de la Cruz: 00:09:28.377 The HTTP codes that I was mentioning - sorry, you cannot see this very well. I changed the format of the slides last minute, and it’s pretty difficult to read. But when you do queries to the APIs, you will receive a response from the server itself. The 100s will be informational codes. The 200s will be success codes - if you just get a 200, the HTTP query, the RESTful API call that you ran, was successful, and you will most likely get some data. The 300s are redirection codes, in case you’re trying to do something and the API redirects you to another point. The 400s will be client error codes. You have probably known the 404 for a long, long time - that’s the internet, really. Any 400 will be a client-side error. And sometimes, if you try to do something strange with the query, you might receive 500 server error codes, and you need to debug that on the server side. But the most important one for us is probably the 200. If you run a query and you obtain that 200 code, that’s good. Thumbs up, and then you will receive some information from there.
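A small sketch of how a script might branch on those code families. The `curl -w '%{http_code}'` trick in the comment is a common way to capture only the status code of a request; the `$url` there is a placeholder:

```shell
#!/bin/sh
# Classify an HTTP status code into the families described above.
http_class() {
  case "$1" in
    1??) echo "informational" ;;
    2??) echo "success" ;;
    3??) echo "redirection" ;;
    4??) echo "client error" ;;
    5??) echo "server error" ;;
    *)   echo "unknown" ;;
  esac
}
# With cURL you can capture just the status code of a request, e.g.:
#   code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
#   [ "$(http_class "$code")" = "success" ] || echo "query failed: $code"
```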
Jorge de la Cruz: 00:10:52.669 Talking with an API - there are different tools that you can use to talk with an API. The most common one might be Postman, which - at the beginning, it was embedded in Google Chrome as an application, then it was released as a standalone. And I think recently, wasn’t Postman bought for a lot of money by somebody? So it’s one of the most common ways to consume APIs. I have really loved Postman since the early days. I still use it because you can build your own libraries, and you can publish those libraries. You can find on GitHub a lot of Postman libraries for pretty much every API in the world. So you can import one, and then you have everything there ready to consume. So I really recommend Postman. Then if you’re using Mozilla Firefox, you can always use a RESTClient. I tend to use it from time to time, most likely for reverse-engineering stuff. So applications, websites or, once again, applications that have been released by vendors that do not expose the API directly, but you know that what you are consuming in the website itself is built using an API. So I just use a RESTClient on Mozilla Firefox to do the reverse engineering, to see the API calls that happen when I click, for example, [inaudible] repositories, and then extract that API call and do other stuff with it - like, of course, saving the data into InfluxDB.
Jorge de la Cruz: 00:12:27.954 Finally, we have Swagger, which the software needs to expose itself. That’s my favorite. So when you’re trying to consume an API, I would recommend that you search for the Swagger version of it, because usually it’s an implementation by the software or by the vendor where you can just go quickly into your browser, and you have it here on the top right. And you can just introduce the API token over here. And you can perform queries directly from here, so you can just do GETs or POSTs, and it’s really comfortable. It’s a visual way of doing a lot of things. So I totally recommend searching for the Swagger when it’s available.
Jorge de la Cruz: 00:13:15.257 And finally, you have cURL for the command-line geek. I tend to combine, most likely, RESTClient with some Swagger with some cURL. The cURL will be the final part, the final piece that you will add later to your Bash shell scripts or even PowerShell scripts, and that will be the line that triggers the same thing you would do visually, really. So if you have any questions so far - because you can see that this goes from less to more - yes, please type them there in the Q&A or in the chat. Now, the components and the full diagram of the how-to - how to do all of this - so you can comprehend visually what’s what. Super, super, super simple. You can see here a few components, starting from the top. So we have that REST application, or that REST software, or anything which is at least exposing that RESTful API. Usually, thanks to the tools I mentioned before, I tend to work out what the queries are that I want to do with cURL. Then I introduce those queries into a Bash shell script. I have a lot of examples on my GitHub. And then finally, this Bash shell script, I can automate it every 30 minutes, every hour, to run the queries against the RESTful API and download what is important for me - once again, talking about Veeam, talking about the latest backup status and so on.
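The scheduling step he describes - running the Bash shell script every 30 minutes or every hour - is typically done with cron. This crontab fragment is just an illustrative sketch; the script path and log path are made up:

```cron
# Run the collector script every 30 minutes (paths are hypothetical)
*/30 * * * * /opt/scripts/veeam-to-influxdb.sh >> /var/log/veeam-to-influxdb.log 2>&1
# Or hourly, at minute 0
0 * * * * /opt/scripts/veeam-to-influxdb.sh
```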
Jorge de la Cruz: 00:14:56.946 Then that data that I extract from the RESTful API, I save into InfluxDB - into 1.x or 2.x, it doesn’t matter, or InfluxDB Cloud. You can just put it there if you want. And finally, once I have the data in there, Grafana queries InfluxDB, which of course is what you probably want, because later on you can download the dashboards that I tend to make ready to use. A lot of them are for Veeam - I have built a lot of dashboards already, ready to consume - and this setup gives you a lot of granularity. Meaning, if you are polling this every five minutes, then think about doing a query over the last year. If you try to query the last year at five-minute granularity against a RESTful API server, probably the server will hang up, you will probably have trouble sizing that and, yeah, it will probably be really slow.
Jorge de la Cruz: 00:15:57.213 But you know what? With InfluxDB, you make a query with [inaudible] data of every five minutes, and you will not have any problems, because this is a purpose-built database for this. So that is why, for me, this is so important: InfluxDB, this component over here, makes total sense, instead of kind of bypassing it and having Grafana go directly to the REST API. Once again, if you’re trying to do a long query, that might be problematic. So I prefer doing this: extract it from the REST API and put it into InfluxDB, and have a purpose-built time series database to query the data. Plus, of course, if you lose this because of any problem - something happened to this application - the first thing you want to do is some troubleshooting. Take a look at how the performance was yesterday or a week ago, in case it was growing exponentially, becoming problematic. You can always do this with a Grafana dashboard or in InfluxDB, which of course is an outside component, outside of your RESTful API. Hopefully, you understand this diagram. It’s not many components. Here is the RESTful API, plus one Bash shell script pointing to InfluxDB, and then the Grafana dashboard, and that’s it.
Jorge de la Cruz: 00:17:13.624 So let’s move to the next slide. Let’s do a quick deep-dive into Swagger. It’s not always available, but when it is, it’s the best resource. So I put a GIF here. This is the API version 4. The first thing I need to do in this Swagger is authenticate myself. So I go to this token endpoint. I put my username and my password, and this is going to give me a 200, hopefully, and give me the token. And you can see the response code - it was a 200. So now, with the Bearer, I click here, explore. And now I’m logged into the Swagger, and then I can do stuff like, for example, get the backup repositories. And there you go. I can see here the backup repositories. This is in JSON format. So with this JSON, you can do, later on, what you need with the JSON. And now, this is something in the [inaudible] that it should be [inaudible] store. Because we are going to share the slides with you, it will be super easy for you to take a look later at this specific GIF. But again, it’s nothing special. Just doing the authentication and getting the access token. Usually, the access token expires - here, it’s an hour, as you can see. But usually, it’s around 15 minutes, 30 minutes or an hour, so you need to refresh those tokens. As you can see, super simple. Everything visual. And you even have the cURL over here, so you can grab that cURL and do this directly from the command line, and of course you will obtain [inaudible] with the JSON.
Jorge de la Cruz: 00:18:49.789 And we need to do something with the JSON, not take it as it is and that’s it. What are the things from the JSON that are important? Well, that depends on you, on the application you’re reading, and what you are expecting to obtain out of the JSON. So in this case, there is a lot of data here, which is great. But I think, for me, what’s more important is the storage ID, the storage cache path, the capacity bytes, the free space bytes - so I can do some mathematical operations and see how much I’m using - and the ID, plus the name. So that was what was important for me. Maybe as well whether it has encryption or not - that would probably be a good one to add. But once again, that is what we want to extract from this JSON, which is really comprehensive. So how do we do that? How do we parse a whole JSON that might be extremely big or really, really long? How do we parse it and take what is important for us? We do it with, well, with a shell, as you have seen, and jq. jq is like sed for JSON data - at the end, we’re going to parse like a pro. It’s really a tool that will help you to query and to extract the data, parse the data, from a JSON file. So as you can see here - do you remember that JSON response from my application, which was really, really long, with a lot of data that wasn’t that important to me? As I mentioned, a few fields were important for me.
Jorge de la Cruz: 00:20:38.194 So, as simple as this: you have jqplay.org. I don’t know how many hours I have spent there. Someday I will download that Google Chrome extension to tell me. But I have spent many, many hours there, just taking the JSON from this application, pasting it there, and then doing, well, the parsing, to see what is important and how I take it. For example, as I mentioned, to me, what is important will be the storage ID, the cache path, the capacity bytes, free space bytes, ID, and name. So with a simple jq and this command over here, from this really long, horrible JSON, the result is this. As you can see, this will be the objectStorageId, this the cachePath, this the capacityBytes, this the freeSpaceBytes, the ID, and the name. Having all of this in a Bash shell file, I assign every single one of those to a variable. And then that variable, later on, I just push with another call to the InfluxDB API. In this case, I just push it and [inaudible] storage ID is equal to this long string over here. Hopefully, this is simple. I know that we started really, really basic. We’ve been moving into the middle, and now we are at the level of doing jq and doing the parsing. And sometimes this, probably the final step, can be tricky or difficult to really understand - to really parse a JSON into variables that, of course, you can save later into InfluxDB. So hopefully this is clear. And then, I will move to the next slide.
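To make the jq step concrete, here is a small, self-contained sketch. The JSON is a made-up, trimmed-down stand-in for a repository response - the field names follow the ones mentioned above, but it is not the actual Veeam payload:

```shell
#!/bin/sh
# A trimmed, made-up repository JSON, standing in for the real API response.
json='{
  "data": [
    {
      "objectStorageId": "00000000-aaaa-bbbb-cccc-111111111111",
      "cachePath": "/mnt/cache",
      "capacityBytes": 107374182400,
      "freeSpaceBytes": 42949672960,
      "id": "repo-01",
      "name": "Default Repository",
      "extraFieldWeIgnore": true
    }
  ]
}'

# Pull out only the fields we care about, with raw (-r) output so the
# values land in shell variables without surrounding quotes.
name=$(echo "$json" | jq -r '.data[0].name')
capacity=$(echo "$json" | jq -r '.data[0].capacityBytes')
free=$(echo "$json" | jq -r '.data[0].freeSpaceBytes')

# The simple mathematical operation from the talk: used = capacity - free.
used=$((capacity - free))
echo "$name uses $used bytes"
```

This is exactly the kind of filter you can prototype on jqplay.org before moving it into the script.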
Jorge de la Cruz: 00:22:39.159 Once again, you will have these slides with you, so in case you want to look at these examples later, you can always go back to the slides. I don’t know which slide number this is. But yeah. Anyway, putting all of this into a Bash shell script - I mentioned already that that’s the most important thing. At the end, you will just have [inaudible] with the Bash shell script. What I’ve done here is really take the InfluxDB details, because that’s the target where we’re going to put the data. Then some variables for the source of the data, meaning the API: you can see the user, the password, the REST server that is exposed, which ports - because sometimes the ports might be different. And finally, do you remember the GIF that I showed you, where it was really [inaudible] - to obtain the token, there was a cURL over there. So I just copied that cURL and put it here in a specific variable. What it does is just invoke a cURL with a POST, as you can see right here, with the password, with the username, to the specific HTTPS endpoint with a specific port, to the specific URL for obtaining the token - which is the version 4 token in my case - and [inaudible] I even then included a jq to just give me the access token, because the JSON will have maybe four, five fields. There is first the token, then the ID, the time when [inaudible] and so on. I don’t need any of that. I just need the access token, because I will need it as one variable to keep going deep down, like a rabbit hole, through the different scripts.
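That token step - a cURL POST piped into jq to keep only the access token - looks roughly like this. The endpoint, parameters, and response field names here are assumptions based on what is shown on the slide, and the response JSON is simulated so the jq part can be shown on its own:

```shell
#!/bin/sh
# Hypothetical token request; the real call would be something like:
#   response=$(curl -s -X POST "https://$server:$port/v4/token" \
#     -H "Content-Type: application/x-www-form-urlencoded" \
#     -d "grant_type=password&username=$user&password=$pass")
# A made-up response stands in for it here:
response='{"access_token":"eyJabc123","token_type":"bearer","expires_in":3600}'

# Keep only the access token; every other field in the JSON is discarded.
token=$(echo "$response" | jq -r '.access_token')
echo "Authorization: Bearer $token"
```

The resulting `$token` variable is then injected into every subsequent query, and refreshed whenever it expires.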
Jorge de la Cruz: 00:24:30.150 So that’s how I obtain the Bearer in this case. And I can use that Bearer later on. And then I use that Bearer later on, for example - and I don’t want to go really, really deep; even if this looks deep, I’ll explain it to you because it’s quite simple code, really. So in this case, I just took, once again, the REST server - remember, HTTPS with the ID, blah, blah, blah, with the port. In this case, I call the [inaudible] for backup repositories: give me the backup repositories. And what I’ve done here is just another cURL, against the individual URL. You can see that I’m injecting the Bearer, the access token, into this query. And then I will have, probably, a big JSON. And from that JSON, I need to parse things. So for example, I’m doing a loop over here, because I may have more than one repository in this specific case. And then, from each repository, here’s the magic with jq: jq raw output from the first backup repository - give me the name, give me the capacity bytes, give me the free space bytes, give me the storage ID, and give me storage encryption enabled, yes or no. With all of those variables, what do I do at the end? The final part of the diagram: I just directly write this data into the InfluxDB database using the client API. So once again, this Bash shell script is really smart. You just download the data from one place, and then you just send it to the other side and write that data into InfluxDB.
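The loop he describes can be sketched like this. In the real script the JSON would come from the authenticated cURL call shown in the comment; here a made-up two-repository response stands in for it, so the parsing logic and the InfluxDB 1.x line-protocol construction can be shown (and run) without a live server. The measurement name and endpoints are illustrative assumptions:

```shell
#!/bin/sh
# In the real script the JSON comes from something like:
#   curl -s -X GET -H "Authorization: Bearer $token" \
#        "https://$server:$port/v4/backupInfrastructure/repositories"
repos='{"data":[
  {"name":"repo-main","capacityBytes":1000,"freeSpaceBytes":400,"id":"r1"},
  {"name":"repo-dr","capacityBytes":2000,"freeSpaceBytes":1500,"id":"r2"}
]}'

count=$(echo "$repos" | jq '.data | length')
i=0
while [ "$i" -lt "$count" ]; do
  # jq raw output (-r) per repository, as in the talk.
  name=$(echo "$repos" | jq -r ".data[$i].name")
  capacity=$(echo "$repos" | jq -r ".data[$i].capacityBytes")
  free=$(echo "$repos" | jq -r ".data[$i].freeSpaceBytes")

  # InfluxDB 1.x line protocol: measurement,tag=value field=value,...
  line="veeam_repositories,name=$name capacityBytes=$capacity,freeSpaceBytes=$free"
  echo "$line"
  # The real script would POST each line to the write API instead, e.g.:
  #   curl -s -XPOST "http://influxdb:8086/write?db=veeam" --data-binary "$line"
  i=$((i + 1))
done
```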
Jorge de la Cruz: 00:26:10.923 So yeah, I have other things like [inaudible] storage repositories and so on. But this was a simple example of how to obtain the data. In the loop, you can see that at the end I just increment by one, so it passes to the next repository, and the next, until there are no more repositories. The code is quite simple. My skills as a developer are not really advanced. It’s really basic, so I’m pretty sure that if I have done this, you can do this as well. Or hopefully, for your own sake, you can do it better, with more functions and a lot of stuff to save you time. I know that I need to improve the code a bit, but this works. So for now, it’s okay, and it gives me the opportunity to show you this in our webinar, so that’s good.
Jorge de la Cruz: 00:27:07.202 Let’s do a deep-dive on InfluxDB 1.x queries, using Grafana. Now that we have all the data in InfluxDB, these are happy days. We have passed the most difficult part, probably - that really deep dive of doing code, of doing the jq parsing, which, I’d say, is fun, but it’s tough. But we have passed it; now we have the data in InfluxDB. So let’s move to the more visual part, which is using Grafana: how to quickly build gorgeous yet useful dashboards. This is using InfluxDB 1.x, so once again, this might be different with InfluxDB 2.x or if you use something other than InfluxQL. But anyway, if you’re still using InfluxDB 1.x and the legacy query language, you will see that here I just query these Office 365 jobs, where the Veeam job name is [inaudible], and then: give me the total duration, the mean(), and group by the tag (veeamjobname). A really, really simple query, which will give us this. Of course, I just changed this in Grafana to show just dots. So you can see the different jobs, like the backup job, BLOGUK, EXCHANGE, ONEDRIVE, SHAREPOINT, TEAMS - they are really, really beautiful. And you can see them by time, the duration in seconds. We have a maximum of two minutes, and usually they are under the one-minute mark, really. So extremely good, really, really simple. All of this data came down from that ugly exercise of being in [inaudible] and doing all of the hard work, removing all the sand. Finally, you can see the beautiful gems over here, the [inaudible].
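The query he describes can be written in InfluxQL roughly like this. The measurement, field, and tag names are guesses based on what is shown on the slide, not confirmed, and `$timeFilter` / `$__interval` are Grafana template macros:

```sql
SELECT mean("totalDuration")
FROM "veeam_office365_jobs"
WHERE $timeFilter
GROUP BY time($__interval), "veeamjobname"
```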
Jorge de la Cruz: 00:29:05.222 So, hopefully you like this better. Now, for example, another query. This time, instead of doing it visually, it’s the raw query over there, because I’m doing a small mathematical operation, which is just [inaudible] select the last capacity minus the last free space, and that will give us this. As simple as that. So nothing difficult, but you see the power of combining Grafana with, of course, extracting the data from InfluxDB. This was quite useful, because at the end I can see the used capacity on every backup repository - in near real time, depending on how often you’re querying the API, once again. But yeah. This is enough for me to say, okay, this is just 6 GB. And of course you can just select it by days and so on. So you can see how this changes as more space on disk is used.
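That small mathematical operation - last capacity minus last free space - looks roughly like this in InfluxQL; again, the measurement, field, and tag names are illustrative assumptions:

```sql
SELECT last("capacityBytes") - last("freeSpaceBytes")
FROM "veeam_repositories"
WHERE $timeFilter
GROUP BY "name"
```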
Jorge de la Cruz: 00:30:05.580 Finally, Grafana dashboard variables - this is another, kind of next-level piece of work on this. For things that you know are always the same - for example, jobs in the case of Veeam, or [inaudible] jobs, or VM names - something that is always the same and that you want to have on top of the dashboard, so that as soon as you select [inaudible] or another VM or another repository, the dashboard changes in real time, you need to add [inaudible] variables. What are [inaudible] variables? It would be as simple as making one query here to InfluxDB. Sometimes, the [inaudible] values from the Office 365 jobs with the job name - that’s the trick. In other cases, you might need to improve that and do a query with a subquery, so that you query the job names seen in the time frame that you have at the top of the dashboard. I don’t know if that makes sense. So for example, if you have jobs that you know you deleted three months ago, maybe you do not want them to appear in the variables list - because if the top of the Grafana dashboard says the last seven days, and you know you deleted that job a long, long time ago, you don’t need it to appear in the variables list. Then you will need to tweak the queries over here a bit more. But anyway, I just wanted to show that in order to put these variables, the [inaudible], on top of the Grafana dashboard, you need to make them variables, and this was just an example for you to see. You can see over here that they’re defined here at the bottom, and they appear like this.
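A Grafana variable backed by InfluxDB 1.x is usually a tag-values query of this shape (measurement and tag names are assumptions, as before). The time-bounded variant addresses the deleted-jobs case he mentions; support for the `WHERE time` clause on meta queries depends on the InfluxDB 1.x version:

```sql
-- All job names ever recorded:
SHOW TAG VALUES FROM "veeam_office365_jobs" WITH KEY = "veeamjobname"

-- Only job names seen recently, so long-deleted jobs drop off the list:
SHOW TAG VALUES FROM "veeam_office365_jobs" WITH KEY = "veeamjobname"
  WHERE time > now() - 7d
```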
Jorge de la Cruz: 00:31:56.361 For that reason, I mentioned that they are quite important, because later on, when - not you, but somebody on your team is consuming these dashboards, or even a client - when they’re consuming the dashboard, they will probably be really happy to see this normal, human way of ordering data: just going to the top, just filtering by the jobs or proxies or VMs, whatever. So that was quite a good example there. All the examples that you can build with all of this data - once again, it was tough. It was probably difficult to get all of this from the API into InfluxDB. But later on, look at this. So useful. This is an example of a product that [inaudible], which is protecting the [inaudible] VMs. But this dashboard is showing to you, to the customer, the VMs that are not protected, on a map. The use case is extremely simple here, right? You can see the dashboard, or the customer can see the dashboard quickly and say, “Look, we have red dots on some parts of the Azure regions which are not protected. So we just need to get to zero red dots, really, on the map.” In this case, it was mostly Central US, West US 2, East US 2, and so on. So extremely useful. Once again, all of this came just from the raw data we were parsing, that we went through with the coding and so on. And I’ll show you as well an example here with a pie chart. Nothing extremely exciting, but, well, anyway, I wanted to show you.
Jorge de la Cruz: 00:33:35.395 Another example, this time on the dark theme. In this case, it is useful because, of course, the last job result, for me, was quite important. I just wanted to know if the last backup that I ran was successful, right? Or if it’s running, or if it’s in error. So once again, thanks to all of this work that we’ve been doing, you can see a really good example of that hard work, put into a nice, beautiful way - those beautiful dots, with the time as well. And finally, the Job Read/Write Rate here in different colors. Probably this is not as exciting, but this and the latest job status are really quite important for me in my case. Another example, this time from an Enterprise Manager. This is another API. Once again, imagine you have hundreds of VMs: you probably want to know if that specific VM, or VMs, or even jobs, have been successful or not. So this is not even just jobs. This goes really granular, to say, “Okay, Jorge, do we have these Veeam computers protected, or these backup jobs to AWS protected?” And you can see the job, whether it failed or succeeded. And as well inside the VMs, in case you need it. Another example we have seen here on timing: you can always put some tables with data. So in this case, I just put the value. It will be a check - I think it’s kind of like an emoji with a green check. Grafana supports any kind of emoji, so if you don’t want that green check but you prefer another color or any other thing, you can have it. It can support images as well, but I haven’t played with those. These, once again, were just quick examples for you to see and to understand that from that hard work, now you can finally have the payback in visibility at the end. In that case, it makes sense to [inaudible].
Jorge de la Cruz: 00:35:43.779 I mentioned, quickly, the option of not using InfluxDB but actually having Grafana grab information from JSON directly. To me, I’ll say that I would not recommend that, because you are going to stress the server a lot - yeah, well, especially if you do long, long queries with a lot of data. The server giving you that information from the RESTful API will take time. For that reason, I prefer my small Bash shell script running more often, extracting less data and saving it into something purpose-built for time series like InfluxDB. But anyway, I know it’s there in the community. It’s of course a great effort by Marcus Olsson. I wanted to mention here his amazing work with the JSON API plugin, directly querying APIs. You have the link over here. You can just google JSON API plugin. So I think with that, I just wanted to show you a quick slide about InfluxDays. Maybe you want to mention a bit more, Caitlin, about this?
Caitlin Croft: 00:36:58.748 Yes. So in October, we have InfluxDays North America coming up. Like I mentioned, it is free. We have, on October 11th and 12th, the Hands-On Flux Training. So that’s a really great hands-on course. The trainers are from Italy. They’re a couple of data scientists, professors, and of course because they’re Italian, the use case that you’ll use throughout the entire course is a pizza oven with sensors. And then on October 25th, the day before the conference, we have a free Telegraf training. And then on October 26th to 27th, we have the actual conference itself. So it runs over two days. It will be completely virtual. Lots of great sessions led by engineers here as well as community members. So we would love to have you there. It’s completely free, so we definitely look forward to seeing everyone there. And Jorge, you already have a question, which I’m not surprised at. How are these tools more beneficial than just writing a check in a Python script using something like Requests?
Jorge de la Cruz: 00:38:18.140 I have seen use cases downloading the data from the RESTful API with Python and sending it into InfluxDB. What’s quite important and different there is the dashboarding that you can build within Grafana later on. So in the case that you want to use Python - I mean, this just solves one use case. If you need a dashboarding system to build the beautiful things that you have seen - with the map, with the tables, with a couple of time series - then go this route. If what you need is just a quick check, a quick Slack notification that the last backup is not okay, then yeah, Python would be great to obtain the last status. And if it’s equal to failed, send that notification to WhatsApp or Telegram or Slack and so on. That’s another use case for the [inaudible] APIs, yeah? 100%.
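The quick-check flow described here - fetch the last status, notify only on failure - can be sketched in the Bash style used elsewhere in the talk rather than Python. The endpoint, JSON shape, status values, and Slack webhook URL below are hypothetical placeholders, not taken from the talk:

```shell
#!/usr/bin/env bash
# Sketch: check the last job status and fire a Slack webhook when it failed.
# Endpoint, JSON shape, status strings, and webhook URL are hypothetical.

API_URL="https://veeam.example.com/api/jobs/last"
SLACK_WEBHOOK="https://hooks.slack.com/services/XXX/YYY/ZZZ"

# Decide whether a status string warrants a notification.
should_notify() {
  case "$1" in
    Failed|Error|Warning) return 0 ;;   # something is wrong: notify
    *) return 1 ;;                      # all good: stay quiet
  esac
}

# Network calls commented out for the sketch:
# status=$(curl -s -u "$USER:$PASS" "$API_URL" | jq -r '.lastResult')
# if should_notify "$status"; then
#   curl -s -XPOST "$SLACK_WEBHOOK" -H 'Content-Type: application/json' \
#     -d "{\"text\": \"Last backup finished with status: $status\"}"
# fi
```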
Caitlin Croft: 00:39:18.870 So someone’s asking, what’s the cost of the Hands-On Flux Training? It is $500 USD per person. Yeah. I think it’s completely worth it. I’ve seen people come into that course, and they might know InfluxQL. They know how to use InfluxDB, and they come out of it really having a really good understanding of Flux. So, Debbie, if you’re interested in joining, let me know. I might also be able to comp you a ticket. All right. Another question: do you know if it’s possible to use a Telegraf agent to collect data from APIs?
Jorge de la Cruz: 00:39:59.275 Yes, there is a Telegraf inputs.http plugin, if I’m not mistaken, that I use. You can use that to download query data - anything that is exposed in JSON directly. I have not made it work with a username and password, but for APIs that expose JSON without any username and password, that’s perfect. For example, for the Pie monitor, I used that inputs.http, and it worked really nicely. Beyond that, there might be other plugins, but I’m not aware of any, other than the Grafana direct JSON HTTP query.
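For reference, a minimal Telegraf configuration using the `inputs.http` plugin mentioned here might look like the following. The polled URL is a hypothetical placeholder; `data_format = "json"` tells Telegraf to parse the response body as JSON, and the output section forwards the parsed metrics to a local InfluxDB:

```toml
# Sketch of a Telegraf config for the inputs.http plugin (URL is hypothetical).
[[inputs.http]]
  urls = ["https://monitor.example.com/api/stats"]
  method = "GET"
  data_format = "json"          # parse the JSON response body into metrics

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
```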
Caitlin Croft: 00:40:48.784 Jorge, what are some tips and tricks that you’ve learned along the way of using InfluxDB with Grafana - using the two of them together?
Jorge de la Cruz: 00:40:59.712 I think what I’m going to reply will also answer the question that we have in the Q&A, which is: what’s your experience with heavy load on APIs, for example, 500 requests per second? So I think my experience is that if you do queries on an HTTP endpoint or an API with any other tool directly, even against the API itself, you are going to have problems scaling that. Whereas ingesting less data - so, just the last 30 minutes, in the case that the API allows that - and saving it into InfluxDB allows you later on, because you have a lot of data in a time series database, to query much more: a month’s worth of data, a year’s worth of data, with more granularity, like, I don’t know, every 30 minutes for the last month, something like that. It will be 100% more efficient going directly from InfluxDB into Grafana than querying the API directly. So that’s what I learned doing all of this exercise. I’ve learned as well that vendors tend to expose more and more APIs, but the pace of the monitoring tools that some of these vendors develop, or the visibility within the application itself, is sometimes more limited than what you see in the API. So you can take this as an opportunity to take that API, which already has a lot of information and usually exposes more, and build more dashboarding with Grafana.
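The pay-off described here - ingest the last 30 minutes frequently, then ask InfluxDB for a month of data pre-aggregated into 30-minute buckets - could look roughly like this against InfluxDB 1.x’s HTTP query endpoint. The database, measurement, and field names are hypothetical placeholders:

```shell
#!/usr/bin/env bash
# Sketch: query a month of data, downsampled to 30-minute means, from
# InfluxDB instead of the vendor's API. Names are hypothetical.

# Build the InfluxQL query string.
build_query() {
  printf 'SELECT mean("read_rate") FROM "veeam_jobs" WHERE time > now() - 30d GROUP BY time(30m)'
}

# Send it to the InfluxDB 1.x /query endpoint (network call commented out):
# curl -sG 'http://localhost:8086/query' \
#   --data-urlencode "db=veeam" \
#   --data-urlencode "q=$(build_query)"
```

Grafana issues essentially the same kind of `GROUP BY time(...)` query under the hood, which is why a single InfluxDB instance can serve dashboards that would overwhelm the upstream API.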
Jorge de la Cruz: 00:42:45.223 So that’s another thing I learned - that by reading directly from the API, you don’t have any limits at the end on what you can see, and how you see it. You want a map? Okay. Just put it on a map. Do you want it in a pie chart? Put it in a pie chart. You want a table? Yes, make it a table. So you can be extremely flexible in a way that otherwise you might not be able to be.
Caitlin Croft: 00:43:08.442 Great. So someone is also asking: Grafana looks great, but again, there’s a learning curve. Any advice for self-training on Grafana?
Jorge de la Cruz: 00:43:19.197 I will say the same as I just said at the beginning, Caitlin: just leverage the community. I think in the Influx Slack there is a specific channel for Grafana, where I tend to reply. Try your best. Consume YouTube videos. Consume the information that there is on the Grafana website. Watch this webinar recording, or some other similar ones. But if you have questions, leverage the Grafana community, or if you’re using InfluxDB plus Grafana, we will be there waiting for you on the Grafana channel inside the Influx Slack, which is really active, to be honest.
Caitlin Croft: 00:44:02.299 And also, Debbie, since I know you also asked about the Flux Training - the Flux Training actually uses InfluxDB Cloud, which is the newer iteration of InfluxDB, where there’s a ton of visualization tools built directly into the platform. So you’ll actually get pretty familiar with how to create graphs and stuff using Flux. So it might be a really great class for you to attend if you’re interested in using InfluxDB, as it shows some of the visualization tools built directly into the platform. I’m not saying that Grafana isn’t great. I know that people like Jorge - lots of people - Grafana’s great. There are a lot of benefits to using Grafana, but there are also a lot of visualization tools directly in InfluxDB. Is it possible to manage secrets such as passwords in such a Bash script, or to use tokens to solve the problem of exposing such secrets?
Jorge de la Cruz: 00:45:09.659 At the moment, I have not found any better way of doing this. But I have found that vendors exposing APIs, more and more, are now allowing you to create users whose only role is read-only access - so just the GET calls for the APIs. And if you remember the HTTP port or the API port, that’s another way you can control who is accessing it: you just limit that to a few IPs that you trust. I’ll try to find out more about how I can obfuscate or better protect that username/password within the Bash shell script, because, yeah, it’s not ideal that in a Bash shell script you see a username and password. I agree. So let me double-check, and if I find something, let me get back to you - although I’ll probably blog-post about that. So thanks for the question.
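One conventional way to keep the username and password out of the script body - not something shown in the talk, just a common pattern - is to source them from a separate owner-only file. The file path and variable names below are illustrative:

```shell
#!/usr/bin/env bash
# Sketch: load API credentials from a separate chmod-600 file instead of
# hard-coding them in the script. Path and variable names are illustrative.

CRED_FILE="${HOME}/.veeam_api.env"   # e.g. contains: API_USER=... and API_PASS=...

load_credentials() {
  # Refuse to run if the file is missing or readable by others.
  [ -f "$CRED_FILE" ] || { echo "missing $CRED_FILE" >&2; return 1; }
  perms=$(stat -c '%a' "$CRED_FILE" 2>/dev/null || stat -f '%Lp' "$CRED_FILE")
  [ "$perms" = "600" ] || { echo "$CRED_FILE must be chmod 600" >&2; return 1; }
  # shellcheck disable=SC1090
  . "$CRED_FILE"
}

# Usage (network call commented out for the sketch):
# load_credentials && curl -s -u "$API_USER:$API_PASS" "$API_URL"
```

This keeps the secret out of version control and out of the script you share, though a read-only API user and IP restrictions, as Jorge suggests, remain good complementary controls.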
Caitlin Croft: 00:46:15.396 Yeah, Jorge is quite the prolific blog writer, and he shares a lot of his Grafana dashboards. So that’s why we’re very happy to have him as an InfluxAce. Someone has asked if the slides will be made accessible. Yes, they will. So Jorge will be sharing them with me, and they will be made available on the recording page. So how it works is, by tomorrow morning, you can actually go to the page where you registered for the webinar, and the recording as well as the slides will be made available. So the short answer is yes. The long answer is that the recording and slides will be made available together. All right. We’ll keep the lines open here just another couple of minutes in case anyone else has any last-minute questions for Jorge. I know he’s always so helpful in sharing his knowledge. So if there’s ever anything that after this session you forgot you wanted to ask him, you can always email me, and I’m happy to connect you with Jorge, or he is in the community as well. And he’s pretty active, so it’s awesome. All right. Well, thank you everyone for joining today’s webinar. Thanks -
Jorge de la Cruz: 00:47:34.613 Thanks so much.
Caitlin Croft: 00:47:34.613 - for sticking with it and asking lots of questions. Jorge, do you have anything last that you want to share with the community or say?
Jorge de la Cruz: 00:47:44.858 Yes, thank you once again. Yeah, we appreciate all the questions. And if you want more one-to-one help or something, go to the Influx Slack channel, and if you have any questions, just add me directly on Slack, and I’ll be very happy to help you there with anything, really. Yeah. Have a good day.
Caitlin Croft: 00:48:04.643 Awesome. Thank you, everyone, for joining. And I hope you have a good day.
[/et_pb_toggle]
Jorge de la Cruz
Systems Engineer, Veeam Software
Jorge de la Cruz is a Systems Engineer, husband, and father living in the UK. He has been an active blogger since 2011 and his main expertise, when not working as a Systems Engineer, is to consume and dissect RESTful APIs and send data to InfluxDB to later show it within Grafana. When not working on the computer, Jorge enjoys preparing home-made food like bread, pasta, and roast "everything" with special expertise in Spanish dishes. Jorge de la Cruz is an InfluxAce.