How to Setup and Use the Google Core IoT Telegraf Plugin
Session date: Nov 13, 2018 08:00am (Pacific Time)
With the new Google Core IoT Telegraf Plugin, Google Cloud IoT customers can be up and running in a few hours, gathering and analyzing sensor data that will help them improve operational efficiency, anticipate problems, and build higher-order capabilities such as AI to achieve superior economies of scale and ROI. In this webinar, David Simmons, IoT Developer Evangelist at InfluxData, will walk through how to set up and use this new plugin.
Watch the Webinar
Watch the webinar “How to Setup and Use the Google Core IoT Telegraf Plugin” by filling out the form and clicking on the download button on the right. This will open the recording.
[et_pb_toggle _builder_version=”3.17.6” title=”Transcript” title_font_size=”26” border_width_all=”0px” border_width_bottom=”1px” module_class=”transcript-toggle” closed_toggle_background_color=”rgba(255,255,255,0)”]
Here is an unedited transcript of the webinar “How to Setup and Use the Google Core IoT Telegraf Plugin”. This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
Speakers:
- Chris Churilo: Director Product Marketing, InfluxData
- David Simmons: IoT Developer Evangelist, InfluxData
David Simmons 00:00:01.010 Great. Thanks, Chris. And welcome, everybody. Thanks for the welcome. Jim, actually, I have been working all morning on debugging your problem with the Particle Plugin, so I’ll get back to you on that pretty soon. But let’s talk about the Google Core IoT Plugin that is currently being evaluated for inclusion in the next release of Telegraf, which will allow you to connect your Google Core IoT devices directly to InfluxDB. And so let’s get right into this. Apparently, I can’t advance slides. Here we go. So we’re going to talk a little bit about all of these things, and I’m going to go into a lot more detail on all of them. In fact, I may cover more than what’s on this agenda. But we’ll go through sort of an overview of what this does, and then how to set up the various parts of it, and then finally sending data to it. And I’ll include a link to a sample application that I wrote that you can run on pretty much any platform and that will send some sample data to Google Core IoT, which you can then use to test whether it’s getting all the way through to your end device.
David Simmons 00:01:43.536 So here’s sort of how it all fits together. You’ll have your IoT device, and you will run your code on that IoT device. That device publishes its data to Google Cloud Platform’s IoT service, Google Core IoT, which then forwards it to Google Pub/Sub, which will then forward it to a Telegraf instance running this plugin. And Telegraf will then store that data into InfluxDB. So there’s sort of a chain here to get there, and we’ll go through all parts of it. And I’m sorry that my dog is joining in on the conversation here. She likes to talk when I’m on the phone.
David Simmons 00:02:32.002 So a couple of requirements that you absolutely have to have. You have to have an Influx Stack that is accessible from the internet. You can’t just run it on a private machine behind a firewall, because Google Core won’t be able to find it. And it must have an SSL certificate. Google Core only communicates via SSL, so anything that doesn’t have a certificate, it won’t be able to talk to. And you can use self-signed certificates. All of this can be run in a GCP instance, if you want to set that up, but I am not going to cover in any way how to set up InfluxDB or any of that in a GCP instance. I believe that there are InfluxDB instances in the GCP marketplace that you can just run, but I have not actually played with that. You, of course, have to have a GCP account. And one of the things that I did not put on this slide is that, at least on whatever machine you are developing this on, you must have Go installed. Those are sort of the base requirements.
David Simmons 00:03:42.996 So setting up InfluxDB and the TICK Stack is really fairly straightforward. These are instructions that I took directly from our documentation. You can also just go and download and run the pre-built binaries. But if you use these methods, you’ll actually set yourself up to have updates delivered through the standard Linux package upgrade process. If you’re running on a Mac, you can use Brew to install all of the Influx packages. If you’re running on Windows, I apologize in advance, but Windows is, for Influx, only a reference platform, and we don’t recommend running any of the Influx Stack in production on Windows. If you really are running on Windows and you’d like, you can always run all of these in a sandbox or via Docker, and we have all of those images on our website as well. You can download those.
David Simmons 00:04:51.960 So these are the two ways to get the Influx repository loaded onto your system, for both Debian/Ubuntu and for RedHat and CentOS. These commands are taken directly from our documentation. What this will get you is the Influx repositories installed on your system so that you can then use the standard package management tools to install, which brings us to actually installing them, and it really is this easy: just an apt-get or yum install of the packages, and then start InfluxDB as a service. I didn’t include all the commands. You’ll need to do a systemctl start for InfluxDB, Chronograf, Telegraf, and Kapacitor, and then you’ll have the whole stack up and running, right? And, again, you’ll need to run this on a system that is accessible from the open internet so that Google Core can talk to it.
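For reference, the repository setup and install steps being described look roughly like this on Debian/Ubuntu (the RedHat/CentOS path uses a yum repo file instead); the repository URL and package names follow the InfluxData docs of that era, so check the current docs before copying:

```bash
# Add the InfluxData package repository (Debian/Ubuntu)
wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/lsb-release
echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | \
  sudo tee /etc/apt/sources.list.d/influxdb.list

# Install the TICK Stack packages
sudo apt-get update
sudo apt-get install influxdb telegraf chronograf kapacitor

# Start everything
sudo systemctl start influxdb telegraf chronograf kapacitor
```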
David Simmons 00:06:09.242 So now comes the really fun part, and that’s installing the Google Cloud Platform tools. And you’ll notice that, up here, you’ll need to install the beta tools, and that is because that’s what you need for some of the Google Core IoT stuff. Once you install those, you need to log in, and that’ll take the credentials from the account that you’ve created. Again, you do need to have the Google Cloud Platform account already created. And you’ll create a project, and you can call that project pretty much whatever you want, but you’ll need to remember it because it’s got to be exactly the same everywhere you use it, everywhere you reference it, okay? And once you create that project, you’ll set it in your configuration with this config set project command here. And, again, use your project name, right?
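A minimal sketch of the gcloud setup just described, with my-iot-project standing in for whatever project name you choose:

```bash
# Install the beta components and authenticate with your GCP account
gcloud components install beta
gcloud auth login

# Create a project and make it the active one (use the same name everywhere)
gcloud projects create my-iot-project
gcloud config set project my-iot-project
```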
David Simmons 00:07:17.577 And now here’s where it’s important that you installed the beta tools: you’re going to use the gcloud beta command to create a topic. And, again, you can use anything you want for the IoT topic here, but you’ll have to continue to reference exactly that topic every time you use it. And you’ll also create an IoT subscription here. This will all make a little bit more sense as we go forward. I will say that, having done this and having spent a huge amount of time doing these steps over and over and over again, the Google documentation’s not really great on how to do this, and so some of this was found via trial and error. Some of these I got from Google engineers themselves, and we sort of distilled it down to these instructions. It is possible to do these things through the GCP web-based console. And I’ve got some screenshots of doing exactly that coming up, and I hope those actually show up correctly.
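The topic and subscription steps look roughly like this; my-iot-topic and my-iot-subscription are placeholder names that have to be reused consistently in the later steps:

```bash
# Create the Pub/Sub topic that Core IoT will publish device data to
gcloud beta pubsub topics create my-iot-topic

# Create the subscription that delivers that data on to Telegraf
# (for a push-style setup you would also point it at your Telegraf
#  endpoint with --push-endpoint=https://your-server:port/write)
gcloud beta pubsub subscriptions create my-iot-subscription --topic=my-iot-topic
```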
David Simmons 00:08:35.482 So, again, all communication with GCP is secured using SSL, via OpenSSL. You use it for communicating to and from the device. You use it for communicating from GCP Pub/Sub to your Telegraf instance. Everything has to be secured. That’s the only way that Google Cloud Platform talks to anybody or lets anybody talk to it. Now, for the communication from your device to GCP, you’re actually talking to an MQTT broker. Unfortunately, Google lets you talk to the MQTT broker but doesn’t let you [inaudible] from it. So that’s why we have to go through their Pub/Sub.
David Simmons 00:09:27.269 So since you have to use SSL, here’s what you need to do to configure your security certificates. Because you need to have security certificates installed on the local device, if you’re going to run this as your IoT device, as well as on your server. So here I’m making a directory for my PEM files, and then I’m calling OpenSSL to create an X.509 certificate and all the various parts of that. And you’ll notice I’m storing a private key and a public key PEM file, right? And then I’m going to call the Google Cloud IoT registry command to create an IoT registry in the us-central1 region, again, using my IoT topic. And then, finally, I’ll create a device, and I’ll use the public key that I just created for that device. So here, where it says path to the public key file, I’ll be using this public_key.pem file. And that will allow my device to use that key file to talk to Google.
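A hedged sketch of the certificate and registry/device creation: the OpenSSL invocation follows the standard Cloud IoT Core example, and the gcloud flags are from the beta CLI of that period, so the exact flag names may have shifted since.

```bash
# Make a directory for the PEM files, then generate a self-signed X.509
# certificate and its private key
mkdir -p ~/iot-keys && cd ~/iot-keys
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout private_key.pem -out public_key.pem -subj "/CN=unused"

# Create a device registry in us-central1, wired to the Pub/Sub topic
gcloud beta iot registries create my-registry \
  --region=us-central1 \
  --event-notification-config=topic=my-iot-topic

# Register the device, handing it the public key/certificate just generated
gcloud beta iot devices create my-device \
  --region=us-central1 \
  --registry=my-registry \
  --public-key path=public_key.pem,type=rsa-x509-pem
```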
David Simmons 00:10:55.857 Now, here’s actually doing the same thing in rapid-fire on the Google website. So you’ll see I’m creating a registry and then a device. And what’s not shown here is that you can actually cut and paste your certificate contents into the UI and enable your device that way, right? I’m sorry this runs so rapidly. I’m not quite sure why animated GIFs run at that breakneck speed in PowerPoint.
David Simmons 00:11:39.479 So basically now what you’ve got is a device, a registry, and a subscription built in the Google Core IoT Platform, and you’re sort of ready to start sending data. But here’s the rub. You have to build your own custom version of Telegraf right now. This will not be true forever. But right now, the Telegraf plugin for Google Core IoT is in a pull request against the main branch, and is living in a development branch. So you’ll have to sort of build Telegraf from source, right? It’s really not that hard. It’s very easy, very straightforward to build your own version of Telegraf. The things you’ll need are the Go runtime installed and Make installed, because those are the two tools that you’ll use to build Telegraf.
David Simmons 00:12:51.891 All right. So building Telegraf is really simple. First, using Git, you’ll clone the repository, and the repository, you’ll notice, is mine right now. You can go and pull the branch from the pull request against the main branch. We’re working on integrating that into a release to be shipped fairly soon. But if you pull this Google IoT branch, change directories into the new directory that you made, and just type make, it will build Telegraf for you - a complete Telegraf executable. Now, if you’re building this on a Windows machine or on a Mac and you’re going to deploy it on a Linux machine, that’s really simple too. You just export the target OS, which is Linux, and in this instance I was building for AMD64. I can also build for ARM64 or ARM32. You can cross-compile for different architectures: set the target, type make, and you have a version of Telegraf built for that operating system and that architecture. It’s really that easy. I do this several times a day as I’m building plugins, compiling them, and testing things like this, because I build a lot of plugins, as Jim can attest to.
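The build steps amount to roughly the following; the fork URL and branch name below are placeholders, since the real ones live in the pull request mentioned above:

```bash
# Clone the fork carrying the Google Core IoT plugin and check out its branch
git clone https://github.com/<fork-owner>/telegraf.git
cd telegraf
git checkout <google-iot-branch>

# Build for the local machine
make

# Or cross-compile, e.g. for a 64-bit Linux server
export GOOS=linux
export GOARCH=amd64
make
```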
David Simmons 00:14:39.541 So once you’ve built it, you need to configure it. And here’s where you can use Telegraf’s built-in config command. It will generate a default config file, telegraf.conf. And then what you’ll need to do is go through and uncomment the parts that you need, right? You’ll need to uncomment the parts for the Google Core IoT Plugin. You’ll need to give it a service address. And you can give it pretty much any port number you choose, as long as that port is also open on the machine. So if you’re running this on GCP or on AWS, you’ll need to make sure to open up that port, right? Some of the defaults you can just continue to use, like the default write path, which is /write. The default precision here is nanoseconds. I don’t really recommend using nanoseconds just because you don’t really need that level of precision, but you can use millisecond, microsecond, nanosecond, whatever you want for the precision.
David Simmons 00:15:58.379 This plugin will accept two different data formats. One is Influx line protocol, and the other is JSON. So you can format your incoming data as either JSON or line protocol. You just have to define in the configuration file which one you’re planning to accept. You can’t tell it line protocol and then send it JSON, or vice versa, or it will just - well, it’ll be very unhappy, right? And since, again, this is all encrypted, you need to provide your web server’s certificate and key to Telegraf so that it can properly encrypt and decrypt requests and responses, right? Those are required. If you don’t have those, then you won’t be able to get data from Pub/Sub.
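Putting those configuration points together, here is a sketch of what the relevant part of the config might look like. The plugin section name and exact option names are guesses (the sample config generated by the built Telegraf binary is the authoritative reference), but the values mirror what is described above:

```bash
# Generate a default config file to edit
telegraf config > telegraf.conf

# Then uncomment and fill in the plugin section so it looks something like this
cat >> telegraf.conf <<'EOF'
[[inputs.google_cloud_iot_core]]
  ## Address and port for Pub/Sub to push to (open this port in your firewall)
  service_address = ":9999"
  ## Write path (the default)
  path = "/write"
  ## Timestamp precision; nanoseconds is the default, milliseconds is plenty
  precision = "1ms"
  ## Incoming payload format: "influx" (line protocol) or "json"
  data_format = "influx"
  ## Certificate and key so Telegraf can serve HTTPS to Pub/Sub
  tls_cert = "/etc/telegraf/cert.pem"
  tls_key = "/etc/telegraf/key.pem"
EOF
```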
David Simmons 00:17:05.750 I know there’s a whole lot of code in these slides, and we’ll make all these slides available. Not only that, all of this stuff is available in my GitHub, and you’ll be able to download all the source code and see everything that’s going on by looking there. So you’ll save your configuration file to /etc/telegraf/telegraf.conf, copy your newly built Telegraf into /usr/bin, and then run it. And that command will run that version of Telegraf with that config file. And actually running it this way from the command line will allow you to see some of the output from Telegraf. If you don’t need to do that, you can simply use systemctl to start it and let it run in the background. So now we’ve got Google Core, the GCP platform, set up, and we’ve got Telegraf set up. And what we need now is a device to send data to Google Core IoT so we can see what we get.
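In shell terms, the deploy-and-run step is roughly:

```bash
# Put the config and the newly built binary where the service expects them
sudo cp telegraf.conf /etc/telegraf/telegraf.conf
sudo cp telegraf /usr/bin/telegraf

# Run in the foreground to watch the output...
telegraf --config /etc/telegraf/telegraf.conf

# ...or hand it to systemd to run in the background
sudo systemctl restart telegraf
```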
David Simmons 00:18:17.667 So, again, you’ve got to have Go installed. I wrote this sort of IoT simulator in Go, so you’ve got to have Go installed. And you can install this client app from my GitHub, which is GoogleIoTClient, and just build it and run it. Here are all the commands you need to clone it and to build it. And then, running it, you’ll notice there are a whole bunch of command line flags, right? So I gave it my certificate file. The device name - that’s the device name that I created way back in GCP. The private key file, which we created with OpenSSL. The project ID, which is what I created at the very beginning. The region, which was us-central1. The registry name that I created back there. And the format of the data. Now, in the previous examples I had the format being line protocol, and here I have it being JSON. You just need to make sure that those two match. And here I give it the -virtual flag.
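A hedged example of cloning, building, and running the simulator. The GitHub owner and the flag names are illustrative (the repo's README has the real ones), but the values correspond to the names created in the earlier steps:

```bash
# Clone and build the Go client/simulator
git clone https://github.com/<github-user>/GoogleIoTClient.git
cd GoogleIoTClient
go build

# Run it against the registry and device created earlier (flag names illustrative)
./GoogleIoTClient \
  -cert=public_key.pem \
  -key=private_key.pem \
  -device=my-device \
  -project=my-iot-project \
  -region=us-central1 \
  -registry=my-registry \
  -format=json \
  -virtual
```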
David Simmons 00:19:34.524 This code will actually run on real hardware. I wrote this code to run on a Raspberry Pi using a couple of sensor boards. So you can use this code on a Raspberry Pi and run it with actual sensors plugged in. I wrote it for the Bosch BME280 board that I got from Adafruit and a CO2 sensor, which you probably won’t have and probably don’t want to buy since it’s fairly expensive. If you don’t use that -virtual flag and you run this on a Raspberry Pi with those sensors installed, it will actually send real live data from those sensors to Google Cloud. Using -virtual, it basically randomizes some values and sends those randomized values to Google Cloud, so that you don’t actually have to have a real IoT device with real sensors just to test this. So it’s useful either way. And, again, it’s all written in Go. Even the device drivers for the various sensors on the Raspberry Pi are written in Go.
David Simmons 00:20:51.350 And here’s what you get when you run that. And, again, I’m running it virtually, and you can see that it’s taking some readings. It takes four humidity and temperature readings and one CO2 reading, and then it sends all of those to my Google Cloud instance. And it will just keep doing that on and on and on. I can basically send as many values as I want in one call to the GCP MQTT server; I choose to send five at a time just because it’s easy that way. And if everything has gone according to plan, you’ll see that I have a Google IoT measurement, and I have data flowing in. So all of this data is going from my device to Google Core IoT. Google IoT is then sending it to a Pub/Sub agent, which is sending it back out to my Telegraf agent and putting it into InfluxDB for me. I know I sort of ran through that at breakneck speed, but I wanted to have lots of time for questions, and we can go back to slides that you have specific questions on and talk about the various parts of those as you wish. So with that, excuse me, let’s open it up to questions.
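One quick way to confirm that data is landing, assuming Telegraf is writing to the default telegraf database; the measurement name shown is a placeholder for whatever the plugin actually writes:

```bash
# List measurements and peek at the most recent points
influx -database telegraf -execute 'SHOW MEASUREMENTS'
influx -database telegraf -execute 'SELECT * FROM "google_iot" ORDER BY time DESC LIMIT 5'
```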
Chris Churilo 00:22:33.409 Well, let’s do it. I don’t see any questions right now. Let me just look for you. Nope. So everyone, as David mentioned, he did kind of go through that a little bit quickly. If there are parts that you want him to review, or if you have any questions, we could definitely go through any of these portions of the presentation today and look one more time. So we’ll leave the lines open. And then if you-unless you guys want to be muted, let me know. Just raise your hands. We can also have a conversation that way because we still have some time that’s available for everybody.
David Simmons 00:23:11.701 One of the easier parts of this, I will say, was actually writing the plugin, which was as easy as, if not easier than, actually getting the GCP stuff running. Writing Telegraf plugins is a fairly straightforward thing to do. I see that Naveen has a hand up.
Chris Churilo 00:23:32.178 Yep. Naveen, I can unmute you so you can - actually, you can talk, so if you want to unmute yourself and ask your question directly, you can do that. I don’t want to unmute you when maybe you’re not in a spot to talk [laughter]. But the permissions have been turned on. So David, now that you’ve played around with this, what are some of the things that you can see people doing differently now that this is available?
David Simmons 00:24:12.164 Well, one of the things that you’ll be able to do-I mean, it’s really nice, at least for me, to be able to collect all my data in one place, and that place for me is, of course, InfluxDB, right? And so I have an InfluxDB Cloud instance of InfluxDB running, and I basically forward data from everywhere that I come near to that cloud instance, right? So having the ability to send all of this data to a single InfluxDB instance is really useful for me. And being able to do it from a Particle device or a Google IoT device or whatever kind of device is really helpful so that I don’t have to limit myself to, say, I only can use this platform, right? Part of the beauty of this is that you sort of avoid platform lock-in where, well, now that I’m using Google Core IoT, I can only do that for all of my devices. No, that’s not entirely true. I can send my data from any device to InfluxDB, whether I’m using Google Core or something else. The answer to your question, Naveen, is yes, the video for this will be available afterwards. Chris will do a little bit of cleanup and post it. And I believe you’ll get an email later today with the location of that video for you to watch sort of at your leisure.
David Simmons 00:26:02.205 Do we have this for Azure as well? Not right now, actually. There are several other integrations that are on my list. AWS Greengrass and Azure are two of them. So I will hopefully be getting to those sooner rather than later, but it just depends on what else comes at me, and it comes at me pretty fast. I do not have an ETA for any of that. It all depends on prioritization and time, and there’s just never enough time for the priorities. It looks like Mauricio has his hand up, so.
Chris Churilo 00:26:54.794 Awesome, Mauricio. You can talk if you want. You want to ask your question, just unmute yourself. Or type in your question. Either works.
Mauricio 00:27:05.231 Yeah, sure. I was wondering if there were any other options for sending data in different formats other than line protocol or JSON.
David Simmons 00:27:17.121 Well, so here’s the problem: there are an infinite number of protocols and formats for data to be sent, right? And it’s really not possible to decide in advance what all those formats are going to be. And so we - and by we, I mean me - had to make some choices as to what sorts of data formats we could accept. And the two easiest were line protocol and JSON, right? Beyond that, if you want to format your data in some other way, you’re probably going to have to write at least your own parser for it, if not your own plugin. But if you want to write your own plugin, you can always base it off of the Google Core Plugin that I wrote. Now, I will caution you not to start doing that yet. We have completely rewritten some parts of the underlying Telegraf service on which the Google plugin relies. And so I am right now in the process of refactoring a bunch of stuff and rewriting some other stuff so that it uses the new underlying parts of Telegraf. It will be much faster and a lot better. So I would not go forking my branch of it just yet. Wait until I’ve got that finished.
Mauricio 00:28:52.716 Thank you, David.
Chris Churilo 00:29:00.530 Any other questions? You can raise your hand if you want to chat directly with David as well. Well, it’s easy to do when we have a smaller group of people.
David Simmons 00:29:13.028 Right. Sometimes when we have hundreds of people on the line, it’s really not practical, but this is a fairly small audience right now, so. And I love having questions, so feel free to ask away.
Chris Churilo 00:29:22.548 Yeah. And actually since we’re here, if you have other questions about InfluxData, the TICK Stack, we can answer those as well. Might as well use the time wisely, people. Well, maybe we can ask some of you guys some questions. So I’d love to hear from everyone that’s on the call. You can just write into the chat or the Q&A or raise your hand. But I think David and I would love to hear what projects you’re considering potentially on Google Cloud Platform or Azure and how you’re hoping to use InfluxDB to store all your metrics. And with that, we can always maybe give you some advice or pass along some knowledge that we’ve accumulated. All right, well, I’ll keep the lines open for just another minute or so. [crosstalk]-
David Simmons 00:30:31.599 I think Jim’s not going to ask any questions because he wants me to get back to debugging his Particle integration problem. So he doesn’t want to ask any questions that will keep me from getting back to that.
Chris Churilo 00:30:42.138 So Mauricio asked-he was mostly interested in how to avoid platform lock-in. I mean, we do have output plugins as well, so you can definitely send data out. We get it. Nobody wants to be stuck with something forever.
David Simmons 00:30:58.705 This is actually one of the things I like about the Influx Stack: I can use Telegraf to collect my data from a whole bunch of different sources, from a whole bunch of different platforms, so I’m not necessarily locked into, “Well, I’ve started using GCP, so now that’s all I can use, and everything has to go through there,” or, “I’ve started using AWS, so now everything has to go through AWS.” I can send my data from a whole lot of different sources to InfluxDB. And one of the nice things about using InfluxDB as well is that I can run InfluxDB on a whole bunch of different platforms, so I can move my stuff around. If I decide that GCP is too expensive and I can’t afford to keep my InfluxDB instance running there, I can very easily migrate my data and my database and the instance over to AWS or Azure or something else to host my data there. So I’m not really dependent on a platform to host my data, to host my database, to run any of this. I can really move stuff around as I see fit, mostly just by changing a few configuration settings, so.
David Simmons 00:32:29.314 Can this solution be deployed within a GKE cluster? I believe that InfluxDB can be deployed in a GKE cluster. I’m not certain of the details of that. And once you’ve got that running, I know that you can run Telegraf and the rest of it in GCP. So you can basically point it all at your GKE cluster and run it from there. Mauricio also asked, what kind of write rates will this architecture support? There’s a big “it depends” on that, right? At least for most cloud platforms, it depends on how much you’re willing to pay, right? Because they typically charge for the size of the instance, for the throughput, things like that. We have users on InfluxDB Cloud that are writing millions of data points per second into InfluxDB. So it is capable of taking very large amounts of data on an ongoing basis. It just depends on the size of your instance and how much you want to pay for it. I’m not sure what the throughput of the GCP Pub/Sub client is. I have not tested that. So that could possibly be a bit of a bottleneck as well.
[silence]
Chris Churilo 00:34:51.505 Okay. Well, it looks like the questions have slowed down. So if you do have questions later on, please feel free to put them in the community site, or you can always email me, and I’ll post them and make sure that David sees them as well. You can also follow him on Twitter. He’s very active there. And you can probably see his occasional grumblings about some of these docs [laughter], or some of the happiness that he has when things work really right. But he’s always a great source of information, so I suggest that you follow him there. I will do a quick edit of the video and share this with everybody. And I hope you guys have gotten a lot out of this webinar. And we look forward to hearing about the great projects that you’re going to be building. Thanks, everyone, and have a lot of fun playing with the TICK Stack. Bye-bye.
David Simmons 00:35:41.991 Great. Thanks.
[/et_pb_toggle]