Making Kubernetes a Better Master of Its Components through Monitoring
Kubernetes is a powerful system for managing containerized applications in a clustered environment and offers a better way of managing related, distributed components across varied infrastructure. In this webinar, Jack Zampolin demonstrates how to use InfluxData to help Kubernetes orchestrate the scaling out of applications by monitoring all components of the underlying infrastructure.
Watch the Webinar
Watch the webinar “How InfluxData Makes Kubernetes an Even Better Master of Its Components through Monitoring” by filling out the form and clicking on the download button on the right. This will open the recording.
[et_pb_toggle _builder_version="3.17.6" title="Transcript" title_font_size="26" border_width_all="0px" border_width_bottom="1px" module_class="transcript-toggle" closed_toggle_background_color="rgba(255,255,255,0)"]
Here is an unedited transcript of the webinar “How InfluxData Makes Kubernetes an Even Better Master of Its Components through Monitoring.” This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcription errors.
Speakers: - Chris Churilo: Director Product Marketing, InfluxData - Jack Zampolin: Developer Evangelist, InfluxData
Jack Zampolin 00:04.522 Good morning, everyone. Today we’ll be talking about Kubernetes monitoring with the TICK Stack. So, how to monitor your Kubernetes instance with the TICK Stack as well as, sort of, what you would want to monitor. And then there’ll be a live coding session where I’ll go through how to set up the TICK Stack on your Kubernetes instance, and then answer some questions and take some feedback. So, let’s go ahead and dive right in. So, for those looking to follow along, there is a webinar repo. It is github.com/jackzampolin/tick-charts. So if you would check that out, I’ll give you just a second to go do that. Okay, now that you’ve got the webinar repo downloaded, let’s go ahead and get started. So, when we’re talking about monitoring on Kubernetes, most of you are probably familiar with Heapster. So, what is Heapster? Heapster was developed as Kubernetes was coming up to be an easy place for all kinds of different applications to go get CPU, memory, and other common metrics, network ingress and egress, fun stuff like that. So, most of you might know Heapster is backed by an older version of InfluxDB. They’re currently on-I believe it’s version 1.0. And there’s a couple of issues with Heapster. One is: it’s wrapped in its own API that’s not very well documented. Another is: it’s got very short retention times for data, and one of the reasons for that is the schema Heapster uses was poorly designed to fit with Influx. We’ve also redesigned the storage engine since they started the project. So, a lot has changed, and Heapster is no longer, sort of, state-of-the-art in Kubernetes monitoring. And in fact, if any of you have worked with the system, you probably know it’s a little bit difficult.
Jack Zampolin 02:16.549 So, what does the landscape of Kubernetes monitoring look like now? Well, there’s us, InfluxData, so you might be familiar with that. You might also be familiar with Prometheus. That’s developed out of Borgmon, which is Google’s internal datacenter monitoring tool. We’re different than Prometheus in that we decided to focus on the storage engine side of things, so extending that storage timeline, allowing you to have historical metrics, and also store other event-type metrics in Influx. Now, Prometheus is solely pull-based monitoring, and they haven’t done quite as much work on the storage engine, so they have a lot of the same limitations that other time series databases do. There’s also Elasticsearch. Now, many of you are probably running Elasticsearch clusters with Filebeat on all of your servers to pull logs for your applications and Kubernetes. And then, obviously, solutions like Sysdig and Stackdriver with Google Cloud Monitoring or CloudWatch with AWS. So there’s quite a bit out there you could do to monitor Kubernetes. But I think we have the best offering on the market, and I’m going to go through it right after this. So why would you use TICK monitoring on Kubernetes when there are all of these other options out there? One thing we really offer is being an open source company with an active community. We have many different integrations with pretty much any part of Kubernetes you could want. A native Prometheus integration to pull Prometheus metrics endpoints. So for pulling the Kubernetes master, pulling Kubernetes events, if you have existing applications with Prometheus endpoints built in, anything like that, we natively integrate. And then, also the ability to store time series for long periods of time and perform that kind of historical analysis, especially as you reach a larger scale-it becomes extremely important. And that’s great for forecasting and even some BI use cases, too, especially for software businesses.
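[Editor's note: for readers curious what that native Prometheus integration looks like in practice, here is a rough sketch of a Telegraf config that scrapes a Prometheus /metrics endpoint and writes to InfluxDB. The endpoint URL, InfluxDB address, and database name are placeholders, not values from the demo.]

```sh
# Illustrative only: scrape a Prometheus-format endpoint with Telegraf
# and forward the samples to InfluxDB (all names are placeholders).
cat >> telegraf.conf <<'EOF'
[[inputs.prometheus]]
  ## Any Prometheus /metrics endpoints you want scraped
  urls = ["http://my-app.default.svc:9090/metrics"]

[[outputs.influxdb]]
  urls = ["http://data-influxdb.tick:8086"]
  database = "prometheus"
EOF
```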
Jack Zampolin 04:41.420 There’s a lot of directly out of-let’s say your Nginx instance that you’re polling that are going to be sort of business intelligence metrics. So I think we have quite a bit to offer. So why tick-charts? As many of you know, deploying Kubernetes, if you’re using just naked manifest files, quickly gets pretty unwieldy. Even if you’ve got a separate folder for each one of your applications with a group of manifest files in it, you’re going to get to a point quickly where you’re cloning those folders and what you really want is a templating system. And the technology we’ll be using in this webinar is one called Helm. It’s sort of the package manager for Kubernetes. But what it does under the hood is allows you to easily template those deployment service and PDC files to quickly deploy things to your cluster. And it also helps with managing those deployments as well. If you pull down the webinar repo, you’ll see that there’s instructions for installing Helm there. You can go ahead and do that now if you’re going to follow along.
Jack Zampolin 05:58.642 So before we dive into the live coding session, I want to actually give you an architectural overview, to give you a little heads up on what we’ll be doing and sort of what we’ll be deploying. So in this architectural diagram, I’ve got the machines themselves, visualized as three squares at the bottom. And then Kubernetes is kind of that control plane; it’s tying them all together. The first piece of infrastructure we’ll be deploying is the database itself. That has a service and an optional persistent disk. After that, we’ll deploy Kapacitor. And when Kapacitor deploys, it actually subscribes to data coming into InfluxDB automatically. And Kapacitor needs a service so that InfluxDB can send data over to it, and then also a persistent disk, as Kapacitor does store its own data as well. We’re going to deploy Telegraf on each one of our host nodes. And those Telegraf instances are going to report to our InfluxDB that’s running in-cluster. That data will flow over into Kapacitor to be made available for alerts. And then we’ll also have Chronograf. So Chronograf is the visualization tier for the TICK Stack, and I’ll be demoing that as well. And then Chronograf will connect to InfluxDB and Kapacitor. Chronograf acts as a user interface for the Kapacitor alerts. And then finally, you’re going to want an instance of Telegraf to poll various pieces of your infrastructure. Let’s say you’re running in AWS and you want to get some CloudWatch metrics into Influx to visualize in Chronograf, or you’re running an Nginx instance you’d like to poll, or maybe you’ve got some database infrastructure. And this diagram shows the metrics flowing in-so, in addition to the Telegraf metrics, the time series data that’s flowing into the service. And then you can also write application data to that Telegraf instance there.
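[Editor's note: an optional sanity check, not part of the charts themselves. Once Kapacitor is up, InfluxDB should list a subscription pointing at it. The pod name below is a placeholder; the "tick" namespace is the one used later in the demo.]

```sh
# Verify that Kapacitor has registered its subscription with InfluxDB.
kubectl get pods --namespace tick
kubectl exec -it <influxdb-pod> --namespace tick -- \
  influx -execute 'SHOW SUBSCRIPTIONS'
```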
Jack Zampolin 08:25.445 Any questions on this before I move ahead to the live coding? Is this only for on-prem? Yes, this is an on-prem example of what you would need to do to run it on-prem. We do also offer a cloud service. And the next time I give this webinar, it will be how to connect your cluster to the cloud. If you’d like more information on that, I’d be happy to discuss that with you offline. If you go to community.influxdata.com and ask the question there, I’ll answer it. What does Kapacitor need disk for, and how much does it need? So Kapacitor takes in the stream of incoming data points that Influx hands off, and it’s going to need disk for-let’s say you have some alerts that are taking hour-long windows. Those alerts are going to need a little bit of disk space to store data. Kapacitor also stores any running tasks it has and any metadata about the server in a binary file, kapacitor.db. It needs disk for that as well, and then there’s also recordings. So if you’re recording blocks of your data to play them back in a task to test them, those will be stored on disk as well. Does that answer your question, Scott? Awesome. Okay. So let’s go ahead and pop into the code along.
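[Editor's note: an illustrative look at the on-disk state mentioned here, tasks and recordings, via the Kapacitor CLI. The pod name and the task name are made up for the example.]

```sh
# Inspect Kapacitor's stored tasks and recordings.
kubectl exec -it <kapacitor-pod> --namespace tick -- kapacitor list tasks
kubectl exec -it <kapacitor-pod> --namespace tick -- kapacitor list recordings
# Record five minutes of the stream for a task; the recording is written to disk.
kubectl exec -it <kapacitor-pod> --namespace tick -- \
  kapacitor record stream -task cpu_alert -duration 5m
```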
Chris Churilo 10:07.050 We have one more question from Karen. How would you compare Kapacitor with Graphite?
Jack Zampolin 10:17.266 So Kapacitor is the alerting system for InfluxData. So it would be more comparable to, like, Grafana alerts or Prometheus Alertmanager. And you can actually use Kapacitor with Graphite data, too. But it’s the alerting portion, so it’s for when you need to take action on your metrics. When I think of Graphite, I think more of the data recording format, those period-delimited strings of metadata, and then Graphite Whisper, the database itself, and then the various different collection agents. So Kapacitor is more of the alerting side. So in older infrastructures, this would be like Bosun, or Nagios-which I think has some alerting functionality. Does that answer your question, Karen? Awesome. Okay. So let’s go into the code along, and again, that webinar repo is here, and while I’m getting that all set up, I’ll pause for a minute to let you guys pull that down if you’d like. Okay, so I’m going to go ahead and share my screen here. Okay. So if you cloned the repo, you’ll see it looks like this. Here’s the ReadMe, here. And we’ve got a few simple commands to go ahead and install it. Just to walk through what we’re going to install from that architecture diagram earlier: InfluxDB, that’s the database-this particular repo has it installed without persistence; there’s information in the ReadMe for each of the products on how to enable different features. Telegraf, for polling the individual pieces of infrastructure. This Telegraf that I’m going to deploy is going to be polling the InfluxDB instance to get stats out of it, and then it’s also going to be polling the Kubernetes API. There’s also a Telegraf DaemonSet, so this will deploy one Telegraf pod on each one of the Kubernetes nodes to pull those host-level statistics as well as the cluster-wide Kubernetes statistics-so how much CPU am I using by namespace, which of my pods is the noisiest, those kinds of questions.
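[Editor's note: if you want to follow along, pulling down the webinar repo looks like this.]

```sh
# Clone the webinar repo and look at the chart directories.
git clone https://github.com/jackzampolin/tick-charts.git
cd tick-charts
ls   # chart directories for influxdb, telegraf, kapacitor, chronograf (exact names may differ)
```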
Jack Zampolin 13:01.960 We’re going to install Kapacitor for the alerting aspect and then Chronograf, which is the dashboard. The deployment files for each of them are here, and then there’s even more documentation in here, and again, at the end, during the Q&A portion of this, I’m happy to dig into any of the individual products and sort out any questions you have on those. So to go ahead and get started, I’ve created a create.sh shell script that runs all those Helm install commands that I had earlier, so I’m going to go ahead and run those. Run that. Okay. There’s the InfluxDB instance that comes up, Telegraf, and Kapacitor, and then Chronograf, finally, so I’m going to wait for the load balancer for Chronograf to come up, and we can walk through this output here. So when you deploy a Helm chart, at the end it gives you some output; it tells you what it’s deploying here. So we’ve got a config map to hold the configuration for Influx, a service to expose it to the rest of the cluster members, and then, obviously, the deployment to manage that pod. And then in the notes below, it gives you some information: where is it in the cluster, how can I access the service, if I would like to forward this service port to my local machine, how do I do that, if I would like a terminal inside that container, how do I get it, and then, obviously, logs as well. So for the various Telegraf, Kapacitor, and Chronograf deployments, there are also those same things, and if you’ll see down here, we’re still waiting for our load balancer to provision. I’m running this on Google Compute, and they will automatically provision load balancers for you.
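[Editor's note: a sketch of what a create.sh like this might contain. The "data" and "alerts" release names come from later in the talk; the other release names and the chart paths are guesses, so the repo's script is the source of truth.]

```sh
# Helm 2-style installs of the five pieces into a "tick" namespace.
helm install --name data    --namespace tick ./influxdb
helm install --name polling --namespace tick ./telegraf-s
helm install --name hosts   --namespace tick ./telegraf-ds
helm install --name alerts  --namespace tick ./kapacitor
helm install --name dash    --namespace tick ./chronograf

# Generic forms of the follow-up commands the chart notes describe:
kubectl get pods --namespace tick
kubectl port-forward --namespace tick <influxdb-pod> 8086:8086   # forward the service port locally
kubectl exec -it --namespace tick <influxdb-pod> -- /bin/sh      # terminal inside the container
kubectl logs -f --namespace tick <influxdb-pod>                  # follow the logs
```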
Jack Zampolin 15:01.151 So let’s go ahead and grab that IP, and here you can see all of our infrastructure that we just spun up: our pods, and sets, and deployments, and then services. So let’s go ahead and pull up that Chronograf instance. There it is. So the Influx instance, I think it said, was accessible at data-influxdb.tick:8086, so let’s go ahead and point Chronograf there. Awesome. And right away, we’ve got dashboards. Telegraf data comes in a predictable format, and we’ve created custom dashboards for a lot of these Telegraf plugins that we’re running. We’ve got a default Kubernetes monitoring dashboard, which gives you the amount of CPU each node is taking up, memory on each node, network ingress and egress from various pods. Obviously, this is kind of the beginning of these; you can go dig deeper in. Another example of a canned dashboard is the InfluxDB dashboard. So anything you would need to monitor on Influx: sort of how many series you have, write and query volume, any client failures-are my clients receiving 500s-query performance, number of points written, all kinds of fun stuff like that. And then, obviously, Docker statistics here too, so if you need to dig down into your container-level statistics, how many containers am I running on this cluster, that kind of thing. So that’s very useful. Next, let’s plug in the Kapacitor instance. So that’ll be at alerts-kapacitor.tick. So that’s the release name-I’ve called this instance of Kapacitor “alerts”-and then the chart name, kapacitor, and then the namespace. And then you can also just find that from the service name on the dashboard as well. Then we’ll call this connection “cluster kapacitor.”
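[Editor's note: finding those same connection details from the command line. This assumes the "data" and "alerts" release names and the "tick" namespace used in the demo.]

```sh
# The Chronograf service shows the external load-balancer IP.
kubectl get svc --namespace tick
# In-cluster DNS names to use as the Chronograf source and Kapacitor URLs:
#   http://data-influxdb.tick:8086      (InfluxDB)
#   http://alerts-kapacitor.tick:9092   (Kapacitor's default HTTP port)
```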
Jack Zampolin 17:52.105 Okay. And then obviously, configuring various different outputs for your Kapacitor alerts: Alerta, HipChat, OpsGenie, PagerDuty, Sensu. The Slack one’s extremely easy to set up-it just requires you to get one URL from Slack-so that’s, generally, the one people start with. And then, a variety of other ones. And once you do that, you can go into Kapacitor and create alert rules based on the statistics that are in the database. And if we poke over here to the Data Explorer, you’ll see what statistics we do have in the database. So this is CPU, disk, and disk I/O for all of our nodes, some statistics from the Docker daemon. So these would be overall statistics, like how many containers? How many resources is Docker taking up? And then the two below that are container-specific. Some InfluxDB data. This Kubernetes data down here. So this is CPU usage in nanocores, and then we can group that by host-that’s the Kubernetes node-so this is all node-level statistics. If we want pod-level statistics, we can get that too, and maybe group that by pod name, so you can see the CPU usage by pod. The same things for network and disk. So these are some Prometheus statistics that we’re polling from the API server. So some etcd statistics here, API server latencies. And then swap and system, finally, are some more of those host-level statistics. Okay, and then if we want to tear it down, it’s pretty easy to pull down with the destroy script. But before I do that, I’d like to take a second to take some questions while I’ve got it running. If there’s anything you’d like me to show you before I tear it down, we can do that. And then, we’re going to take some more questions at the end.
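[Editor's note: the kind of query the Data Explorer is building here, shown against the CLI: pod-level CPU usage in nanocores, grouped by pod name. The measurement, field, and tag names are the ones Telegraf's kubernetes plugin typically writes, and the "telegraf" database name is the plugin's default output, so adjust both if your setup differs.]

```sh
# Mean CPU usage per pod over the last 15 minutes.
influx -host data-influxdb.tick -execute '
  SELECT mean("cpu_usage_nanocores")
  FROM "telegraf"."autogen"."kubernetes_pod_container"
  WHERE time > now() - 15m
  GROUP BY time(1m), "pod_name"'
```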
Jack Zampolin 20:22.495 Karen, currently Chronograf doesn’t have that full-featured dashboarding functionality yet. Where is that URL? Here it is. So, you see this dashboard section here-the only thing that exists currently is some pre-canned dashboards that we have, Kubernetes being one of them. There will be a Create New Dashboard button up there, and you’ll create charts over in the Data Explorer. That functionality is going to be available in the next beta release, either this Friday, which I doubt it makes it in, or a week from Friday. So that dashboarding functionality will be there very soon. Chronograf itself is still in beta currently. We’ll be in GA in April or May, likely.
Chris Churilo 21:19.073 What are the retention policies for these metrics and how many series do you generate?
Jack Zampolin 21:24.028 So the number of series that we generate, you can pull out of InfluxDB pretty quickly. Currently, this is creating around 2,000 series. When I run this for a longer period of time, just host server monitoring for four hosts creates, roughly, that number of series. Obviously, if you start spinning up more pods, more series-generally the limits for a single server that I’ve seen for Kubernetes installations are around 500 to 700 nodes. So full pod-level, container-level statistics for 500 nodes. Beyond that, you do need to cluster and scale out a bit. Does that answer your question? Oh, and the retention period for data here is infinite currently. And I have this same demo that I run live for the company, and I’ve got at least 30 days of data in it. And I’ve used this Influx database for a number of different things. But currently, there’s around 12,000 series in it. I’ve got two other data sets in addition to just those host-level statistics. But as you can see, the dashboards are still quite snappy. And-
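[Editor's note: two quick ways to check this from the CLI - series counts per database via InfluxDB 1.x's _internal monitoring stats, and a finite retention policy if you don't want to keep data forever. The host and database names are assumptions from the demo setup.]

```sh
# Series count per database, from the _internal monitor stats.
influx -host data-influxdb.tick -execute \
  'SELECT last("numSeries") FROM "_internal"."monitor"."database" GROUP BY "database"'
# Cap retention at 30 days instead of keeping data infinitely.
influx -host data-influxdb.tick -execute \
  'CREATE RETENTION POLICY "thirty_days" ON "telegraf" DURATION 30d REPLICATION 1 DEFAULT'
```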
Chris Churilo 22:59.558 Is there a view of how many pods there are in the ReplicaSet and how many ReplicaSets are there in deployment?
Jack Zampolin 23:08.994 You can pull those out-they are not currently in an auto-generated dashboard. In the Kubernetes pod container section, you can pull out pods by name, phase, or container name. I don’t think we currently have an option to group by ReplicaSet. The stats we’re getting from Kubernetes don’t currently give us all the labels on all of the pods, so pulling out which ReplicaSet pods belong to is not something we have currently. It’s definitely something we’re looking to add as we develop our capabilities there. Any other questions? Okay. I’m going to go ahead and tear this down. So heading back to the slides: once we’ve deployed it, we’ve configured it. So the next steps-obviously, adding more metrics to this default setup is one of the big ones. Allowing people to do Nginx monitoring or HAProxy monitoring very easily is something that this setup currently enables, but I’d like to put together more how-to’s on stuff like that. We do have a couple of projects to make it even easier to discover Prometheus /metrics endpoints and pull those statistics in, so be looking for a better integration between the TICK Stack and Prometheus soon. And that’s kind of what we have coming down the line for our Kubernetes integration. But what I’d really like to hear from you guys is: what would you like to see? What problems are you facing in your deployments? And what can we do to help you solve those? So I’d be very interested in some feedback on that, and that kind of falls into the questions category as well. So I’ll go ahead and leave that prompt up. I’ll be here for another 30 minutes, and I’d love to answer questions and chat about the TICK Stack on Kubernetes.
Chris Churilo 25:52.364 So Caron said thanks. Thought it was a great demo, nice, quick demo. He also would like you to share with him some of the top benefits of InfluxDB versus Graphite Whisper DB.
Jack Zampolin 26:08.987 So Graphite Whisper is a file-based store. So on Linux, the default cap on open file handles is around 65,000, and I’ve seen people actually start scaling out Graphite installations just to get around open file handle limits. Now, obviously, that’s one very specific thing about the database storage engine, but it sort of goes to the overall flexibility of the system. Graphite just can’t achieve the types of performance numbers we can on the type of hardware that we use. I think the standard box that people run Influx on in production is [inaudible] and 16 gigs of RAM. That’ll get you pretty excellent performance up to around two million series with continuous queries running query load on the system. And that kind of performance with a Graphite server is going to take multiple very large boxes in a clustered configuration-that’s obviously more expensive and much more difficult from a management perspective. So, I think that the primary difference between Graphite and Influx is that performance. Also, query language: Graphite has a functional query language; InfluxDB’s is more SQL-like. It might depend on what you’re more comfortable with. A lot of folks really do like that Graphite query engine, but I think the speed and flexibility that’s available in Influx-we’ve shipped some query improvements recently-is definitely comparable. So performance, I would say, is the main defining characteristic. And then, also, the ecosystem of tools around InfluxDB itself, I think, is a bit more up to date than the Graphite ecosystem, currently. Does that help answer your question?
Chris Churilo 28:03.202 So, he says that he completely agrees with you, so that they are facing some challenges with Graphite DB performance.
Jack Zampolin 28:13.895 Yeah, it’s something I’ve seen a number of people run into-I believe we have a video of the webinar that I did a couple of months ago on Graphite migrations; let me give you a link. We do have a lot of tools available to help folks migrate from Graphite. Telegraf itself acts as a Graphite parser to help turn those Graphite statistics into line protocol and allow you to utilize tagging in InfluxDB. So, in this Kubernetes architecture, how you could plug that in is with that single Telegraf instance. Run your points through there, parse them from Graphite format into Influx, and then insert them into the database. So, if you have legacy applications that are writing Graphite to a metrics sink, you can just point them straight at the Telegraf instance and have it parse those on the way in without requiring any changes to the client application. And that’s something that our Graphite tooling is very strong on: not requiring changes to client applications, and allowing you to attack it from any layer you actually have control over. So, if you can rewrite the client, we do offer client libraries in pretty much any programming language you could possibly want. And then, if you can’t change the client, we do offer a number of options to intercept those points on the way to the database, parse them, and accept them directly into the database. There’s a number of configurations, but Graphite migrations are very widely supported.
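[Editor's note: a minimal sketch of the intercept pattern described here, using the socket listener input available in current Telegraf releases. Legacy apps keep writing Graphite to port 2003, Telegraf parses the lines into line protocol, and forwards them to InfluxDB. The template string, InfluxDB URL, and database name are examples, not values from the demo.]

```sh
# Accept Graphite wire format on :2003, tag it via a template, write to InfluxDB.
cat >> telegraf.conf <<'EOF'
[[inputs.socket_listener]]
  service_address = "tcp://:2003"
  data_format = "graphite"
  templates = ["region.host.measurement.field"]

[[outputs.influxdb]]
  urls = ["http://data-influxdb.tick:8086"]
  database = "graphite"
EOF
```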
Chris Churilo 29:55.781 One of our major issues is capacity planning for metrics generated by Kubernetes or InfluxDB deployments. You mentioned that there are 700 series per node. However, considering that we upgrade Kubernetes deployments regularly, as time goes on we’ll be getting more and more series. It would be nice to have some sort of calculator to estimate InfluxDB memory and disk requirements, depending on Kubernetes service deployment velocity and the number of nodes in the cluster.
Jack Zampolin 30:23.805 So that would be really nice. We do have a major feature that we’ll be landing soon that will make this problem significantly smaller. We’ve been working for a number of months on a major change to the way that the index is persisted. We’ve added on-disk persistence for that in-memory series index. And as part of that, we’ve also added the ability to have sort of a hot index. So, what are my current pods, what are my current deployments? And you want those queries to return quickly. But things that are older than two weeks or a month-we don’t need to keep those series indexed. So that’s a major change that’s coming in the database that will allow individual Influx instances to support upwards of 50 to 100 million series. So, sort of an order-of-magnitude increase in the number of series that an individual instance can support. And that means that, as you’re rolling your Kubernetes deployments and generating more pod IDs and container IDs, those things will sort of fade seamlessly into the background and not actively take memory. So this also makes capacity planning with Influx much easier, as you don’t have to worry about continually flushing old data. But as far as a calculator to estimate your InfluxDB hardware requirements based on the amount of metrics, that’s definitely something we’re working on. I don’t know if it would be specific to Kubernetes; it would probably be more generalized. Does that help?
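[Editor's note: for reference, the on-disk index described here later shipped in the InfluxDB 1.x line as "tsi1"; enabling it is a config switch like the one below. This was not available in the build used for this demo.]

```sh
# Opt in to the time series index (TSI) on a later InfluxDB 1.x release.
cat >> /etc/influxdb/influxdb.conf <<'EOF'
[data]
  index-version = "tsi1"
EOF
```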
Chris Churilo 32:22.264 Okay, we have another question. Oh, Laura says, “Yeah, that sounds good.”
Jack Zampolin 32:26.702 Awesome.
Chris Churilo 32:26.702 And then, the other question that we have is, you mentioned telegraf-s and telegraf-ds in your demo. What are these tools about?
Jack Zampolin 32:37.050 So Telegraf is our collector technology. It’s a plugin-based agent for collecting statistics from various different places-so, obviously, on host, pulling CPU and memory info from the host itself, or polling APIs to determine latency. Telegraf’s a very full-featured tool that’s an open source member of the TICK Stack. When you’re in a Kubernetes context, you’re going to want to deploy Telegraf in a couple of different ways. One of those is as a DaemonSet to poll host-level statistics from your nodes-the nodes in your Kubernetes cluster-so host-level memory and CPU statistics. In order to do that effectively, the best way to do it in Kubernetes is to run them as a DaemonSet. There’s also a certain list of plugins that you’re going to want to run in those DaemonSet pods. Obviously, you’re not going to want to poll your API from every node in your cluster and collect those same statistics from each one; you’re just going to want the stuff that’s specific to the individual nodes. So that’s what telegraf-ds is: it’s the Telegraf DaemonSet. Telegraf-s is a single instance of Telegraf with a bunch of configuration options exposed. So, for doing stuff like building a layer of Telegraf instances to consume from a queue, you could do that with that chart. Or having a single Telegraf instance that polls all of your database servers within your Kubernetes installation, or your edge load balancers-there are Telegraf plugins for that. So it’s sort of this division between what is node-specific, so specific to single instances, and what is cluster-wide. Node-specific stuff is going to be in telegraf-ds, and cluster-wide, or individual-use, stuff is going to be in telegraf-s. Does that make sense, and how can I clarify that further and make that more clear in the future?
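[Editor's note: how the two flavors get deployed, in Helm 2 syntax. The release names and the values file are illustrative; the charts in the repo define the real options.]

```sh
# DaemonSet flavor: one pod per node for host-level CPU, memory, and disk stats.
helm install --name hosts   --namespace tick ./telegraf-ds
# Single-instance flavor: one poller whose inputs you point at Nginx, queues, databases, etc.
helm install --name polling --namespace tick ./telegraf-s \
  -f my-telegraf-values.yaml
```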
[silence]
Jack Zampolin 35:11.781 Any other questions? I’d love to hear more about what you guys would like to see. Are there any particular Kubernetes how-to’s with the TICK stack that you’re interested in? Any particular pieces of infrastructure that you’re interested in monitoring?
[silence]
Jack Zampolin 35:37.653 Okay. So Garrick asks, “Instead of waiting for the new feature you described-the index-which sounds like it is far away from feature-complete, what is the best way to do sharding in order to handle more pods?” So, it depends on your deployment infrastructure. Are you running multiple different Kubernetes clusters and shipping all the metrics into one? If that’s the case, then I would say that the easiest way of sharding would probably be to shard by cluster, so have individual InfluxDBs for each of your clusters-that’s definitely one way to do it. I’ve also seen folks shard by instance type. So maybe you’re running some more resource-intensive instances that kick off more metrics; isolating those instances to one InfluxDB instance is one way I’ve seen folks do it. But what kind of limitations are you running into? Do either of those sharding strategies sound like they would fit your use case? Also, that on-disk implementation of the index will be in our [inaudible] starting next week, and it’ll be in the 1.3 release in about a month. So it’s been in testing for about six months at this point, and I would say it’s pretty close to ready, and I would encourage you to give it a try when it comes out. But yeah, does that help, Garrick? So, Garrick says it helps a bit, and asks: does Influx handle sharding, or does that require setting up a load balancer that inspects which cluster it’s directed towards? So if you’re looking to scale out your InfluxDB installation, Influx Enterprise is the best way to do that. It does give you the ability to spread that in-memory index across multiple nodes, tolerate higher throughput levels, and a bunch of other fun error-handling stuff. I think that’s the best way to scale your Influx installation. We don’t really have any sharding handling within the open source server. As far as setting up a load balancer that inspects which cluster it’s directed towards, the way I’ve normally seen folks handle that particular problem is with Grafana. They would connect to each of the Influxes as a different Grafana data source, have similar schemas in both instances, and be able to switch the data source for their different dashboards. Does that make sense?
Jack Zampolin 38:39.367 Okay, I have another question from Coran. Are the container images for the TICK Stack managed by you guys, or are they community-managed?
Chris Churilo 38:50.044 We actually manage all the official versions of our images; they are on Docker Hub under the product name. The only image that we have that’s not on Docker Hub is the Chronograf image. That’s because Chronograf is currently in beta and that team prefers using Quay; once Chronograf is GA, we will have a Docker Hub image for that. But yes, we do maintain official Docker images for all the products, and I used them in this demo. Does that help? And I can drop the URL for that repo with the source in this channel here.
[silence]
Jack Zampolin 39:49.303 I’m going to go ahead and drop that in the [inaudible] chat and it’s InfluxData/Official Images on GitHub.
[silence]
Jack Zampolin 40:31.388 Garrick, we will definitely have a video of this presentation available. I think Chris is going to send an email later. Is that correct? So that’ll be in your inbox later this week. We don’t currently have a guide for setting up sharding through a load balancer. And, in fact, our recommendation for folks that have outgrown a single instance and need to shard is to check out the Enterprise product. We do have an open-source implementation of a highly available setup at influxdata/influxdb-relay on our GitHub if you’re looking for that. But otherwise, I would really suggest you check out the Enterprise product. Any other questions, or any feedback on what you’d like to see in future webinars?
Chris Churilo 41:32.387 Okay. We are going to conclude our session this morning. I will post this video in just about an hour or so. And we’ll also send out an email to everybody that participated and also registered for the webinar with the link to this presentation. In addition, if you have any other suggestions for any other webinars pertaining to this topic or to any other topic, please feel free to just drop me a line, and we’ll make sure that we get those things scheduled. So thanks for participating today and thanks, once again, to our really great speaker, Jack. And hopefully, you guys found this to be informative. And take another look at this video at your earliest convenience. Thank you and have a good day.
[/et_pb_toggle]