Multi-Cloud Monitoring with Containership & InfluxData
Anyone can appreciate the promise of flexibility and optimization that containers can bring to your development cycles. But the reality is that most enterprises have to manage containers across multiple environments: in the cloud, on-premises, or in a hybrid mix of both, which can prove to be quite a challenge.
In this webinar, Phil Dougherty and Gunnar Aasen will show you that it actually can be simple to deploy, manage, and scale containerized applications on any mix of public or private clouds with Containership. They will also show how, across this varied environment, you can gather metrics and events from your apps and microservices and centrally view and manage all components with InfluxData.
Watch the Webinar
Watch the webinar “Multi-Cloud Monitoring with Containership & InfluxData” by clicking on the download button on the right. This will open the recording.
[et_pb_toggle _builder_version="3.17.6" title="Transcript" title_font_size="26" border_width_all="0px" border_width_bottom="1px" module_class="transcript-toggle" closed_toggle_background_color="rgba(255,255,255,0)"]
Here is an unedited transcript of the webinar “Multi-Cloud Monitoring with Containership & InfluxData.” This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcription errors.
Speakers:
Chris Churilo: Director Product Marketing, InfluxData
Gunnar Aasen: Support Engineer, InfluxData
Phil Dougherty: Co-founder, Containership
Chris Churilo 00:01.575 All right. Good morning, everybody. Thanks for joining us in our webinar today. Today, we’re pleased to have one of our fantastic partners, Containership. And they will be going over what it means to monitor in a multicloud environment. And so, we have two speakers today. We have Phil Dougherty, co-founder of Containership and their fearless leader. And we also have Gunnar Aasen, who is our partner technical engineer. So with that, I’m going to pass the ball over to Phil so he can get started. And before I do that, I do want to remind everybody that these sessions are recorded, and you can listen to them at your leisure at the end of today. And I’ll make sure that I post that for everybody. So let’s get started.
Phil Dougherty 00:51.407 Wonderful. Hey everybody. How are you doing? I’m Phil from Containership. Really happy to be here today to talk about this really important topic of monitoring in a multicloud world and, also, kind of just multicloud in general and what that means and why it’s so important. So happy to be partnered with InfluxData and to show you some really cool stuff that you can do to make multicloud monitoring more simple. So we’ll skip here into the slides. So, like I said, I’m Phil Dougherty, co-founder and CEO of Containership. Been working on this project for about three years, and I can’t wait to show you some of what we’ve built. So the first thing I want to talk about is kind of understanding what the importance of multicloud is. Then we’ll get into kind of our approach, talk a bit about time series data. And then, we’ll go through a quick demo and some Q&A.
Phil Dougherty 01:42.277 So understanding the importance of multicloud and why it really matters. So the thing about web hosting companies and cloud hosting companies these days is that they’re all solving problems in incompatible ways. So what does that mean, exactly? Well, you may have a database service on one provider that is sort of an equal product from a different provider but they’re completely incompatible. The APIs are completely different. And once you start using one of those, you’re kind of locked in. And that goes for all services within a cloud provider. Even launching a virtual machine or scaling is done in a completely different way on each cloud provider, but they’re all kind of-at the end of the day, all kind of giving you the same thing. And what that ends up creating is lock-in, and it makes it difficult to be able to utilize multiple providers or switch between providers should you ever need to. In this day and age, servers alone are not really enough to solve all of your problems. So if you just have a couple of VMs, you’re not going to have a really easy time scaling out if your traffic takes off overnight. So what you need is automation, and that’s really what is locking people in today. They go to a cloud provider because they’re after some automation. It makes their job easier. They want to be able to deploy their software more simply. They’re looking for stability and high availability. And in order to do all this, they need to adopt a lot of these automation tools provided by the hosting provider, which is what ends up locking them in. So that can be a bit troublesome.
Phil Dougherty 03:20.711 And with Containership, we really tried to take an approach of building things in a cloud-agnostic way so that you can take that automation with you and use it on any cloud provider or, really, any hosting platform that you choose, which gives you a lot more flexibility to make decisions that are smart for your business, and to decide where it makes sense for you to run and when it makes sense for you to run there. The approach that we have taken with Containership is to build a lightweight cloud management layer. And what that means is that we have a single pane of glass that hooks into 12 cloud providers and into on-premises hosting solutions, which allows you to host in all of those clouds and all of those private data centers. And once you’ve set up your infrastructure to work on one of them, you’re instantly compatible with all of them. That makes it so that you can use, maybe, DigitalOcean, or Linode, or another provider like that for your staging and test environments. But then, you can really easily use Amazon Web Services, Microsoft Azure, Google, Oracle, or many others for production and also be able to take advantage of their tools as necessary. So it makes it really simple to utilize all these providers at the same time, which gives you some pretty cool superpowers. You’re not going to be in a situation where, say, you’re running on AWS in the US East region, and then, suddenly, traffic in your business starts to take off on the other side of the world. Instead of having to go set everything back up by hand over there, you can actually clone your entire setup into that other region on the other side of the world really simply, without having to reconfigure anything, which is going to save you a lot of time and a lot of headaches. And you’re not going to be pulling your hair out trying to remember all those knobs you turned and switches you flipped to get things set up the way they are in your current region.
Phil Dougherty 05:25.460 So we try to make this as simple as possible, because we feel that simplicity is what’s going to make it so that as many people on the team as possible can get involved and help to manage their own software projects, including their deployments and how well things are performing, and look for bugs and be able to fix them quickly, without having to go to somebody else on the team for help to get things sorted out. So we have basically standardized our API, CLI, and UI across all the different providers, which gives you a really easy way to get started on a hosting provider without understanding all of the intricacies of how they decided to build their automation tools. So what we’ve built is really a portable infrastructure blueprint. And what that means is that all of your services and applications, your load balancers, firewall rules, and your underlying data can be picked up and moved between providers, like I mentioned before, in a cloud-agnostic way. And that makes for some pretty interesting capabilities for disaster recovery, for example. We just experienced an AWS outage about a week ago, and most people, when that happens, are kind of out of luck, right? There’s not much you can really do except pray and wait for it to come back. In the case of Containership, if you have scheduled snapshots enabled, you could’ve restored to another provider or an unaffected region of that same provider and been back up and running within a few minutes instead of waiting for things to come back to life. So that can be very, very useful and can actually impact your business significantly, especially if downtime equals a lot of lost revenue for you, which in the case of many businesses is true. And as I mentioned, region expansion: if you need to expand into a new region, don’t rebuild it. Don’t reinvent the wheel. Take what you already have, pick the whole thing up, and be able to move it very quickly. And once again, provider expansion. Being able to take advantage of a hosting provider as if all they’re providing you is storage, networking, and servers is really, really powerful. And that’s something that we’re really trying to unlock with Containership, and it’s been our goal since day one. So, I’m going to pass it over here to Gunnar to do a little bit of explanation on the time series issue here.
Gunnar Aasen 07:52.113 Thanks, Phil. So, hi guys. My name is Gunnar, and I’m a [inaudible] Partner for Engineering here at InfluxData. So I just wanted to go through time series: what is time series data, and why is it a problem that you need to solve in a containerized environment, especially in a multicloud environment. And then, go through a little bit of why InfluxDB, and the whole TICK Stack, is a good solution for that. So just to go over time series: time series data has a few unique properties in the sense that, typically, when you’re collecting time-stamped data, you’re collecting it over a period of time, and you’re either getting samples of data at regular points in time, or you’re getting event data. So say a container spins up or shuts down; each of those time-stamped points is an event. And so most people can say, “Well, this is just like any other data and I could put this in a Postgres database or any number of my other data services I have running. So why do I need a specific database for this kind of data?” And there are a couple of reasons. Probably one of the big ones is that time series data operates at a significantly different scale than many traditional databases that you’d be thinking about for relational data. Particularly, if you consider a containerized environment, and you’re monitoring those containers, then depending on how frequently you’re polling for certain information, like CPU usage, memory usage, disk usage, you could be generating anywhere from 1 point per second to 1 point per 10 seconds or per minute for each of these metrics. And what happens is that that ends up generating several hundred to even up to 1,000 points per second, and that’s per container. So all this data needs to go somewhere, and if you’re trying to fit that into, say, a relational data store, there are a few issues you run into with that kind of setup. Namely, that’s a lot of writes; it’s a lot of writes for most data stores. And so a big selling point for something like InfluxDB and other time series databases is that they can handle these writes and also make them available for querying in specific ways.
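To make the write-volume point concrete, here is a back-of-envelope calculation in Python of the kind of ingest rate a modest containerized cluster can generate. The host, container, and metric counts below are illustrative assumptions, not figures from the demo environment.

```python
# Rough write-volume estimate for container monitoring; all counts are
# illustrative assumptions rather than measurements from the webinar demo.
hosts = 3                      # follower hosts in a small cluster
containers_per_host = 50       # "anywhere from 3 up to hundreds" per host
metrics_per_container = 20     # CPU, memory, disk, network, and so on
sample_interval_seconds = 10   # one sample per metric every 10 seconds

points_per_second = hosts * containers_per_host * metrics_per_container / sample_interval_seconds
points_per_day = points_per_second * 60 * 60 * 24

print(f"~{points_per_second:.0f} points/s, ~{points_per_day:,.0f} points/day")
# ~300 points/s and ~25,920,000 points/day, before any application metrics.
```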
Gunnar Aasen 10:44.316 And typically, when people are querying for time series data, they’re asking for a set of data; it’s not usually specific points. And so, this is typically useful for making [inaudible]. So, really, when you’re considering a time series use case, there are a lot of different optimizations that you can make to handle a high number of writes and a high number of queries, fast. And that’s why InfluxData and others have created time series databases to handle this specific type of use case. Some other additional benefits of a time series database versus any other database are that, typically, time series data is kept in chunks of time. And what happens is, most people are only interested in the last 15 minutes or an hour worth of data. But as you get out a day, a week, two weeks, a month, the value of that data starts to decline rapidly as it becomes no longer useful for operational reasons or, honestly, isn’t looked at once you get back a year and farther. And so, time series databases are built to handle the various processing needed to, basically, get rid of this data once it becomes stale and, also, to process this data and roll it up into summaries, so your average usage over the past hour instead of storing the individual points every second. It’s also able to handle things like doing various queries based on time. If you were looking at other relational data stores, it starts to get a little bit more tricky to do various queries around grouping by the hour of the day and applying the various functions related specifically to time.
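The expire-and-downsample pattern described here maps onto InfluxDB 1.x retention policies and continuous queries. The sketch below, written against the open-source influxdb-python client, is one possible way to set that up; the policy names, durations, and the Telegraf "cpu"/"usage_user" schema are assumptions for illustration, not the demo's actual configuration.

```python
# A minimal sketch of retention plus downsampling in InfluxDB 1.x, assuming the
# influxdb-python client and a server on localhost:8086. Names and durations
# are illustrative.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="telegraf")

# Keep raw, high-resolution points for two weeks, then let InfluxDB expire them.
client.create_retention_policy("two_weeks", "14d", "1",
                               database="telegraf", default=True)

# Keep downsampled summaries around much longer under a separate policy.
client.create_retention_policy("one_year", "52w", "1",
                               database="telegraf", default=False)

# A continuous query rolls raw CPU samples up into hourly averages, so the
# long-lived data is a summary rather than every individual point.
client.query("""
    CREATE CONTINUOUS QUERY "cq_cpu_hourly" ON "telegraf"
    BEGIN
      SELECT mean("usage_user") AS "mean_usage_user"
      INTO "telegraf"."one_year"."cpu_hourly"
      FROM "cpu"
      GROUP BY time(1h), *
    END
""")
```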
Gunnar Aasen 13:01.551 And so, why is it useful in a containerized environment? If you think about a traditional server infrastructure where you’re bringing up a VM in the cloud and just running regular processes, you only have one server to monitor, and you can monitor the individual processes, but there’s not really too much of a breakdown there. But when you come to a containerized world, especially in this case where you’re dealing with multiple clouds and multiple deployments across different clusters and different environments, and each of the servers that you’re deploying to has anywhere from 3 up to hundreds of containers, you start to generate significantly more metrics. And in addition to that, not only do you generate significantly more metrics, but depending on your deployment, you have a lot of containers going out of service and new containers coming in, and each of these containers has a different ID. And you want to track which stats are associated with which containers. There are a bunch of management statistics that you would want to track. And so it just exacerbates this issue of lots of data coming in. And this is operational data that is very useful, especially for ops people, but your application may be generating other data as well that’s also useful in a time series context. It’s useful to see all of it in real time, and it’s useful to use a dedicated tool for this kind of thing. So that’s the time series pitch. So I’m going to pass it back over to Phil to start the demo.
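As a concrete picture of the tagged, time-stamped samples described above, here is a minimal sketch of writing per-container metrics to InfluxDB with the influxdb-python client, using tags to record which container and cluster each point belongs to. The measurement, tag, and field names are illustrative assumptions, not Telegraf's actual schema.

```python
# A minimal sketch of writing tagged container samples to InfluxDB 1.x with the
# influxdb-python client. Measurement, tag, and field names are illustrative.
from datetime import datetime, timezone
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="telegraf")
client.create_database("telegraf")  # no-op if the database already exists

def sample_point(container_id, cluster_id, cpu_pct, mem_bytes):
    """Build one time-stamped sample for a single container."""
    return {
        "measurement": "container_metrics",
        "tags": {                      # tags record *which* container/cluster
            "container_id": container_id,
            "cluster_id": cluster_id,
        },
        "time": datetime.now(timezone.utc).isoformat(),
        "fields": {                    # fields hold the sampled values
            "cpu_usage_percent": cpu_pct,
            "memory_usage_bytes": mem_bytes,
        },
    }

# One batched write per sampling interval covers every container on the host,
# even as containers churn and new container IDs appear.
points = [
    sample_point("6137832e", "my-new-cluster", 12.5, 256 * 1024 * 1024),
    sample_point("a81f00c4", "my-new-cluster", 73.1, 512 * 1024 * 1024),
]
client.write_points(points)
```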
[silence]
Phil Dougherty 15:14.855 Great. Thanks, Gunnar. Wonderful, yes. So with that said, let me go ahead and jump into the demo here. Share my screen. All right. So everyone should be able to see my screen now, and what we’re looking at is the Containership Cloud dashboard. So this is our single pane of glass into multiple hosting providers and the clusters that we have running across them. So you can see that I’ve created an organization here called My Company, and I have a handful of clusters running, and they have various numbers of hosts and services running on them. So we have one in AWS, a couple in DigitalOcean, and one here that’s running in Google Cloud Platform. The way we’ve set this up is that we have the TICK Stack from InfluxData running on this Amazon Web Services cluster, and we have the rest of these clusters set up to ship all of their metrics back to this centralized deployment that we have running here. So I might take a look at this cluster and take a look at the services that are running. I can see that I have the full suite of tools for the TICK Stack running. I can see that I have three instances running for Telegraf, and that’s what’s actually shipping the metrics back. And the reason that is the case is because in this cluster, I have three follower hosts that are actually running containers. So it’s monitoring each of those. And the same can be said for the rest of these clusters. So they’re also running Telegraf to ship their metrics back to that centralized stack that we have running on the AWS cluster. So I’m going to quickly step through creating a new cluster to show how easy it can be to start monitoring a new cluster of machines that you bring online using this awesome TICK Stack from InfluxData.
Phil Dougherty 17:09.573 So I’m going to go ahead and create a new cluster, and I can see here that I have all the different providers that we support. Some of them are grayed out because they’re not configured. Of the ones that are highlighted, I’m just going to go ahead and use DigitalOcean again. So once I choose the provider, I need to choose my orchestrator. We have Kubernetes support, and we also have our own Containership scheduler; I’ll just use Containership for this example. And then we’re dynamically pulling in all the regions where DigitalOcean has data centers. So maybe we’ll run in Toronto this time. Now, I just need to configure some settings, like what operating system to use and which SSH keys I want to use, and click continue. And now I need to select the leader size. So the leader, or the master host, is basically going to be running the API, and it’s more or less the brains of the cluster that decides where your jobs are going to run. These servers can be really small, so I’ll just pick this $10-a-month instance, and we can run multiple of those for high availability. We’ll just run one for this example. And I need to choose my follower hosts, and these are actually where my containers are going to run, so I need to have enough juice, enough CPU and memory, to be able to fit the workloads I plan to deploy. So these don’t have to be huge, but maybe we’ll run these $40-a-month ones, and I’ll run three of those. Once I click continue, I can just give the cluster a name, like My New Cluster, put it in an environment, say this is production, and launch it. So now we’re actually going out to DigitalOcean, and we’re launching those hosts, and once they have all formed a highly available cluster, they’ll phone back home to the Containership Cloud platform and be available for us to interact with. So we’re going to take a look here and see what step it’s on. Currently, it’s bootstrapping Containership. It’ll install Docker and the rest of the necessary tools, and once it’s live, it’ll be green, and we’ll be able to start interacting with it. So with that said, I’m going to pass it back to Gunnar again to take a quick look into the Chronograf software, and then we’re going to show how easy it is to set up Telegraf on this new cluster to start shipping metrics there.
Gunnar Aasen 19:29.494 All right. All right. Thanks, Phil. Now I’ll share my screen. All right. So, like Phil mentioned, he just deployed a new cluster, and you can see that’s [inaudible] down here. Just to go over some of the components of the TICK Stack, since I realized that I didn’t really cover it that well earlier: basically, InfluxData is the primary developer of the TICK Stack, which consists of a few different services. InfluxDB is our time series database. It also includes Kapacitor, our event alerting and stream processing engine. So if you’re processing time series data, you would send your data through Kapacitor to have it transformed in whatever ways you like, and it can also generate alerts. We also have Telegraf, which is our collection agent. Telegraf is basically the service that will sit on each of your machines and actually go out and ping each of the containers for stats periodically and send those back to InfluxDB. And I’ll show you in a second that we’ve got Telegraf running on each of the different clouds that we have running here, and we’ll show you how to set that up in just a little bit. And finally there’s Chronograf, which is the last piece of the TICK Stack. Chronograf is our visualization and graphing component. It also integrates directly with Kapacitor, so it makes it very easy to set up simple alerts.
Gunnar Aasen 21:21.844 And I also just want to mention that the TICK Stack is almost all open source. Chronograf is fully open source, Telegraf is fully open source, and for InfluxDB and Kapacitor, we have enterprise-licensed versions of the software, but running a single node is free and open source. So as you can see here, we’ve got a cluster running the TICK Stack. So this is just one cluster running InfluxDB, Kapacitor, and Chronograf, as well as Telegraf. If we go and look at the other clusters we have set up, say our app over here in New York, we can see that we just have Telegraf on this cluster, and this Telegraf service is taking metrics from all the containers running in this cluster, which isn’t very many, but it’s taking those metrics and sending them back to our other cluster over here on AWS. So I’m just going to do a quick overview of Chronograf. Chronograf’s UI is essentially the best way of starting to look at InfluxDB data as well as configure Kapacitor. You can use it to generate graphs, set up dashboards, and do a whole bunch of other stuff. So as you can see here, this is the [inaudible] page. Basically, what Chronograf will do is auto-populate based on the default Telegraf plugins. It’s got a few default dashboards and graphs on display depending on what plugins you have in Telegraf reporting back. It makes for very quick discovery and exploration of your data when you’re just getting started. And as soon as Phil sets up the Telegraf service on the new cluster that we just created, a bit later, you’ll be able to see that it gets added to this list here.
Gunnar Aasen 23:31.987 Some other things that you can do with Chronograf: we have a data explorer, which is a very easy way to build queries. So you can basically go through the database and say we’re interested in CPU. Our CPU time series that are being collected and sent from Telegraf to InfluxDB contain various additional, what we call, tags. Tags are basically additional metadata that help you slice the data into various sections. So we can group our CPU usage by cluster ID, and once we have it grouped by that tag, we can also select an actual field, such as the data we’re actually going to take a look at, which is the usage, over here. So this is the percentage usage by CPU, and then we can also group by things like the host. So this is just more of an exploration tool. We also have dashboards. I think we have a couple of dashboards set up by default here. Basically, you can go through and set up various dashboards and get nice preset stats. So here we can see memory percentage by host. We can see that this host, 6137832E, stands out on memory, so that may be something we want to inspect there. We also have Kapacitor alerting. What this allows you to do is easily create a rule based on a query. You can go through and say you’re interested in seeing any host that goes above 60% memory. So we can define a threshold on the usage: we could say if it’s greater than 60%, it will alert, and it will send the message we define down here. And then we have some other options, like administrative options to set up users, as well as queries and configuration. So I’m going to pass it back to Phil now. He will go through and show you how to set up Telegraf on a new cluster.
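The query the Data Explorer builds in this part of the demo boils down to an InfluxQL SELECT with a mean() aggregate, a time range, and GROUP BY clauses for a time bucket and a tag. Here is a hedged sketch of running an equivalent query through the influxdb-python client; the tag key and field names are assumptions about a default Telegraf setup rather than the exact series shown on screen.

```python
# Roughly the query Chronograf's Data Explorer composes here: average CPU usage
# from Telegraf's "cpu" measurement, bucketed by minute and grouped by a tag.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="telegraf")

result = client.query("""
    SELECT mean("usage_user") AS "mean_usage_user"
    FROM "cpu"
    WHERE time > now() - 1h
    GROUP BY time(1m), "host"
""")

# The result set is keyed by (measurement, tags), one series per host.
for (measurement, tags), rows in result.items():
    print(measurement, tags)
    for row in rows:
        print("  ", row["time"], row["mean_usage_user"])
```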
Phil Dougherty 26:34.115 Great. Thank you, Gunnar. I’m going to go ahead and share my screen again. Okay. Cool. So here we are, back in the Containership UI. We can see that that new cluster we launched, named My New Cluster, how very creative, is up and running. And now we’re going to show how easy it is to get Telegraf up and running on this cluster. So if we go to services, we can see that we don’t have any of our own services running, just the core stuff that we run on every cluster. And deploying this is quite simple. We just use “launch new service,” and either we can use the wizard, which allows us to plug in our own Docker images and other information, or we can use the marketplace. We’ve actually already gone ahead and added Telegraf as a marketplace item here, so I can select Telegraf. And it’s going to ask me for basically two values I need to fill in: the name of the InfluxDB database where the data should be stored, and the URL of the InfluxDB server. So I’ve already gone and copied the URL here for the server. The name of the database we’re using is Telegraf, and then I can choose how much CPU and memory I need to provide to that agent. This should be sufficient, and I can go ahead and add the service. Now, that should be coming up. Refresh the page. All right. So there’s Telegraf, and we can see that it’s running three out of three containers. Once again, that’s because of the number of hosts that we have in the cluster. If we go here and take a look, we can see that they’re all loaded on the various hosts in the cluster. If we wanted to, we could pop in and take a look at their logs to make sure things are functioning properly. It looks like they are. So the deployment of Telegraf is really as easy as that. As I add more services to my cluster, those will automatically start getting picked up and shipped back to our centralized environment where we’re collecting all of these metrics. So that was a quick little example of how to do the deployment of Telegraf. And now, once again, I’m going to pass it right back to Gunnar so he can show us in the Chronograf UI whether or not those servers showed up for monitoring.
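The check Gunnar is about to do in the Chronograf UI, confirming that the new hosts showed up, can also be done with a single InfluxQL statement that lists the distinct host tag values Telegraf has reported into the database. The sketch below assumes the influxdb-python client and Telegraf's default "cpu" measurement and "host" tag.

```python
# Confirm that newly deployed Telegraf agents are reporting in by listing the
# distinct "host" tag values in the Telegraf database (assumed schema).
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="telegraf")

result = client.query('SHOW TAG VALUES FROM "cpu" WITH KEY = "host"')
hosts = sorted(point["value"] for point in result.get_points())

print(f"{len(hosts)} hosts reporting:")
for host in hosts:
    print("  ", host)
```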
[silence]
Gunnar Aasen 29:23.469 Thanks, Phil. All right. So, yeah, welcome back here, and we’ll take a quick look. It’s hard to tell with these host IDs, but there are indeed some more host IDs here. So let’s go back to our dashboards, and I’ve just set up a quick example dashboard. Basically, the way dashboards work is, if you’ve used Grafana or any other dashboarding tool, it’s very similar. Essentially, we can do various things like set up templates up here, as well as query [inaudible] and add graphs to those cells. So I can create a new cell here and drag and drop it, and also expand it if I want to. I already have a cell that’s partially completed, so I’m just going to come down here and use this one. Before I do that, I just want to go over templates real quick. Templates allow you to set, basically, values that you can use as a dropdown [inaudible] selection to filter the various graphs in your dashboard very easily, so you can get different slices of what you’re looking at. So I’m going to add a new variable, and I’m going to call this one Containership Cluster. Not like that. I’m just going to get rid of this one then. All right. So I’m going to create this variable, and we can create template variables from various different things. We can provide values in a CSV format, or we can go through and use InfluxDB, so we can pull from databases, measurements, tag keys, or tag values. In this case, I’m going to use tag values, and I’m going to use tags from the Telegraf database. I’m just going to use the CPU stats right now because I really just care about the key, which is this Containership cluster ID.
Gunnar Aasen 31:55.551 So I’m going to get the values for that, and I’ll see that I’ve got a few clusters here. So I’m going to save changes and come back to this dashboard here. And I have a cell already set up, so I can rename this cell. I’m just going to name this “points per second.” And I can set up queries essentially the same way as in the data explorer, so I can come through here. For this example, I’m going to use InfluxDB’s internal database, so these are its own stats that it’s collecting about itself. In this case, I’m going to go through and go to the right measurement, and I’m going to grab the current InfluxDB cluster, and then I’m going to look at the points, which is the number of points. All right. Okay. You can see that this is actually a [inaudible] increasing graph because there’s actually a counter that goes [inaudible]. It goes up for each new point that gets written.
Gunnar Aasen 33:35.590 So I’m going to do a manual query in here to add a non_negative_derivative function, which, essentially, takes the derivative of this query. And I’m also going to add a GROUP BY clause to this, group by time. I’m going to group this query into one-minute segments and then, basically, take the derivative of those one-minute segments. All right. So I received an error, and I think it’s because I typed something in wrong. And I did. Okay. There we go. So we can see that we have been ingesting about 66 points per minute on average until the most recent 10 minutes or so, when we started ingesting 84 points. That’s from an internal query, so it’s actually not very good for showing off the template variable I just set up. We’ll just take this one, and I can show you how to use the template variable very quickly. So for this one, we have basically average CPU per cluster. And I’m going to add in, sorry, another [inaudible] here. And we’re going to filter it by this template variable that we set up. So we’re going to filter by Containership [inaudible], and it will autofill here by the cluster. And it looks like there is [inaudible].
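For reference, the pattern being typed by hand here, wrapping a monotonically increasing counter in non_negative_derivative() and grouping it into one-minute buckets, looks roughly like the query below. The internal measurement and field names ("write", "pointReq") are assumptions about InfluxDB 1.x's _internal statistics, so treat them as illustrative rather than the exact series used in the demo.

```python
# Turn a cumulative write counter into a points-per-minute rate with
# non_negative_derivative(); measurement and field names are assumptions.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="_internal")

result = client.query("""
    SELECT non_negative_derivative(max("pointReq"), 1m) AS "points_per_minute"
    FROM "_internal"."monitor"."write"
    WHERE time > now() - 1h
    GROUP BY time(1m)
""")

for point in result.get_points():
    print(point["time"], point["points_per_minute"])
```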
Gunnar Aasen 36:17.585 All right. So now I’ve set up this graph to show only a specific cluster. And we can set the cluster up here. So we can see this cluster is actually very spiky, and, if we want to, we can go through and [inaudible]. Our clusters can actually have names, by setting the Telegraf [inaudible]. But this is a quick intro to dashboarding and templating. And I’m going to pass it back to Chris.
Chris Churilo 36:59.851 Awesome. That was really fantastic. Thanks, guys. So I do have a couple of questions from the audience. I want to just read them out loud and then have you guys answer them. So I understand the benefits that you described about why it’s important, first of all, to be able to deploy across multiple clouds in a consistent manner and, of course, to monitor to make sure the service is effective. You mentioned DR and scalability, etc. But I’m just wondering, what are the things that your customers are actually surprised by once they actually use your service?
Phil Dougherty 37:40.417 Sure. Yeah. I think, for us, our customers are really coming with the desire to have that cloud-agnostic ability. And I think the part they’re surprised about is how easy it can actually be when you build tools that are made to function in that way. If you try to just go out and rig things together on your own, it can be a lot of work. And that’s why we’ve decided to build a business around it, because we know it’s a challenge and we know that it’s something a lot of people are going to want, and, as time goes by, more and more people are going to want. I mean, a couple of years ago, people would have said to us, “Why would I ever want to use multiple clouds? I’m perfectly happy on AWS.” But I think that just seeing the ease of use and the flexibility is what is really going to drive that forward. That, and things like a couple of months ago, when Walmart told their partners not to run on AWS. If you already had a multicloud strategy in place, then that really wouldn’t be a problem for you. But if you don’t, it really can be. Sorry about that. I lost my audio there. Yeah. It’s an important thing to do, and a lot of our customers are just surprised that it can be easy once you have the tools in place to actually do it.
Chris Churilo 39:02.907 Yeah. And I think it’s probably more prevalent across the enterprise that they have these various environments that they’re running on. I mean, there must be very few enterprises that are just using AWS or a single provider. They’ve got so much legacy stuff that could still be running on their own gear. They could have some teams on AWS, some people on other stuff. So it just seems more natural to me for them to consider that they have to support a multicloud environment. One other thing it got me thinking about, when I think of the enterprises, is that with this kind of automation and this kind of view into all these environments, this could also help them with understanding their security posture. Do you have customers that are starting to consider that as well?
Phil Dougherty 39:54.086 Yeah. Definitely. So just getting back to one thing you just said: definitely, when it comes to the enterprise, they’re the ones that are really examining this multicloud stuff the hardest. And a lot of the enterprises we’re talking to are already trying to run across multiple providers as well as their own internal data centers. And they’re, like you said, doing that across multiple teams, and they all have different processes for how they’re interacting with those cloud providers. Doing that definitely causes a couple of problems. One is being able to keep track of your spend and where you’re wasting your resources as a whole organization. The other is, just like you mentioned, security. Without having a unified way of managing things and being able to put processes in place that are the same across all of the providers that you’re running on, you can definitely get into the weeds. Because really, you can think of it as: you’ve got a second provider, you’re doubling your work. You add another provider and now you’ve tripled the work of really just running on top of one. So, yeah, definitely I think that’s something that’s a major concern, especially for enterprises and really anyone that has to deal with PII data. But I think things are really finally starting to come around and get to a place where the tools are there to make it feasible.
Chris Churilo 41:18.282 Yeah. It’s funny to me. It seems so contradictory. I mean, here you have people trying to adopt cloud because of ease of use and scalability. But then, like you said, when you’re in this multicloud environment, you could be doubling or tripling your work, and, all of a sudden, that ease of use goes out the window. So why bother monitoring containers? Those things just kind of come and go, so what’s the point? Why do we even have to care about them?
Gunnar Aasen 41:46.067 I can take that one. Yeah. So, I mean, why monitor containers? Really, it’s monitoring your services these days. I mean, containers are really just the build artifact of today, and definitely, I think containers have helped at an organizational level when it comes to the software development life cycle, and keeping everybody on the same page, and working on the same kind of project in the same state. But when you’re getting your containers up there to run in the cloud, or on bare metal, you’re adding a whole other layer on top of what you already had to monitor in the past. So if I was monitoring just a bunch of services running on a VM somewhere, it was actually difficult to understand how each individual service running on that server was impacting the overall health of the server, because there weren’t standardized hooks into seeing how each service was performing. Now, with containers, we’re able to do that, because we can say, “Okay. This container, I can see its health as a unit,” and we can better understand how each service you’re running is functioning. But that said, it really is a whole other layer, almost another operating system, on top of what you already need to monitor, which is your actual VMs or physical host systems, and that can add additional complexity as well. It’s one of those things where, as we add more abstraction, it’s definitely going to solve a lot of problems, but it’s going to introduce new ones as well, and that’s why it’s important to have tools, like you guys have developed, to be able to really introspect and understand how things are working under the hood.
Chris Churilo 43:37.977 That’s fun. Yeah. I completely agree with you. And I think you also said something that is important, at least something that we think about a lot, which is that there are a lot more things that you have to monitor. Basically, the faster you go, the bigger and more complex these environments become. Then it’s really going to be key that you really understand what’s going on so that you can do the right thing at any given moment.
Gunnar Aasen 44:04.257 Definitely.
Chris Churilo 44:05.285 I think we can wrap it up at this point. Any last-minute thoughts or conclusions on today’s demonstration from either of you?
Phil Dougherty 44:16.189 Yeah. I mean, I think that this is something that’s really important for people to pay attention to. I think the multicloud thing is really just starting to hit its stride, and it’s going to be really the theme of the next couple of years. We had the VM stuff happen years ago, and then it was cloud, and then it was containers, and then orchestration. And now it’s: how do we orchestrate across multiple clouds? So everything is kind of coming full circle. So a tool like ours, I think, is going to be really necessary to be able to do that. And a tool like yours is going to be extremely necessary to do it as well, because, as we said, we can’t get into this world of even more complexity, even if it’s going to give us really awesome benefits, without really understanding how things are functioning. So I encourage everybody to take a look at both products and give them a try and see how things fit together. And I’m just really happy to be here today to talk about this really important issue.
Gunnar Aasen 45:17.638 Yeah. I also want to add that, like Phil mentioned, you should really go out and just try both of these products. I mean, Containership is just excellent in terms of having an easy place to start and get up and running in a multicloud environment. It’s really very easy to spin something up in their UI and get started. And I think that’s going to be a longer trend, for monitoring as well as multicloud deployment and various other things in this new containerized world: a lot of the legwork of handling all these different pieces, which usually would be hard to do, will suddenly become very easy, and it will be very easy to do these super-complicated setups that used to take a long time.
Chris Churilo 46:16.834 Yeah. Definitely. And I think, when you saw the demo, everyone could appreciate how simple a view the Containership UI has been able to provide across such a complex environment. It really is important that we have that single pane of glass that both of these guys described, so that we can orchestrate across these complex clouds, while ultimately giving our enterprise customers the benefit of being able to support their applications and their customers effectively. So with that, if you have any more questions, feel free to email me and I’ll make sure that I pass them along to Phil and to Gunnar, and I will post this video later on today. And if you have any other thoughts on what you’d like us to demonstrate in regards to Containership and InfluxData, also reach out to me and let me know. We’ll make sure that we can make that happen. All right. With that, I want to thank everyone for joining us today, and I hope you have a wonderful day.
[/et_pb_toggle]