Why Monitoring Your Container Orchestration is Good for Business
Everyone is talking about how containers are the way to deploy and run services. They help developers ship higher-quality code faster by providing a consistent environment across the different phases of development (test, staging, prod), and they are a perfect mechanism for pushing application changes out regularly. Containers and container orchestration may be new to your organization, but they still require monitoring to ensure they are set up for optimal performance. However, traditional monitoring solutions cannot handle the large load of ephemeral services and server instances, and they will not help you uncover the blind spots in your workflows.
In this webinar, Chris Rosen, IBM Container Service Offering Manager, and Gunnar Aasen, Partner Engineering at InfluxData, provide an introduction to the IBM Container Service, Kubernetes, and InfluxData. From there, they show how to use the two together to set up and monitor your containers and microservices, properly manage your infrastructure, and track key metrics (CPU, RAM, storage, network utilization) as well as the availability of your application endpoints.
Watch the Webinar
Watch the webinar “Why Monitoring Your Container Orchestration is Good for Business” by filling out the form and clicking on the download button on the right. This will open the recording.
Transcript
Here is an unedited transcript of the webinar “Why Monitoring Your Container Orchestration is Good for Business”. This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw; we apologize for any transcription errors.
Speakers:
- Chris Rosen: Container Service Offering Manager, IBM
- Chris Churilo: Director Product Marketing, InfluxData
- Gunnar Aasen: Partner Engineering, InfluxData
Chris Rosen 00:00:01.911 Excellent. Thank you very much, Chris. Thanks, everyone for joining. My name is Chris Rosen. I am an offering manager within the IBM Cloud responsible for our container service and our container registry. So I’m very excited to be here with InfluxData today. Also, Gunnar will be on later for the second part. I wasn’t sure if he was going to jump in there, but he will take over in the second half here in just a second.
Gunnar Aasen 00:00:30.917 Chris, I am here.
Chris Rosen 00:00:33.151 Okay, great.
Gunnar Aasen 00:00:33.914 Also, I’m having some audio trouble.
Chris Rosen 00:00:36.385 No problem. So our agenda today: we’re going to talk about understanding the importance of containers. With any luck, this is not a new topic for everyone attending and listening after the fact, but we’ll go through some of the main areas of why we’re seeing a lot of use of container technology. We’ll talk about the IBM Bluemix approach to a container service, then understanding the time series problem. This is very important: as we start to really grow our container and microservices architectures, we need something like InfluxData to be able to get that real-time understanding and troubleshooting capability. And then we’ll do a demo from both the IBM side as well as the InfluxData side, and lastly, wrap up with any questions that you may have. Containers: so why does this really matter? The first thing I wanted to get to is a little history lesson. Containers are not new. Depending on your definition of container technology, they are 30-plus years old. But they’ve really come to the main stage recently with the likes of Docker and rkt (Rocket), and even Cloud Foundry, each of which has container technology. From our perspective, we’re really excited about the Open Container Initiative (OCI) that was launched in June of 2015 to bring the standards together and get a more formal definition around these essential building blocks at the OCI layer. So we’ve got a common construct for an engine, a registry format, and an image format, which ensures we’ve got portability across these multiple environments.
Chris Rosen 00:02:22.741 Container usage really started out as kind of a developer tool, a way to re-package an application and move it from one environment to the next. It really eliminated the issue that I’m sure we’ve all been through, where something works in my environment but it doesn’t work in yours. That inevitably leads to a lot of finger-pointing about the real crux of the issue: “How do we determine it?” Now with containers, we can package up all of our dependencies and our entire application and move it as one entity, and because we’re building on these open standards, we’ve got the portability to run that same workload on my laptop, your laptop, or another public cloud, seamlessly, regardless of the underlying infrastructure and environment. And our developers like to use this because they can ship software much more quickly. They’ve been able to integrate container technology into their existing CI/CD pipelines and move very quickly, whether it’s multiple versions and releases per day, adding new fixes, or scaling out different components; it gives them much more flexibility.
Chris Rosen 00:03:33.944 The second thing is really around resource efficiency. Because containers are extremely lightweight, we’re not moving around full VMs with an entire operating system and every patch. We’re becoming much more efficient, much smaller, and building on that layered technology that Docker has provided us. That’s where a lot of the efficiency comes from: we’re using those shared layers and leveraging the underlying infrastructure much more efficiently. So instead of only a handful of VMs, we can now run the same workload with hundreds of containers if we need to. And the last point, which I’ve already touched on but is very important, especially with the customers that I talk to, is around portability and eliminating vendor lock-in. Customers want to be able to take that same application and move it amongst different environments, whether that’s a multi-cloud strategy or a hybrid-cloud strategy: to take that workload, deploy it in different areas, and have it run consistently without any special tweaks from my laptop to the cloud or vice versa, to just take that application and run it as-is.
Chris Rosen 00:04:46.583 I wanted to spend a quick minute giving an introduction to the IBM Cloud. When we look at our cloud offerings, we look at them in totality, from the underlying infrastructure where we’ve got, of course, the [inaudible] and the networks and the storage and all those things you think about when you think “typical cloud.” But as we go up the stack, what we are doing is providing a variety of choices for our users, because different compute will address different use cases and scenarios. Because I own the container service, I’d love it if you ran everything in a container, but realistically, not every workload can or should run in a container today. So in IBM Cloud, there’s a wide range: OpenWhisk, which is our event-driven serverless technology; Cloud Foundry, which is an opinionated way of doing containers, but there are use cases where that may be the best technology; containers; virtual machines; and bare-metal servers in the cloud, if you need something that’s very performant. Then, as we go up the stack, you can see things where you’re leveraging higher-level services, whether it’s Watson, because you’re building a cognitive application and want to add some intelligence to a chatbot, or weather data, or IoT, or blockchain. Any of these capabilities you can consume independently of your compute choice at the lower layer. So whether you’re using containers, or Cloud Foundry, or event-driven with OpenWhisk, you can consume Watson services from the platform in any of them. As you build out, you can use those services to enhance the applications you’re building.
Chris Rosen 00:06:39.894 The IBM Bluemix Container Service: we launched our container service initially back in June of 2015 as a multi-tenant container-as-a-service offering. Now at the time, Kubernetes had not hit 1.0 yet, and Swarm was not generally available. So we had to develop our own orchestration to handle things like placement and deployment and scaling and isolation. We knew that, eventually, the ecosystem would catch up. From an IBM perspective, we decided on Kubernetes as our container orchestration tool of choice for a few reasons. One, today, it’s very feature-rich. Obviously, that’s a point-in-time statement. But also, and most importantly to IBM, is the open ecosystem around Kubernetes. It was obviously seeded from Google, but today, less than 50% of the commits are from Google, Inc. So it’s a very open and vibrant ecosystem around Kubernetes, and they make it very easy to deploy and manage. So what we’re doing is combining Docker, that OCI layer, and then Kubernetes as we move up the stack to the Cloud Native Computing Foundation, and providing a user experience for not only day-one deployment, but then ongoing management and maintenance of your Kubernetes cluster, ensuring that it’s not a fire-and-forget environment. It’s an actual, “We’re going to help you manage that going forward.” And then having the flexibility to consume cloud services, whether it’s from IBM Cloud, like I said, Watson or something else, but to really easily consume any cloud API running anywhere.
Chris Rosen 00:08:20.534 On the left-hand side, we’ve got what we’ve highlighted as the six main benefits of Kubernetes, and you’ll get these regardless of where you’re running Kubernetes, whether it’s in our cloud, in any other cloud, or on-prem. And then on the right-hand side, these are the capabilities that we’re providing in addition, or on top, as part of our container service. I’ll go through these very quickly. The first is the ability to design your own cluster so you can scale up or down. There are different flavors, which are different predefined sizes. Container security and isolation is very important because you can choose the isolation of the worker nodes in your cluster. We’ve also got a component called Vulnerability Advisor which will introspect every layer of every image for known vulnerabilities as well as possible misconfigurations, and we also do live container scanning. The simplified cluster management: this is, like I was talking about, helping you manage it beyond day one. After we deploy it, how do we manage it going forward? Because as an end user, I don’t want to learn how to manage and maintain and upgrade open source projects that are coming from Docker and Kubernetes. I want that as part of my managed service. It’s a 100% native Kubernetes experience. So you’ll see here, the sample app that I’m using in the demo today is something that you can run in any Kubernetes environment. You can use the same CLI and API consistently. I’ve talked about leveraging the IBM Cloud services.
Chris Rosen 00:09:50.588 And then lastly, the integrated operational tools. That’s one of the things that I’m most excited about with our new architecture, with this single-tenant Kubernetes cluster: our customers have the flexibility and choice to bring the tools that they’re familiar with. And that’s why I’m really excited to be here with InfluxData today, because our customers can drop it into their cluster and collect that data regardless of whether they’re running in IBM Cloud, or Amazon, or on-prem, and have it all collected in one InfluxData instance, so I can look at all of my workloads regardless of where they’re actually running. So that’s what I’m really excited about, and glad to be here today. All right. So with that, I’ll hand it over to Gunnar.
Gunnar Aasen 00:10:40.025 All right. Thank you, Chris. That was a great introduction. So I just want to go over the monitoring use case for containers: both why monitoring is a specific problem when you’re switching to a containerized environment, such as Kubernetes on IBM Bluemix, and also what InfluxDB and InfluxData can provide, and go over our whole stack. To start with what makes containers special in a Kubernetes environment: containerized environments are pretty different from your stock server-based installations. One of the big differences is that, typically, you’ll have a lot of versions of containers running at any given time. And if you’re running any kind of containerized service, it typically means that, especially once you get to a certain scale, you could have as many as tens to hundreds of containers running on a single server, and you might have tens to hundreds of actual worker nodes and servers in your clustered environment. What this means is that you’re essentially running a ton of stuff, and potentially many more individual deployments than you would be in a typical single [inaudible]-process-per-server kind of setup. And in addition to that, especially with Kubernetes workloads, you’ll end up seeing a more dynamic environment where, if containers die, they’re brought back up, and they might be brought back up on different nodes. Depending on how large your environment is, you might even see servers coming in and out, worker nodes coming in and out, depending on various aspects of your installation.
Gunnar Aasen 00:13:02.935 And so one of the problems with having a significant number of services in a dynamic environment is that it’s very hard to get complete visibility into the system as it’s running. And I think that’s really where monitoring becomes a very crucial tool in managing any kind of clustered, especially Kubernetes, installation. One of the challenges with monitoring containers is that, essentially, you’re creating a bunch of data, assuming that you’re trying to take a look at each of the containers that are running and checking things like uptime, CPU usage, memory usage, and even volume disk usage. In addition to that, it’s really important to get real-time insight into the data that you’re collecting, and you need to be able to expose that data in as close to real time as possible so that you have an up-to-date view of what’s actually happening in the cluster, because cluster services like Kubernetes tend to have, again, pretty dynamic environments that change fairly rapidly depending on the situation. And specific to monitoring containers, there are some other things that are useful, such as expiring data: if you’re collecting a lot of data, most container monitoring data is only useful for basically the first hour, maybe a couple of days or weeks, and then it rapidly becomes less interesting.
Gunnar Aasen 00:14:57.525 After that, there are also some more unique requirements for what you do with the time series data that you’re collecting from a monitoring standpoint: essentially, you want to create some pretty graphs, things like that. And, typically, that involves looking at a chart over time, which is fairly hard to do with many existing solutions. I also want to mention [inaudible] InfluxData and our platform. InfluxData is the primary developer of InfluxDB, which is an open source time series database, and we’ve created a whole platform around it. We have a few other components that make up what we call the TICK Stack. That’s composed of Telegraf, which is our collection agent: a process that you essentially run on all your worker nodes, and it will [inaudible] all of the containers on that node, collect those metrics, and ship them back to an InfluxDB database running as a service on a Kubernetes cluster, for instance. And then InfluxDB is our time series database. It’s very high-performance; it can ingest up to [hundreds of thousands of] points per second on a single node. And then InfluxData has a clustered [inaudible] enterprise InfluxDB solution as well.
Gunnar Aasen 00:16:49.226 The other components of the TICK Stack are Chronograf, which is our visualization and UI component, and then Kapacitor, the last component of our stack, which is essentially an event-processing and metrics-analytics engine. So for [inaudible] alerts, or, say, running [inaudible] time series through various filters to get more analytics out of your data, Kapacitor is the way to go there. And InfluxData has some basic stock deployments for Kubernetes. There are [inaudible] services that, today, we’re going to demo: spinning up a Kubernetes cluster on IBM Bluemix and then easily deploying all of these services to Bluemix. And we’re also working on several other integrations with Bluemix, both integrating our hosted InfluxCloud solution as another service, and various other things that we’ll be rolling out over the next couple of months. So with that, I’ll give it back to Chris to do a quick demo.
Chris Rosen 00:18:20.888 All right. Can you pass me back the presenter baton? There we go.
[silence]
Chris Rosen 00:18:47.327 All right. So with any luck, we’re now seeing my screen. The first thing I want to do is walk through the Bluemix console. I’ve logged in here to bluemix.net, and I can see everything that I have deployed. I’ve already got two clusters that are up and running, and I can see some information here. But the first thing I want to do is navigate through the catalog and kick off a deployment, and then while that’s churning in the background, we can dig a little bit into what’s already there. On the left-hand side, I’ve got all of my categories to choose from. I will click “Containers,” and my Kubernetes cluster. Now as this page loads, you can see that I’ve got my defaults, what we call our lite [inaudible]. So you can have a free cluster with some limited capacity, and you can keep that forever. If you really just want to kick the tires, you can go ahead and do that, but I will create a standard cluster, AKA paid, so that we can see some more options. So I give it my name and my location. I’m in our US South region, so I’ve got two data centers, DAL-10 and DAL-12, that I can select. As far as my machine type, there are different sizes, and like I said, we can have different worker node flavors in this cluster, but for a demo, I will just use a micro. I can choose the number of workers. In this case, I’ll just throttle it down to one since, again, it is a demo.
Chris Rosen 00:20:33.194 Here are my public and private VLANs. Now, I own the network, so I can control and create different networks and different topologies that I want for my different clusters and their workloads. If you don’t have VLANs already, then we’ll create some for you; that’s no problem as well. And then the last thing down here is the hardware. Like I said, “shared” is a standard cloud VM: it’s a single-tenant VM, but with a multi-tenant hypervisor and hardware underneath it. And the dedicated model is single-tenant top-to-bottom. So if you need that isolation, even though you’re running in public cloud, you can select “dedicated” and ensure that you are the only user on that physical piece of hardware. So I’ll go ahead and click “Create cluster,” and it will go off and start creating that guy for me. Now, the example that I’m going to use is, let me show this real quick, the standard Kubernetes Guestbook example. So anyone that has gone to kubernetes.io and gotten started, this is one of the samples that they provide. And the reason that I like it is not because the Guestbook is such a complex application that will blow your mind, but to really show that portability: any sample app that you can run in a Kubernetes environment anywhere will run in the IBM Cloud as well. And that’s really what I like, that portability to take those workloads and move them from one environment to the next.
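For readers following along, here is a minimal sketch of the same cluster creation done from the CLI instead of the console. The flag names follow the Bluemix container service plugin of this era, but treat the cluster name, region endpoint, machine type, and VLAN IDs as illustrative placeholders rather than exact values from the demo.

```bash
# Log in to Bluemix (US South endpoint assumed) and create a one-worker
# standard cluster, mirroring the choices made in the console.
bx login -a https://api.ng.bluemix.net
bx cs cluster-create \
  --name demo-cluster \
  --location dal10 \
  --machine-type u2c.2x4 \
  --workers 1 \
  --hardware shared \
  --public-vlan <public-vlan-id> \
  --private-vlan <private-vlan-id>
```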
Chris Rosen 00:22:08.378 Now, back at the IBM Cloud dashboard here on Bluemix, this cluster is currently deploying. So while it is churning, let’s take a look at something that I already have deployed. You can see some overview information here as that loads and hopefully refreshes for you guys at a decent rate. I can see my cluster ID, which is going to be very important when it comes to troubleshooting; I can see my Kubernetes version, the location, and my ingress subdomain. Each cluster that you create gets its own ingress subdomain with its own key, and you can upload your own certificate if you want to. On the right-hand side, I’ve got the number of worker nodes: three that are ready and one that’s currently pending. I can dig deeper into that over here on the left-hand nav with the worker nodes. I can get more information; I can see, like I said, the machine type, my VLANs, and the hardware type as well. I can also select one or more of my nodes: I could reload it, reboot it if it had an issue, delete it, or upgrade it if there were an upgrade available. And then the last tab shows the services. So if I want to consume a service and bind it to that cluster, again, a Watson or some other service, I can do so here with these commands, and then I can consume that from my application.
Chris Rosen 00:23:41.510 The next thing I will do is go over here to my command line. What’s nice is that, obviously, the UI is great, but the reality is most of the time you’re not going to operate in the UI; you’re going to drive things either through the CLI or the API. So what we’ve got here is “bx,” which stands for the Bluemix CLI. I can do “cs” for container service, and I will do “clusters,” which is basically going to show all of the clusters that I have running. We can see here’s the first one that I walked through, and here’s the one that I just deployed through the UI, currently in a deploying state. Then if I want to get more detail on the workers in this cluster, I can do a “bx cs workers,” and this will come back with the same thing that we just saw through the UI, so I can get all of the actual details. Now, because I’m pointing at my cluster running in the cloud, if I do a “kubectl get services,” and say I’ve segregated my workload based on namespace, for the demo we’ll just quickly show that there’s nothing deployed here in this particular namespace yet. Now, if I do a “kubectl create” of the Guestbook and deploy it to my namespace of “demo,” this will go through and take that Guestbook, that all-in-one sample where I’m basically deploying my Redis master and slave and then my front-end service, and deploy those pods to my cluster running out in the IBM Cloud.
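Roughly, the CLI flow narrated here looks like the following. The cluster name is a placeholder; the "demo" namespace is as used in the demo, and the manifest is the stock all-in-one Guestbook file from the Kubernetes examples repository.

```bash
# List my clusters, then the worker nodes of one cluster.
bx cs clusters
bx cs workers demo-cluster

# With kubectl pointed at the cluster: confirm the namespace is empty,
# then deploy the Guestbook sample (Redis master/slave + front end).
kubectl get services --namespace=demo
kubectl create -f guestbook-all-in-one.yaml --namespace=demo
```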
Chris Rosen 00:25:51.396 So now as we go back and take a look, we can see that things are starting to spin up. By default, everything is on this private, non-routed IP space, and I can see that these things are coming up. Now, in my YAML, I defined the front end to be accessible via NodePort. So I can do a “bx cs workers,” get a public IP from my cluster, and take any one of these. Come back over here, and as you may recall, the front end runs on port 30084. And there, my Guestbook is up and running, and I can use it to say, “Hello.” So like I said, a pretty basic deployment, but the most important thing, again, in my opinion, is that portability. I can take that workload and move it, regardless of where I want to deploy it, because the customers I talk to, even with a simple deployment like this, want to be able to take it and move it from one environment to the next and do so seamlessly. So that’s really the easy part of the demo. The next thing, when Gunnar comes back in here, is really, “How do I actually monitor that?” So let me stop sharing and hand control back over to Gunnar so that he can walk through what becomes important once we’re up and running: how to actually monitor it, and how to go forward with something a little more complex than Guestbook.
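Reaching the front end as described comes down to grabbing any worker's public IP and hitting the NodePort the YAML pins (30084 in this demo); a quick sketch, with the cluster name again a placeholder:

```bash
# Find a worker's public IP, then hit the Guestbook front end on its NodePort.
bx cs workers demo-cluster
curl http://<worker-public-ip>:30084   # or open this URL in a browser
```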
Gunnar Aasen 00:28:04.463 Thank you, Chris. And yeah, let me share my screen.
[silence]
Gunnar Aasen 00:28:16.620 Do not do that.
[silence]
Gunnar Aasen 00:28:46.107 All right. So thank you, Chris. Like Chris mentioned, one of the things we like about Bluemix is that it really is easy to get everything up and running and put together a very diverse environment with very little effort. Chris demoed how to create a new Kubernetes cluster; I’m going to show, really quickly, how to deploy the TICK Stack on a Kubernetes cluster, and then go over some of the components of the TICK Stack running on the cluster so you can get an idea of how that works and what kind of stuff you can do with it, and also emphasize how easy it actually is to get up and running using Kubernetes on Bluemix. So I actually have a new cluster up and running with four nodes. This is a fresh cluster; I just spun it up. And if we go over here, we can see we’ve got four nodes, and all these nodes are running. So I’m going to come over to my command line. Should be able to switch here.
Gunnar Aasen 00:30:26.754 All right. So right here, as Chris showed earlier, I’ve got the Bluemix CLI installed, and I’ve basically gone through the first initial commands to set up my environment to use the new cluster that I’ve just spun up. From here, what I’m going to do is use what’s called Helm, with Helm charts, which is a sort of package and configuration management tool, to quickly deploy the TICK Stack onto my cluster. Actually, I’ll just do a “kubectl get nodes” first, just to show you that it’s running here. And then I’m going to go through it: essentially, we have some pre-packaged Helm charts that you can use at github.com/influxdata/tick-charts, and these are really useful to get up and running really quickly, like in the last slide of this webinar. Essentially, all you need to do is install Helm on the cluster; once you set up your environment on the command line, you can just do a “helm init.” And then once that’s installed, you can pretty much go through here and install all these charts. And yeah, there’s a lot of stuff there.
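As a rough sketch, the bootstrap described here looks like this, assuming the Helm client is already installed locally and kubectl is pointed at the new cluster:

```bash
# Install Tiller, Helm's in-cluster component (this is Helm v2-era tooling).
helm init

# Grab InfluxData's pre-packaged charts for the TICK Stack.
git clone https://github.com/influxdata/tick-charts.git
cd tick-charts
ls   # one chart per component, e.g. influxdb/, telegraf-ds/, chronograf/, kapacitor/
```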
Gunnar Aasen 00:32:31.399 But the charts are fairly easy to install and so-oh, that’s why. I’m not in my right folder.
[silence]
Gunnar Aasen 00:33:03.563 So I’m going to install InfluxDB, and you’ll see that it just starts running, and also install Telegraf on all of the nodes. The way this demo is set up, it’s actually going to install Telegraf and configure it to start reporting to InfluxDB. And once I’ve installed all of these Helm charts, essentially, I have the whole TICK Stack up and running. That includes the InfluxDB service running InfluxDB, which is our database collecting all the metrics that are being sent to it from Telegraf, and Telegraf is basically pulling the metrics from all of the containers and the host machines. And then we also have Kapacitor, our alerting engine, set up as another service, and then Chronograf, another service, that we’re going to use to visualize everything. And to do that, we pretty much just run this command right here that gets printed out automatically so that we can proxy-oh, you’ve got to check in first and make sure that the Chronograf service comes up. We’re basically going to proxy our Kubernetes Chronograf service to my localhost. And I’ll show you what it looks like to have the TICK Stack running.
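A minimal sketch of those installs, using the charts from the tick-charts clone above. The release names and the "tick" namespace are illustrative; the telegraf-ds chart is the DaemonSet variant that puts one Telegraf on every node.

```bash
# Install each TICK component as its own Helm release.
helm install --name data --namespace tick ./influxdb/
helm install --name polling --namespace tick ./telegraf-ds/   # one Telegraf per node
helm install --name alerts --namespace tick ./kapacitor/
helm install --name dash --namespace tick ./chronograf/

# Wait for the Chronograf pod to come up, then proxy it to localhost:8888.
kubectl get pods --namespace tick
kubectl port-forward --namespace tick <chronograf-pod-name> 8888:8888
```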
Gunnar Aasen 00:35:02.675 And there’s just some small configuration here to set up our cluster: we’re just going to tell Chronograf where the InfluxDB service is within our cluster. And we don’t have [inaudible] turned on for this, so we’re just going to add our source. And that’s pretty much it for installing the TICK Stack. So what you can see here is that we now have all four of our nodes listed down here as different hosts in this cluster, and we can go in and immediately see some initial statistics starting to be collected. So this is a very powerful way to get up and running and get metrics almost immediately. And once you get all this stuff set up, fairly quickly you can start doing some more powerful things, like going through the data explorer to easily create various visualizations of your cluster. For instance, we can find CPU usage for all of our hosts very, very easily and immediately get graphs. There’s also a dashboarding function within Chronograf, so we can set up dashboards and very easily put together an overview of all of our various services. We can, essentially, recreate that same CPU graph over here, though there’s not really much data there yet.
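For a sense of what the data explorer builds under the hood, here is the kind of InfluxQL query behind such a CPU graph, run by hand with the influx CLI. The measurement and field names are Telegraf's defaults; the hostname assumes the "data" release in the "tick" namespace from the sketch above and would be reachable from inside the cluster (or via a port-forward of 8086).

```bash
# Mean idle CPU per host, in one-minute buckets over the last 15 minutes.
influx -host data-influxdb.tick -execute \
  'SELECT mean("usage_idle") FROM "telegraf"."autogen"."cpu" WHERE time > now() - 15m GROUP BY time(1m), "host"'
```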
Gunnar Aasen 00:37:11.358 But we can do a whole bunch of other stuff. We can create a graph for memory usage as well. The way InfluxDB works is that we’ve got a database and measurements for the various things that we’re collecting; in this case, we’ll do a graph on the memory within the containers we have running, and we’ll do memory usage. So essentially, it’s very easy to get up and running. Additionally, we also have Kapacitor, which is our alerting engine. And Kapacitor makes it very, very easy to-and I’ll quit, it’s not working. Anyway, Kapacitor makes it very easy to set up alerts on [inaudible], say, CPU thresholds, various other things like that. We can also do user management, and see running queries and databases as well. So that is a quick overview of the TICK Stack and of easily getting up and running [inaudible]. So I’m going to pass the ball back to Chris to wrap it up.
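As a sketch of the kind of CPU-threshold alert gestured at here, this is Kapacitor's stock stream-alert pattern: a small TICKscript registered and enabled through the kapacitor CLI. The task name, log path, and threshold are illustrative.

```bash
# Write a minimal TICKscript: alert when any CPU falls below 10% idle.
cat > cpu_alert.tick <<'EOF'
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 10)   // i.e., more than 90% busy
        .log('/tmp/cpu_alert.log')
EOF

# Register the task against Telegraf's database/retention policy, then enable it.
kapacitor define cpu_alert -type stream -tick cpu_alert.tick -dbrp telegraf.autogen
kapacitor enable cpu_alert
```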
Chris Rosen 00:39:00.995 All right. So here we’ve got some links to get started and find some more details, on both the IBM side and the InfluxData side. And you can see how easily you can get both sides deployed and running very quickly. With InfluxData leveraging Helm to install things, we think that’s a great standard for deploying regardless of environment. So, some links to take you a little bit deeper. And lastly, we’ll open it up: if there are any questions, we’d be glad to address those.
Chris Churilo 00:39:37.932 Okay. Awesome. Thank you, guys. So at this point, we’re going to open it up to questions, and if you do have a question, please feel free to throw it in either the chat window or the Q&A panel. Just to get things started: Chris, you talked about how important it is to have a tool like InfluxData in these types of environments to give you an understanding of the real-time state, what’s actually happening in the environment. So what are some recommendations you would give to people getting started with pulling together an environment like Bluemix and InfluxData? What are some of the things that you’ve learned in your experience that you wish someone had told you to watch out for?
Chris Rosen 00:40:28.357 I think the most important thing to understand going into it is that there are a lot of benefits to a microservices architecture, but it also brings a lot of complexity. And if you don’t have that real-time monitoring that InfluxData provides for you, it’s going to be very difficult to understand the environment, to understand where latencies are coming in and where different components of your architecture are breaking down. You really need a solution that is container- and microservice-aware, that is kind of container-native, to be able to provide those insights, because without it you’d essentially be lost in determining where the breakdowns and issues are coming from in these complex microservice architectures. So that’s the thing: it’s great to re-factor and develop cloud-native in this architecture, but you really need to bring the tools in from day one so you understand the environment as the complexity grows.
Chris Churilo 00:41:36.772 But I imagine, even though a microservices architecture can bring a lot of complexity to the picture, the benefits of containerization still probably far outweigh the costs.
Chris Rosen 00:41:50.809 Right, absolutely. And I don’t mean to imply that you shouldn’t go down this path. There are a ton of benefits to containers and microservices: having teams own individual components so they can develop and release new content at their own rate and pace without adversely affecting other components of the architecture, making updates or changes, or scaling up or down as different components handle more of the workload. So there are definitely many more pros to this type of architecture than there are cons; just be aware that you need to develop and architect in a way that handles failure, using something like Chaos Monkey. And something like InfluxData will give you those insights and help you identify where things are starting to fail within your architecture.
Chris Churilo 00:42:45.049 And then, Gunnar, do you have any other recommendations or sorts of best practices that you would recommend to people, or you wish someone recommended to you when you were first getting started with trying to monitor a bunch of containers?
Gunnar Aasen 00:43:00.259 Yeah, I think if you’re just getting started, the Helm charts are really the place to start. In addition to what I showed in the demo, there are also a bunch of stock Helm charts included in the general Helm repository of charts for each of the components of the TICK Stack, and those are good places to start getting your hands dirty with deploying the TICK Stack on Kubernetes and getting everything set up. Now, for the ones in the [inaudible], you’ll have to do some [inaudible], of course, but really, I would say start with Helm; I find it significantly easier to get everything up and running very quickly. And that’s something that wasn’t completely obvious when I first started using the Kubernetes service, also because Helm is a fairly recent addition; it’s really sort of a new thing.
Chris Churilo 00:43:57.423 Excellent. Excellent. And I know we do have a number of webinars that we’ve done previously that do go over those Helm Charts for the various components of the TICK Stack. So we do recommend that you take a look at that. In fact, we will put those links on the main InfluxData, IBM Bluemix page to make it easy for everybody, so you can access them. Well, I do appreciate everyone’s time today. I apologize about some of the audio issues we had in the beginning. And I will post the recording of this webinar later on, and you’ll get an email with a link to that. And as always, feel free to reach out to me if you have any other suggestions for a topic that you’d like us to cover. So I’d like to end this by thanking our really great speakers. Thank you, Chris, from IBM. And also thank you, Gunnar, from InfluxData.
Chris Rosen 00:44:45.170 Thank you very much.
Chris Churilo 00:44:47.008 Awesome. Bye-bye.
Gunnar Aasen 00:44:48.198 Thanks, Chris.