The I/O of DevOps and Where a Time Series Database Fits In
Session date: Apr 24, 2018 08:00am (Pacific Time)
In this webinar, Larry Gordon from xOps will share his experiences working with his clients to deploy the latest tools and implement practices across their organizations. Their focus has been on developing open source products to revolutionize technical operations by gathering and analyzing all operational metrics, logs, alerts, and business data in one place, in real time with a time series database like InfluxDB. This allows teams to build self-healing systems through automated issue resolution and provide visibility across business metrics with predictive analytics to determine trends before the competition.
Watch the Webinar
Watch the webinar “The I/O of DevOps and Where a Time Series Database Fits In” by filling out the form and clicking on the download button on the right. This will open the recording.
Transcript
Here is an unedited transcript of the webinar “The I/O of DevOps and Where a Time Series Database Fits In.” This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
Speakers:
- Chris Churilo: Director Product Marketing, InfluxData
- Larry Gordon: Chief Revenue Officer, xOps
Chris Churilo 00:00:16.273 All right. As promised, it’s three minutes after the hour, so we’ll go ahead and get started. We’ll probably be getting a lot more people joining us a little bit later. My name is Chris Churilo, and today, we have our partners, xOps, who will be presenting for today’s webinar. And I just want to remind everybody, if you have any questions, please put your questions in the chat or the Q&A section and the guys will get them answered before the end of the webinar. And I am recording this session, so you can take another listen to it, and it will get posted before the end of today after an edit, and we’ll send an email out tomorrow so you can share that link with any of your friends. So without further ado, I will hand it over to the guys at xOps, and they can start by introducing themselves.
Larry Gordon 00:01:03.334 Hey, thanks Chris. Good morning. Good to be working with you this morning. Always a pleasure. Hi-
Sean Mack 00:01:09.717 Hey, Chris.
Larry Gordon 00:01:11.084 -to the I/O of DevOps. I’m Larry Gordon.
Sean Mack 00:01:13.981 I’m Sean Mack.
Larry Gordon 00:01:15.910 And we’re going to be talking about the I/O of DevOps. We’ll be looking at how you break down information silos using tools such as InfluxDB and the TICK Stack. Before we get started, I’ll tell you a little bit about us. We’re xOps. xOps helps companies run their technology more efficiently and more effectively. We build open source tools and offer a suite of related services to help your business. Our services include DevOps consulting, tool implementation, and outsourced tier 1 and tier 2 managed services. Sean is our CEO, and I’m the Chief Revenue Officer.
Larry Gordon 00:01:59.204 So just to set up the problem: what we’re going to do today is set up the problem with our problem statement, and then we’re going to talk about how to DevOps your data. Now, no one really DevOps their data. We’re going to talk about how you more effectively use data and break down information silos using various types of processes. We’re a consulting firm, so we care about processes and tools, like the TICK Stack. And we’ll talk about solutions, and where we go from here. So we always like to set up the problem by talking about the extent of the problem. In our consulting work, Sean and I see this problem and solve this problem every day, and we see a lot of it. But that’s just our view on the world. So we did a lot of research; we partnered with CA and others to find out how big this problem really is. And the study that we did found that $26.5 billion in revenue is lost each year from IT downtime. And that’s just the revenue: that’s websites that aren’t doing transactions to earn revenue and profits for their owners. In fact, revenue is not the main impact of downtime. Another study found that 78% of downtime costs relate to reduced worker productivity. These are the very talented and very well-paid engineers who aren’t able to do their jobs because of IT downtime. So missed sales opportunities and lost revenue are very, very severe issues in this world, and we’re figuring out, along with InfluxData, how to do this better.
Sean Mack 00:03:49.974 Right. So I’ll jump in to talk a little bit more specifically about the challenge of information silos. You know, in DevOps, we talk a lot about the deployment pipeline, we talk about automation, continuous integration, continuous deployment, automated tests in this ideal scenario where you don’t actually introduce problems into your production environment. But the reality is, especially in mid to large-sized organizations, they’re often faced with legacy systems at all levels of maturity, including long manual deployments, including systems that don’t have the automation necessary for seamless deployment. The reality is, sometimes things do go wrong, and it isn’t all perfect. And in these environments, we need to make sure that we’re monitoring things, so when things do go wrong, we can react. The challenge in today’s environment is actually that we’re measuring in so many different ways. What we’ve seen in working with a lot of different clients is a massive growth in data. We’ve seen a monitoring tool proliferation where companies are adopting more and more tools. And along with this, we see increasing information silos.
Sean Mack 00:05:23.425 AppDynamics did a survey, and they found that 65% of enterprises have 10-plus monitoring tools. In fact, one company we work with had over 50 tools just for monitoring their systems and services. And of those 50 tools, some were implemented in multiple different ways in multiple different regions. So for example, they had five different deployments of one server-level monitoring tool, and each one of those deployments was done differently. So a red alert in one area didn’t mean the same thing as a red alert in another area with the same tool. So this problem was even worse than just having a huge number of tools. And within all that noise, there’s no real way to figure out how your systems are performing, alerts are often missed or lost, and there’s no way to determine when a failure is approaching. This problem is exacerbated when information gets siloed, when information is only provided to individual pieces of the organization. Now in DevOps, we also talk a lot about organizational silos, but we don’t talk so much about informational silos. But it’s a real issue, and very often, we see this issue impact system-level availability, whether it be that your business users are the only ones who have access to information such as leading sales indicators, sales that are coming up, or revenue, or that your tech users are the only ones who have information about system availability.
Sean Mack 00:07:23.780 You have the classic informational silos where the developers have access to information about development, and operations are the only ones who have access to information about how those systems that they rely on are performing. But even within technical teams, even within operations teams, we see these information silos, where, especially in larger organizations where you have specialized teams, you may see that your network team is the only one who has insight into the network data. The database team is the only one who can really see into database performance. You know, I can’t tell you how many times I’ve been on late night incident calls to hear something similar to, “Hey, everything on the network looks fine. Hey, everything on the database looks fine. Hey, everything on the application looks fine.” This is a clear indicator of teams that are dysfunctional because their information is siloed. If we’re not sharing this information, we can’t collaborate and work towards a shared solution or have the shared ownership necessary to drive appropriate behavior. Now, in The DevOps Handbook, Gene Kim, Jez Humble, and Patrick Debois noted that this was a problem. They said, “For decades, we have ended up with silos of information, where development only creates logging events that are interesting to developers, and operations only monitors whether the environments are up or down.” They go on to say that, “As a result, when inopportune events occur, no one can determine why the entire system is not performing as desired.”
Larry Gordon 00:09:21.047 Hey, Sean. That’s not really you. You can’t fool me [laughter].
Sean Mack 00:09:26.024 No. But that’s about how I felt during the story I’m about to tell you.
Larry Gordon 00:09:31.978 Cool.
Sean Mack 00:09:32.949 So this was exemplified. The issue of informational silos was exemplified by an issue we had with one of our clients. They had one of their two revenue processing systems down. This meant that 50% of revenue was not processing, and it lasted for over 48 hours before anyone noticed. Now you say, “How could this happen? How could nobody have noticed?” This was a direct result of informational silos. The finance team only looked at their information once or twice a week, and it was a peak period, so everyone was on high alert. And the systems team actually did receive an alert, but they assumed that alert was for one of two redundant systems, so the impact would have been negligible, and they were getting around to fixing it. And the customer service team, who was receiving tons of messages from angry customers, was, A, so busy answering these angry customers that they never tied it to the underlying infrastructure issues; and B, they assumed it was part of it being a peak period. So if we can look at how we cut down on informational silos, we can hopefully drastically reduce those sorts of problems. So I’m going to talk about the steps you take to break down these informational silos to reduce your time to resolve and actually get to a point where you’re predictively finding issues and preventing them before they impact the customers. So the steps that we’ll talk about, and that we’ll get into in more detail: first, you want to determine your data sources and reduce duplication; then you want to centralize key data; and finally, up-level your data so you can take action based on the information that data provides.
Sean Mack 00:11:45.856 So first and foremost, we’ll get into determining your data sources and reducing duplication. So first, look at all your data sources and reduce duplicates where possible. Going back to the example I spoke about earlier, they had several different tools for system-level monitoring alone. Now this is particularly interesting because system-level monitoring is a very mature thing. You don’t need all these different tools. We were able to take the best in class, identify it, and deploy that. So they got down from what had been seven different tools to just one. And to that effect, I recommend using best in class for each area. You know, when I say find your duplicates and reduce them, I’m not advocating taking one tool to do them all. In fact, I tend to shy away from tools that say they can do everything, or even groups of tools. What I’ve found after working this problem for many, many years is that there are vendors who have focused on their specific problem area for a really long time. And that focus has built some great tools in specific areas. So find the best tool for each specific area, but only find one of them. You don’t need five different tools to monitor CPU. And once you find those best-in-class tools, standardize them and deploy them throughout the enterprise. You know, I will say that the level of diversification or granularity of these tools is going to be dependent on the size and maturity of your organization. Smaller organizations may only need two or three different monitoring tools. And in fact, in smaller organizations, it may make sense to choose tools that can cover two or three different areas, right? You’re not going to have specialized teams.
Sean Mack 00:14:01.524 Where you do have larger organizations, more technical depth, more technical specializations, you may want to choose more tools, so you can really get the full benefit of that knowledge. But the point is, you need to find a level of granularity that is right for your organization, your size, and your maturity. So once you’ve done that, you’re going to put all your data in one place. And this is where tools like Influx, as well as other correlation and presentation tools, come into play. Now that’s easier said than done, right? The fact is, a lot of data sources were designed to contain their data by themselves, right? Because when you’re building a tool, it’s usually with a very singular purpose in mind: to deliver the value of the application which is collecting it. So if someone is building a server monitoring tool, their first thought is, “How do we collect and present the data?” Not, “How do we share that data with something else?” It’s similar with financial data, or data from your social media, right? When I look at my Twitter Analytics, very often, that’s in a self-contained application which was not built for the purpose of inserting that data into a database. So you want to look very carefully at your data sources, because that can be challenging.
Sean Mack 00:15:38.264 Another problem is that there are so many different types of data. So you want to choose a data store that’s right for each different data type and put all the data of that type there. So, steps to do this correctly: You’re going to first review your architecture, look at the different types of data you’ve got coming in, and rationalize them and point them to the same place. Then you’re going to look at your correlation tool, or tools, to bring all that data together [inaudible] sources. And finally, pipe all your data into one place so you can begin to reap the benefits of this consolidation. Finally, a great thing that tools like Influx bring is the ability to centralize and manage your data through access controls. I strongly advocate bringing as many data sources together as possible. The more data-rich you can make your environment, the more value you get and the more you can break down organizational silos. And also, I’m a big advocate of transparency, but there are times when you need to limit that transparency. So you need to make sure you can do that through access controls, and by presenting the right data to the right types of users.
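As a minimal sketch of what “pipe all your data into one place” can look like in practice, assuming the InfluxDB 1.x Python client, here a business metric lands in the same store as infrastructure metrics; the host, database, measurement, and field names are hypothetical:

```python
from influxdb import InfluxDBClient  # pip install influxdb (the 1.x client)

# Hypothetical central store shared by infrastructure and business metrics.
client = InfluxDBClient(host="localhost", port=8086, database="central_metrics")
client.create_database("central_metrics")  # no-op if it already exists

# A business metric written alongside system metrics, so both can be
# queried and correlated from the same place.
points = [
    {
        "measurement": "orders_processed",
        "tags": {"region": "us-east", "source": "billing_api"},
        "fields": {"count": 1423, "revenue_usd": 35210.50},
    }
]
client.write_points(points)
```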
Sean Mack 00:17:05.753 So business unit data, you may not be able to share it. In fact, there are some compliance regulations which won’t let you share. So in publicly traded companies, you may not be able to share things like real-time revenue data, as much as that would be good information. I mean, certainly looking at real-time revenue data against how your systems are performing is a really valuable view you can build with your data, but if you’re a publicly traded company, that may not be feasible. But the important thing to note is, it doesn’t mean, “Don’t put your data there.” Put your data there, but then control it through access-level controls, through building the correct type of dashboards and presentation layers for the correct type of users. Another good example of this is that you often don’t want to present all of your tech data to business stakeholders. In fact, for someone who’s not familiar with highly available systems, a single node outage may be information you have but may not be relevant to the actual user experience, and that sort of temporal state of different nodes within your application can be somewhat alarming if you don’t know that your system has been built for resiliency. So sometimes you want to abstract that sort of information away for some users and provide a high-level view of how your system is actually performing.
Sean Mack 00:18:42.469 So the last step in this is to really up-level your data. Data alone, even getting it all into one place, can be quite noisy. And actually, with the amount of data we’re talking about, it becomes even more difficult to sort through. So you need to make sure you’re taking steps to up-level the data. There’s a lot of ways you can do that. One is to look at the overall health of the system, designing dashboards and equations that actually measure the overall health of a service or application. There’s some very interesting work you can do around determining what health means, and then providing that. And also understand that that may mean different things to different users. So your business users may have a different opinion of what is a healthy application versus your tech users, who may have a different opinion. So design all those different health equations and present a high-level picture of how healthy your systems are. One thing that can help in this area is using service maps. So if you can actually map components, nodes of your services, to the overall service or application that they provide to the customer, then as this data comes in, you automatically map it to the service or application, and you begin to paint a very good picture of how things are performing. Additionally, look at things like real user monitoring and synthetic monitoring, which provide that outside-in picture of how your service is doing. Tie that in with the underlying data, and you’ll begin to build a very rich picture for all your business and technical users. And by sharing all this data, you can bring them together, all looking at the same information, getting different value out of it to make your organization run smoothly.
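As one hedged example of what such a health equation might look like, here is a small sketch that queries two hypothetical measurements from InfluxDB with the 1.x Python client and folds them into a single 0-100 score; the measurement names, field names, and weights are purely illustrative and would need to be tuned for a real service:

```python
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="central_metrics")

def mean_over(measurement, field, window="5m"):
    """Mean of a field over the last window, or None when there is no data."""
    result = client.query(
        f'SELECT mean("{field}") FROM "{measurement}" WHERE time > now() - {window}'
    )
    points = list(result.get_points())
    return points[0]["mean"] if points else None

# Hypothetical health equation: weight error rate and latency into one score.
error_rate = mean_over("http_errors", "error_rate") or 0.0   # errors per request
latency_ms = mean_over("http_latency", "p95_ms") or 0.0      # 95th percentile latency

health = max(0.0, 100.0 - error_rate * 1000 - latency_ms / 10)
print(f"checkout-service health: {health:.1f}/100")
```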
Sean Mack 00:20:56.942 The last piece of this is really using this data and beginning to apply machine learning so that you can predict when incidents will occur. You can bring together information such as system performance with real user information or synthetic monitoring, with information from your social media, and map that against information from your ITSM, your service management tool, such as when incidents occur. You can begin to learn and predict when incidents are going to occur, so you stop them before they even impact your customers. Overall, the point is, you want to up-level your data. So it’s not just a ton of raw information; it’s actually providing intelligence that’ll enable your business to make the right decisions. With that, I’ll turn it over to Chris, who’s going to talk a bit about one of the great solutions we have to begin to bring this together, InfluxDB and the TICK Stack. Chris, over to you.
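As a very rough sketch of that last idea, predicting incidents from consolidated metrics, here is a toy example using scikit-learn; the feature snapshots and incident labels are invented for illustration, and a real pipeline would pull them from the time series store and the ITSM tool:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: each row is a snapshot of up-leveled metrics
# (error rate, p95 latency in ms, queue depth, social-media mentions/min),
# and each label marks whether an incident was opened shortly afterwards.
X = np.array([
    [0.01, 120, 5, 2],
    [0.02, 140, 9, 3],
    [0.09, 480, 40, 25],
    [0.11, 520, 55, 31],
    [0.01, 110, 4, 1],
    [0.08, 450, 38, 22],
])
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Score the latest snapshot and flag it before customers feel the impact.
latest = np.array([[0.07, 400, 30, 18]])
risk = model.predict_proba(latest)[0][1]
print(f"predicted incident risk: {risk:.0%}")
```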
Chris Churilo 00:22:23.832 Sorry. Was I muted? So my name is Chris. And we’ll talk a little bit about the TICK Stack, which stands for Telegraf, InfluxDB, Chronograf, and Kapacitor. And these are the open source projects that we have here at InfluxData. And the reason why we’re so confident that the TICK Stack is the solution for many of these DevOps types of initiatives is that it’s really simple to use. In fact, one of our core values at our company is to ensure that developers are happy and actually find joy in using our products. And when I say “easy to use”, you can literally install this and get it up and running and start using it in just a couple of minutes. And it’s primarily because there are no external dependencies. It can just be a platform in itself. It’s all written in Go, which also makes it a bit of a joy, so you can help to contribute to that. And its key functionality is the collection of time series data, which is essentially any kind of data that’s timestamped. So metrics and events: we categorize them as regular and irregular time series data. So things that come in at a regular interval versus things that may be anomalies that just kind of pop up; we refer to those as events, or irregular time series. The TICK Stack is also horizontally scalable, and it can be used alone. So you can use InfluxDB, the database, by itself. You can actually use the collector agents with a number of other databases as well, or you can combine all four projects and use them as a single platform. It’s completely up to you. And then the final point is that we can also collect data either by putting these collector agents out on the various systems that you want to collect metrics on, or we can actually grab that data from things like containers that might not be around for a long time, where it might not be worth your time putting the Telegraf agents on those various containers. So there are a number of different ways of being able to use the platform. And like I said, it’s open source.
Chris Churilo 00:24:35.095 And so the way that a lot of our users use InfluxData is, they’ll use it to collect metrics and events from a number of sources. So we have a number of customers that are using it in the IoT space, where they’re collecting information from sensors and devices, so they can make sure that their solutions are efficient and they’re saving money or saving energy. We have a lot, a lot of users that are using it for gathering infrastructure metrics, like the ones that Larry and Sean were describing. And then we also have a number of customers that are using it for gathering business metrics, because really, our open source projects are not just tools for the operations team, but they’re also tools that your development teams can use. And the nice thing is that way, it really helps to bring the two teams together, because you can now start to have a unified platform to be able to bring in all these different kinds of metrics that are ultimately going to make the customers successful if you’re able to act on them in the right way. So once the platform collects all this information, then you can use it to monitor, you can use it to get more insights into what’s going on, and you can also apply machine learning against it and start to make some pretty good predictions about where your business is going, or how much more infrastructure you’re going to have to expand to, etc.
Chris Churilo 00:26:06.240 So one of the reasons that we have been so successful as a platform for collecting metrics and events is that, at the heart of it, we have a time series database, which is different from a standard relational database in that we really focus on things like making sure that we can handle the ingest of data at very high volumes, at very small time intervals. And that’s really important because if you think about the metrics that you’re collecting, you’re not collecting them at the minute interval, you’re collecting them at the second, the millisecond, or even the nanosecond level to be able to really understand what’s going on with the various systems. So you need to make sure you have [inaudible] that can handle that high ingest rate. In addition, you want to be able to query against this. You want to make sure that, even though it’s taking in all this data at this incredible rate, users and systems are going to have to start digging into that data, and there’s going to be a lot of querying to be able to get that real-time view into the data that you’re collecting. What’s interesting about time series data is that, over time, that level of precision is not necessary. So maybe now, and two weeks from now, and two months from now, you need that really high precision rate of data being collected. But maybe after a couple of months, you don’t need it at that rate. Maybe you’re okay at the one-minute interval, or maybe over time, you’re okay with just the one-day interval for some of that, because then you start looking at it more for historical purposes. And so another key feature in a time series database is that it can also remove data that you’re not going to need over time. So you don’t even have to think about it. You can set retention policies so that it’ll automatically start to remove that data, and you don’t need to do anything for that. And then finally, because we’re talking about high volumes of data, it’s also important that that data can get compacted so that we don’t take up all the storage unnecessarily. So those are the things that are important to look for when you’re looking for a database for collecting these kinds of metrics, and these are the things that we focus on in our time series database, known as InfluxDB.
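For readers who want to see what retention and downsampling look like in practice, here is a minimal sketch against InfluxDB 1.x using its Python client; the database name and durations are hypothetical, while "cpu" and "usage_idle" are the measurement and field that Telegraf’s CPU plugin writes:

```python
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="telegraf")

# Keep raw, high-precision points for two weeks only...
client.create_retention_policy(
    "two_weeks", duration="14d", replication="1", database="telegraf", default=True
)
# ...and keep downsampled data for a year.
client.create_retention_policy(
    "one_year", duration="365d", replication="1", database="telegraf"
)

# A continuous query rolls raw CPU samples up into one-minute means, so older
# data survives at lower precision while the raw points age out automatically.
client.query(
    'CREATE CONTINUOUS QUERY "cpu_1m" ON "telegraf" BEGIN '
    'SELECT mean("usage_idle") AS "usage_idle" '
    'INTO "one_year"."cpu_1m" FROM "cpu" GROUP BY time(1m), * END'
)
```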
Chris Churilo 00:28:22.749 So just a quick architectural view of our four projects. So we have a set of collector agents called Telegraf. Now, there are actually over 140 plug-ins. And what’s exciting about Telegraf is that I think we’ve only contributed, maybe, six plug-ins as a company. And so that means that the remaining 134 plug-ins were the result of the community. And that’s why open source is so fantastic. You can rely on people who are the real subject matter experts, especially in something like a collector agent, who really understand, “What are the key metrics, or the important metrics, that you should be collecting from whatever systems are out there?” And there’s no way we could hire enough engineers within our organization to be able to build the many agents that we have at the pace that they’ve been growing. So a lot of people have been contributing to it. And it’s also fun; all our projects are written in Go. It’s an excuse for a lot of people to say, “Hey, let me try this new Go language and see what I can do with it.” And we expect to see the same kind of growth happening with Telegraf. And the other thing I want to plug is, as an open source tool, it also works with any database that can handle the line protocol that it writes. So it’s not just InfluxDB that it can write into. And it may seem a little bit counter-intuitive, but it is important for us to be open and work with many different data sources, because I think it’s only going to expand the functionality in Telegraf.
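As an aside for anyone curious what that line protocol looks like, here is a small hedged example: two hand-written points, one a regular metric and one an event, written with the 1.x Python client; the hosts, tags, and timestamps are made up:

```python
from influxdb import InfluxDBClient

# InfluxDB line protocol: measurement,tag_set field_set timestamp (nanoseconds).
# This is the format Telegraf emits, so anything that accepts it can be a target.
lines = [
    "cpu,host=web01,region=us-east usage_idle=92.4 1524556800000000000",
    'deployments,service=checkout version="v1.4.2",duration_ms=5300i 1524556800000000000',
]

client = InfluxDBClient(host="localhost", port=8086, database="telegraf")
client.write_points(lines, protocol="line")
```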
Chris Churilo 00:29:54.898 In any event, if you choose to use Telegraf with InfluxDB, you put Telegraf agents on the various systems that you want to collect metrics from. Telegraf then ships that data into InfluxDB, which stores it and can handle the high ingest rate that I talked about on the previous slide. You can then run queries against it. And the querying agents can be things like a visualization tool, which we also have, called Chronograf. There are also some other open source visualization tools, like Grafana, that are super popular with InfluxDB, and they can also query against it. And then there are a bunch of homegrown visualization tools, whether it be for a smartphone app or an analytics system, that can also query the data and display it. We also have a streaming data processing engine called Kapacitor that can also be used as part of InfluxDB. And this is where you can then run the time series data that you’re collecting through things like a machine learning algorithm to help with forecasting. So those are the four projects that we have, and as I mentioned, they’re great for a DevOps organization, because it really works on both sides of the coin. You can use it for collecting metrics for the functions that you’re writing within your application, or even metrics on how well any of your systems are performing. Sean?
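And as a final hedged sketch, this is roughly the kind of query a dashboard, whether Chronograf, Grafana, or something homegrown, would issue against InfluxDB; it again assumes Telegraf’s "cpu" measurement and the 1.x Python client:

```python
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="telegraf")

# One-minute averages of CPU idle time over the last hour: the shape of series
# a charting front end would render.
result = client.query(
    'SELECT mean("usage_idle") AS "cpu_idle" FROM "cpu" '
    'WHERE time > now() - 1h GROUP BY time(1m) fill(none)'
)
for point in result.get_points():
    print(point["time"], point["cpu_idle"])
```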
Larry Gordon 00:31:37.265 Sean? Sean was going to do the “Gotchas”.
Sean Mack 00:31:45.326 Sorry. I was muted there. Thanks very much, Chris. It is great to see all the exciting work that’s going on with Influx. We love working with open source and the open source community here at xOps, and it’s so cool to see these sorts of powerful tools come to life. I did want to, before we wrap up, talk about some of the things to be aware of when you’re going through breaking down informational silos and looking at consolidating your information in a system like InfluxDB. You know, one thing to be sure of, when you’re looking at how you gather monitoring data or other information about how your systems are performing, is to stay away from point solutions where possible. That is to say, if you’re looking at measuring something as ubiquitous as, let’s say, CPU, you don’t need five different tools to do that. And so if you’ve got a heterogeneous infrastructure made up of Linux, and Windows, and whatever else, if you can find one tool to measure system-level performance across all of that infrastructure, use that, rather than picking a tool that’s only going to work on Windows, and another tool that’s only going to work on Linux. The more you can generalize within a given space, the better off you’ll be, and the more you’ll avoid duplication. So as a technical leader, whenever someone comes to me and says, “Hey, I’ve got this great new tool,” I have to think, “Is that going to work for more than this one point solution?” You also want to avoid vendor lock-in, and this is one of the reasons I’m big on the open source space, because very rarely are they going to force you into using other tools.
Sean Mack 00:33:51.253 But when you look at some of the other solutions in this space, particularly with bigger vendors, they’ll say, “Hey, you can consolidate all your data here, but you have to use our CMDB, or you have to use our APM tool,” and lock you into using other parts of their solution that may be very costly and prohibit the kind of agility and adaptability that you need for your business. Also, finally, you want to ensure that you have access to your data. Very often, you’ll get tools in place only to learn that you can’t actually extract that data. And without being able to extract that data, you have no way to break down those informational silos. You have no way to bring that data into a centralized location to visualize it for all the different users across your business, your dev, your operations. So ensure you have complete APIs, RESTful interfaces to get to all the information, all the collection points you have, whether that’s your social media feeds, whether that’s your real user monitoring, whether that’s your cloud monitoring, whether that’s your terrestrial data center monitoring. Make sure it’s all available so you can bring it in and bring real value to your customers. Finally, I want to say, it’s important that you build a culture of collaboration. Tools alone aren’t going to make you DevOps, right? No tool will do that.
Sean Mack 00:35:35.518 If we think about DevOps as a culture of collaboration, then you want to make sure that any tool you choose enables that. But a tool, in and of itself, is not [inaudible], right? It’s a tool. You know, in fact, I’ve seen organizations where collaboration tools have actually been used to create silos. At one organization that I worked with, they had three different chat tools, chat being a great example of a collaboration tool. But in this organization, the development group was using Slack, the operations team was using HipChat, and the business team was using Skype. So the very tools that can be used to break down these silos were actually being used to form silos and create barriers between these teams, prohibiting communication and the easy flow of information. So again, you want to start by enabling a culture of collaboration. Once you’re getting there, find the tools that can help you share the data, that can help you collaborate and build that culture through the different tools you use for your organization.
Sean Mack 00:37:03.988 So where do we go from here? I’ve talked a lot, and we’ve shared a lot about ways you can break down informational silos. I’d love it if you took away some key points, went back to your organizations, and made some changes. Things to look for, first steps to take: Identify and reduce any duplicate data; find point solutions and reduce them where possible; and then determine a central data collection and presentation engine and implement it to get your company or organization to the next level of collaboration and [inaudible]. And over to Larry.
Larry Gordon 00:37:49.698 Thanks, Sean. Thanks, Chris. Really excellent work. I enjoyed watching your segments. So we certainly have time for some Q&A; I think Chris is going to run that. And you can find us at our website at www.xops.it. We have a very interesting web community on Facebook, xOps Army, and this is where people like you contribute their knowledge about DevOps and bring up tools like InfluxData and how they can be useful in people’s implementations. You’ll see interviews on there, and you’ll see other things that people are contributing. And of course, you can follow us on Twitter; our handle is @xOps. So once again, thanks for listening, and thanks for presenting. And hopefully, we have some big questions coming up.
Chris Churilo 00:38:41.388 Yeah. Thanks, everybody. So if you do have any questions, don’t be shy, put them in the Q&A or the chat panel, and the guys will be more than happy to answer your questions. Or if you have a question about InfluxDB, I can definitely help with that. You know, it’s really great the way that you guys set the stage for the problem, because I’m sure everybody who’s on the call has had a similar story, where we look at all our monitoring tools, we look at our dashboards, and “Everything looks great, but this important team is screaming at us.” And we look at them like they’re a little bit crazy, and they look at us like we’re crazy. And at the end of the day, oftentimes, we just weren’t in sync with what we were monitoring, what we were tuning for, etc. And it’s really easy to go down that rat hole. So I think the one question related to that, that I want to ask you guys, is that, having been in that situation, sometimes it isn’t obvious to us that we’re actually doing that. Like, we’re just absolutely certain that we’ve done things correctly. So how do you kind of break down those mental blocks in organizations, where it seems like we’ve done everything correctly? What do you recommend?
Sean Mack 00:40:05.210 So Chris, I think that’s a great question. I think it really speaks, in some ways, to the last point I made, which is that you’ve got to start with a culture of collaboration. Because otherwise, you get into that “It’s not my problem. I’ve got this tool fully deployed. I know what’s going on.” One of the key tools that I suggest you look at as you push that culture of collaboration is also developing shared goals. Because I think, when we talk about DevOps and the traditional divide, we talk about the separation of goals, where development is driven to produce features, and operations is measured by how stable the environment is, and is therefore change-averse. With those different goals, you’re going to ensure your organization is a divided one. But if you can say, “Hey, we’re all going to be measured by the same goals,” and figure out what those are for your organization, you’ve now got all the different groups of people working together. So the conversation stops being, “Well, it’s not my problem,” and becomes, “Hey, let’s look at this piece of data,” or, “Maybe it’s how my piece of data is interacting with that other system,” and not just living in these silos. So I think a lot of that begins with the culture and can be bolstered through the deployment of shared goals, which will then drive these shared practices.
Chris Churilo 00:41:59.645 That’s really great advice. So we’ll just wait a few more minutes, if anybody has questions. And if you do, just try typing it in, or feel free to send me an email, and I will always forward it to any of our speakers, so they can answer that question. So how did you guys find out about InfluxDB? I mean, you guys are pretty well-versed in a lot of tools, but in particular, how did you find InfluxDB, and what drew you to it?
Larry Gordon 00:42:28.026 So Chris, I found InfluxDB in a couple of ways. You know, I don’t pay much attention to stuff unless I hear about it from a couple of different communities. So we heard about it at client sites, and that’s clearly very, very interesting. And I actually looked and did research, looking for thought leadership, and articles, and speakers about who could solve some of the I/O of DevOps problems we were facing and who had ready-made, cost-effective solutions, and InfluxData came up high in that screen. So I was actually able to reach out to you guys, and you were very receptive and easy to work with.
Chris Churilo 00:43:23.190 And what was your experience with installing and playing around with it?
Larry Gordon 00:43:28.173 So I’m the least technical of our team [laughter], but I was able to work with it. And I was able to be at a client where their tech team was working with me. And overall, it was a really good experience. We actually have a team in Sri Lanka that uses it, as well as the team here.
Chris Churilo 00:43:53.417 You know, I know I sound repetitive when I tell people that it is easy to use, and I know everyone says that. One of the ways that we ensure that it is easy to use is that we actually make every employee, whether they’re in finance, or HR, or engineering, install it and try it. And so if we can get all those people to try it and use it, then you know that it’s easy to use.
Sean Mack 00:44:20.692 So if you’ve got Larry using it, I think it’s a real testament [laughter] to that. You know what I mean?
Larry Gordon 00:44:29.520 Thank you. Thank you very much [laughter].
Chris Churilo 00:44:33.680 Okay. Well, it looks like we don’t have any questions. But I thought it was a great overview of the problems that we’re trying to solve. And so I think our partners, xOps, did a fantastic job on today’s webinar. And if there are follow-up questions, please send them to me and I’ll forward them to the guys. And you can also follow them on Twitter, as they mentioned, @xOps. And we’ll probably do more webinars with them in the future. So stay tuned, and check out our events calendar. I will be posting this recording later on today. And once again, thanks to our attendees and thanks to our speakers.
Larry Gordon 00:45:13.795 Have a great day.
Sean Mack 00:45:14.307 Thanks for having us, Chris.
Chris Churilo 00:45:15.561 Thank you so much. Talk to you later. Bye-bye.