Introducing InfluxDB Enterprise Version 1.3
In this webinar, Tim Hall, VP of Products, reviews the new capabilities of InfluxDB Enterprise 1.3, in particular the new Time Series Index.
- Time Series Index (TSI)
- Most significant advancement in any Time Series Database
- For customers with large, ephemeral time series (on the order of 1 billion series), the new Time Series Index (TSI) moves the in-memory index to disk, eliminating the restrictions imposed by available memory
- Query Language Improvements, several of which make InfluxDB easier to use:
- Support timezone offsets for queries
- Previously, users had to manipulate the data to fit their timezone
- New Functions include:
- Integral function
- Non-negative difference
- New Operators include:
- Modulo
- Bitwise AND, OR and XOR
- Optimization for the top() and bottom() functions using an incremental aggregator
- Maintain the tags of points selected by top() or bottom() when writing the results
- Nanosecond duration literal support
- Support single and multi-line comments
- Fine-grained Authorization
- Permissions set for Measurements and Tags (not just database)
- Kapacitor Clustering
- Including alert de-duplication within a High Availability deployment
- Automatic Cluster Rebalancing
- Automated rebalancing of the cluster after a data node is replaced (due to hardware or other catastrophic failure)
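To make the query language improvements concrete, here are a few illustrative InfluxQL queries exercising the new features; the measurement and field names are hypothetical:

```sql
-- Return results offset to a timezone instead of UTC (new tz() clause)
SELECT mean("usage_idle") FROM "cpu"
  WHERE time > now() - 1d
  GROUP BY time(1h) tz('America/Chicago')

-- Definite integral of a field over time, reported per hour
SELECT INTEGRAL("kilowatts", 1h) FROM "power" WHERE time > now() - 6h

-- Difference between consecutive points, discarding negative results
SELECT NON_NEGATIVE_DIFFERENCE("requests") FROM "api" WHERE time > now() - 1h
```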
Watch the Webinar
Watch the webinar “InfluxDB Enterprise 1.3” by clicking on the download button on the right. This will open the recording.
[et_pb_toggle _builder_version="3.17.6" title="Transcript" title_font_size="26" border_width_all="0px" border_width_bottom="1px" module_class="transcript-toggle" closed_toggle_background_color="rgba(255,255,255,0)"]
Here is an unedited transcript of the webinar “InfluxDB Enterprise 1.3”. This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcription errors.
Speakers: Chris Churilo, Director Product Marketing, InfluxData; Tim Hall, VP of Products, InfluxData
Tim Hall 00:02.001 Thank you, Chris. Good morning everybody. Thank you for joining us today. What I thought I would share with you today is the latest and greatest innovations that we’ve been working on for InfluxDB Enterprise, starting with the contributions in open source, walking you through some of the new visualization components that we launched, obviously, called Chronograf, talking through how those are related to each other and how you can effectively use them. And then, talk a little bit about our anomaly and alert detection system, called Kapacitor, that we’ve been working on, some exciting new innovations there as well. So just a little bit of a background. Always looking for a few extra Twitter followers, so there’s my Twitter handle, @thallinflux. It’s been a pleasure working with the team here over the last six months since I joined. This is the second major feature-bearing release that I’ve been a part of, and we’re excited to get this out the door to all of you today.
Tim Hall 01:11.038 So just a little background on the company. InfluxData was founded in 2013. Our illustrious founder, Paul Dix, was working with a lot of different time series data and could not find a technology on the market to meet his needs. And he started by trying to build something on top of existing NoSQL offerings and found that that was not going to be sufficient to meet both the scalability requirements that he had from a read and a write perspective. And from that, InfluxDB was born within GitHub. And we appreciate all of the contributions and engagement that we’ve gotten through our community members over the last four years. But what Paul built, essentially, was a modern, open source, Time Series Platform for metrics and events, trying to focus on making developers happy, mostly because Paul is a developer. And he knows the challenges of consuming technology, building these things, and monitoring and maintaining them over time. So we want to make it easy for developers to use the technology, to get value out of it very, very quickly, to make it easy to develop solutions, and obviously, it has to scale. And we’re talking about what’s happening in the world today with the sensor data, the nature of people adopting microservices, and the amount of information and telemetry that’s being thrown off of those kinds of systems. The scale is super, super important.
Tim Hall 02:42.428 And so, we’ve been excited about the results that we’ve achieved thus far. We can see about 70,000 active servers of InfluxDB in the world today. And we currently have about 300 or so paid customers across our InfluxDB Cloud, which is our hosted managed service, and our Enterprise commercial offerings. So just thank you and a shout out to all of our existing customers as well. We appreciate you, and we enjoy working with and engaging with you on the new feature development through our support network and on the community. And if you haven’t been to our community site, just put a plugin for that now. If you go to community.influxdata.com, we’ve got over 500 folks that have been engaging with us over the past few months since we launched it on all topics related to time series data, and obviously, the InfluxData technologies.
Tim Hall 03:38.098 So for those of you that are joining and might be new, just looking at what we call the Time Series Platform, we’re really focused on the accumulation, the analysis, and the action of various data elements. And these are metrics and events. And the metrics today are coming from a wide variety of sources-Internet of things, popular buzzword of the day-but sensors, particularly in the industrial, or agricultural, or medical fields, being fed in. We’ve also got a large number of folks using the InfluxData platform for system metrics and DevOps monitoring use cases, whether you’re monitoring on-prem systems, systems that are running in the cloud through our partners at Amazon, or if you’re using Microsoft as your environment or the Google Compute Platform. Obviously, the challenge is in terms of understanding what you have running, how it’s performing, and potentially things that you want to be alerted on to go take action. That telemetry data is super critical. Within those environments, again, whether you’re on-prem or in the cloud, we’ve got folks adopting container and container orchestration platforms from folks like Mesosphere and others and then docker containers running on top of that. And of course, the challenge with all that is, you have these ephemeral containers and workloads that you also need to track and manage. And then, of course, inside of all those things, now we’ve got the next gen of APIs and service-oriented architecture that’s focused on microservices and how those things find, interact, and operate with each other. So you got a whole other level of metrics to deal with in understanding, and tracking, and correlating what’s happening between those systems. So tons of metrics everywhere as you’re delivering all these applications built on all types of infrastructure. And how do you analyze, store, correlate, and understand what’s happening at scale?
Tim Hall 05:38.092 So getting that data in, we’ve got folks using our Telegraf technology to collect metrics from over 100 different plugins that have been developed by the community members over the last four years. And then, we’re going to talk a little bit about the pull-style metrics that are offered through technologies from Prometheus and how you can leverage that to accumulate data into the InfluxData platform as well. From an analysis perspective, landing all of that data, and storing it at scale, and being able to access it quickly using query languages and other API level access is what we do in the middle. Being able to also shape that data, send it out for things like machine learning or other types of anomaly detection is the next step. And that’s where we get into actions. And so, now, we’d like to be able to visualize the data that you stored and also help you take action on top of it. So what that means is, how do we send out alerts? How do we notify either people or other automated systems to take action on the things that you’re seeing that are coming in either through the metrics and events being gathered? And so, that really makes up the platform and the core capabilities.
Tim Hall 06:53.579 Now, in terms of how that manifests in terms of product offering, I’m going to go a little bit Japanese-style here and go right to left. And we start with the TICK stack. This is where we began in terms of putting together Telegraf, InfluxDB, Chronograf, and Kapacitor, the letters of the TICK stack there, as open source components. And again, we appreciate the more than thousand developers who are contributing to those projects within GitHub. And the goal was to create an open source core for each of those things that was extensible, delivering support for both irregular and regular time series data and allowing it to run where you want: run in the cloud, run on premises.
Tim Hall 07:44.905 From that, we have built our commercial offerings, InfluxDB Enterprise and InfluxDB Cloud. And I’ll start with InfluxDB Enterprise. Again, it’s built on that open source core of the TICK stack. But we’ve added some additional capabilities to it that’s only available through the closed source commercial offering and that includes high availability capabilities, the ability for you to scale and cluster to run it on more than a single node that provides some redundancy and scale-out. We’ve delivered some advanced backup and restore capabilities. Again, as you deploy these systems at scale, the ability for you to back up and move your data around and restore it in other locations becomes critical. And there’s also some advanced security capabilities that we’ve added to the Enterprise edition as well. And of course, you can run this on any cloud environment or on premise. And one of the latest things that we’re releasing as part of the Enterprise subscription is the ability, also, to have high availability for the Kapacitor instance. And what this means is, Kapacitor can now be deployed in a cluster on its own to allow for that high availability and scale-out. So we’re going to talk a little bit more about that as we get deeper into the webcast.
Tim Hall 09:02.082 And then, last but not least, InfluxDB Cloud. This is our managed service offering. So, effectively, we’re taking the InfluxDB Enterprise bits and we’re deploying them onto AWS. And then, we’re managing them and running them on your behalf. So you can think of this as a Platform as a Service. It is a managed offering, meaning we have 24 by 7 monitoring. And if anyone wants to guess what tools that we use to manage and monitor our own infrastructure, it happens to be InfluxData [laughter]. So we have Kapacitor set up for alerting. We use Telegraf to gather all the telemetry data from all of the instances that are deployed across the various regions of AWS. And then, we also feed it into dashboards that we inspect using Chronograf as we look at the operational health of the various services that are out there. And so, you can get those two offerings, again, available on the website. The InfluxDB Enterprise subscription, engage with our sales team. InfluxDB Cloud, self-service. Get it right off the web.
Tim Hall 10:02.418 So let’s get into the meat of what’s new in InfluxDB Enterprise 1.3. The first thing that we’re super excited about is the time series index. So I don’t know how many of you are keeping up with the various blog posts and the things that we’ve posted on the website from our founder, Paul Dix. But this is one of the most important innovations that’s been developed in this release. We’ve been working on this with some co-development partners behind the scenes for about six or more months. The idea here is that, where did we run into bottlenecks and limitations of the previous versions of the technology? And one of the challenges we had is, as you continue to have more and more time series, where are the limits? Where are the bottlenecks? And the challenge was, we were tapping out at about a few hundred million series. And the reason why is because we were keeping the entire time series index in memory. And so, the idea behind the time series index is that it’ll now allow you to spill to disk and allow us to get up to a billion time series. And so, we’ll go into a little bit more details in a minute. But I just want to, again, thank our co-development partners who have been working and testing this particular feature with us over the past six or so months. That contribution and engagement is invaluable to us as a company and hopefully is valuable, obviously, to all the customers who get to take advantage of this amazing, new innovation that’s arriving.
Tim Hall 11:37.930 Second, and pretty important for those of you that have been users of Influx previously, there’s a large number of query language improvements. We do listen to the feedback coming in through the community in terms of what kinds of features and functions you want. We, certainly, have not delivered every one that’s been asked for, but we continue to make slow and steady progress on all of the development of these features, with every feature-bearing release, extending that query language to allow you to do more work, write less code, and access your data faster. The second thing I really want to highlight in terms of the scale and the scope of a feature in this release is the fine-grained authorization. Again, something that folks have been asking about a lot has been finer-grained security. So, effectively, previous editions of the database only allowed you to authenticate and authorize at the database level itself. And so, this led to proliferation of databases within a single instance. And then, that also can lead, ultimately, to some scalability challenges, depending on the hardware that you’re running on. And so, the idea is, how can we offer finer-grained authorization? So can we go inside the database level and restrict access to measurements and tags? And so, that’s what we’ve done, and we’ll talk a little bit more about that in a bit.
Tim Hall 13:08.313 Kapacitor. Kapacitor can now be clustered as part of the InfluxDB Enterprise subscription. And for customers who have already purchased that subscription, your existing license key can be used to leverage this new feature. And effectively, what Kapacitor’s done is, it can be installed and clustered. And then, as you write TICK Scripts, you can deploy them to multiple instances of Kapacitor. Now, if those TICK Scripts include anomaly detection and alerting, the Kapacitor instances will actually talk amongst themselves and determine which of the instances will raise that alert. So, effectively, even though multiple Kapacitor nodes may be doing the anomaly detection, only one of them will raise the alert. And so, you will not get alert storms and have to de-dup [de-duplicate] the alerts downstream. And so, that’s a pretty nice capability. And then, last but not least, we have focused on a new service, and it’s sort of the foundation of that service, around something we call anti-entropy. And the first capability is for you to replace the data node and allow the cluster to automatically rebalance. And we’ll talk a little bit more about that as we get into some of the details here.
Tim Hall 14:23.941 I do want to highlight one significant change which is, Chronograf has now become the face of InfluxDB Enterprise. It works with the open source edition. It works with the Enterprise edition. There are not separate bits for Chronograf. When you use Chronograf to connect up to a data source, it will determine what edition of the database you’re using, whether it’s the open source edition or Enterprise, and it’ll ask you a couple of additional questions as you’re doing the configuration for Enterprise. But it’s super easy to set up. It’s very streamlined. And it provides some pretty incredible capabilities in terms of the visualization and the richness at which you can access the fields and tags as you’re writing queries. So we’re really excited about Chronograf, and we’re getting a lot of really positive feedback on the experience of using it. Now, I say it’s the new face because we have announced the deprecation of what was previously called the Enterprise Web Console. So this was previously the tool that you would gain access to if you were an InfluxDB Enterprise customer. So we are deprecating that tool, and so, there’ll be no further development on that going forward. And this will lead us to sunsetting that, officially, in the next release.
Tim Hall 15:47.610 And of course, the database is not the only part of the stack that we’re talking about. So Telegraf also has had a series of feature-bearing releases and additional bug fixes over the last month or so. And some of the key contributions that have arrived, number one, is that Telegraf can now accept collectd stats natively and feed those to the database. And then, there’s additional plugins available for Elasticsearch 5 and for gathering Kapacitor metrics. That Kapacitor plugin, we’ve actually been using in InfluxDB Cloud for over a year. And we’ve been hardening it and now excited to make that available to folks that are using more pieces of the InfluxData stack so that you can monitor the stack using the stack itself. There’s also a bunch of new features in Telegraf to just make the collection of metrics easier, faster, and obviously, more performant.
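As a rough sketch of what native collectd ingestion looks like on the Telegraf side, a socket listener input can be pointed at collectd’s network port; the port and types.db path here are illustrative, so check the plugin docs for your Telegraf version:

```toml
# Telegraf input sketch: accept native collectd packets over UDP
[[inputs.socket_listener]]
  service_address = "udp://:25826"   # collectd's conventional network port
  data_format = "collectd"
  collectd_typesdb = ["/usr/share/collectd/types.db"]
```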
Tim Hall 16:52.413 In terms of the query language improvements, one of the big things that folks have asked for is the ability to do queries where you can offset the results by a particular time zone. So for those of you familiar with the query language, we have delivered all the results in UTC previously. So I think people will be really excited about being able to natively send in a time zone as part of the query and then receive the results back relative to that. One highlight on this, Chronograf does actually return the results to you, in a visualization format, in native browser time instead of UTC. For some folks that has sort of been a little bit of a surprise, but I wanted to highlight the fact that Chronograf is actually using local browser time to render the query and query results.
Tim Hall 17:43.656 There’s some new functions in the query language, an integral function. This is for definitive integrals. Folks have been asking about this, particularly in the smart grid and smart power arena. And so, that’s been developed and delivered as well as a nonnegative difference function that’s available now, native in the query language. There are some additional operators that now exist as well. So you can do modulo using the percent sign, which is a very familiar way to do that in other query languages. And now, we also support bitwise operators. So these three bitwise operators are now part of the query language, and you can take advantage of those as you’re exploring your data. There have been additional improvements on the top and bottom functions as well. These three functions are super popular, and we’ve got a lot of feedback on, folks using top and bottom. And so, inside this, we’re now using incremental aggregators. And we’re now also able to maintain the tags of points when you’re doing selections using top and bottom. And you can look for the documentation on how we’re actually doing that, how you can take advantage of those features, which will be published a little bit later today. In addition, a couple of other small things, nanosecond duration literal support is now available. And also, we can support single and multiline comments as you’re developing the queries as well, which previously was only single.
Tim Hall 19:23.306 In terms of performance improvements on security and operations for Enterprise, in every release, we worked to improve performance. And in this particular release, the Enterprise edition has gotten a huge bump on the way in which it communicates with, between, and across the nodes. Effectively, there’s a RPC-based connection pool which now multiplexes multiple streams over a single connection. So previously, it was quite chatty between the individual nodes as there was communication going on. And so, now, we’re doing connection pooling and multiplexing of those connections to try to reduce the traffic in communication across those nodes. The other significant improvement on the Enterprise edition is that, previously, the hinted handoff queue, which is the queue that we use to maintain write consistency between the data nodes, was being segmented only by data node. So if you had a-let’s say you had a four-data-node cluster, you would have hinted handoff segments that were, essentially, for the other three nodes. And now, what we’ve done is, we’re breaking that down so that it’s not only by node but also by shard. Now, one thing to highlight here is that if you upgrade to 1.3, that hinted handoff segment will now start to be leveraged and used. If you need to downgrade, for any reason, between 1.3 and 1.2, you will need to stop write traffic. You will need to stop that write traffic temporarily so that you can do the downgrade, because the hinted handoff segmentation will not be understood by the 1.2 release. So effectively, if you’re writing points in, and it lands in this new segmented fashion, and you attempt to downgrade, what you’ll have to do is delete those queued writes. And so, the best practice for now is to actually halt your inbound write traffic, either through load balancing or redirection, or by stopping the actual collectors, if you need to downgrade. And if you need other help beyond that, please contact us in support.
Tim Hall 21:36.087 On the security front, as I mentioned, fine-grained authorization now allows for permissions to be set at the measurement and tag level. So that’s inside the database. You can grant authorizations to individual user roles. And again, it uses a common grant. And then, if you want to remove the grants that you’ve provided, you just use the Delete command. And all that’s part of the documentation. And then, last but not least, is our new service. This is, again, a foundational service that we call anti-entropy. And the idea is that there’s a large number of reasons why you might get data drift between multiple nodes of a distributed system. One of the ones that we’ve tackled first is the replacement of a data node. If you lose a data node for some catastrophic reason or failure-it’s a disk, or you lose the motherboard on a machine and you need to replace that node-you used to have to go through a whole series of rebalancing steps. And the idea with this new capability is that you can replace that data node, and the data now will rebalance across your different nodes. Now, I call it a foundational service because we’ve got some things that we want to do now that that’s in place and expand the capabilities of the anti-entropy service. So, for example, we would like to do elastic resize. So we’d like you to say, “Not only to replace a node, but I want to add a new data node.” When you add a new data node, we’d also like the data now to be redistributed across the nodes that are available. We’re also looking at situations where customers get data drift on the back end for other operational outages and how to both detect and repair those inconsistencies in the data stored within the shards. And anti-entropy will be the hook point for us to be able to do that.
Tim Hall 23:28.816 Now, most significantly, I mentioned the time series index. And it says “Opt-in” here. So what it means is, we have this capability in the 1.3 release. However, it is turned off by default. What this means is, this is a little bit of a technical preview. We’d like you to be able to try and explore this capability in non-production environments. And the area of usefulness is really around when you have large ephemeral time series, so ephemeral meaning-let’s say you’ve got docker containers that are coming and going. Each of those docker containers is going to have a unique ID. But of course, for the life of that container, you’re going to be collecting the various metrics in the series. Then once it disappears, you’re probably never going to see that container ID again. And so, the idea is, how can we start to use the various capabilities of operating system and storage that are available to us to move that time index so that the hot data and the ability for you to access the most relevant data remains in memory? And as those ephemeral time series are not being accessed anymore, they spill to disk. And sort of the natural progression of both the hot and cold data is just happening from memory to disk behind the scenes.
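For those who want to experiment with the opt-in index in a non-production environment, TSI is enabled per node through the data section of the InfluxDB configuration file; this is a sketch, so confirm the exact key against the release notes for your version:

```toml
# influxdb.conf (sketch): opt in to the disk-based time series index
[data]
  index-version = "tsi1"   # default is the in-memory index; new shards use TSI
```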
Tim Hall 24:55.705 Now, the reason why it’s an opt-in and it’s not active by default is, there are still some restrictions in terms of, we don’t limit the scope of the data returned by the various SHOW queries. And of course, what that means is if you execute one of those commands and it accesses all of the data that’s available across all of the shards, you will run into out-of-memory conditions. And so, what we’ll be doing is addressing that as part of the next release and ensuring that before we turn on the time series index by default, we also have that scope of limitations put in place and available to you. So, again, if you want to read more about the details of the development of this, we have a blog that sort of originally announced this feature and then a new blog by Paul Dix that describes more of the details of this and the work that’s gone on within the community to put this together.
Tim Hall 25:54.315 On the Kapacitor front, for those of you that are using Kapacitor for anomaly detection and alerting or other use cases, one of the things that you also notice is that we’ve introduced a technical preview item for scraping and discovery of Prometheus-style data collection. So Prometheus has done a great job in terms of describing how an HTTP-style endpoint can be created, and published, and standardized as a collection point for various stats. However, what that means is that you have to be aware of where that endpoint actually lives and then go out and scrape it or pull those stats. Whereas Telegraf works on a slightly different model, where it is embedded and deployed in various locations, and it pushes the data. Prometheus is sort of exposing this HTTP endpoint, which then, obviously, needs to be scraped. And so, what we’ve done is, we’ve worked with the Prometheus technology to leverage the discovery capabilities that it has, putting that into Kapacitor. And it made logical sense for Kapacitor to be the home of the configuration of both the service discovery and then for Kapacitor to reach out and stream that data into the database from the various endpoints that it discovers. And so, we’ll work on hardening that and making that more of a GA feature for the next release.
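As a configuration sketch, a Kapacitor scraper can be paired with a discovery block to pull a Prometheus-style /metrics endpoint into the database. The field names below follow the Kapacitor scraping documentation, but since this is a technical preview they may differ by version, so treat this as illustrative:

```toml
# Kapacitor config sketch: scrape a Prometheus-style endpoint
# discovered from a static target list (names and ports are hypothetical)
[[static-discovery]]
  enabled = true
  id = "local-targets"
  targets = ["localhost:9100"]

[[scraper]]
  enabled = true
  name = "node-metrics"
  type = "prometheus"
  discoverer-id = "local-targets"
  discoverer-service = "static-discovery"
  db = "prometheus"
  rp = "autogen"
  scheme = "http"
  metrics-path = "/metrics"
  scrape-interval = "10s"
```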
Tim Hall 27:20.066 And we’re obviously looking for feedback on folks that are using Prometheus as a standard way to expose the metrics from various ephemeral containers and things that should be deploying in your environments. And that notion of discovering those services on the fly as they’re being deployed is one of the benefits of that pull style. Of course, there are limitations. Pull-style metrics are a bit harder to use when you’re doing things that cross firewalls, so be careful. To be honest, we don’t care on the accumulation side of your telemetry, how you get your data: pushed style, pull style. We love them all, and we wanted to work to make this easier to integrate in the platform itself. So definitely take advantage of this and look for more improvements in this area. And if you have ideas or suggestions, we’re obviously open to hearing that.
Tim Hall 28:09.581 Additional feature innovation on Kapacitor includes updates to the alert topic system. We’ve now simplified the configuration of the alert handlers themselves, and alert handlers now only ever have a single action and belong to a single topic. So we think those are some nice modifications on the capability and simplification overall. But I think the biggest thing in terms of the Kapacitor innovation that’s been done is, now the ability for you to deploy it in an HA configuration. This is only available to the Enterprise subscription customers, and again, your license key will work to activate this feature in the nodes of Kapacitor itself. It also delivers alert de-duplication when you deploy the TICK scripts in each of the instances. And then, the nodes talk amongst themselves to determine which one will be raising that alert. Of course, if you lose a node for any reason, the other nodes will take over and then regossip amongst themselves to determine who will raise alerts going forward. There are two types of subscription modes. There’s a cluster mode and a server mode that you can configure as you’re putting the HA deployment together. And then, to highlight, there’s also some advanced security capabilities that we put in. There’s a token-based authentication scheme that allows a secure communication between the Enterprise edition of Kapacitor and InfluxDB, that folks have been asking for. And then, there’s also a mechanism for subscription-based authentication. Again, advanced security features only available in the Enterprise edition.
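As a rough sketch of the simplified handler model, a handler binds one action to one topic and is defined in its own file; the topic, id, and options here are hypothetical:

```yaml
# slack_handler.yaml (sketch): one handler, one action, one topic
topic: cpu_alerts
id: notify-slack
kind: slack
options:
  channel: '#ops-alerts'
```

A handler file like this is typically registered with something like `kapacitor define-topic-handler slack_handler.yaml`; check the Kapacitor documentation for the exact CLI form in your release.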
Tim Hall 29:57.407 And then, last but certainly not least, the Chronograf team is continuing to work on a two-week bug-fix and feature delivery cadence: Chronograf being the new unified user experience for the TICK stack and, obviously, InfluxDB Enterprise. And the key things that we’re continuing to work on, expanding the administrative capabilities. So you can create databases. You can do user management. You can configure alerts. You can change retention policies. All those administrative capabilities are available through the user interface. But the power comes in, in terms of being able to do integrated and deep data exploration and exposure of the fields and tags in a very native and easy-to-use manner through the query builder. It will allow you to build dashboards very, very quickly.
Tim Hall 30:48.888 And then, we’re also providing a whole host of pre-built dashboards for those of you that are using Telegraf as your metrics collection source. So when you’re using Telegraf, we know it’s Telegraf. And as those data elements come in, we’ll tell you what hosts are reporting in. And then, you’ll see links to specific visualization for the metrics, for the plugins that have been activated on those Telegraf collectors, and so no queries to write, no dashboards to build. You can just collect and explore the data extraordinarily rapidly using that. And of course, all the Chronograf innovations are being delivered in the open source. There’s no additional license to buy here. Again, it works across both the Enterprise edition and the open source editions of the TICK stack. Lots of new feature innovations, so definitely, if you haven’t tried Chronograf, grab the latest edition. I think we’re about to release 1.3.5 in the next day or so. There’s a new status page. There’s new legend information that’s available that you can tap into. It’s super, super easy to use and a powerful tool to explore your data. So with that, I’m going to wrap up the prepared remarks, and we’re going to go into Q and A.
Chris Churilo 32:15.084 Awesome. So if you do have any questions for Tim, now is your chance. Go ahead and put them in the Q and A section or the chat window, and he will answer them as they come in. So we’ll keep the lines open for probably the next five minutes. Okay, Tim, looks like we got a question that came in through the Q and A panel. So the question is, are the Enterprise web console and Chronograf at feature parity, so that, with confidence, I can just switch over to Chronograf?
Tim Hall 32:51.968 That was actually one of the requirements that we had in terms of getting Chronograf to GA, is reaching feature parity. So our belief is that it is at feature parity and actually beyond in terms of the visualization capabilities. So for those of you that are familiar with the web console, there’s an easy-to-use data exploration panel. So you can rapidly just drop queries in it, either in full text-or you can use the query builder capability-and it’ll return results either in a graphical format or in tables. All of the administrative capabilities are there. And I think what you’ll see now is, as we sort of advance from this point, we’re going to be adding a bunch more things that, essentially, we were having trouble with in terms of trying to integrate in with the existing Enterprise web console. If there are features that folks were using in the Enterprise web console that they believe did not make it across, we’d love to hear that feedback, of course. But yeah, Chris, I think at this point, we believe it’s at feature parity and actually beyond in many cases.
Chris Churilo 34:02.867 Cool. Another question that came in is about upgrading from the 1.X platform to 1.3. Any issues? Is it really easy?
Tim Hall 34:14.258 Yeah. Again, trying to continue with the theme of developer happiness and delighting developers, this should be a drop-in upgrade. Just to highlight, we’ve got over 160 instances of 1.3 that we’ve been running in the InfluxDB Cloud environment for the past month and a half. So we have a high degree of confidence in our ability to upgrade very, very easily from the previous editions to the current. Through that cycle we did obviously find a couple of defects. That’s sort of normal, part of the software experience. And those defects have been addressed. And so, for those of you on the Enterprise side, you’ll notice that the first release you’ll have access to is actually not a 1.3.0. That’s what we sort of built, hardened, and deployed in the InfluxDB Cloud environment. What you’re going to get is actually a 1.3.1, which has those regressions and/or defects that were uncovered during our extensive testing addressed. And so, again, as a product person and somebody who uses the technology regularly, I definitely appreciate the ability to not take a .0 release that has a bunch of new features and functions. That’s not to say it’s bug-free, but I will say that we have certainly done a lot of testing on the regression side and looked at how easy it was to sort of drop it in and stand up. The one thing, again, that I’ll caution you on is if, for any reason, you run into a regression-and that does require you to downgrade-you will want to stop the writes from backing up in the hinted hand-off queue, because that hinted hand-off queue segmentation is now different in 1.3. And the 1.2 version will not be able to read those new queue segments. So I would say the upgrade path, super easy and smooth. Downgrade has that one little gotcha that you need to be aware of.
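The downgrade gotcha above amounts to: don’t roll a data node back to 1.2 while 1.3-format segments are still sitting in its hinted hand-off queue. A minimal pre-downgrade sanity check could look like the sketch below. The queue path and the `hh_queue_empty` helper are assumptions for illustration, not official tooling; the actual queue location is whatever the `dir` setting under `[hinted-handoff]` in the data node’s configuration points at.

```shell
#!/bin/sh
# Sketch: verify the hinted hand-off queue has drained before a
# 1.3 -> 1.2 downgrade (1.2 cannot read 1.3's queue segment format).
# HH_DIR is an assumed default; match it to your [hinted-handoff] dir.

hh_queue_empty() {
  # Succeeds (exit 0) if the directory is absent or contains no files,
  # i.e. no pending hinted hand-off segments remain on disk.
  dir="$1"
  [ ! -d "$dir" ] || [ -z "$(find "$dir" -type f 2>/dev/null)" ]
}

HH_DIR="${HH_DIR:-/var/lib/influxdb/hh}"

if hh_queue_empty "$HH_DIR"; then
  echo "hinted hand-off queue empty; safe to proceed with downgrade"
else
  echo "queue at $HH_DIR still has segments; stop writes and let it drain first" >&2
fi
```

The design choice here is simply to check for files on disk rather than parse segment contents: once writes are stopped and replication catches up, the queue directory empties out, which is the condition the downgrade needs.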
Chris Churilo 36:11.300 Okay. One more question, just for those InfluxDB Cloud customers. When can they expect to get this 1.3 version?
Tim Hall 36:22.114 So, like I said, all of the InfluxDB Cloud customers-well, 160 of the about 200 instances that we have running-have already been upgraded and are running 1.3. We have a remaining handful of folks that we’ll be sending notifications out to over the next day or two to let them know when they will be upgraded, but we expect that to happen over the next week. It’s just a matter of sending out the notifications and organizing a time with those folks. But, yeah, lots of folks already received those notifications about a month or more ago and have been running on 1.3 since.
Chris Churilo 37:08.665 Awesome. Thank you so much. We don’t have any other questions that are in the queue, but I want to remind everybody that we have recorded this session, and I’ll be posting it. We also attempted to do a Twitter live feed, so maybe some of you in Twitter-land got a chance to see that. And if you have any other questions, please make sure to put them at community.influxdata.com. We have, always, our engineers out there looking to answer all of your questions. And actually, Tim is also out there quite a bit. So with that, we will end our session today, and we hope you have a wonderful day.
[/et_pb_toggle]