Using InfluxDB for Consumer IoT | Live Demonstration
Session date: Aug 10, 2023 07:00am (Pacific Time)
In this live demonstration, discover, through real-life use cases and a technical demonstration, how companies use InfluxDB to provide their customers with real-time insights and analytics into devices, websites, or applications by collecting, storing, and visualizing data from sensors.
Understand how InfluxDB customers use historical sensor data to gain insights into performance and consumer behavior without sacrificing any depth of analysis.
Join our live demo to learn how InfluxDB can help you:
- Provide real-time monitoring and analytics of IoT devices at scale
- Support write ingestion of millions of data points per second across all customers
- Improve overall customer experience with an optimized database for low-latency analytical queries
Watch the Webinar
Watch the webinar “Using InfluxDB for Consumer IoT - Live Demonstration” by filling out the form and clicking on the Watch Webinar button on the right. This will open the recording.
[et_pb_toggle _builder_version="3.17.6" title="Transcript" title_font_size="26" border_width_all="0px" border_width_bottom="1px" module_class="transcript-toggle" closed_toggle_background_color="rgba(255,255,255,0)"]
Here is an unedited transcript of the webinar “Using InfluxDB for Consumer IoT - Live Demonstration”. This is provided for those who prefer reading to watching the webinar. Please note that the transcript is raw. We apologize for any transcription errors.
Speakers:
- Ben Corbett: Solutions Engineer, InfluxData
- Jay Clifford: Developer Advocate, InfluxData
Ben Corbett: 00:00:00.677 So my name is Ben Corbett. I’m a solutions engineer at InfluxData. So that basically means I’m one of the techies that works for the commercial team. My role at InfluxData is typically to help customers to understand if InfluxDB will be a good fit for their use case. So this typically involves understanding the use case and the project that you’ve got, maybe what the legacy system is, if there is one there at the moment, understanding any friction or issues that you might have with that, and then hence extracting your requirements. Then we kind of map them towards InfluxDB capabilities, and also just educating you with regards to what InfluxDB can do and supporting you through any evaluation. So that kind of involves all the high-level conversations around architectural best practices, schemas, write optimizations, and query patterns. But then also, “Help, Ben, quick. Can we get on the phone? What does this error code mean?” I come from a background in IoT platform development before joining Influx. And I’m very happy to say that IoT as kind of an industry for Influx is really growing and expanding. It’s almost exclusively what all of my time is spent doing now. So before, I kind of focused on — it was mostly kind of agriculture, construction, energy, but also a little bit in electric vehicles. And InfluxDB started to be quite central to a lot of the platforms that we were developing. So I liked it so much, I came over to the dark side and joined these guys just at the beginning of last year. And yeah, I’ve been working with our IoT customers ever since then. Shall I crack on with the presentation, Jay?
Jay Clifford: 00:01:45.404 Awesome. Ben, yeah, I’m very glad that you joined the dark side as well. It’s always greater on the dark side. So yes, without further ado, if you would like to start the presentation.
Ben Corbett: 00:01:55.575 Great. Let’s do it. So today I’m going to be walking through — I’ll start off with a little bit of an InfluxDB overview. And as part of that we’ll be talking about the new and improved 3.0 storage engine, sort of internal project name IOx, and some of you might be familiar with that. We’ll then jump into some use cases. So these will be themed towards how customers are using InfluxDB as applied to consumer IoT use cases. So we’ve got, I think, three of those to share with you. And then we’ve got a short live demonstration, a little bit of role play from me and Jay as well. So Jay’s been working behind the scenes to set up a nice live demonstration for you. And we’ll both be kind of conducting that and walking through it. So let’s get started.
Ben Corbett: 00:02:42.901 So what is InfluxDB? So for those of you who haven’t worked with InfluxDB before, InfluxDB is a database and a platform for handling time series data at massive scale. So as the name suggests, it’s that database — you can see it at the bottom there. So this is kind of a database here, which is really performant at collecting, compressing, and serving data with a timestamp, so kind of natively developed for that. Whereas a lot of other database solutions are more suited to different workloads, or more like a Swiss Army knife to handle any workload, we’re really tailored to anything with a timestamp. On top of that, we have the API. So this is kind of the best-in-class consistent API that we have across all of our different editions and versions. So one unifying API, this is kind of where all of the development happens, right? We’ve got good documentation online, which comes from our open-source roots. And then we’ve also got client libraries, which are a really popular way for you to interact with that API. On top of that, scripting languages. So this is typically how you interact with the data itself, not just the database. So this can be InfluxQL, which is kind of our own in-house SQL-like query language, which has been around for a while, or SQL, which I’ll talk a little bit about in a moment. These are typically how you can manipulate, extract, and basically query your data out of InfluxDB. And then last but not least, one thing that I’m really passionate about and a lot of our customers get value out of are our data collectors, right? So one thing we’re really passionate about at Influx is time to awesome, developer productivity, ease of use, whatever you want to call it. And the data collectors are one of the really easy ways for me to describe this to you.
Ben Corbett: 00:04:26.420 So essentially, we’ve got a couple of different ways that’ll make it really easy for you to be able to scrape, poll, subscribe, get data into your Influx, to really reduce that time towards building an MVP or a production offering. So it kind of begins with our widely adopted open-source Telegraf. So hundreds of plugins available there, definitely go and check that out. It’s kind of become a little bit of an industry standard for collecting data in the time series database space. It can then continue with our client libraries. So if Telegraf doesn’t meet your requirements, feel free to develop your own client in a language of your choosing. Those two techniques have a lot of the write optimizations baked in sort of out of the box. So these are really, really scalable and production-ready ways to get your data into Influx. And then we also have other kind of third-party integrations. So if you’ve got a particular tool that might have an InfluxDB plugin, definitely go ahead and test that and see if that’s a way that you can simplify your architecture too.
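As a rough illustration of what Telegraf and the client libraries are producing under the hood, InfluxDB ingests points as line protocol. A minimal hand-rolled encoder might look like this (a sketch only; real line protocol also needs escaping and an `i` suffix on integer fields, and the measurement, tag, and field names below are made up for the example):

```python
# Sketch of encoding one point as InfluxDB line protocol:
#   measurement,tag=val field=val timestamp
# In practice, Telegraf and the official client libraries do this for you.

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Encode a single point; floats stay bare, strings are quoted.
    (Simplified: no escaping, no integer 'i' suffix.)"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f"{k}={v}" if isinstance(v, (int, float)) else f'{k}="{v}"'
        for k, v in fields.items()
    )
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "kettle_power",                           # hypothetical measurement
    {"device_id": "pi-01", "site": "home"},   # hypothetical tags
    {"amps": 9.2, "state": "brewing"},        # hypothetical fields
    1691571600000000000,                      # nanosecond timestamp
)
print(line)
# kettle_power,device_id=pi-01,site=home amps=9.2,state="brewing" 1691571600000000000
```

Sorting the tags, as above, mirrors the docs’ recommendation to write tag keys in lexical order for best write performance.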
Ben Corbett: 00:05:29.946 One thing we’ll talk about, I think, on a different slide is just the different additions that we have in influx. So the point that I want to make here is really we’re suited not just towards hybrid deployments, but you as your business changes. So on the left-hand side, we’ve got the fully managed editions of InfluxDB. So these are typically for customers who they don’t want the additional overhead of maintaining the — yeah, maintaining the code and hosting it themselves. So you’ve got serverless, which is the shared edition, so based on shared infrastructure, offering elastic scalability and kind of a really low entry point. So really good for kind of prototyping and getting going and kicking the tires. Cloud dedicated, so this is a clustered InfluxDB that’s highly available, production ready and it’s sat on dedicated infrastructure in our cloud environment and managed by the experts at InfluxData. This is kind of what we say is suitable for use cases which are running mission critical data. So you want to have it on dedicated infrastructure and not worry about any implications that are associated with shared infrastructure. And on the right-hand side, we’ve got the self-hosted editions of InfluxDB. So clustered is the evolution of enterprise on 3.0. So that is the kind of self-hosted edition of an InfluxDB cluster, offering high availability, obviously, support as well as some additional enterprise features. And then edge will be the single node edition of that, so suitable for kind of those edge deployments or gateways, devices, assets, things like this.
Ben Corbett: 00:06:58.981 Next, I’m going to be digging into 3.0 a little bit. So this is a really exciting time for me to be able to give webinars because 3.0 landed — well, it started landing earlier this year. So 3.0 is a complete redevelopment of the underlying storage engine within InfluxDB, and it forms the core of the platform itself. So what we’re doing here is really doubling down on the core database functionality, so focusing on storage, compression, storage performance, queries as well. So not just better query performance, but also additional query use cases that we’re seeing a lot of our customers have. Cardinality, so focusing on scale, focusing on those high cardinality use cases: how can we incorporate additional workloads within InfluxDB, making it more useful in your tech stack? And last but not least, making InfluxDB play more nicely in the ecosystem and integrate more nicely with really well-known tools, kind of shifting towards more of that open and interoperable approach there. So just to highlight, 3.0, or internal project name IOx, has pretty much been finished ever since I’ve been at the company, but as I’m sure a lot of you that work in software appreciate, massaging that into kind of those commercial offerings and getting it out into production so customers can use it takes just as much time as developing it initially. So that’s kind of what’s been happening this year. And I think it’s been around in Serverless since January, Cloud Dedicated since April, and it will be landing across the different editions throughout this year as well. So it’s been in development for a while in conjunction with a lot of our customers, which is kind of why, on the next slide, I’ll be talking about what we were consistently hearing that a lot of our customers wanted out of InfluxDB. So these are kind of the pillars of the benefits that we are wishing to bestow on customers.
Ben Corbett: 00:08:45.953 So the first one: essentially, InfluxDB historically has been a really good fit for metrics. So relatively fixed or slow-growth data sources representing that kind of stable cardinality, if you’re familiar with that term. That’s a term that represents kind of the variation in the keys or the indexes that you’ll have against your dataset. InfluxDB wasn’t a good fit for high cardinality or unbounded cardinality use cases such as events, traces, logs, error codes, things like this. So what we wanted to do with InfluxDB 3.0 was add support for unlimited cardinality, positioning it as a more versatile time series database for any time series workload, not just metrics. Although a lot of our metrics customers also ran into cardinality concerns, right? It was the treacle. It was the mud that kind of got in and slowed down your write performance and your query performance. And now we’re happy to say that with 3.0, we’ve added support for unlimited cardinality with no impact on query performance as well.
Ben Corbett: 00:09:52.668 The second pillar that we have here, I mean, database customers are never going to be upset about faster queries. But really what this means is additional query use cases as well, right? So InfluxDB has always been a good fit for those metrics monitoring and alerting use cases, querying the leading edge of data. It’s a hot database tier, but what we want to do is add support for maybe slightly heavier analytical query workloads and slightly different query workloads. So again, just making InfluxDB more useful throughout your tech stack.
Ben Corbett: 00:10:25.252 The third one is one that I’m particularly passionate about, and it’s the one that a lot of my customers can take advantage of immediately. So again — I mentioned before — InfluxDB has historically only ever been a hot access tier. And lots of customers asked, I heard it day in and day out, when are we going to introduce a cold storage tier? And a lot of customers were just essentially building their own cold storage solution, right? Dual-writing the files into object storage and keeping them there for archiving purposes, or potentially to run their analytics workloads, machine learning tasks, things like this. So with 3.0, I’m happy to say we’ve included a cold storage tier, which is based on Parquet files sat in object storage. So there’s two benefits here from a storage perspective. One is advanced compression. So I think our benchmarks say 4.5 times better compression than OSS within this cold storage tier. But then also the kind of unit cost of object storage is orders of magnitude cheaper than what you would have for SSDs. We all know SSDs are expensive. So this is one that I’m really, really passionate about, and I’ll also talk a little bit more about how that object storage can be even more useful in the next slide.
Ben Corbett: 00:11:46.127 The fourth one that we have here, so this is really relating to something that we would also hear time and time again. It’s that customers, potentially ones that were new to Influx, really, really liked the technology, but maybe they found Influx a little bit hard to work with. It had a slightly steeper learning curve, and their organization just knew SQL. So SQL is the kind of de facto query language of choice. So I’m pleased to say that with 3.0, we’ve included the ability to query via native SQL. And this also comes with the opportunity to integrate with SQL-based tooling as well, right? So anything with a JDBC driver or ODBC driver, we’ll be able to integrate with, so Tableau, Superset, Power BI, things like this. Definitely take a look if you’ve got an application that you think can leverage one of those drivers to integrate via SQL. So hopefully pushing down that learning curve and that sort of barrier to entry for our technology.
Ben Corbett: 00:12:43.967 Last but not least, again one I’m really passionate about: moving kind of further away from that walled garden approach. So from our own sort of in-house query languages and our own file formats, we’re moving towards continuing to adopt more and more open standards, open technologies, query formats, things like this, just so we play more nicely in the ecosystem and complement the other items in your tech stack. And I’ll show you an example of that now. So this is a really high-level overview of the 3.0 architecture. There’s two things I wanted to kind of touch on on this slide. One is how the hot and cold tier work together. So what you can see here within the InfluxDB box is that as a query comes in, the DataFusion query engine will essentially go to the data catalog to find out where those Parquet file partitions sit. It’ll then go to memory, which is based on Apache Arrow. This is the in-memory format of Parquet, right? So that data conversion to Parquet, the in-memory format, is extremely performant, and it’s giving our customers really, really impressive improvements on query performance. Any additional data that isn’t already included in memory will be fetched from Parquet. So you’ve got a small performance cost here of an additional object fetch from the object storage there, depending on how many Parquet files you do need to fetch. So the point here is that the query that comes in through the front door has no understanding of what’s in cold and hot storage. It’s all automatic and all under the hood for you. And data will reside in the hot tier for a period of time, which is configurable, after it is written, so satisfying that kind of querying-the-leading-edge-of-data use case: your monitoring, your alerting. And then also, as data’s queried, typically it’s queried a few more times.
So as it’s fetched from Parquet, it’s pulled into memory and will reside there for that period of time too before it eventually expires and it’s just sat in object storage waiting to be worked on again.
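The hot/cold interplay described above behaves, very loosely, like a read-through cache: serve what is already in memory, and pull missing Parquet partitions from object storage on demand. The following is a hypothetical simplification in Python for illustration only, not InfluxDB’s actual code:

```python
# Loose sketch of the read-through pattern: an in-memory hot tier in
# front of a cold object store keyed by partition. All names are
# hypothetical; this is not how InfluxDB is implemented internally.

class TieredStore:
    def __init__(self, object_store):
        self.object_store = object_store   # partition_id -> rows (cold tier)
        self.memory = {}                   # partition_id -> rows (hot tier)

    def query(self, partition_ids):
        rows = []
        for pid in partition_ids:
            if pid not in self.memory:
                # Cache miss: one extra fetch from object storage;
                # the partition then stays hot for subsequent queries.
                self.memory[pid] = self.object_store[pid]
            rows.extend(self.memory[pid])
        return rows

store = TieredStore({"p1": [1, 2], "p2": [3]})
print(store.query(["p1", "p2"]))  # [1, 2, 3] — both fetched from cold storage
print(store.query(["p1"]))        # [1, 2] — now served from memory
```

The caller never says which tier to hit, which mirrors the point that the query "has no understanding of what’s in cold and hot storage."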
Ben Corbett: 00:14:47.674 The second thing that I wanted to touch on is this back door that we’ve got here. So this is where you can see the kind of two arrows from machine learning tasks and data science activities coming into the Parquet files sat in object storage. This is the point that I was making before, right? We had lots of customers who had their monitoring use case, their alerting use case, but they started to want to use their data in new ways, right? So running those really intensive queries which are fetching highly granular information across vast time ranges, kind of more akin to a data export than anything else. What we’re looking to do is provide a pipeline which can give read-only access to those Parquet files sat in object storage. So you can satisfy your sort of data science team (we all know data scientists need to be topped up with a Parquet file a day) using this integration, and that will not impact your hot access: basically your monitoring, your alerting, and your time series database over here. So that could be used for archiving, maybe your own custom backup jobs, but also it was kind of built with these two activities in mind. So I will stress that this feature doesn’t exist in 3.0 at the minute, but it will be landing this year; we’re kind of developing a prototype at the minute, which should land in the next couple of weeks. So just something to get excited about for now. So again, this just really nicely simplifies what we’re talking about. So shifting InfluxDB, which was historically the metrics data store, hopefully towards more of a simplified tech stack like this. Now with the support for unlimited cardinality, being more interoperable with additional tooling, SQL support, things like this, we’re just trying to make InfluxDB more helpful in your tech stack and hopefully provide some simplification.
Ben Corbett: 00:16:40.874 I won’t dwell on this slide too much because I think we already covered it, but this basically just breaks down the different environments that we have 3.0 sat across for now. So yeah, the two different cloud offerings, Dedicated and Serverless, on the left-hand side. And then the self-hosted ones on the right-hand side, so managed by you: Clustered and Edge. This is quite hot off the press, so our benchmarking completed on Thursday, and we really wanted to get a slide in. I won’t go through and read them one by one, but definitely go onto our website and check out the full report. It obviously has all of the detail around the specific tests that we did run. But what you can see here is that we’re making good on that promise of focusing on the core database functionality, right? So improved throughput, improved storage performance, and then also query performance as well. So really doubling down on those core database fundamentals. And hopefully, yeah, delivering those benefits to you guys.
Ben Corbett: 00:17:36.861 So consumer IoT, why you’re here. So what I wanted to do is kind of go through a little bit of what we see the value proposition is for Influx for consumer IoT use cases, so essentially why customers are adopting us. So by consumer IoT, just to give a quick definition, the one that we typically adopt is where InfluxDB is integrating with consumer-based devices, so wearables, mobile phones, those kinds of things, or it’s propping up a customer-facing application, so yeah, providing kind of more insights to customers there. So let’s dig into the value prop. These are kind of the three things, and usually a customer will start off with one of them and will expand to the others. But these are kind of the three pillars of value proposition that I see consumer IoT use cases face, and depending on which side of the coin the use case that I just described sits, whether it’s on the device side or the application side, that might determine which one they kind of go for. So on the left-hand side, we’ve got driving that customer engagement and experience. So that can be making you feel more connected to your device, potentially giving you more insights, and just overall giving you a better service. So really, really improving that kind of customer satisfaction.
Ben Corbett: 00:18:55.385 The second one that we have is more internal, so it’s more around gaining insights internally around how customers might be using your application or device product and feeding into that kind of product loop, right? So understanding how you can make your product better. So that’s a really key one as well. And I’m working with a customer at the minute, which is augmenting their production line facility to improve product quality. So that kind of falls into that product analytics use case there.
Ben Corbett: 00:19:24.763 And then last but not least, obviously, one thing that’s always possible with consumer IoT use cases is providing more value to customers: value-added services, potentially additional revenue streams that you can build. We’ve definitely got some examples of that as well. So once you understand how customers are using your product a little bit more, maybe you can understand how you can help them more. And it can lead to additional revenue streams.
Ben Corbett: 00:19:55.814 Super high level, this is our consumer IoT reference architecture. So all we’ve really done here is just put all of the information on the page with what could be the relevant Telegraf integrations or client libraries to integrate with those consumer data sources. So you can see on the left-hand side, we’ve got ecosystem devices. We’ve got mobile apps, electronics products, web apps, all those kinds of things, as well as just message brokers or anything that could represent a data source in your ingestion pipeline. InfluxDB is obviously collecting and storing that information. And then you’ve got your typical applications and use cases on the right-hand side. So we want to reduce churn. How do we improve that experience? We want to understand our customers’ behavior better. We want to improve the product performance so that we can build better products next time, all of those. So yeah, definitely take a look at that.
Ben Corbett: 00:20:49.150 And then I guess really quickly before we dive into the case studies, a quick nod to EDR, which stands for Edge Data Replication. So Edge Data Replication is a feature that exists within InfluxDB which is designed for use cases that have an edge storage requirement, and particularly where you’ve got intermittent connectivity as well. So it’s an opportunity for you to simplify that edge-to-hub data pipeline and collect data in a way that is durable. So essentially what this is, is InfluxDB deployed at the edge. So that will be either per device, per gateway, per site, whatever it is, where you have that local operational view, where you’ve got kind of a closed-loop feedback at the site, at the device, that needs that data there. But then also, you have that kind of parent-level hub, single-pane-of-glass view, where you need to collect the data from all of your ecosystem, so you can improve the agility that you have in terms of hooking up additional services, maybe making the data more accessible for data scientists, reporting, analytics, KPIs, all of that stuff. So this is an architecture that we see come up time and time again. There’s lots of different ways to do it, but EDR is just one way that offers that kind of durable data sync from the edge to the hub, and it’s really just a few lines of CLI and really, really simple to set up. And that hub InfluxDB can be any InfluxDB as well. So definitely check it out. One thing I’ll say about the durability: if the data sync fails between the edge and the hub (so this is designed for use cases with intermittent connectivity), the data is buffered on disk at InfluxDB on the edge, and it is retried at an interval determined by you. So that offers that durability. We’ve got customers that run this at sites with bandwidth issues, maybe even marine vessels that come in and out of connectivity, and they’re really happy with the service.
So definitely check that out. And this is actually part of our demo too, so.
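For reference, the "few lines of CLI" mentioned above look roughly like the following on an InfluxDB 2.x edge instance. This is a sketch: the names, URLs, and environment variables are placeholders, and flags can differ by version, so check the Edge Data Replication docs for your release.

```shell
# Sketch: wiring up Edge Data Replication with the influx CLI.
# All names, URLs, and IDs below are placeholders.

# 1. Register the remote (hub) InfluxDB instance from the edge node
influx remote create \
  --name hub \
  --remote-url https://us-east-1-1.aws.cloud2.influxdata.com \
  --remote-api-token "$HUB_TOKEN" \
  --remote-org-id "$HUB_ORG_ID"

# 2. Create the replication stream from a local bucket to the hub bucket.
#    Writes are buffered on local disk and retried if the link drops.
influx replication create \
  --name kettle-replication \
  --remote-id "$REMOTE_ID" \
  --local-bucket-id "$LOCAL_BUCKET_ID" \
  --remote-bucket-id "$HUB_BUCKET_ID"
```

Once the replication exists, anything written to the local bucket is mirrored to the hub automatically, which is the durable edge-to-hub sync described above.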
Ben Corbett: 00:23:00.208 Okay. A little bit more about our case studies. So Tesla is a really cool one for me because it represents both sides of, I guess, how we support consumer IoT use cases. So one is integrating with consumer data sources. So Tesla pulls time series data from connected Powerwalls and solar panels in users’ homes. So they collect all of that using InfluxDB, and then they have the Tesla Powerwall app, which is powered by InfluxDB. So that’s the second use case, which is really propping up a customer-facing application. So they collect all of that data at the edge, and they send that data into InfluxDB running in their backend systems, which props up the app. So you might have seen this before. This is one we’re really, really happy about. And yeah, definitely one of our strongest consumer IoT use cases.
Ben Corbett: 00:23:50.751 So this one’s kind of shoehorned in a little bit. Obviously, it’s consumer IoT because it is a consumer device that has a consumer application, and they have a very impressive IoT pipeline which gathers all of that information system-wide, transforms it, manages the devices, analyzes the data, aggregates it all, and presents it to the user so they can understand more about their Nest device and their home, their smart thermostat. InfluxDB actually monitors the infrastructure that runs all of that pipeline. So it’s more of a kind of monitoring use case, but nevertheless geared towards consumer IoT, because obviously the better that pipeline is, the better the customer engagement and customer experience, and it probably overall allows them to understand the performance of the device more, hopefully feeding into that product analytics use case that we saw.
Ben Corbett: 00:24:47.076 Last but not least, a really cool one. So Rappi essentially uses InfluxDB to monitor, react to, and adjust the fluctuations in the on-demand pricing of their driver-rider network, so depending on availability, prices, all of that kind of stuff. InfluxDB collects time series data from this mobile rider delivery network. And then it sends it to an InfluxDB in the cloud-based application they’ve built, which allows them to do that kind of on-demand pricing and adjust to the fluctuations. So I guess kind of similar to what you get with Uber when it comes to surge charges. This is what we’re seeing Rappi do with their use case. So another really, really cool one that’s geared towards a consumer-facing app, but also integrating, I guess, with mobile devices.
Ben Corbett: 00:25:34.784 Awesome. So we’ve got a demo for you guys that we’re going to run through. Sorry, Jay, I took up quite a lot of time there. I’ll pass it over to you just for a quick 101 if you like, and then we can kind of run through our demo.
Jay Clifford: 00:25:48.958 Ben, I must admit, I’m going to take away that saying from you, basically, that every data scientist needs a Parquet file once a day. That is brilliant. So I might share my screen if that’s okay with you, Ben, and we can start this role play off. Okay. As Ben said, we’re going to do a bit of a role play today just to keep it interesting with this consumer IoT use case. So you’re going to have to imagine that Ben has started a new consumer IoT platform called Coffee Co. And he’s offering a range of coffee distributors, kettles, you name it. All of these smart devices are offering data based on how much usage I get from these devices, so overall power usage, the status of that coffee machine, et cetera. So all of that data is being provided to me in near real time at the edge. At exactly the same time, Ben also wants this data up in the cloud. He wants to be able to aggregate all of his smart devices into one location so he can basically provide consumer feedback and aggregate that data in some way, so he knows when we are creating our coffees and how he can then interact with us based upon that, whether he needs to sell us more coffee, whether he needs to offer us something more expensive in his line, but first he needs to know exactly what we’re up to. So based on this really rough prototype, we’ve devised this solution. And you can sort of see the Coffee Edge architecture here. So we have a Raspberry Pi, which is considered our edge device, and what that’s actually doing is it’s got a noninvasive power monitoring sensor, which basically monitors the current coming from our extension lead here that’s connected to our kettle. What that’s doing is basically using a Python script to poll and collect that data from the noninvasive current sensor. We’re using Telegraf to schedule that Python script.
So as Ben said earlier, if you don’t have a plugin available to you within Telegraf, you can actually create a script yourself and have Telegraf run it for you and parse the data after it’s been created.
Jay Clifford: 00:28:12.109 So in this case, that’s exactly what we’re doing with that Power.py. So Telegraf is scraping the data, parsing that data into a format that InfluxDB understands, and then writing that data to InfluxDB at the edge here. From there, we actually then have EDR working. And as Ben pointed out, it’s our highly durable service for writing data automatically from one bucket to a remote instance of InfluxDB. In this case, we’re writing it to InfluxDB Serverless, which is our 3.0 multi-tenant offering, where Ben is going to pick up the data from there and provide his further analytics upon that data. So let’s kick off the demo. What I’m going to first do is turn this on, and my apologies if it gets a little loud. And then I’m going to set my auto refresh for my local instance of InfluxDB. So what we’ll soon eventually see is a sharp spike in our overall pull from that sensor, and we should see on our activity that coffee is being made. Now, while that’s taking place, what I can also show you — there you go. You can start seeing the pull of that data going up, and we should see — we’ll give it another refresh eventually. While we’re also seeing that data coming in — I’ll come back to this — I’ll jump into Telegraf, and you can see here this is our Telegraf config. This Telegraf config basically outlines exactly what we discussed that’s running on the edge here. So InfluxDB is great for storing all your configurations. You can use this for hundreds of Telegraf configurations and edit them within InfluxDB. So in this case, you can see it’s a simple output plugin to InfluxDB that we’re writing to, and then we also have that execd plugin, which is running our Python script. So if I return back to our dashboard, as you can see, we now have the coffee brewing in that time period. Now, that’s the consumer part of the role play over. That’s the data I have got.
Ben, why don’t you show us what’s happening on your side, and I’ll pass the screen back to you.
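A minimal edge Telegraf config along the lines Jay describes might look like this. It is a sketch: the file paths, script name, org, and bucket names are assumptions based on the demo, not the actual config shown on screen.

```toml
# Run power.py as a long-lived child process via the execd input plugin
# and read InfluxDB line protocol from its stdout.
[[inputs.execd]]
  command = ["python3", "/home/pi/power.py"]  # hypothetical path
  signal = "none"
  data_format = "influx"

# Write the collected points to the local (edge) InfluxDB instance,
# from which EDR replicates them up to the hub.
[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "$INFLUX_TOKEN"
  organization = "coffee-co"   # hypothetical org
  bucket = "edge"              # hypothetical bucket
```

With `data_format = "influx"`, the script only has to print line protocol; Telegraf handles batching and delivery to the output.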
Ben Corbett: 00:30:38.188 Yes, perfect. Thanks, Jay. So let me just go ahead and share my screen. So I guess it comes back to that EDR piece that I mentioned before, right? So we have that local operational view. So Jay is getting some real-time insights about how his device is performing, how his kettle is performing, brew times, things like this. And it’s really helping him to feel more connected to his device, understanding it more, understanding how healthy it is, and improving that kind of customer experience and engagement with the service. It could even be propping up a consumer-facing application; maybe he could be doing some remote management of that device. And that kind of sits within that edge use case that we have. What you can see from my dashboard here is more of a parent-level operational view. So if we consider this, I’m looking at the entirety of my service. I’ve just drilled down into Jay’s device for now. And with this, I would be more concerned with — I’m happy that Jay’s having a better experience and feeling more engaged, but also I’m able to see a lot more about the analytics of my products. So brew times: not only how often is it used, but how long does it take for the functionality to be completed? I can really drill down into the amps, the voltage. I can look at that across my fleet of devices, assets, whatever it is. And I can produce roll-up metrics, reporting. I can even unlock this data for our analytics team, so data scientists can run some complicated machine learning models to find out a little bit more about optimizations or fine-tuning we could do for our product next time. So this is kind of just a really high-level overview of what we would have as that kind of parent-level ops center, which is a really, really common edge-to-hub architecture that we see.
Ben Corbett: 00:32:33.427 So yeah, what am I looking at? So specifically, you can see I’ve got the last value here, so that’s the leading edge of data for Jay’s particular device. I’ve got the voltage. I’ve got the amps, and you can see a spike there when it hits brewing, and you can see exactly — I could maybe work out a little bit about the energy consumption, so I could do some optimization there. I can see the brew time, and I’ve got a number of coffees. So that’s giving me some real insights into those key metrics that I want to see across how Jay’s using his device. If I change the time range here to across three hours, you can see that number of coffees jumps up. So these are just, yeah, particular cells that we’ve got. So this is based on InfluxDB 3.0. So it’s sat in InfluxDB Serverless at the moment. If I drill down into one of these tiles, we should see — that’s right. There we go. So this is the associated SQL query, right? So this is a really good example — quite a complicated SQL query. So what we’re doing is assigning a status based on a particular value. So if the amps are above that particular threshold, we can consider the state brewing. If not, it’s standby. And then I can produce this table, which I can visualize and potentially even leverage in a report. So the key thing here is that SQL is 3.0 functionality. So you can see it’s using the plugin that Jay’s named serverless, which is going towards InfluxDB Cloud Serverless. And if I show you really quickly, if I go on to connections, we should have a couple of plugins.
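A status-assigning query of the kind Ben describes might look roughly like this in InfluxDB 3.0’s SQL dialect. The measurement and field names (`kettle`, `amps`) and the threshold value are assumptions for illustration, not the actual query from the demo:

```sql
-- Illustrative only: the table/field names and the 5 A threshold
-- are assumptions, not the query shown in the webinar.
SELECT
  time,
  amps,
  CASE
    WHEN amps > 5.0 THEN 'brewing'
    ELSE 'standby'
  END AS status
FROM kettle
WHERE time >= now() - INTERVAL '3 hours'
ORDER BY time;
```

The `CASE` expression is what turns a raw current reading into the brewing/standby state column that can then be visualized or rolled up in a report.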
Ben Corbett: 00:34:10.184 So we’ve got the Flight SQL, which is integrating with InfluxDB serverless for SQL queries. And we’ve also got the InfluxQL plugin as well. So you could also leverage InfluxQL queries for 3.0 and produce some metrics there if that’s the language that you kind of know and love. So I think that’s it for me when it comes to the parent level operational view, Jay. Maybe back over to you just for some closing comments.
Jay Clifford: 00:34:43.645 Yeah, I think you hit the nail right on the head there. I think one of the main benefits of 3.0 here is that you can leverage both InfluxQL and SQL, and both have their own benefits in their own ways. I find SQL is very generic, or rather it covers a lot of broader use cases. It has a lot of utility functions available to it, as you saw there with the subqueries, which are really, really cool. InfluxQL is so concise for time-series-based functions. It makes very complex time-series aggregation seem really simple, in very small amounts of InfluxQL query language. So yeah, I think using a variety of both can really benefit you depending on your requirements. So it’s great that we have both. Awesome.
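As a rough illustration of that conciseness (the measurement and field names here are assumptions, not from the demo), a windowed time-series aggregation in InfluxQL can be a one-liner:

```sql
-- Mean current draw in 5-minute windows over the last hour (illustrative names).
SELECT MEAN("amps") FROM "kettle" WHERE time > now() - 1h GROUP BY time(5m)
```

The `GROUP BY time(5m)` clause does the windowing that would take a `date_bin`-style expression and an explicit `GROUP BY` in SQL.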
Jay Clifford: 00:35:31.494 So Ben, thank you so, so much for this demo today. It has been awesome to have you on again, and hopefully we’ll have you for the next one as well. What I’m going to quickly do is a little bit of housekeeping, and then we can wrap this baby up. So essentially, what’s going to happen after this: anyone who’s attended will receive an email from Ben, basically thanking you for attending and giving you access to the slides, the recording, and also any of the links that we’ve provided as well. If you have any questions for Ben, please do not hesitate to reply to him. He will endeavor to reply to everyone. He did very well in our last one. I think we had over 30 replies that we needed to do. So well done to Ben on that one. If you don’t have access to the email, or don’t get it at any point, please just jump into our community. Either me, Ben, or one of our engineering team will be there to help you out, and we can answer all of your technical questions if you need. But as always, if it’s an enterprise-based question, something that requires licensing, please go directly to our sales team. They will be more than happy to have a chat with you in those situations. Ben, thank you very much. And I’ll see you next time.
Ben Corbett: 00:36:52.639 Thanks for having me, Jay. Thanks, everyone.
Jay Clifford: 00:36:54.699 Thank you. Bye, now.
[/et_pb_toggle]