Remotely Monitoring Your Systems with InfluxDB and DDS
Session date: Nov 05, 2019 08:00am (Pacific Time)
The Industrial Internet of Things (IIoT) is transforming industries across the board by infusing billions of connected devices with digital intelligence. Through low-cost computing and networking, today’s systems allow far greater amounts of data to flow through distributed systems at far greater speeds. As the amount of data increases exponentially, however, there needs to be a new way to monitor it in real time without overwhelming existing systems.
Building a monitoring architecture for IIoT requires a cohesive software stack for data collection, storage, analysis and visualization. In today’s connected world, however, the software components of the monitoring framework may be deployed in a distributed manner across geographically dispersed locations, requiring them to be seamlessly integrated over multiple networks.
This webinar will introduce a way to use InfluxDB in IIoT system monitoring. It will highlight a new architecture that meets the software framework requirements by combining InfluxDB with the capabilities of the Data Distribution Service (DDS) standard for real-time data exchange. DDS is the open middleware protocol and API standard that provides the data connectivity, extreme reliability and scalable architecture that IIoT applications require.
The webinar will highlight IIoT use cases, provide an introduction to the DDS standard, explain the time-series monitoring architecture over WAN using the Telegraf plugins for Connext DDS, and show a live demonstration of Connext DDS and the Telegraf plugin as they monitor and provide alerts throughout distributed IIoT systems.
Watch the Webinar
Watch the webinar “Remotely Monitoring Your Systems with InfluxDB and DDS” by filling out the form and clicking on the download button on the right. This will open the recording.
Transcript
Here is an unedited transcript of the webinar “Remotely Monitoring Your Systems with InfluxDB and DDS”. This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
Speakers:
- Chris Churilo: Director Product Marketing, InfluxData
- Lynne Canavan: Director of Ecosystems, Real-Time Innovations (RTI)
Chris Churilo: 00:00:00.218 Welcome everybody. Thank you for joining us in our weekly webinars. My name is Chris, and I work at InfluxData. And just a reminder, we are recording this session. So if you want to take another listen to it, you will be able to, after I do a quick edit. And if you are interested in speaking at any of these events, just give us a holler. We’d be happy to host you. Today, we’re really excited to have our friends from RTI talk a little bit about some of the Telegraf plugins and some of the systems that they’ve pulled together with InfluxDB and Telegraf. And if you’re especially interested in process engineering, or as we like to call it, industrial IoT - I know people working in that space who don’t really consider themselves as such. But they pulled together a solution that really works well in that kind of an environment. So if you have any questions at any time, once again, please post them in the chat or the Q&A. And I will let them take it away.
Lynne Canavan: 00:00:57.980 Thank you, Chris. And good morning, good afternoon, good evening to everybody. I’m Lynne Canavan. And I’m the Director of Ecosystems for RTI. And I’m really happy to be here today. And thanks again for the invitation to talk. I’m going to really quickly go through a little bit about the market that we cover and some of the use cases. And then, I’m going to turn it over to my colleague, Kyoungho An, who is going to talk about the technology and give you a demonstration of it. So where are we? This is the Easter parade on Fifth Avenue in New York City. In the year 1900, you can see that there’s this one lone little car there. Just 13 years later, the transportation mode flipped, which generated changes that impacted far more than transportation. It changed city services, shifted the energy economy from hay to petroleum, and so on and so forth. So when we talk about today, where we are now, we are facing brand-new disruptions, and what people are calling the third industrial revolution. This is the era of smart, connected machines. Gartner calls it the most disruptive in history. We call it the Industrial Internet of Things. And there’s lots of numbers, depending on who you ask.
Lynne Canavan: 00:02:10.876 But in general, it’s about a trillion-dollar market opportunity. It’s going to be large and impactful. And it also goes across industries, from healthcare to energy, from manufacturing to transportation. But all of them share these common attributes. These are very robust applications. There’s no room for error. They need to be secure. And above all, they need the ability to process data in new ways. There’s a lot of data being generated. The data needs to be processed in real time. And so they require an architecture that’s different. It needs a data-centric architecture that’s scalable, that offers very low latency and so forth. So if you look at this chart, there are some key differences between general IoT and industrial IoT. The architecture that runs your Nest thermostat or your personal fitness device just can’t accommodate the requirements and the scale of IIoT, the Industrial Internet of Things. IIoT requires something that’s more scalable, more data-centric. And the Data Distribution Service, DDS, is the standard published by the Object Management Group that accommodates this requirement through a central databus structure.
Lynne Canavan: 00:03:34.264 And RTI is the largest commercial vendor of DDS. And so I’m going to talk a little bit about some of the markets and a little bit more about the databus. If we take a look at what the databus is, it’s a data-centric framework for developing and running complex distributed applications. If you’re using a traditional message-centric middleware, you need to write application-specific code that sends messages. In contrast, with DDS, programmers write code that specifies how and when to share data, and then directly share those data values. So, rather than managing the complexity in your code, DDS enables controlled, managed, secure data sharing for specific integration points. The databus acts as an integration layer for the data in motion from the associated applications and tools. It also provides interoperability, and it eliminates major bottlenecks. And it eliminates single points of failure. That’s critical for these IIoT types of applications.
Lynne Canavan: 00:04:40.918 So I’m going to talk a little bit about some real-world examples. This is an actual picture from an actual operating room. Hospital error is the third leading cause of death. So how can IIoT help? As you can see, this is a room that’s full of equipment, very modern equipment, very expensive equipment. But what’s not apparent is that even today, with all of this technology, most of this equipment does not talk to each other. And that leads to medical errors. Anybody that’s been in a hospital room, whether you yourself have been there, or you’re visiting a loved one, knows that alarms go off. The nurse comes in. She quickly turns it off, or she ignores it. And there’s actually a term for it. It’s called false alarm fatigue. Connected medical devices can solve the problem of different alarms going off, by taking and correlating the data from the different medical devices, so that they work together and can provide meaningful alarms. So, for example, if there’s an infusion of medicine, and the patient’s blood pressure or oxygen level drops - those are separate machines. But through DDS, the two can be correlated, and it can flash an alarm saying, “Hey. Come pay attention. This is something critical, and you need to address it now.” So when you think about that, that’s a lot of data that’s coming out of the patient. And then you multiply it for every bed, in every ward, in every unit in the hospital. That’s a whole lot of data.
Lynne Canavan: 00:06:15.007 So what does the data look like? And what does the architecture look like? This is the layered databus. We took the databus that I talked about earlier, and that’s on the bottom. What we do is add in the layers that connect from the edge to the cloud. So on the bottom layer, in the one room, the DDS databus allows those medical devices to talk to each other, and to the other things that are happening in the room. And then, that is connected with the nurse’s station. And it’s connected up to the high-level IT cloud layer. The data flows when and where it’s needed. It’s scalable. And there is, again, no single point of failure. When we talk about automotive, it’s the same thing. With autonomous vehicles, you have a lot of data. You’ve got your lidar data coming in. You’ve got your traffic data. There’s a whole lot of data coming into the vehicle. And it has to happen instantly. And it has to be reacted to, as well. And so through this central databus, you can connect all the things that are happening in the car and also around the car, the connectivity layer.
Lynne Canavan: 00:07:21.511 And now, I’m just going to really rapidly go through some of our customers. And then, I’m going to turn it over to Kyoungho, and he’s going to talk a little bit more about how it works, and where it works with Influx. So NASA is one of our customers. Kennedy Space Center uses Connext DDS in their mission control and their launches. So if you think about the countdown, when something is about to launch, there are hundreds of thousands of data points. Connext DDS enables that communication to flow through and sends alarms where necessary. So if something has to be shut down, it does. This is ESO. They’re building the world’s largest telescope. They’re measuring things that are happening light-years away. So when they grab an image, it has to come back to the earth quickly. And there has to be some adjustment for the refraction from the earth’s curvature and so forth. They want to make sure that the image is captured in the appropriate space. So they’re using Connext DDS to control the communication, and to synchronize the hundreds of mirrors and instruments involved, so that they can precisely identify when an event happens, and zero in on it.
Lynne Canavan: 00:08:30.311 TechnipFMC, they do underwater robotics. And again, Connext DDS allows the autonomous and the semi-autonomous operation to work with things that are happening above sea level, as well. So that, through this instrumentation, operation is reliable and secure - these robotic instruments do not cause any damage down there, but actually do what they’re intended to do. This is BAE Systems. They are working with JORN in Australia. JORN is their over-the-horizon radar system that monitors the air and sea movements across the coastline. They use it for protection, for disaster relief and so forth. Connext DDS provides the connectivity layer for all the data that’s coming in so that it can be integrated and reacted to. Airbus Vahana, they are building the flying taxis and the flying cars. And so, again, Connext DDS is used to operate those vehicles, as well as with NextDroid. This is an on-the-ground transportation system, but again, they’re using Connext DDS for autonomous vehicle operation. Virgin Hyperloop, this is the only company right now that has built a fully operational hyperloop system. It’s still in tests and everything else. But Connext DDS is running the data that runs the hyperloop. And when we talk about wind, Siemens is using Connext DDS in its wind turbine farms to collect, analyze, and react to the data.
Lynne Canavan: 00:10:01.134 Again, many of these IIoT applications run from the edge to the cloud. And so there has to be instant communication to make sure that if something goes wrong, it can be reacted to in real time. And finally, Harris is using Connext DDS for emergency response and control. The distributed layer means that if something goes down, other communication can take up the slack. So the distributed communication allows them to respond more quickly to emergencies, with the correct and updated data - so you’re not using data that’s 10 minutes old; you’re using the most current data. And I apologize for going through those so quickly. I wanted to make sure that we get to the technology. Just a couple of things about RTI. We are the connectivity middleware market leader. We have approximately 70% of the commercial DDS market share. We’re also a leader in standards activity. So we’re a leader in the Data Distribution Service - we are one of the authors of the standard - and we serve on the boards of some of the other organizations. We have about 1,350 commercial designs that are in play, and also about 500-plus research projects. And we’re privately held. We have headquarters in California, and our European headquarters are in Spain. And finally, I just wanted to leave you with the Industrial Internet Connectivity Framework. This is a document that was produced by members of the Industrial Internet Consortium. It represents hundreds of hours from people who have been in the industry for a very long time. And it provides more information on the different IIoT connectivity technologies, including DDS. So I encourage you to go and seek out that document and download it. And with that, I wanted to turn this over to my colleague Kyoungho.
Kyoungho An: 00:12:00.116 Okay. Sounds good. Thank you, Lynne. Okay. Great. Good morning, everyone. I’m a Senior Research Engineer at RTI. From this slide, I would like to introduce the integration of RTI’s Connext DDS and InfluxDB. First, I will briefly talk about the architecture with RTI’s Connext DDS and InfluxDB, and then discuss why we decided to use the InfluxDB stack. Then, I’ll cover the benefits of using DDS and InfluxDB, along with some DDS background and terminology. Following that, I will explain how the Telegraf plugins for DDS are implemented, and how to use them in detail. Finally, I’ll give you a simple demo with a Telegraf plugin. This figure shows the architecture of the integration of Connext DDS and InfluxDB. The key component to realize this architecture is the Telegraf plugin for Connext DDS. With this architecture, metrics and logs for systems and applications can be sent and received over the Connext DDS databus. Then, the received metrics can be provided to InfluxDB and to visualization and alerting tools like Grafana. In the later slides, I’ll talk about the benefits of using DDS as the connectivity layer. So why did we decide to use the InfluxDB stack?
Kyoungho An: 00:13:54.599 Actually, we came to know InfluxDB while exploring time-series monitoring technology for a research project funded by the Department of Energy to develop system management capabilities for the microgrid standard called Open Field Message Bus. At the time, we looked into other monitoring technologies, and these are the main reasons why we decided to use InfluxDB over other technologies. First, Telegraf uses push-based metric collection, and it is a better fit for DDS, as DDS uses the event-based pub/sub model. Second, Telegraf provides lots of out-of-the-box plugins for collecting system metrics. So it makes it a lot easier to collect from diverse sources across the system, and also, it is very easy to extend. The last one is that it is a mature and widely adopted open-source technology, but at the same time, it has a commercial offering and support for users willing to pay. So then, what benefits could the DDS and InfluxDB stack bring together? Before talking about the benefits that DDS can provide, I would like to give you some background terminology and concepts of DDS. This diagram describes the key architectural entities of DDS. As I briefly mentioned before, DDS adopted the publish/subscribe model. So DDS applications communicate through DDS publication and subscription entities, which are data writers for publication and data readers for subscription.
Kyoungho An: 00:15:52.703 If data writers and readers use the same topic, then they can exchange structured topic data. Multiple writers and readers can be grouped under a publisher or a subscriber. And a participant can create multiple publishers and subscribers. A DDS data domain is a logical communication environment used to isolate network communications of DDS participants. So only participants using the same domain ID can communicate with each other. I think DDS concepts can be explained better with some InfluxDB concepts, if you’re familiar with them. Within a DDS domain, topics exist. And a topic is something like a database table that can be written or read by data writers or data readers. So it is a similar concept to the measurement name in InfluxDB. A DDS topic carries samples of data, and each sample has a set of fields, like the fields of a metric in InfluxDB. And within a single topic, samples can be grouped and identified by key fields, which are an equivalent concept to a tag name. And a DDS instance is the identifier of a sample stream in a topic. And it is a similar concept to a tag value. I hope this analogy helps you understand DDS concepts from the InfluxDB perspective.
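To make the analogy concrete, here is roughly how one sample of the shapes data used later in the demo would look as an InfluxDB line protocol point. The field names follow the ShapeType used in the demo; the specific values and timestamp are made up for illustration:

```
# topic ~ measurement, key field (color) ~ tag, remaining fields ~ fields
square,color=BLUE x=105i,y=120i,shapesize=30i 1572969600000000000
```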
Kyoungho An: 00:17:34.242 Okay. Let’s talk about DDS design and features from here. So how is DDS designed? And what benefits can we gain? First, DDS is fully distributed and supports decoupled interactions. Fully distributed means that there’s no broker between publishers and subscribers. So publishers and subscribers communicate directly with each other. With that, you can achieve very high throughput and low latency, without a single point of failure or bottleneck. Like you see in the figure, DDS applications declare intents to publish or subscribe to a topic, and they do not require prior knowledge about each other. So publishers and subscribers can join anytime, from any place. This decoupling can greatly reduce integration effort. DDS is data-centric. This simply means that publishers and subscribers exchange typed data, not messages. With that, it can support content-based filtering on the publisher side. For example, a subscriber can declare a content filter, saying it wants to receive temperature data above a certain threshold. Then a publisher delivers only the data within that threshold. This is quite useful for improving efficiency in an environment with limited network bandwidth. DDS provides more than 20 standard configuration parameters for data delivery, resource usage, and so on. We call them quality of service policies, or QoS for short. Among them, the most interesting ones are the reliability, durability, and history QoS. I’m going to explain them with some examples in the following slides.
Kyoungho An: 00:19:37.559 DDS supports multiple transports, including shared memory, UDP, and TCP. The default transport of DDS is UDP, which does not have a reliability mechanism for data transmission. But with the DDS reliability QoS set to reliable, DDS can deliver data reliably on top of UDP. At a high level, with the DDS reliability QoS, if data is delivered, a subscriber responds with an Ack message to the corresponding publisher. If not, the subscriber responds with a Nack message to request that the data be sent again, and the publisher re-sends the missing data. DDS QoS policies are configurable, so you can set it to either reliable or best effort, for example, depending on your use case. DDS can deliver historical data to late joiners with the durability QoS. A persistent publisher keeps historical data, and when a late-joining subscriber joins the system, the publisher can deliver the historical data stored in its buffer. Historical data can be stored either in memory or in persistent storage; you can configure that. The size of the buffer can also be configured through the history QoS, so you can control the amount of historical data that you’d like to keep.
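In RTI Connext, these policies are typically set in an XML QoS profile. As a rough sketch of the combination discussed here (and used later in the demo) - the kind names follow RTI’s XML QoS format, but treat the profile itself as illustrative:

```xml
<!-- Sketch: reliable delivery, durability for late joiners, and a
     history buffer of 6,000 samples (about 5 minutes at 20 samples/s). -->
<qos_profile name="MonitoringProfile">
  <datawriter_qos>
    <reliability>
      <kind>RELIABLE_RELIABILITY_QOS</kind>
    </reliability>
    <durability>
      <kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind>
    </durability>
    <history>
      <kind>KEEP_LAST_HISTORY_QOS</kind>
      <depth>6000</depth>
    </history>
  </datawriter_qos>
</qos_profile>
```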
Kyoungho An: 00:21:18.391 Okay. So we talked about some key capabilities of DDS. Then, what benefits can we gain with the InfluxDB stack? First, as most of you know, Telegraf provides lots of plugins contributed by the community, and it has a very strong ecosystem. So with that, you can collect metrics from diverse sources and provide the collected metrics to many destinations. Other than that, InfluxDB supports built-in time-series functions that can select and aggregate, or even predict, metrics. These time-series functions are essential for normalizing and analyzing the enormous amounts of data collected for monitoring and alerting. Lastly, it is well integrated with visualization tools like Grafana and Chronograf. Specifically, with Grafana, there are many freely available dashboards for InfluxDB and Telegraf, so you can use them as they are, or with very minor changes. From this slide, I would like to explain how the Telegraf plugins for RTI Connext DDS are developed, and how to use them, in a detailed manner.
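As an example of those time-series functions, here is an InfluxQL query of the kind a dashboard for this data might run. The measurement, tag, and field names match the shapes demo shown later; the query itself is just an illustration:

```sql
-- Average X coordinate of blue squares over the last 5 minutes,
-- bucketed into 10-second intervals (InfluxQL, InfluxDB 1.x)
SELECT MEAN("x") FROM "square"
WHERE "color" = 'BLUE' AND time > now() - 5m
GROUP BY time(10s)
```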
Kyoungho An: 00:22:50.133 I developed three input plugins and one output plugin. The input plugins are the DDS consumer, the DDS consumer with a Line Protocol data model, and the DDS monitor. The DDS consumer plugin can read DDS data of any DDS data type. The DDS consumer plugin with the Line Protocol data model reads DDS data with a specific data model developed based on the Line Protocol format. The last one, the DDS monitor plugin, reads monitoring data from DDS applications. DDS uses DDS itself to send its monitoring data, such as how many samples were sent or received, or whether there are any missing or rejected samples. Those types of metrics can be collected, and we can use Telegraf for collecting that data, as well. And the DDS producer output plugin can take metrics from any Telegraf input plugin and publish them with the data model based on the Line Protocol format.
Kyoungho An: 00:24:11.025 Then why do we need multiple input plugins? Because DDS data samples are strongly typed, and we wanted to support multiple use cases, including data from any DDS application and Telegraf input plugin, and also from DDS monitoring entities. This figure describes how we use Telegraf. The first use case is getting data from DDS applications with application-specific data types. So the DDS consumer needs to provide a way for users to define DDS data types, and to convert that data to the format that Telegraf can understand. Second, we also wanted to subscribe to data from DDS monitoring entities, which are mainly for monitoring DDS-specific metrics. And lastly, we’d like to send metrics from existing Telegraf input plugins, such as CPU, memory, and network. For that, we needed to develop our own data type based on the Line Protocol format.
Kyoungho An: 00:25:23.986 In this presentation today, I’m going to mainly cover the DDS consumer plugin, which works with any DDS application. I’ll leave some slides on the plugins with the Line Protocol data model as references. Okay. From here, I’m going to explain how I implemented the DDS consumer plugin, which can subscribe to any DDS application with application-specific data types. Before jumping into the plugin implementation, I’d like to briefly introduce the RTI Go connector, which was used to create and access DDS entities for the plugin development. The RTI Go connector is a simplified API for DDS application development in the Go programming language. It is built on top of the DDS C API with cgo, which is a way to call C code from Go. The RTI connector has very few methods, so it is quite easy to learn and use. It is developed by the RTI research team, and it is freely accessible in our community GitHub. So if you’re interested in developing DDS applications in Go, please check out the link in the slides.
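As a rough sketch of what a minimal subscriber looks like with the Go connector - the configuration, participant, and reader names follow the ShapeExample-style XML file discussed later, and the method signatures are approximate and vary by connector version, so check the rticonnextdds-connector-go repository for the current API:

```go
// Minimal sketch: reading DDS samples with the RTI Go connector.
package main

import (
	"log"

	rti "github.com/rticommunity/rticonnextdds-connector-go"
)

func main() {
	// Create a connector from a participant defined in the XML config file.
	connector, err := rti.NewConnector("MyParticipantLibrary::Zero", "./ShapeExample.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer connector.Delete()

	// Look up a data reader defined in the same XML file.
	input, err := connector.GetInput("MySubscriber::MySquareReader")
	if err != nil {
		log.Fatal(err)
	}

	for {
		connector.Wait(-1) // block until new samples arrive
		input.Take()       // take the received samples from the reader
		n, _ := input.Samples.GetLength()
		for i := 0; i < n; i++ {
			if valid, _ := input.Infos.IsValid(i); valid {
				// Convert the whole sample to JSON, as the plugin does.
				sampleJSON, _ := input.Samples.GetJSON(i)
				log.Println(string(sampleJSON))
			}
		}
	}
}
```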
Kyoungho An: 00:26:56.967 Okay. Let’s talk about some code from here. Telegraf provides a plugin API, and I leveraged the service input plugin API to develop the DDS consumer plugin, because this plugin creates a thread to handle DDS samples when they arrive. The service input plugin API has additional interface functions on top of the input plugin API: the start and stop functions. Most of the plugin implementation is in the start function, which is invoked when the Telegraf plugin gets started. The stop function, as it is named, is invoked when the Telegraf plugin is stopped. In our case, we have code for deleting the DDS entities that were created for this plugin in the stop function. In the start function, there are two main things. First, it creates the DDS entities defined in an external configuration file. I’ll cover the external configuration file in a later slide. The connector function creates an RTI DDS connector object with a participant name and a path to the DDS external configuration file. Then, it can get the DDS reader by its name, by calling GetInput. The reader is also defined in the external configuration file. The created connector and reader object pointers are stored in the DDS consumer object, and they are used to read DDS samples in the process function. Finally, it starts a thread for processing received DDS samples.
Kyoungho An: 00:29:04.048 This is a code snippet of the process function. What this function mainly does is take DDS samples from the reader, and then process and inject metrics into a Telegraf output. First, the process function will wait until a new DDS sample arrives. Once a sample arrives, it takes the sample - multiple samples can arrive at once, so it has to iterate over them. Within the iteration, the first thing is to convert a DDS sample to a JSON object. Then it parses the JSON object into a metric struct. Now you have a metric object that you can pass on to an output plugin. Although I only covered a short code snippet, these are the important steps. And thanks to all the functions provided by the plugin API and the RTI Go connector, the plugin code is small and simple - just less than 200 lines of code. Okay. Then, how can we use this input plugin? You can actually download it from our community GitHub, and you can follow the build instructions and some example commands. There are also some example configurations that I created. Assuming that you downloaded it and have a Telegraf executable, as a user, the first thing you have to do is create an external configuration file for the DDS data types and entities.
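Pulling the start and process functions together, a simplified sketch of the plugin might look like the following. This condenses the description above rather than reproducing the actual code - see RTI’s community GitHub for the real implementation - and the connector method signatures are approximate:

```go
// Sketch of the DDS consumer as a Telegraf service input plugin,
// condensed from the talk's description.
package dds_consumer

import (
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/parsers"
	rti "github.com/rticommunity/rticonnextdds-connector-go"
)

type DDSConsumer struct {
	ConfigPath        string `toml:"config_path"`        // DDS external XML config
	ParticipantConfig string `toml:"participant_config"` // e.g. MyParticipantLibrary::Zero
	ReaderConfig      string `toml:"reader_config"`      // e.g. MySubscriber::MySquareReader

	connector *rti.Connector
	input     *rti.Input
	parser    parsers.Parser
	acc       telegraf.Accumulator
	done      chan struct{}
}

// SetParser is called by Telegraf to inject the configured parser
// (data_format = "json" in our case).
func (c *DDSConsumer) SetParser(parser parsers.Parser) { c.parser = parser }

// Start creates the DDS entities defined in the external XML file and
// spawns the thread that processes incoming samples.
func (c *DDSConsumer) Start(acc telegraf.Accumulator) error {
	c.acc = acc
	var err error
	if c.connector, err = rti.NewConnector(c.ParticipantConfig, c.ConfigPath); err != nil {
		return err
	}
	if c.input, err = c.connector.GetInput(c.ReaderConfig); err != nil {
		return err
	}
	c.done = make(chan struct{})
	go c.process()
	return nil
}

// process waits for samples, converts each valid one to JSON, parses
// the JSON into Telegraf metrics, and hands them to the accumulator.
func (c *DDSConsumer) process() {
	for {
		select {
		case <-c.done:
			return
		default:
			c.connector.Wait(-1) // block until samples arrive
			c.input.Take()
			n, _ := c.input.Samples.GetLength()
			for i := 0; i < n; i++ {
				if valid, _ := c.input.Infos.IsValid(i); !valid {
					continue
				}
				jsonData, _ := c.input.Samples.GetJSON(i)
				metrics, err := c.parser.Parse(jsonData)
				if err != nil {
					c.acc.AddError(err)
					continue
				}
				for _, m := range metrics {
					c.acc.AddMetric(m)
				}
			}
		}
	}
}

// Stop deletes the DDS entities created in Start (shutdown is
// simplified here; the real plugin unblocks the waiting thread first).
func (c *DDSConsumer) Stop() {
	close(c.done)
	c.connector.Delete()
}
```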
Kyoungho An: 00:31:01.670 Starting from this slide, I’ll introduce an example of a DDS external configuration file. First, the external file includes the definition of data types. For example, here I have defined a data type for a shape that includes four attributes: color as a string; X and Y coordinates as longs; and shape size as a long. After you have a data type defined, you should define a DDS domain, which includes the domain ID you’d like to participate in. For example, here I would like to participate in domain zero. Within the domain definition, we need to register the data type we’d like to use. Here, in the example, I registered the shape type defined in the previous slide. Then, I defined a topic associated with that data type. Okay. This is the last, but not the least, thing to do. You need to define the DDS participants, subscribers, and readers you’d like to create.
Kyoungho An: 00:32:23.092 So in this example, I defined a participant named Zero, in the domain that I defined in the previous slide. Under that participant, I defined a subscriber named MySubscriber. And under that subscriber, I defined a data reader named MySquareReader that uses the Square topic we defined in the domain definition. And with that, you’re done creating an external configuration file for DDS. The next step is defining a Telegraf configuration file. This is the part of the Telegraf configuration file that uses the DDS consumer input. Like other input configurations, you define the name of the input plugin - in this case, dds_consumer. Then, under that, there are specific configurations for this plugin. config_path defines the path to the external configuration file that I just introduced in the previous slide. After that, participant_config is the full name of the participant in the external configuration file. In this example, I defined a participant named Zero under the participant library named MyParticipantLibrary, so the full name of the participant is MyParticipantLibrary::Zero. And reader_config is the full name of the reader in the external configuration file. In this example, we defined the reader named MySquareReader under a subscriber named MySubscriber, so the full name of the reader is MySubscriber::MySquareReader.
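Taken together, the external DDS configuration file assembled over the last two slides might look roughly like this. It is modeled on the ShapeExample.xml that ships with the RTI connector; verify element names and attributes against that file:

```xml
<!-- Sketch of a DDS external configuration file (ShapeExample-style). -->
<dds>
  <!-- 1. Data types: a shape with color (key), x, y, and shapesize. -->
  <types>
    <struct name="ShapeType">
      <member name="color" type="string" stringMaxLength="128" key="true"/>
      <member name="x" type="long"/>
      <member name="y" type="long"/>
      <member name="shapesize" type="long"/>
    </struct>
  </types>

  <!-- 2. Domain: join domain 0, register the type, define a topic. -->
  <domain_library name="MyDomainLibrary">
    <domain name="MyDomain" domain_id="0">
      <register_type name="ShapeType" type_ref="ShapeType"/>
      <topic name="Square" register_type_ref="ShapeType"/>
    </domain>
  </domain_library>

  <!-- 3. Participants, subscribers, and readers. -->
  <domain_participant_library name="MyParticipantLibrary">
    <domain_participant name="Zero" domain_ref="MyDomainLibrary::MyDomain">
      <subscriber name="MySubscriber">
        <data_reader name="MySquareReader" topic_ref="Square"/>
      </subscriber>
    </domain_participant>
  </domain_participant_library>
</dds>
```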
Kyoungho An: 00:34:23.005 And that’s pretty much all you need for the DDS configuration. Additionally, you can add some fields as tags, if you like. In this example, I added the color field as a tag. You can also override the name of the measurement; otherwise, it will use the name of the plugin. And the last one is that you should use JSON as the data format, because the plugin code converts DDS samples only to JSON format. So this setting is actually required. I think this is it for using the plugin. I added some slides for the DDS plugins with the Line Protocol data model, but the time is limited, so I just leave them as references. There are some slides on how the data model looks, how the plugins are implemented at a high level, and some slides that instruct how to use them.
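Putting those options together, the Telegraf configuration described in the last two slides might look roughly like this. The plugin-specific option names follow the talk; the tag and measurement-name settings use standard Telegraf options (tag_keys, name_override); treat the file as a sketch:

```toml
# Sketch of a Telegraf configuration using the DDS consumer input.
[[inputs.dds_consumer]]
  ## Path to the DDS external (XML) configuration file
  config_path = "ShapeExample.xml"
  ## Full name of the participant defined in that file
  participant_config = "MyParticipantLibrary::Zero"
  ## Full name of the data reader defined in that file
  reader_config = "MySubscriber::MySquareReader"
  ## Promote the color field to a tag (standard JSON parser option)
  tag_keys = ["color"]
  ## Override the measurement name (defaults to the plugin name)
  name_override = "square"
  ## Required: the plugin converts DDS samples to JSON
  data_format = "json"

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "shapes"
```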
Kyoungho An: 00:35:46.610 Okay. So this is the fun part. From here, I’ll give you a live demo of the Connext DDS and InfluxDB integration. I’m going to demonstrate the consumer plugin with the RTI shapes demo. Mainly, I will demonstrate two interesting DDS capabilities: delivering historical data to late joiners, and content-based filters. Okay. Let’s get started here. First, I have to switch my screen sharing to the virtual machine that I use. Right. Okay. So as I just mentioned, I am going to use the RTI shapes demo to publish shapes topic data. The RTI shapes demo is a tool you can use to learn about the basic and advanced concepts of DDS. Basically, you can create a publisher or subscriber with whatever DDS configurations you’d like to play with. So let me create a square publisher with the default settings, and then a circle publisher with a different color. And then, let me create a triangle one with the durability QoS, which is needed for late-joining subscribers. And I’m going to set the history depth to 6,000.
Kyoungho An: 00:37:55.140 The shapes demo publishes 20 samples per second, so it will keep roughly 5 minutes of historical data. And I’m going to make the triangle green. Okay. On the other window, let me create a square subscriber. Now we can see the square subscriber receives and visualizes the data from the publisher in real time. Then a circle subscriber - you can see the circle data is also received and moving around. And then, lastly, a triangle subscriber with durability changed to transient local, because I wanted to show getting the historical data from the publisher. Because it is set to transient local durability, with the history depth set to 100, we can see some historical data of the triangle from when it got started, even though the subscriber joined late. So I just wanted to give you some idea of our shapes demo.
Kyoungho An: 00:39:19.206 And then, let’s get started with the actual demo with Telegraf and InfluxDB. I’m going to terminate the subscriber applications and use the terminal here. For this demo, I’m going to run Telegraf, InfluxDB, and Grafana in Docker containers. Actually, the InfluxDB and Grafana Docker containers are already up and running, and I haven’t started Telegraf yet. For the Telegraf DDS plugin, I used this XML configuration file for the DDS configurations. Because we are using the shapes demo, it defines the same data types that are used by the shapes application. Then it defines the topics that we’d like to subscribe to, which are square, triangle, and circle. And lastly, it has the definitions of the domain participants and readers - a participant and reader for the square, a participant and reader for the triangle, and a participant and reader for the circle.
Kyoungho An: 00:40:59.761 Okay. And this is the Telegraf configuration file that I’m going to use for this demo. It has a DDS consumer input plugin for each topic: one for the square reader, one for the triangle, and one for the circle. And I also have an output plugin for InfluxDB. So basically, this Telegraf will subscribe to all these topics and then provide the data to InfluxDB. And I use Grafana for visualization. This is the dashboard for this demo. Because I haven’t started Telegraf yet, there’s no data coming into InfluxDB and Grafana yet. For this dashboard, I created a graph panel plotting the X and Y coordinates of the shapes for each topic, and a singlestat panel showing the count of metrics received during the configured time window. In this case, it is set to the last one minute. So okay.
Kyoungho An: 00:42:31.858 So let’s start Telegraf here. Okay. I just started the Telegraf Docker container. And now, you can see that data comes into Influx and is visualized in Grafana. For the square topic, I used a default configuration, so it receives the data from the time I started Telegraf and plots all the X and Y coordinates. If I move this square, you can see that the X and Y coordinates of the square are updated in real time. Also, when you take a look at the circle topic, you can see that it only plots data with an X coordinate higher than 100, because I set a content-based filter in the external configuration file. So let me look at the external configuration file for the circle. I set a filter so that it receives the data only when the X coordinate is higher than 100. That’s why you can see that it only received and visualized the data when the X coordinate was higher than 100.
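In RTI’s XML format, the filter on the circle reader might look like the snippet below. This is a sketch (note that the comparison operator has to be XML-escaped); check the demo’s configuration files on GitHub for the exact form:

```xml
<!-- Sketch: content filter on the circle reader from the demo. -->
<data_reader name="MyCircleReader" topic_ref="Circle">
  <content_filter name="CircleFilter" kind="builtin.sql">
    <expression>x &gt; 100</expression>
  </content_filter>
</data_reader>
```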
Kyoungho An: 00:44:14.316 So if I move this circle to the - sorry, to the left side, we do not see any data coming in, because the X coordinate is less than 100. The range of the X coordinate is between 0 and 240, so 100 is somewhere in the middle. If I move the circle to the right side, I can see the data comes in again. The last one is the triangle topic. I created the triangle publisher with the durability setting and a history depth of 6,000. So if I create a subscriber with the durability QoS changed to transient local, it will receive the historical data for the last 6,000 samples. It can get all these samples, even when it joins later than the publisher. We set the history depth to 6,000, so it keeps the historical data for the last 5 minutes. That’s why, even though the triangle joined late, it could still receive all the historical data. That’s different from the circle and the square.
Kyoungho An: 00:45:36.806 Okay. Great. This is actually the end of the demo and my presentation. I hope you found it interesting. Thank you.
Chris Churilo: 00:45:51.539 That was pretty cool. I really liked that demo. It made it really easy for anyone to interact with it and understand the concepts - filtering out some of the content, like you described in the middle panel, and even the late joiners. The nice thing about that is that, I think, anybody on the call today could actually play around with that and actually see the data working. Because that’s always the hardest part, right? You set these things up, and you’re like, “Okay. When’s the data going to come in?”
Lynne Canavan: 00:46:22.617 What am I actually looking at?
Kyoungho An: 00:46:24.439 Yeah. Yeah. Exactly. To play with it - Telegraf and InfluxDB are, of course, publicly available. And you can download the shapes demo to play around with this demo, as well.
Chris Churilo: 00:46:35.634 That’s cool. And then, I don’t know if I heard you correctly - if I wanted to use these DDS Telegraf plugins, you wrote them. And you provided those URLs.
Kyoungho An: 00:46:46.036 Yeah. Yeah. Exactly. That URL is included in the slides. I wrote them - I developed these additional plugins for DDS, which are hosted in our RTI community GitHub. So you can access that. And I included the build instructions, so you can download, build, and play with it. Yes.
Chris Churilo: 00:47:07.491 Okay. So I’ll make sure we put that on the RTI page, on the Influx page, so that everyone can have links to that very quickly. Of course, you can easily go to GitHub to find those. Let’s see. The other question I have for you is - let me see. First of all, I’m just blown away. That was a really well put together presentation and demo. You articulated the whole thing so well - how you found Influx, how you decided to use it. And I loved the example that you showed of the measurements and the tags early on, because I think that really shows the power of InfluxDB - all the different kinds of data that you can bring in, and how you might want to report on it. So kudos to you, I thought it was really fabulous -
Kyoungho An: 00:48:00.725 Oh. Thank you. Yeah. Thank you so much.
Chris Churilo: 00:48:03.272 So why DDS? I mean, you mentioned earlier that we could use some other kind of message queuing system, like MQTT. Maybe you can describe a little bit more about what else DDS does for the user.
Kyoungho An: 00:48:21.458 Yeah. Yeah. Sure. So I mentioned that briefly in the presentation. DDS is data-centric, which is different from other messaging technologies, because other messaging technologies are message-centric. Applications do not need to deal with serializing and deserializing data, because they are communicating through data, not messages. So application code is much simpler, because all those parts are provided by DDS. Also, its architecture is fully distributed, peer-to-peer, and it is designed for real-time, low-latency communication. There is no broker between the publisher and subscriber - data is sent directly, which provides really low latency, and there isn’t any single point of failure. Those are some of the differentiators from other messaging technologies.
Chris Churilo: 00:49:20.261 Cool. Cool. So I do have a couple of questions. The first one is - aside from mine; I always have a million questions. Our friend, Angelo, from another famous big telescope project, asked, “Thanks for implementing the DDS plugin. I think it bridges two important technologies and opens very interesting possibilities. I have a more general question on DDS. You mentioned that DDS works with UDP and multicast; that limits the deployment of DDS to the cloud and Kubernetes, for example. Can you comment on that? Can we use DDS with unicast instead?”
Kyoungho An: 00:49:58.625 Yes. For sure. DDS provides multiple transports, including UDP unicast and multicast, and TCP. So it’s really configurable. Depending on your use case, you can use it with UDP unicast or multicast, or TCP, or even shared memory.
Chris Churilo: 00:50:19.224 Excellent. And so you know, Angelo actually is at LSST, the big telescope that’s being built in South America. And he actually just made a comment that they use DDS, as well. So he’s pretty psyched about the work that you did. His follow-up question is, “Do you think it’s possible to use the DDS Telegraf input consumer plugin and the Kafka Telegraf output plugin to send that to a Kafka queue instead?”
Kyoungho An: 00:50:47.605 Yes. I think it is possible. You can get [inaudible] data from DDS, and it’s converted to the format that Telegraf is using. Then Telegraf provides the data to the Kafka output, so you can send that data to Kafka, as well, in the format that you configured - it could be JSON or Line Protocol. Yeah.
Chris Churilo: 00:51:11.400 So, Angelo, hopefully, that answers it. But I think it was described early on in the presentation that that’s one of the beautiful things about InfluxDB and Telegraf - all these different mechanisms for bringing in data and putting data out are supported very easily. We also have another attendee who asked: “Are the Docker containers used in the demo available?”
Kyoungho An: 00:51:38.122 Yes. It is in GitHub. I created a Docker Compose file, and I used some of the existing Docker images. Yes. It is available. And I can provide a link after the session. Maybe we can coordinate how to provide the link. Yeah.
Chris Churilo: 00:51:54.463 And once again, we’ll put that on the RTI page, so we can make it really easy for everybody.
Kyoungho An: 00:51:59.353 Yep. Sounds good.
Chris Churilo: 00:52:00.280 Excellent. And then, Angelo just followed up with, “Awesome. Very useful.” So you’ve made a couple of people really happy today.
Kyoungho An: 00:52:07.863 Oh. Great to hear.
Chris Churilo: 00:52:09.716 Including me. I’m excited. I’m going to be playing around with this shape demo, as well. If there are any other questions, please feel free to pop them into the chat or the Q & A. We are almost at the top of the hour. And if you do have questions afterwards, you guys, if you’ve been on these webinars before, you know the drill. Just shoot me an email, and I will always connect you with any of the speakers. Because sometimes, that just happens, right? We think about what we just heard about, and we realize we have a lot more interest, and we want to start playing around with it. And I want to remind everybody, this has been recorded. So after a quick edit, I will post it, so we can all take another listen. And those URLs that were posted, I’ll also make sure they’re available to everybody. And I’ll even change the automated email tomorrow to point to the page where these URLs are going to live, just to make it easy for everyone who might be interested. So amazing webinar, I thought that was super detailed. And I’m really excited. And I’m going to play with this myself. And if anybody has any other questions, just feel free to shoot me that email, and we’ll pass that on. And yeah. I’m just blown away. That was really amazing.
Kyoungho An: 00:53:28.448 Thank you so much. Yeah. Thank you, again, for giving us this opportunity. This was great. Thank you so much.
Chris Churilo: 00:53:34.597 Well, thank you for building this DDS plugin. I think the community is definitely going to appreciate this quite a bit.
Kyoungho An: 00:53:41.792 Okay. Sounds great.
Chris Churilo: 00:53:42.745 Awesome. Thanks, everybody. And like I said, we’ll post this as quickly as we can. I hope everyone has a wonderful day.
Lynne Canavan: 00:53:50.591 Thank you.
Chris Churilo: 00:53:51.634 Bye-bye.
Kyoungho An: 00:53:52.255 Thank you. Bye.
Lynne Canavan
Director of Ecosystems at Real-Time Innovations (RTI)
Lynne Canavan is Director of Ecosystems for Real-Time Innovations (RTI), where she oversees the company's partner, consortia and university program initiatives. She also serves as Vice President of Marketing for the Object Management Group's DDS Foundation. Previously, she served as Executive Director at the OpenFog Consortium and before that, as VP of Program Management for the Industrial Internet Consortium. She also served as the first Program Chair of IoT Solutions World Congress, the industry's largest IoT conference. Named a '2018 Woman of IoT in Marketing' by Connected World, Lynne is actively involved in collaborative ecosystems in Industrial IoT markets across the globe. She received a B.S. in Management from the University of Massachusetts and a mini-MBA in digital marketing from Rutgers University.
Kyoungho An
Senior Research Engineer at Real-Time Innovations (RTI)
Kyoungho An is a Senior Research Engineer at Real-Time Innovations (RTI). He has 10 years of experience with distributed real-time embedded systems. His research interests include publish/subscribe middleware, and the deployment and monitoring of distributed systems. He has led several DOD- and DOE-funded research projects as a principal investigator. He has published research papers in journals and conferences focusing on distributed event-based systems, middleware, and cyber-physical systems. He holds a Ph.D. in Computer Science from Vanderbilt University.