Learn How a Large Telescope and Time-Stamped Data is Helping Us to Understand the Universe Better
Session date: Nov 19, 2019 08:00am (Pacific Time)
The Large Synoptic Survey Telescope (LSST) is a wide-field optical telescope currently under construction in Chile. The project plans to collect 500 petabytes of image data by observing the skies continuously for 10 years, producing near-instant alerts every night for objects that change in position or brightness. In addition to astronomical data, its dataset will include DevOps, IoT, and real-time monitoring data.
In this webinar, Dr. Angelo Fausti will demonstrate how a time series database has the versatility to address their needs.
Watch the Webinar
Watch the webinar “Learn How a Large Telescope and Time-Stamped Data is Helping Us to Understand the Universe Better” by filling out the form and clicking on the Watch Webinar button on the right. This will open the recording.
Here is an unedited transcript of the webinar “Learn How a Large Telescope and Time-Stamped Data is Helping Us to Understand the Universe Better”. This is provided for those who prefer to read rather than watch the webinar. Please note that the transcript is raw. We apologize for any transcribing errors.
Speakers:
- Caitlin Croft: Customer Marketing Manager, InfluxData
- Dr. Angelo Fausti: Software Engineer, Large Synoptic Survey Telescope - Vera C. Rubin Observatory
Caitlin Croft: 00:00:00.000 Hello, everyone. Thank you for joining. So we are going to get started. Thank you for joining our webinar today. I am joined by Angelo. He works at the Large Synoptic Survey Telescope, LSST. They’re a really cool customer of ours who are using the entire InfluxDB platform. So today, he will be going into how they are using InfluxDB, the challenges they face, and what they’ve gained from the tool. Just a couple of housekeeping items. There is a Q&A box in Zoom. So please feel free to add your questions there, and we will answer them at the end of the webinar. And we are happy to unmute you if you would like to have a conversation with Angelo. So without further ado, I’ll hand it off to Angelo.
Angelo Fausti: 00:00:59.901 All right. Thanks, Caitlin. And good morning, everyone. It’s my pleasure to talk to you today about LSST. When we first talked about presenting this webinar, Caitlin actually suggested this title, Learn How a Large Telescope and Time-Stamped Data is Helping Us to Understand the Universe Better. And I really like it, because to understand the universe better, first we need to understand the telescope that we are building and the software algorithms used to process the data collected by the telescope. That’s why metrics and monitoring are so important for us at LSST. And I’m super happy to share with you what we do, what kind of metrics we collect, and how we implement our solutions. So LSST is an 8-meter class telescope under construction in Chile, and it’s designed to take about 1,000 images of the sky every night. LSST will also produce alerts, within 60 seconds, for objects that change in position or brightness, and it will do that for 10 years. In the end, we will have an enormous amount of data, about 500 petabytes of images, and we will process these images to create a huge data set with the properties of billions of stars and galaxies. Finally, astronomers will use this data set to make new discoveries.
Angelo Fausti: 00:02:33.456 So the first ideas around LSST started in 1998. In 2008, we started the construction of the telescope mirror. We got funding from the National Science Foundation and from the Department of Energy, approved in 2014, when construction officially started. We are now testing and integrating different parts of the telescope that are arriving in Chile. And I would like to emphasize that this is really an international effort; we have several institutions in different countries contributing to LSST. If everything goes as planned, we will be on-sky by 2022. So what are we building? Essentially, we are building a modern observatory: a very compact and fast telescope with a unique optical design. We’re also building a 3.2-gigapixel camera and all the software to control the telescope and the observatory and to process the data. I will give you some highlights of the LSST construction just because it’s really cool. Of course, I don’t have time to cover everything, so I selected two topics: the construction of the mirror and the camera.
Angelo Fausti: 00:03:57.051 So this is the LSST mirror; here we are preparing to melt the glass in this honeycomb structure. That’s done inside a rotating oven to give the initial curvature to the optical surface. Then the optical surface has to be polished. What’s unique about the LSST mirror is that it’s the first time a mirror has been built with two curvatures in a single piece of glass. The reason for that is to have a very compact telescope in the end. The construction of the mirror was completed in 2015. And all this happened here in Tucson at the Mirror Lab, which is a facility specialized in building large mirrors for telescopes. Then the mirror was stored in a warehouse at the Tucson Airport for several years. It was brought back to the lab last January, when we performed some optical tests, and then it was shipped to Chile. And it’s very interesting that the real limiting factor for the size of these mirrors is the roads. For example, the mirror barely fits through this tunnel in Chile. And actually, this tunnel was a critical part of the logistics for other parts of the telescope as well.
Angelo Fausti: 00:05:24.074 And here is the mirror on its way to the summit. It’s being pulled by two trucks, and you can see on the top of that mountain the LSST observatory. And this is how we lift this 17-ton piece of glass using 54 vacuum pads. It’s pretty scary, isn’t it? But that’s done just four times during the lifetime of the experiment. And this is the mirror cell. It’s the structure that supports the mirror. What you see here are lots of force actuators that are used to compensate for deformations of the optical surface; that’s called active optics in astronomy. And there are lots of sensors in this mirror cell. So remember this picture later when we talk about our IoT use case, because we’re going to record data from these sensors in InfluxDB. And here’s the LSST camera. It’s the largest digital camera ever constructed. Just to give you an idea, it has the size and weight of a small car. The focal plane, as indicated here in the figure, is where the image is formed. And the image is recorded by a mosaic of 189 CCDs, 4K by 4K pixels each. That makes 3.2 gigapixels in total. And the electronics for this camera dissipate so much heat that it’s a real challenge to keep it cool to avoid thermal deformation of the lenses in the camera body. So also remember this picture later when we talk about recording temperature data from the camera sensors.
Angelo Fausti: 00:07:10.729 The camera also includes this filter-changing mechanism. That’s how astronomers infer the physical properties of stars and galaxies: by analyzing the light at different wavelengths. And here is the telescope mount. The camera is placed right here in the center. And we like to call this telescope a discovery machine, because it really is. This 350-ton structure has to move really fast; it has to stop and settle in 4 seconds, take 2 exposures of 15 seconds each, move to the next position, and repeat that 1,000 times every night. Here you can see a rendered view of the telescope inside the observatory dome, and here another rendered view of the observatory building. I don’t have time to describe everything you see in this picture. But I just want to say that if everything works, this is how LSST is going to see the sky. The majority of objects you see in this image are galaxies, and LSST will be able to look really deep into the universe. This is a preview from another project of how the universe looks at this depth. And yeah, it’s super crowded.
Angelo Fausti: 00:08:35.795 So to produce an image like that and to measure the properties of all those objects, we are developing the LSST Science Pipelines. All of this is open source software. The core algorithms are written in C++, and the rest is mainly Python. We have about 50 developers working on this. And the question is, how do they know they are doing the right thing? Well, here enters the first use case, application [inaudible]. So we have science requirements for LSST. That’s what has guided everything during construction, including the software development. I like this diagram. It’s a version of the bullet chart. It shows the value for three metrics - AM1, AM2, and AM3 - and we compare that with our requirements. So the thick black line here is our design requirement, and we also have a minimum and a stretch goal. And the panel on the left indicates how much the actual value deviates from the design requirement. But that’s just a single value for each metric, right? So how does this become time series data?
Angelo Fausti: 00:10:09.459 So we have our continuous integration system. We build and test our Science Pipelines software every night. After the build, we have our verification system that runs the Science Pipelines code on test data sets and computes those metrics. By doing that over and over again, we get a time series. And that’s how we had the idea of implementing SQuaSH, our Science Quality and Software Harness system. The idea is that we can benefit from storing these results over time. And what you see in this picture is our schema for the initial implementation using MySQL. We also implemented dashboards to monitor how these code changes impact the science results. And this tool gives the developers and managers visibility into what’s going on with the software development. So for example, in this graph, we show the time series for one of our metrics, AM1. You see that at some point we had a couple of regressions here. More recently, the value was constant and within the requirements, so that’s great. It means that, as our software evolves, the science results remain good. We have this table at the bottom where, for each build, we can actually go to the CI run that produced these results. And you have a list of packages that changed, and you can go straight to the GitHub commits from here.
Angelo Fausti: 00:11:57.133 Okay. So this is a very low frequency time series. We have only one data point per day for each metric. And you might argue, “Okay, you don’t really need a time series database for that. You probably could stick with the MySQL implementation.” Well, I have two comments on that. First, we want to monitor code changes for the lifetime of the project, which is at least 15 years, and we expect to have hundreds if not thousands of metrics. So our problem is just at a different time scale. And second, it was too much work implementing this UI, and we didn’t get to the point of implementing alert notifications in SQuaSH. So the time I saved by using an open source solution to solve my problem, I can use to solve other problems in the project. Our second use case is an example of that. But the truth is that I didn’t even know about time series databases when I started implementing SQuaSH back in 2016. I only knew about relational databases. And relational databases are generic tools, but not necessarily the right tool to solve your specific problem. So rather than wasting time and money implementing a time series database in MySQL and probably doing it wrong, I just decided to pick an existing solution. But there are many out there. So we tested a couple, like Honeycomb, Datadog, and Prometheus with Grafana. And I still remember my manager saying, “If you take more than three days to make it work, it’s not the right solution for you.” That’s her metric for success.
Angelo Fausti: 00:13:57.347 And yeah, so in less than three days, we had a prototype running with our data written into InfluxDB and visualized with Chronograf. And we started playing with Kapacitor to make alert notifications. But this is the slide that I like the most. I just had to map concepts from our verification system to InfluxDB to get it done. Remember that our verification system already computes and collects the metrics for us, so it was just a matter of writing the results in the InfluxDB line protocol. So we are not using Telegraf for this application, and that’s okay. We don’t have to; the InfluxData stack is quite flexible. Let me show you a couple of screenshots here. So right now for SQuaSH, we have about 46 users, who are our developers, in three different Chronograf organizations. They’ve made their own dashboards and created alert rules, and these alert notifications go to Slack. And this is great because we have a channel for discussion, to see what’s going on and get notified to fix our problems. But the feature that we like the most is annotations. It’s hard to remember what happened when we see a change like this in our metrics. So from time to time, our managers have meetings, and they look at these charts and make annotations trying to understand what changed. So this is very valuable information. And to quote my manager again, she says, “Annotations are more important than the data itself.” And that’s true, because annotations are the result of interpreting the data.
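To illustrate the approach Angelo describes, here is a minimal sketch of writing a single verification metric to InfluxDB 1.x over the HTTP write API in line protocol, without Telegraf. The database, measurement, tag, and field names are hypothetical, not SQuaSH's actual schema.

```python
# Minimal sketch: write one nightly verification metric to InfluxDB 1.x.
# The database, measurement, tag, and field names here are hypothetical.
import time
import requests

INFLUX_WRITE_URL = "http://localhost:8086/write"  # assumed InfluxDB 1.x endpoint
DATABASE = "squash"                               # hypothetical database name

def write_metric(metric: str, value: float, filter_name: str) -> None:
    # Line protocol: measurement,tag_set field_set timestamp
    timestamp_ns = int(time.time() * 1e9)
    line = f"{metric},filter={filter_name} value={value} {timestamp_ns}"
    resp = requests.post(
        INFLUX_WRITE_URL,
        params={"db": DATABASE, "precision": "ns"},
        data=line.encode(),
    )
    resp.raise_for_status()

# e.g. one result produced by a nightly CI run
write_metric("AM1", 9.7, filter_name="r")
```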
Angelo Fausti: 00:16:05.587 So why did we decide to adopt InfluxDB? First, because it’s a complete solution built around a time series database. Another good thing is that InfluxQL is similar to SQL, which our users are already familiar with. And it provides an HTTP API, which is great; we can do programmatic things with InfluxDB. And it has a nice UI for time series visualization, and Kapacitor to alert on the time series data. The other aspect is that the InfluxData stack is open source. We had to do some customizations to Chronograf, and that was pretty easy. The platform is generic and flexible, and we really liked the Chronograf dark mode, which is something important if you work at the observatory at night. So before jumping to the second use case, I’d like to show you how we teach InfluxDB concepts to our users. We first start with this table, which is the result from one of our verification packages. Then we tell our users that an InfluxDB measurement is conceptually similar to a table, and that the fields hold the actual data, our metrics, while the tags are more like metadata. An important difference is that tags are indexed and fields are not. We teach them concepts like point and series, and we present examples of the line protocol using their data, and then more advanced concepts like series cardinality. For instance, we had a problem using a run ID, which is an incremental number, as a tag, because that increases the series cardinality. So even though the run ID is more like metadata, it should be a field to avoid this problem. And we learned that having fewer series with more points is better than more series with fewer points for performance reasons.
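As a concrete illustration of that run ID lesson, here is the same hypothetical point encoded two ways in line protocol; the measurement and field names are invented for the example, not the real SQuaSH schema.

```python
# Hypothetical line protocol for one verification result, encoded two ways.

# Run ID as a tag: every new run creates a new series, so cardinality grows
# without bound as CI runs accumulate.
bad_point = "validate_drp,metric=AM1,run_id=12345 value=9.7 1574150400000000000"

# Run ID as a field: it is just data on an existing series, so cardinality
# stays flat no matter how many runs are recorded.
good_point = "validate_drp,metric=AM1 value=9.7,run_id=12345i 1574150400000000000"
```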
Angelo Fausti: 00:18:26.409 And here is our second use case, the LSST Engineering and Facilities Database (EFD), which is used to record data from all the sensors and edge devices that we have in the telescope and in the observatory. If you attended the InfluxData webinar two weeks ago, you might remember a Telegraf plugin to collect data from DDS. DDS is the communication middleware our edge devices and sensors are connected to. The problem was that when we started developing the EFD, there was no Telegraf plugin for DDS. And also, our architecture is kind of different because we use both Kafka and InfluxDB. So we decided to implement our own client to collect metrics and events from DDS and forward this data to Kafka. What’s interesting about this client is that it’s dynamic, in the sense that if there are new topics in DDS, they are automatically discovered. We also parse, on the fly, the DDS IDL that describes the topic schema, transform that into Avro schemas, and upload them to the Kafka Schema Registry. In the Kafka brokers, we have a retention period of 24 hours. So if there is any downtime of the InfluxDB instance, we can recover the data from Kafka.
Angelo Fausti: 00:20:04.242 And then we use the InfluxDB sink connector to write data to InfluxDB. The good thing is that all the data conversion from Avro to InfluxDB is handled by the connector, which is great; you get that for free. But of course, we found some problems, like the connector not dealing well with arrays and with None values, which for us represent missing data. But no problem, that’s how things work, right? So it’s no [inaudible], we contributed these features back to the connector. Another thing that’s interesting about our architecture is that we have this instance that we call the source EFD, which runs at the observatory or at the lab. In those places, we have limited computing resources, so we keep the raw data for a short period of time, like one month. And we have this other instance called the aggregator EFD at our data facility or in the cloud, where we have more resources and can keep the raw data for longer. And we use Kafka, in particular the Kafka Replicator connector, to replicate both topics and schemas from the source EFDs to the aggregator EFD.
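To make that data path concrete, here is a heavily simplified sketch of what the sink connector automates: consuming records from a Kafka topic and writing them to InfluxDB as line protocol. The real system uses Avro and the Schema Registry; this sketch assumes JSON-encoded messages and hypothetical topic, field, and database names purely for illustration.

```python
# Conceptual sketch only: in production this job is done by the Kafka InfluxDB
# sink connector with Avro payloads. Topic, field, and database names are
# hypothetical.
import json
import requests
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed broker address
    "group.id": "efd-bridge-demo",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["mtm1m3.forceActuatorData"])  # hypothetical topic name

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    record = json.loads(msg.value())  # e.g. {"sent_time": ..., "fx": ..., "fy": ...}
    line = "forceActuatorData fx={fx},fy={fy} {ts}".format(
        fx=record["fx"],
        fy=record["fy"],
        ts=int(record["sent_time"] * 1e9),
    )
    requests.post(
        "http://localhost:8086/write",
        params={"db": "efd", "precision": "ns"},
        data=line.encode(),
    )
```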
Angelo Fausti: 00:21:35.497 So this is running at our lab here in Tucson and also at the observatory, and I have some example screenshots here. Remember we talked about the camera. Here at the lab, we’re testing just one of the CCDs. We have a temperature sensor for the CCD, six temperature sensors spread across the electronic board, and two sensors on the integrated ASPIC. So this is real data that we are getting at the lab, and this was a huge thing for the people who are testing and integrating this hardware. Another example here is a dashboard from the weather station in Chile. And I would just like to highlight this nice button here that allows us to change from UTC to local time. That was really important because we have people working in different time zones, and when they look at the same dashboard, this feature really helps. And here’s another example of a dashboard. This is the summary state for all the systems running at the observatory. And to quote another colleague here, he said, “Astronomers working on telescopes have to monitor different things depending on what their assignments are. And Chronograf gives us the flexibility to create dashboards that suit our needs in different situations.” And the nice thing is that Tiago had never used Chronograf before. He went to Chile to observe at the Auxiliary Telescope, and in one afternoon of using Chronograf for the first time, he built those previous dashboards. And that was huge. That was super important for them to monitor what’s going on during their tests.
Angelo Fausti: 00:23:49.350 And here’s another example. If you need to do more advanced analysis, you can pull the data from InfluxDB. We are using the aioinflux Python client in notebooks. And just to illustrate, this is an analysis made by my colleague Simon Krughoff, who was comparing the computed versus commanded velocity for the Auxiliary Telescope azimuth motor. He needed to put several time series together and analyze what was going on. So I think the combination of the notebooks and the data in InfluxDB for doing this kind of analysis afterwards is just great. And for this system, we also have alert notifications going to Slack. You see that there are lots of discussions here, and this is just great; it allows people to follow what’s going on and make these comments. Just to quote my colleague Simon again, he says, “We have sensors publishing information in many streams and from multiple physical locations. And Kapacitor has been critical in sounding the alarm when the system’s not behaving correctly.”
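For readers who want to reproduce this kind of notebook analysis, here is a small sketch of pulling a time series from InfluxDB 1.x with the aioinflux client into a pandas DataFrame. The host, database, measurement, and field names are hypothetical, not the actual EFD schema.

```python
# Sketch: query EFD-style data into a DataFrame with aioinflux for offline analysis.
# Host, database, measurement, and field names are hypothetical.
import asyncio
from aioinflux import InfluxDBClient

async def fetch_velocities():
    async with InfluxDBClient(host="influxdb.example.org", db="efd",
                              output="dataframe") as client:
        # Returns a pandas DataFrame indexed by time, ready for plotting
        # or for comparing commanded versus measured values.
        return await client.query(
            "SELECT commanded_velocity, measured_velocity "
            "FROM azimuth_motor WHERE time > now() - 1h"
        )

# In a notebook you could simply `await fetch_velocities()`; in a plain script:
df = asyncio.run(fetch_velocities())
print(df.head())
```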
Angelo Fausti: 00:25:24.210 So I will finish by describing a challenge that we had. Remember this mirror cell; we have lots of sensors here, and we had to record data from about 120 different measurements at 50 hertz. And we didn’t know if Kafka and InfluxDB would be able to keep up with this data stream. So the question is, can we write to InfluxDB and, at the same time, read this data back for real-time analysis? To answer that question, we measured the end-to-end latency of the system. Essentially, we have the time when the message was sent; it goes through the whole system; and then we have the time when it was written to InfluxDB. The latency is just the difference between the write time and the sent time. And we were able to make these plots. We saw that the latency was very low, like 0.06 seconds. And in this example, we are writing about 100,000 points per minute into InfluxDB. But you also see these bumps here where the latency increases, and this is when we are executing queries against InfluxDB. But that’s not a problem; you see that the system recovers [after that?]. So this is a pretty good result, showing that we are ready to handle these large data streams, especially from this subsystem, the mirror cell, which is the one that publishes data at 50 hertz.
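The latency metric Angelo describes is simple to compute once both timestamps are recorded; here is a minimal sketch, assuming arrays of sent and write times in seconds.

```python
# Sketch of the end-to-end latency measurement: write time minus sent time.
# Assumes both timestamps are available as UNIX times in seconds.
import numpy as np

def latency_stats(sent_times, write_times):
    latency = np.asarray(write_times) - np.asarray(sent_times)
    return {
        "mean_s": float(latency.mean()),
        "p95_s": float(np.percentile(latency, 95)),
        "max_s": float(latency.max()),
    }

# At roughly 100,000 points per minute, a mean latency around 0.06 s
# would match the result reported in the talk.
```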
Angelo Fausti: 00:27:34.312 As for next developments, we are super happy to try InfluxDB 2.0. We are just waiting for annotations to be implemented in this version, as it’s a really important feature for us. And we’ll continue testing our solutions with increasing data volume and variety as more subsystems of the telescope are put together. We have already tested data replication to our data facility and the cloud, but the question remains: what resources do we really need to store the raw data for the lifetime of the experiment? Another thing we are doing: all our deployments are on Kubernetes, and we initially used tools like Terraform to do the deployments. We are now switching to Argo CD, which is, in my opinion, a better tool for continuous delivery and for managing these deployments. And it looks like we have time to solve other problems. So we have plans to use InfluxDB for the observatory electronic log system where observers make comments, which is just another example of time series, right? And why not also evaluate InfluxDB to store astronomical data? So I think I will finish here, and if you have questions, I’m happy to answer. Thank you.
Caitlin Croft: 00:29:24.975 Thank you, Angelo. That was great. So cool to see what you guys are doing with InfluxDB. So we have a couple of questions here. What are you guys hoping to do with all of the data and photos that you guys are going to be collecting?
Angelo Fausti: 00:29:47.144 So you mean data from the sensors or actually astronomical data?
Caitlin Croft: 00:29:54.285 Both.
Angelo Fausti: 00:29:56.039 Both?
Caitlin Croft: 00:29:56.089 I think they were asking about both.
Angelo Fausti: 00:29:57.555 Okay. So yeah, I think the nice thing about LSST is that it’s a project that’s going to collect data for several different use cases in astronomy. With this data, we’ll be able to do science ranging from our solar system to our galaxy and all the way to cosmology. So there are really broad science use cases for this data. I just don’t have time to cover them, but I welcome you to look at the project website and see our different science applications there. And then the sensor data, as I mentioned in the beginning, is really important to understand what’s going on with the telescope. This project is going to be on-sky for 10 years, and we need this data to see long-term trends and how the hardware is behaving. And the same for our software: making sure that as we develop or improve our algorithms, we are still within the science requirements.
Caitlin Croft: 00:31:39.309 Great. We have another question here. Did you have any cardinality issues with your team? Were there any issues that they ran into? It sounded like you did a good job of training them, but just curious if you had any more insight there.
Angelo Fausti: 00:31:57.966 Yeah. So as I mentioned, we had this issue with the run ID, which is an incremental number that we were recording as a tag, and that increased the series cardinality a lot. So we realized that, and then we just decided to record the run ID as a field instead of a tag. That solved the problem. But this was for our first use case. For our second use case, we’re recording basically everything as a field. We don’t have tags, at least for now. So basically, for each measurement and field, I think we have just one time series. We don’t have cardinality issues.
Caitlin Croft: 00:33:02.018 Okay. Great. Thank you. Do you have any guidelines or tips or tricks for someone who’s new to time series databases?
Angelo Fausti: 00:33:15.816 Yeah. I think it’s important to study the InfluxDB concepts. At the beginning, it’s kind of difficult to understand what the measurement is, what the field and a tag is. And so read about those concepts, think about your problem, and try to do the mapping between concepts that you have on your side and InfluxDB concepts. So you can design your database in several different ways, but yeah, it’s just a matter of trying and see which one works better for you.
Caitlin Croft: 00:34:04.830 Do you currently monitor InfluxDB as well?
Angelo Fausti: 00:34:10.205 Yeah. That’s right. I should have mentioned that. We have infrastructure both at the observatory and at our data facility, and in both places we use InfluxDB to monitor the hardware. So yeah, it’s great that we can share the same technology stack for applications that are so different.
Caitlin Croft: 00:34:43.862 Great. Another question and comment. Someone really likes that you monitor your CI pipeline. Don’t these tools already provide you with information about the builds, etc.?
Angelo Fausti: 00:34:58.714 Oh, yeah. So in our case, the information that we really wanted to store is the metrics. After the build, we run the Science Pipelines code on test data sets, which are fixed. So if the code changes, we can evaluate that change and see if it’s a regression or an improvement. These metrics are really collected from our science algorithms, which is not information that the build system originally provides. We had to produce those numbers and store them to analyze them.
Caitlin Croft: 00:35:52.060 Great. Another question. You mentioned that you found Argo was better. Can you explain why you found it was better for you?
Angelo Fausti: 00:36:01.850 Yeah. So I think the thing that I really like about Argo CD, and also Flux, which is a similar project, is the GitOps methodology. With Argo CD, we can have all the configurations, secrets, and everything in a Git repository. And instead of making manual changes to the deployment, we first do it in Git, and then Argo CD synchronizes the state of the repository with the state of the deployment. This synchronization is handled by Argo CD. It also has a very nice UI, so you can see all your Kubernetes objects in the UI, each with a health status. If you need to delete [all?], you can do that from the UI. And so I think it enables other people on our team to manage the deployments without knowing tools like Terraform, which can be a little bit difficult for a person who doesn’t use it every day.
Caitlin Croft: 00:37:35.272 Great. Another question here. Do you have any plans for AI or machine learning?
Angelo Fausti: 00:37:43.740 Yeah. So machine learning is a very important thing in our project, I think more for the analysis of the astronomical data. For instance, we have a problem where we have to look at the image and classify objects, whether they are stars or galaxies. And machine learning is a great tool for that; there are a couple of algorithms already that use machine learning for it. And I am sure there are other applications, I just don’t remember them off the top of my head. It’s a lot of data, so we can train our algorithms on training sets and then just run them over all the data. So yeah, I’m sure these techniques will be really important for us.
Caitlin Croft: 00:38:49.301 It definitely sounds like you’re exploring it and figuring out the best way to use it. Do you monitor the Kubernetes deployments themselves, and how are the pods and containers behaving?
Angelo Fausti: 00:39:04.353 Yeah. So we just use the default dashboard that is provided by InfluxDB. I think an interesting thing about our architecture is that, as I mentioned, we run this source EFD at the lab and at the observatory. In the beginning, we didn’t have a proper Kubernetes cluster to run these things, so we decided to use a lightweight Kubernetes called k3s. We first deployed our application on k3s. And that dashboard was actually important at that time to see if this Kubernetes cluster - which is not properly a cluster because it runs on a single machine - was behaving well. And then, when we deployed on Google Cloud, we sometimes also use these dashboards to see how the cluster is behaving.
Caitlin Croft: 00:40:23.240 Great. Thank you, Angelo. Does anyone have any more questions? Feel free to put them in the chat or the Q&A box. Angelo, this was such a great presentation. This is, I think, the third time that I’ve heard it, and I keep finding little nuggets of information. I think it’s really cool what you guys are doing down there, showing us the stars. Since we have a little bit of time - you were at InfluxDays a couple of months ago, and you attended the Flux training. Do you mind going through a little bit of what that experience was like?
Angelo Fausti: 00:41:17.315 Oh, yeah. I should have mentioned something about Flux in my presentation. I think our users are already excited and scared at the same time about switching from InfluxQL to Flux. After that training, I actually used the material that we got from it to demonstrate to our team what Flux looks like and the capabilities of Flux to solve some problems that we cannot solve right now. One example is this summary state monitoring dashboard. We have the states from different subsystems in different measurements, right? They’re shown and color coded here. And one thing that we cannot do in InfluxQL is combine or join the states from different measurements to get a summary table, for instance. So we have to display these individually and make individual queries to collect all these states. Flux will help us with this example, for instance. So after doing the training, I think I convinced myself that Flux is going to be much more flexible, because it’s not just a query language; you can do more computations with that language. And also, the ability to combine data from different sources using Flux is great. So yeah, I don’t know if I answered your question properly, but.
Caitlin Croft: 00:43:39.497 That’s great. It sounds like you were able to gain a lot in the training and that Flux was able to answer some of the challenges you were facing. Which version of InfluxDB are you using? Are you using Cloud or Enterprise?
Angelo Fausti: 00:43:56.461 So we are using the open source. Everything we did so far was using the open source. And we are still on InfluxDB 1.x and Chronograf right now, I think, 1.7.15, which is the latest release for the 1.x series.
Caitlin Croft: 00:44:28.773 Okay. And you mentioned that you’re using a new time format for stars. Can you go into that?
Angelo Fausti: 00:44:38.784 Time format for stars? I don’t remember that. Can you expand a little bit on that? Perhaps I’m missing the -
Caitlin Croft: 00:44:57.563 How about I unmute you, Chris, and then you can ask your question?
[silence]
Caitlin Croft: 00:45:20.698 All right. I think that’s it. Angelo, is there anything else that you’d like to add?
Angelo Fausti: 00:45:30.425 No. We’d just like to thank InfluxData for the opportunity to present our use case. And I encourage people to use these technologies and also to go to InfluxDays. For me, it was an incredible experience. The community is super friendly and willing to help. Just to give an example, I mentioned that this annotations feature is something that’s really important for us, and we missed it in the first alpha and beta releases of 2.0. We are in direct contact with the developers, and I feel that we can give them feedback on how we use annotations and actually participate in the decision process for the design of this feature in the new version. So yeah, I’m super happy about all this interaction, and thank you so much for that.
Dr. Angelo Fausti
Software Engineer, Large Synoptic Survey Telescope - Vera C. Rubin Observatory
Angelo Fausti got his Ph.D. in Astronomy in 2009, studying the properties of dark matter halos in cosmological simulations such as the Millennium simulation. As a member of the LSST data management team, his main projects include creating a service for monitoring the LSST Science Pipelines metrics and the Engineering and Facilities Database, which will record data from numerous devices and sensors to monitor the telescope and the observatory.