Percona Live Dublin recap

On September 26, 2017, I was a speaker at Percona Live in Dublin. It was a huge event with more than 140 speakers, covering tracks on MySQL, MongoDB, Elasticsearch, and MyRocks, as well as use cases on how to successfully build and manage large databases from companies like Cloudflare, Facebook, Percona, and InfluxData.

I was in the time series track, or at least that's what I called it. Besides my talk on InfluxDB internals, Daniel Lee from Grafana Labs spoke about how to build and visualize data with Grafana; Brian Brazil, core Prometheus contributor and founder of Robust Perception, spoke about the new Prometheus TSDB and Prometheus 2.0; and Roman Vynar from Quiq spoke about using Prometheus with InfluxDB for metrics storage.

For me, it was a chance to understand how many people are currently using a time series database. A lot of the attendees were managing big MySQL or Oracle clusters, and I expected to hear plenty of strong opinions about how easily a traditional engine can be turned into a store for events and time series.

Surprisingly, that didn't happen. Instead, a lot of people were curious about what advantages an engine designed for this particular kind of data can offer, and during my talk I covered some of them: compression algorithms, retention policies, and sharding. I got a lot of good feedback and impressions on this topic. My talk wasn't recorded, but InfluxData cofounder Paul Dix gave the same presentation at the Carnegie Mellon University (CMU) Database Group, and that one was. The video is available, so you can have a look.
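To make one of those advantages concrete, here is a minimal sketch of what a retention policy looks like in practice, so that data past a chosen age expires automatically instead of being cleaned up by hand. This is only an illustration, assuming an InfluxDB 1.x instance on localhost:8086, a hypothetical "metrics" database, and the Python influxdb client library; the policy name and duration are made up for the example.

    # Minimal sketch: create a retention policy on a hypothetical "metrics"
    # database so points older than two weeks are dropped automatically.
    # Assumes InfluxDB 1.x on localhost:8086 and the influxdb Python
    # client library (pip install influxdb).
    from influxdb import InfluxDBClient

    client = InfluxDBClient(host="localhost", port=8086, database="metrics")
    client.create_database("metrics")  # no-op if the database already exists

    # Keep two weeks of data with one replica, and make it the default policy.
    client.create_retention_policy(
        name="two_weeks",
        duration="14d",
        replication="1",
        database="metrics",
        default=True,
    )

With a policy like this in place, the engine enforces expiration on its own, which is part of what makes a purpose-built time series database easier to operate than a general-purpose one.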

Speaking with some InfluxDB users, I got some good ideas about what the community needs, like a better backup solution and more Kapacitor use cases. It was a good conference because I had the chance to talk with database administrators who work with more traditional databases. They have a lot of good stories and scenarios that we can cover to keep making InfluxDB easier to run and maintain.

Check out the slides from my talk.