Upgrading hundreds of Kubernetes clusters

Host:

  • Bart Farrell

Guest:

  • Pierre Mavro

How do you upgrade a Kubernetes cluster to the latest release without breaking anything?

And what if you had to upgrade hundreds of clusters simultaneously?

In this episode, Pierre explains the process, tooling, and testing strategy for upgrading clusters at scale.

You will learn:

  • How the team at Qovery keeps up to date with the latest (vanilla) Kubernetes changes and with managed service changelogs.

  • How to upgrade Helm charts gradually and safely. Pierre has some tips for Custom Resource Definitions (CRDs).

  • How to test API deprecations with end-to-end testing.

  • How to automate the process of upgrading clusters.

You will also learn from Pierre's experience managing stateful applications in Kubernetes across 4,500 bare-metal nodes.

Transcription

Bart: Upgrading hundreds of Kubernetes clusters is no easy task, but Pierre Mavro is definitely one person who's ready for the job. He's a co-founder and CTO at Qovery and has been working with a small team of engineers that has automated the process of upgrading Kubernetes clusters on both public and private clouds. We sat down with him here on KubeFM to hear how it's done without breaking the bank in the process. Welcome to the KubeFM podcast, Pierre. Very nice to have you with us.

Pierre: Thanks for having me. It's a pleasure; I'm very happy to be here.

Bart: As are we. That being said, we always like to ask our guests the following question: if you had to install three tools on a new Kubernetes cluster, which tools would they be, and why?

Pierre: Okay. So I don't think I have only three tools; maybe think of it as stacks, or tooling that isn't necessarily installed on the cluster itself but that you can use. My first pick is K9s, because kubectl is great, but when you start managing and digging around, typing a lot of commands gets long. K9s is a great tool with a TUI that helps you analyze your cluster, dig into the logs, get pod information, edit ConfigMaps, and do a lot of things really easily once you learn the hotkeys. It's a very great tool. The second one is mostly a combo: External DNS, cert-manager, and NGINX ingress. When you have this stack, you can easily deploy an application and make it available through a DNS name with TLS, without much effort: it's just annotations you put on the resource, and the magic happens. I was amazed when I first discovered External DNS a long, long time ago; it was one of those moments where I thought, yes, we've jumped to the next level. So I like this stack. And finally, the last one is an observability stack with Prometheus, Metrics Server, Prometheus adapter, and so on, to have really good observability of what happens on the cluster. As a cherry on top, you can build custom autoscaling on it. So this is, for me, a really interesting part and something that should be deployed as soon as possible.
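
To make the "annotations plus magic" concrete, here is a minimal sketch of the kind of Ingress this stack reacts to. The hostname, issuer name, and backend service are placeholders, not a Qovery configuration; it assumes ingress-nginx, external-dns, and cert-manager (with a ClusterIssuer named letsencrypt-prod) are already installed.

```bash
# external-dns reads the hostname and creates the DNS record;
# cert-manager sees the annotation and issues the TLS certificate.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod           # assumed ClusterIssuer
    external-dns.alpha.kubernetes.io/hostname: app.example.com # record created by external-dns
spec:
  ingressClassName: nginx
  tls:
    - hosts: [app.example.com]
      secretName: my-app-tls   # cert-manager stores the issued certificate here
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app   # placeholder backend service
                port:
                  number: 80
EOF
```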

Bart: Good. Now that we've got that out of the way and know your personal tooling choices, tell us a little more about yourself. Who are you, what do you do, and where do you work?

Pierre: Sure. So I'm co-founder and CTO at Qovery. Over the last decade I worked at big companies like Red Hat and Criteo. I deployed private clouds in the telecom industry and worked on public clouds for various companies. I've worked on a lot of distributed systems, mostly NoSQL databases. And finally, you may be interested to know that I started working on Kubernetes in 2016, on version 1.2 if I remember well. So it's been a long, long time, and things have changed, but yeah, it was fun. And so I'm working at Qovery. To give you some context: Qovery is a self-service developer platform, meaning developers can deploy their code on a Kubernetes cluster without any infrastructure knowledge, and they can have ephemeral environments with a single click. So the developer experience is high. In parallel, DevOps keeps control of the platform, and Qovery is their best friend. We deploy a cloud-managed Kubernetes cluster on the customer's cloud account, and we deploy several infrastructure elements, basically Helm charts, to have a fully production-ready cluster quickly. So in a few minutes, a Kubernetes cluster appears in the Qovery web console and applications can easily be deployed. Kubernetes is at the heart of our stack, and that's really important for us: since we support multiple cloud providers, we try to be as agnostic as possible, and Kubernetes is at the heart of that.

Bart: Very good. Obviously, when you started in 2016, things were a little different. Tell me more about that. How did you learn Kubernetes? What was your process for getting into this cloud-native world?

Pierre: Yeah. Learning Kubernetes is a long road, I would say. At that time there were fewer components, and mostly everything was in alpha, as I remember. But yeah, it was different. In 2016 I was working for Criteo, but at the same time I had my own company, and I was working on a project that was a little more complicated: I was deploying Kubernetes on bare-metal nodes. For that, I was using Kubespray. For those who don't know it, Kubespray is a way to deploy a Kubernetes cluster with Ansible, on cloud providers or on bare metal; there are several options for where you want to deploy. It was very interesting because I learned a lot about all the Kubernetes components: when you manage your own Kubernetes cluster, there are several components you have to understand, how they work individually, how they work together, and so on. It's necessary for learning and debugging. I also learned with Kubernetes The Hard Way by Kelsey Hightower, which is a good way to start as well. Maybe a little complex if you're just interested in what Kubernetes is and how to use it, because it's more about how you deploy and manage it, but it's really useful too. And finally, the official documentation is also excellent, I would say.
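
For readers who haven't seen Kubespray, the workflow Pierre describes looks roughly like this. The node IPs and inventory name are placeholders, and the repository layout has changed over the years, so treat this as a sketch and check the Kubespray docs for the version you use.

```bash
# Clone Kubespray and install Ansible plus its dependencies.
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip3 install -r requirements.txt

# Copy the sample inventory and declare your bare-metal nodes (placeholder IPs).
cp -r inventory/sample inventory/mycluster
declare -a IPS=(10.0.0.1 10.0.0.2 10.0.0.3)
CONFIG_FILE=inventory/mycluster/hosts.yaml \
  python3 contrib/inventory_builder/inventory.py "${IPS[@]}"

# Run the playbook that deploys the whole cluster.
ansible-playbook -i inventory/mycluster/hosts.yaml \
  --become --become-user=root cluster.yml
```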

Bart: Very good. If you had to go back and give advice to yourself in 2016, and you mentioned working through Kelsey Hightower's Kubernetes The Hard Way, do you feel there were things you did that, looking back now, you would do differently?

Pierre: Maybe not that much, in fact, because Kubespray was the best option at the time to deploy a Kubernetes cluster. It's been a long time since I last looked at that project, unfortunately. But no, I don't think so, to be honest.

Bart: Okay. In terms of your growth as a technologist and the kinds of projects you've been working on, you mentioned bare metal, and you also mentioned private clouds and things of that nature. What kind of Kubernetes experience and projects have you worked on, in terms of the number of clusters and the number of nodes?

Pierre: So as I mentioned, while I was at Criteo I had my own company and built a 10-node cluster on bare metal. After a year, I saw all the benefits that Kubernetes could bring at Criteo, where I was working. To give you an idea, at that time the NoSQL team was my responsibility. The team was composed of five engineers supporting several million requests per second, and we had 4,500 nodes. So it was pretty huge: no VMs, only bare metal. You can imagine the maintenance when nodes were crashing or having issues, failed hardware like RAID cards or faulty drives, that kind of element. It was not that easy. And we were providing NoSQL technologies like Cassandra, Couchbase, and Elasticsearch, so the load was only stateful. Kubernetes support for stateful workloads was not that easy either, and maybe not super mature, but it was enough to address our main issues. The main goal was to replace manual operations for stateful databases with something fully automated, avoiding manual intervention. So the change was pretty interesting, and I introduced Kubernetes at that time. It was not easy, because this was 2018, and at the beginning of 2018 Kubernetes was not yet the market leader; internally at Criteo we had Mesos as well, a kind of competitor to Kubernetes. So it wasn't easy to bring it in, but we were able to. With my team, we wrote all the Chef recipes to deploy Kubernetes clusters on-premises, then we made custom StatefulSet hooks, startup scripts, and so on to make it happen. Operators were very young at that time, and we wanted to validate the performance before moving on. After eight months, we were able to roll out Kubernetes and move all the workloads onto it. So we were pretty happy about it. But it was not a walk in the park, to be honest.

Bart: For the sake of context, as you explained, in 2018 Kubernetes was not the only option; the container orchestration wars were still going on. Simultaneously, you're reading Kelsey Hightower, who at that point was very adamant about saying don't run stateful workloads on Kubernetes, and you're bravely leading a team of five people and saying, look, we're going to do this. How did you get your team to level up, given the lack of maturity around some of the tooling and the operator framework? The landscape was not the same as it is today. How was that process?

Pierre: Yeah, the workload at Criteo was high, so we didn't have a lot of pods per node. That was the simple part, in fact. We had big servers: between 50 and 100 CPUs each, and from 256 gigabytes up to 500 gigabytes of RAM. The servers were big and the workload was big, so there was no real advantage to having a lot of pods on a single node. Take a Cassandra cluster, for example: we could have multiple Cassandra clusters on a single Kubernetes cluster, but for any given Cassandra cluster there was only one Cassandra node per physical Kubernetes node, so only one real pod doing the work on each machine. That simplified the storage question: we needed high performance, and you can't compete with local SSD or NVMe storage, to be honest. And we used Kubernetes to detect issues, using PDBs, StatefulSets, hooks, and things like that, as I mentioned. When something bad was detected, we could take a node out and provision another one, so a new StatefulSet pod replaced the old one and the data started to move around. We could always keep control of what happened and decide how many operations to run at the same time. So yeah, the complexity was not really in the number of nodes, but mostly in the behavior we had to adapt to each situation, for Elasticsearch, Cassandra, and so on.
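
A sketch of the pattern Pierre describes: a StatefulSet whose pods repel each other so at most one lands on each physical node, local storage for performance, and a PodDisruptionBudget to keep control over concurrent operations. All names, sizes, and the storage class are illustrative, not Criteo's actual configuration.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels: {app: cassandra}
  template:
    metadata:
      labels: {app: cassandra}
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels: {app: cassandra}
              topologyKey: kubernetes.io/hostname   # at most one pod per machine
      containers:
        - name: cassandra
          image: cassandra:4.1
          volumeMounts:
            - {name: data, mountPath: /var/lib/cassandra}
  volumeClaimTemplates:
    - metadata: {name: data}
      spec:
        storageClassName: local-nvme   # assumed local-storage class for NVMe disks
        accessModes: [ReadWriteOnce]
        resources: {requests: {storage: 1Ti}}
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: cassandra
spec:
  maxUnavailable: 1   # limit how many pods an operation may disrupt at once
  selector:
    matchLabels: {app: cassandra}
EOF
```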

Bart: It's very good to have that context to better understand the blog post you wrote that we're going to focus on today, titled The Cost of Upgrading Hundreds of Kubernetes Clusters. I want to hear a little about what came next: after managing all this very large infrastructure in previous positions, you moved on and founded Qovery. What was the reason behind that? What made you think, hey, this is my next step as an engineer, as a technologist?

Pierre: Kubernetes is well known now, and cloud providers decided to adopt it; I mean, it has been selected as the leader of the market, and I think that's a good thing. We talked with a lot of companies struggling with Kubernetes and how it works. They want to be pragmatic and give developers what they need to work, and at the same time, everyone wants Kubernetes. The clouds did a good job of providing Kubernetes. But you know, it's just the skeleton. You then have a lot of things to install on it to make it really usable for developers, and that's where it takes a long, long time. And once you've deployed them, I mean all the necessary charts, you have to manage them. It's not "hey, it's installed, and I'm done," because you'll regularly have to do Kubernetes upgrades; you have API changes and software changes, whether for security fixes or just because you want new features. You have a lot of things to do, and doing all of that takes really, really long. So having a common stack across all cloud providers, and being able to support it really easily for DevOps, is super useful. That's one of the reasons we made Qovery. The second is really the developer experience: we want developers to be as autonomous as possible and able to roll out their code really easily, without headaches.

Bart: With that in mind, when we talk about upgrading clusters, something that comes up a fair amount is managing clusters. Now, the word "management" can mean a lot of things to different people. In your case, how do you define it at Qovery when people speak about managing clusters?

Pierre: Yeah, managing means a lot of things, I agree. So here is what we do today. We manage cluster upgrades, meaning the customer is notified when we roll out the next version of Kubernetes, and then we perform the upgrades on the announced date. We also deploy several infrastructure elements to have a turnkey, production-ready Kubernetes in 30 minutes, and we manage the chart lifecycle. For example, you have a logging stack, Metrics Server, NGINX ingress, External DNS, cert-manager, Vertical Pod Autoscaler, and so on; there are a lot of charts to manage. Deploying on Kubernetes has never been as easy as with Qovery. We have interfaces like the web console, where you point us at your Git repository containing your code and your Dockerfiles; we manage the build, push to a container registry, and perform the deployment with the customer's desired options. Our approach is opinionated: as soon as you deploy through Qovery, we handle the lifecycle of the deployed elements, so users don't have to think over time about what will change with the next version of Kubernetes and so on. We manage it for them. And as we are in charge of the infrastructure elements deployed on the cluster, we ensure the cluster keeps working as the customer's needs evolve. For example, we use a Cluster Autoscaler, which automatically adds or removes nodes based on the cluster's consumption, and we've deployed Vertical and Horizontal Pod Autoscalers on the infrastructure elements to fit requested resources as customer usage changes. That is what we call managing: the customer doesn't have to think about their infrastructure. Everything is automated and managed by Qovery.
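
As an illustration of the autoscaling Pierre mentions, here is a sketch of a VerticalPodAutoscaler keeping an infrastructure component's resource requests in line with actual usage. It assumes the VPA components are installed on the cluster; the target deployment, namespace, and bounds are placeholders, not Qovery's configuration.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ingress-nginx-controller   # assumed deployment name
  updatePolicy:
    updateMode: Auto                 # re-create pods with updated requests
  resourcePolicy:
    containerPolicies:
      - containerName: '*'
        minAllowed: {cpu: 100m, memory: 128Mi}   # illustrative bounds
        maxAllowed: {cpu: "2", memory: 2Gi}
EOF
```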

Bart: All right. Just a quick question: how many engineers are we talking about? How big is your team?

Pierre: Today the tech team is composed of 10 engineers working on the project.

Bart: And how is it possible that you and your team are managing hundreds of clusters with just a handful of people?

Pierre: The answer is short, but it's complex: we test. To be honest, we have a lot of tests, and we run them in the correct order. We read every Kubernetes update in the vanilla changelog and the cloud provider updates; we have RSS feeds piped directly into Slack, for example, so nobody misses the information. For each cloud provider we support multiple architectures, x86-64 and ARM, and for each of those we have a running cluster exercised by our engine pipeline. The engine at Qovery is the part of the code managing all the Kubernetes and cloud provider pieces. In the engine we have a lot of unit tests, and we also have end-to-end tests: deploying clusters, testing different configurations, and so on. For every commit we make, all those tests have to pass before merging. Regarding upgrades, we also use tooling to help with API deprecations, for example kubent, Popeye, kdave, or Pluto, to name four really interesting tools. It's time we invest to ensure everything works as expected. And before rolling out a new Kubernetes version for our customers, we test it for weeks on our own clusters; only then is it deployed to customer clusters. There are rules: for example, we deploy to non-production clusters first, and customers are warned about it. So if something bad happens, we're alerted and we fix the bad behavior, though until now it has never happened. Then we can roll out to production one or two weeks later if everything goes fine.
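
For reference, this is roughly how three of the scanners Pierre lists are run against a live cluster; the target version is a placeholder, and each tool has many more options than shown here.

```bash
# kube-no-trouble: find deployed resources using APIs deprecated or removed
# in the version you plan to upgrade to.
kubent --target-version 1.28.0

# Pluto: detect deprecated APIs inside installed Helm releases.
pluto detect-helm -o wide

# Popeye: a broader cluster sanity scan (misconfigurations, dead resources...).
popeye
```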

Bart: With that in mind, it's no secret there are a lot of moving parts in this ecosystem, and one thing I'd like to hear more about is keeping track of new releases. In one of our previous episodes we spoke to Grace, the 1.28 release lead, about the Kubernetes changelog being a very long list of features that's sometimes difficult to decipher and tricky to stay on top of. What's your strategy for unpacking a new release and spotting potential difficulties? How do you approach that?

Pierre: Yeah. To be honest, we read the changelog; that's the first thing to do. But only a few engineers on the team understand the whole changelog. Why? Because those who have never managed an on-premises Kubernetes cluster, never dug into the Kubernetes code, or aren't familiar enough with Kubernetes internals will have a hard time understanding everything. And why is that? Because with a cloud provider managing your Kubernetes version, there's no need to understand all that complexity. Generally, cloud providers publish upgrade processes, and important changes are written in the docs somewhere. I would say 99% of the time that will be enough, and you won't get a bad surprise or breaking change. But you can't do it without a lot of tooling and a lot of tests. We always go through a manual review, even though we instrument a lot in our engine for upgrades and the like, and there are a lot of tests to ensure everything is fine at any time, whether or not we change certain parts. Because a Kubernetes cluster with a lot of charts deployed, when we talk about logging, observability, ingresses, et cetera, is a lot of things combined, and they all rely on each other. So yeah, testing is the big part of it.

Bart: We've been speaking about upgrading clusters, but what about upgrading Helm charts? What's your process, and how do you go about planning those upgrades?

Pierre: Yeah. Upgrading a Kubernetes cluster may be complex, but charts are, I think, another level up, to be honest, because a chart upgrade is not only a chart upgrade: there is also the software inside it. For example, when you upgrade the Loki chart, you don't only upgrade the chart, you upgrade Loki as well. So to understand what has changed, you need to read the changelog of the chart and the changelog of the software you're upgrading inside it. Our process for that is pretty straightforward. First, every single chart we use is committed into our engine repository, so we are not downloading a chart every time we want to deploy something, and it gives us a clear view of what we had at a given time, because everything is in Git. We keep control of it. For this we created helm-freeze, a tool we made and open sourced, so anyone can use it. We define the charts we want to use and the versions we want to pin. When we want to upgrade, we just change the version, run helm-freeze again, and the new version is downloaded; then we can simply use git diff to see all the changes introduced by the upgrade. That's for the chart itself. We can then adapt some values if necessary, directly with values overrides. We use our unit and functional tests to validate updates as usual, we roll out on our test clusters to validate that everything runs as expected, and after a few days we release to all our customers.
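
The workflow Pierre describes looks roughly like this. The chart, version, and paths are placeholders, and the exact configuration schema should be taken from the helm-freeze project README rather than from this sketch.

```bash
# Pin chart versions in a config file committed to the repository.
cat > helm-freeze.yaml <<'EOF'
charts:
  - name: cert-manager
    version: v1.13.2        # bump this line to upgrade the chart
    repo_name: jetstack
repos:
  - name: jetstack
    url: https://charts.jetstack.io
destinations:
  - name: default
    path: ./charts          # charts are vendored here, under version control
EOF

# Download the pinned versions into ./charts ...
helm-freeze sync

# ... and review every change the upgrade introduces before committing.
git diff ./charts
```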

Bart: And what about catching edge cases? Do you write more automation around the Helm charts for that? What's your strategy?

Pierre: We're happy to see more and more tests directly inside community Helm charts. Unfortunately it's not a requirement, but we strongly believe it will help companies trust community charts in the future. We enable some Helm options by default in our engine, like --atomic or --wait, to reduce upgrade failures, but sometimes it's not enough: we can have pods that are not crashing but are not actually healthy. They're seen as healthy from Kubernetes's point of view, but when you look at their logs, you see that something is wrong. So we instrumented our engine to run additional tests when deploying an infrastructure chart; for every single chart we deploy, we generally add some extra tests. The engine is written in Rust, and this lives inside it today; we're thinking about extracting it into a dedicated external library so everyone can use it. Beyond what Helm brings by default, we handle CRD upgrades, for example. When you use a chart that deploys CRDs, you know that on upgrade you first have to upgrade the CRDs to the required version, and only then can you upgrade the chart. There's a whole process around CRDs, and on our side it's automated: we just declare the CRDs, and they are updated in the correct order and so on. We also back up and restore resources before upgrading. It happened, for example, with a cert-manager release that had some important changes: we wanted to keep the generated certificates and ensure we wouldn't lose them in a big upgrade, so we made backups. And we can reinstall a non-critical chart if the installed version is lower than a specified one; this avoids managing Helm chart upgrades through multiple intermediate releases, so we can jump directly to a higher version without problems. We also instrumented our engine to handle failure cases. Before running a cluster upgrade, several checks are made to reduce failures: a failed job may block the cluster upgrade, a bad PDB configuration or status may block it, or a node group may be in bad shape or not started. There are several elements whose behavior could prevent your upgrade from succeeding. So we automatically fix and clean things up to ensure the upgrade will succeed, and we warn the user if we can't repair it automatically. And finally, as we have a large set of clusters and a single way to manage them, it's possible for us to automate everything and run upgrades smoothly.
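
Done by hand, the CRD-first ordering Pierre describes (which Qovery's engine automates) looks something like this. cert-manager is just an example; the version, URL, and timeout are placeholders.

```bash
# Assumes: helm repo add jetstack https://charts.jetstack.io && helm repo update

# 1. Upgrade the CRDs first. Helm does not upgrade CRDs shipped in a
#    chart's crds/ directory, so they must be applied separately.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.2/cert-manager.crds.yaml

# 2. Optionally back up resources that must survive the upgrade.
kubectl get certificates -A -o yaml > certificates-backup.yaml

# 3. Then upgrade the chart itself. --atomic rolls back on failure;
#    --wait blocks until the release's resources report ready.
helm upgrade cert-manager jetstack/cert-manager \
  --namespace cert-manager --version v1.13.2 \
  --atomic --wait --timeout 10m
```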

Bart: One thing that's not scripted, but I want to dig into a little: you talked about CRDs. A very good friend of mine said a long time ago, and it has stuck with me, that custom resource definitions are his favorite feature in Kubernetes. Would you agree with that? And if not, what feature would you say is your favorite?

Pierre: Yeah, I think it's a good point, and I mostly agree in terms of the feature itself, because it's excellent. You can do whatever you want; you're not stuck with what Kubernetes provides out of the box. So that part is excellent. However, I would say the support level for CRDs is not as good as it should be. This is why we had to implement extra machinery to manage CRDs ourselves. It's a shame, I would say, that Helm doesn't manage CRDs better, because there are always more and more of them. I think the tooling around CRDs is not mature enough today, unfortunately, but the feature itself is very, very good.

Bart: All right. Everyone has different experiences, and you've been using Kubernetes since 2016, so yours is unique in that regard. You've explained the whole process that you and your engineers use. How does this process scale beyond 100 clusters? What would it take, in terms of additional resources or processes, to handle that kind of magnitude?

Pierre: We have to go further and run more tests on those charts, by using metrics, for example. That could be a good solution, because mostly every application exposes metrics, and some are really good. When you perform an upgrade, those metrics can be used to verify the application is running as expected. But not only that: today we are close to 300 managed clusters on average. To manage more, we'll have to A/B test with more granularity. Today we can limit a rollout to a given number of clusters, or filter only production or non-production ones, or a specific cloud provider, for example, so we can decide what to do where. But more control is needed, like stopping on a failure and giving more information to the end user, mostly because most of the time issues come from the cloud provider itself, like quota issues, and user intervention is required there. The key to managing Kubernetes clusters at scale is tooling, just like in my previous experience at Criteo managing a lot of machines. Tooling isn't just necessary; it's mandatory.
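
A sketch of the metrics-based validation Pierre suggests: after rolling out a chart, query Prometheus and fail the pipeline if a key signal degrades. The Prometheus URL, metric, and threshold are placeholders; the metric shown is the ingress-nginx request counter, queried for 5xx responses.

```bash
PROM=http://prometheus.monitoring.svc:9090   # assumed in-cluster Prometheus

# Current rate of 5xx responses across the ingress controller.
ERRORS=$(curl -s "$PROM/api/v1/query" \
  --data-urlencode \
  'query=sum(rate(nginx_ingress_controller_requests{status=~"5.."}[5m]))' \
  | jq -r '.data.result[0].value[1] // "0"')

# Fail the post-upgrade check if the error rate exceeds a chosen threshold.
awk -v e="$ERRORS" 'BEGIN { exit !(e < 1) }' \
  || { echo "error rate too high after upgrade: $ERRORS req/s"; exit 1; }
```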

Bart: Shout out to your team; that's good. And speaking of your team, what's next at Qovery? What's on your roadmap?

Pierre: Yeah. On our roadmap, we want to handle more cloud providers. As I told you, we want to be as cloud-provider agnostic as possible, and we want to cover all of them. Today we support AWS and Scaleway, and by the end of the year we'll support GCP as well. Our engine is written in Rust, and we've made specific code like the Helm handling we discussed. We want to move those pieces into dedicated libraries, as I told you, because we want to leverage the community on those parts. We really think Kubernetes and Rust work very well together; we strongly believe Rust is an excellent language and a very good companion for production usage, so sharing our libraries could help the adoption of Rust and Kubernetes at the same time. We also have plans to bring a service catalog, where you can easily deploy what you want on a Kubernetes cluster. This catalog will help DevOps offer simple deployable applications to developers with an excellent experience: something that could be very complex for a developer becomes easy, just by using the catalog.

Bart: And just out of curiosity, you mentioned some cloud providers but didn't mention Azure. Any reason for that?

Pierre: Yeah. Bringing a new cloud provider with the team size we have today is complex.

Bart: So I thought you said they were seniors.

Pierre: We are senior, but we still have a hard time with some cloud providers, because some are more mature than others; some have more experience, some have more resources, and they provide different kinds of services. What we see today is that AWS and GCP are the ones our customers request most, so we're focusing on those first. But we also have in the works something even more modular, allowing Qovery to be deployed on any kind of Kubernetes cluster, no matter the cloud provider. It's not ready yet, though.

Bart: We'll wait to hear about that. It sounds like things are going in a good direction, and you very much seem like a person who likes challenges. What I want to finish with is the question I've been dying to get to: you have a black belt in karate. So tell me about doing karate the hard way. What's your experience with karate, and how does it inform how you approach challenges and break things down into steps? I want to know more about that.

Pierre: Okay. So I still practice karate, and I started 25 years ago. So to me, it's important for the way...

Bart: If Kubernetes in 2016 was early, then this is like Windows 98.

Pierre: Yes, exactly, we can compare it to that. And yeah, practicing sport in general is important, for sure. For example, when I'm fed up with something and my laptop is taking forever, you know, it gets a circular kick. No, I'm kidding. But people are generally impressed by my ability to keep my composure when I'm tired or in stressful situations, and I strongly believe it comes from the martial art, which helps me a lot, to be honest.

Bart: I agree. I'm a huge martial arts fan, and karate was the first martial art I practiced; I never became a black belt. One thing I think is really important, and is taught in martial arts if the instructors are good, is that while a lot of it seems to be about control of your body, so much of what comes before that is control of your emotions. Working with Kubernetes, cloud providers, or all the technical challenges we've been talking about will involve some kind of emotional response. Knowing when an emotion is coming and how it may block or redirect your thought patterns is something a lot of people might not see or understand about martial arts. I'd like to know your thoughts on that.

Pierre: That's a tough question, because I really think it depends. Every person is different in the way they receive information, interpret it, absorb it, and give it back, and keeping control is totally different from one person to another. I worked in the banking industry for around 10 years, dealing with bank traders, and you can't imagine how stressful that can be sometimes. Some people are able to deal with it; others just aren't, and probably never will be, because it's simply not the way they are. I'm not saying I can handle everything, not at all. But I think some people are more prepared for stressful situations, and some sports like karate can help a lot in managing it, at a certain level, depending on the person.

Bart: Yeah, I think that's a good point. I also think there are so many different kinds of martial arts. It doesn't mean everyone has to go out and become a black belt, but there's the experience of self-discovery, self-awareness, and situational awareness that it can provide, because there's a risk factor in martial arts, as in any sport; in any sport you can get injured, even in golf. I haven't played golf enough to get injured, and I don't plan on it either. But because of that, all the instructors I've had have always explained: this is not a toy, this is not a game; there's a deep level of respect established in terms of creating safe spaces. When we decided to do the podcast, I thought, oh, this is going to be fun to talk about. So you've been doing it for 25 years, and I assume your plan is to continue with it as well?

Pierre: Yeah, I'm still continuing, and I don't think I will stop. I mean, the day I stop, it will mean I'm not able to do karate at all anymore. It's part of my DNA now.

Bart: Yeah, I think that's a really positive thing. That being said, if people want to get in touch with you, whether it's about automation, testing, or karate, what's the best way to do that?

Pierre: People can reach me on LinkedIn or by email at pierre at qovery dot com. But I don't think I would be a great karate teacher, to be honest; for other things, I can be useful.

Bart: At KubeCon Paris, I propose we do a karate session, even if it's only 15 minutes; I think it'd be a lot of fun. I haven't done karate for a long time, but it's worth trying. Pierre, thank you very much for your time. It's wonderful to talk to somebody who has taken on challenge after challenge, doing Kubernetes the hard way, really doing it the hard way back in the day, running stateful workloads. You've earned my respect, and I'm sure that of a lot of other people as well. I look forward to hearing more about what you've got going on, as well as Qovery. Thank you very much for joining us today.

Pierre: Thanks for your time, and thanks for inviting me to this podcast. It was a pleasure.

Bart: Talk to you soon.
