The hater's guide to Kubernetes

Host:

  • Bart Farrell

Guest:

  • Paul Butler

This episode is sponsored by Syntasso, the creators of Kratix, a framework for building composable internal developer platforms.

If you're trying to make sense of when to use Kubernetes and when to avoid it, this episode offers a practical perspective based on real-world experience running production workloads.

Paul Butler, founder of Jamsocket, discusses how to identify necessary vs unnecessary complexity in Kubernetes and explains how his team successfully runs production workloads by being selective about which features they use.

You will learn:

  • The three compelling reasons to use Kubernetes: managing multiple services across machines, defining infrastructure as code, and leveraging built-in redundancy.

  • Why to be cautious with features like CRDs, StatefulSets, and Helm and how to evaluate if you really need them.

  • How to stay on the "happy path" in Kubernetes by focusing on stable and simple resources like Deployments, Services, and ConfigMaps.

  • When to consider alternatives like Google Cloud Run for simpler deployments that don't need the full complexity of Kubernetes.

Transcription

Bart: In this episode of KubeFM, I got a chance to speak to Paul Butler, co-founder of Jamsocket. He's going to share his insights on leveraging key tools and making strategic decisions in cloud infrastructure. In this episode, we'll examine emerging projects in the cloud-native ecosystem that Paul is watching closely, including SleekDB, a Rust-based key-value store built on object storage, and Caddy, an alternative approach to managing HTTP traffic. We'll also focus on Kubernetes. We dig into an article Paul wrote called "The Hater's Guide to Kubernetes", exploring when Kubernetes is the right fit and when it might not be. Paul's perspective will resonate with anyone grappling with the balance between necessary and unnecessary complexity in cloud environments, as he shares his nuanced take on Kubernetes as both a robust technology and, at times, a challenging platform for rapid, user-facing applications. This episode is brought to you by Syntasso, the creators of Kratix, a framework for building composable internal developer platforms. Kratix supports platform engineers with APIs and lifecycle management for platform components, acting as the intelligent glue between a portal and infrastructure as code. Kratix users can upgrade their Backstage implementation from a portal to a platform and tame Terraform and Crossplane code at scale. Visit them at syntasso.io. Now, let's check out the episode. So, Paul, to get started, what are three emerging cloud-native tools that you're keeping an eye on?

Paul: One project that came up recently, which I've been following for a while and recently got open-sourced, is SleekDB. It's a key-value store written in Rust on top of object storage, such as S3 or Google Cloud Storage. I've been excited by the idea of object storage as a new, almost networked file system. It's not a file system, but it's becoming a new abstraction. Even though object storage has been around for almost 20 years, there's a resurgence of people building directly on blob storage that I find exciting.

In a similar vein, Distribution, a CNCF project, is a Docker registry, or OCI container registry, that's been around for a little while. It can be backed by blob storage, though that's optional. I think it's a great little project. We use it; running a Docker registry sounds intimidating, but with Distribution, it's really not. It's fairly easy to spin up, even in Docker Compose, and play around with. You can push containers directly from Docker, which is convenient.
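As a rough illustration of how simple that can be, here's a minimal Docker Compose sketch using Distribution's official registry image (the service and volume names are placeholders; the port and storage path are the image's defaults):

```yaml
# Minimal local OCI registry using CNCF Distribution's official image.
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"                        # registry API
    volumes:
      - registry-data:/var/lib/registry    # default filesystem storage root
volumes:
  registry-data:
```

With that running, you can tag an image as localhost:5000/myimage and push it straight from Docker. Blob-storage backends such as S3 or GCS can be enabled through Distribution's configuration instead of the filesystem driver.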

The third piece of software I'd like to mention, which has been around for a while, is Caddy. The Caddy server is something we've used in our cluster for a long time as an alternative to Ingress. It's one of those pieces of software that makes a lot of good decisions and has good defaults, easing you into complexity. If you want to, all the knobs are there, but if you don't want to, it's very easy to get started. I love software like that.

Bart: Now, Paul, for the people who don't know you, what do you do and who do you work for?

Paul: I work for a company called Jamsocket, where I'm a co-founder. We started almost three years ago. Essentially, we provide infrastructure for cloud applications to spin up processes on the fly. While this sounds similar to Kubernetes, our specialization is spinning up processes that are web-accessible. Every process has a hostname that's accessible from the web, and we implement security measures to prevent guessing these endpoints. The abstraction we provide is an API call: in advance, you upload a Docker container to us, which we keep ready. You send an API call, and we send you back a hostname that you can connect to from a web client, an Electron app, a mobile app, or anything similar. We keep the process alive as long as the connection is alive, and then we shut it down. A close analogy is Google Cloud Run, except instead of spinning up your function on every request-response cycle, we keep it alive for a longer bidirectional connection, which could be minutes to hours. Usually, it's the user closing the tab in a web app that triggers the shutdown. When the connection drops, we shut the process back down, managing the whole lifecycle.

Bart: And how did you get into cloud native?

Paul: I've straddled data science and data engineering roles throughout my career. I worked at Google for a bit, which gave me some exposure to Borg, mostly as an internal consumer. Honestly, I was a little afraid of it at the time - it's a big, scary cloud configuration, and I just wanted to run a simple app. I had a little bit of exposure to it, but then I went into finance, where I was on a research team for about five years. During that time, I did less hands-on engineering, but I kept my engineering skills sharp by building my own tools. If I wanted to build a tool to analyze lots of data, I'd build it myself.

Throughout my career, I found that anytime I wanted to deal with tools that handled large amounts of data, there was a gap in the infrastructure. It was easy to build tools that could handle tens of gigabytes of data on a local workstation, but increasingly, people wanted these tools to run in the browser. The browser is fundamentally limited, so you need to lean on the cloud to do the compute and load the data. Traditional stateless server architecture wasn't a good fit for this, as you can't have a Flask app load 10 gigabytes of data for every user - it's just not architected that way.

I realized that there was a missing piece that looked like Kubernetes. We ended up building our own distributed system, not directly on Kubernetes, but because of my exposure to Kubernetes, we decided to keep running our control plane on Kubernetes. We run a lot of our internal services on Kubernetes and use it pretty heavily internally.

Bart: And with you being users of Kubernetes on a regular basis, how do you stay up to date with all the changes going on in Kubernetes and the cloud native ecosystem?

Paul: I think we take a pretty boring approach to Kubernetes, in the sense of the "software should be boring" manifesto. Most of what we use is stable features that have been around for a while, so we're not really on the cutting edge of Kubernetes. We have a Slack channel where we receive all the change logs from Google Kubernetes Engine (GKE), and I believe some Kubernetes change logs as well. This way, we can see version by version, especially if there's a vulnerability or something that's been patched, so we can be sure to kick off the upgrade on our systems. For the most part, I keep an eye on Twitter and Hacker News. If something related to Kubernetes comes up on my feed, I tend to keep an eye on it.

Bart: And if you had to go back in time and give your previous self one career tip, what would it be and why?

Paul: One thing I've noticed about software development is that no matter what I've done - from the data engineering world, data science, infrastructure world, and dabbling in web development - there's a through line, a meta technique that's present across all software. If you can separate necessary complexity from unnecessary complexity and build abstractions over the unnecessary complexity, I think you can really advance what you're capable of. The meta skill that's most important is being able to identify whether complexity is necessary or not. I don't have direct advice on how to build that skill, but I think that if you start noticing that early in your career, you can develop it over time and become a very productive software engineer.

Bart: For our monthly content discovery, we found an article that you wrote titled "The Hater's Guide to Kubernetes." The following questions are designed to take this topic a little bit further. To get things started, we want to address the hot take, something you said in 2022, although I must add before we go on that there is an addendum, an update to the blog, which we'll mention at the end. In 2022, you said, "I might gripe about Kubernetes sometimes, but it really is a great piece of technology. I highly recommend it to all my competitors." I imagine there's a lot to unpack here. Is Kubernetes really a great piece of technology?

Paul: The funny thing about that is that I think there are three statements in there. Kubernetes is a great piece of technology. I recommend it to my competitors. Maybe it was just the two. But I do sincerely believe both of those. I think that Kubernetes is a great piece of technology. The core idea of Kubernetes, as I explain it, is that you have control loops where you have this descriptive version of the world - this is what I want to exist in my deployment - and then control loops that make it happen, with cycles of control loops and recursive control loops that become a powerful but also fairly understandable abstraction. It's super robust, right? Control loops, by their nature, are very robust because if something breaks, the control loop is self-healing. Kubernetes takes that control loop idea and puts it on top of Raft consensus, which is also self-healing and very robust. Combining these two abstractions creates a super powerful mechanism for making robust systems. So the first part of the statement is absolutely true - I think it's a great piece of technology. The part about recommending it to my competitors is snarky because the nature of what we do involves spinning up lots of processes, and we tend to expect that a human being is waiting on that process to start. Over time, I've become increasingly confident that Kubernetes is the wrong tool when a human being is waiting on a pod to start. I think that's a big anti-pattern that I've seen. When I say that I recommend it to my competitors, I do think that if we have competitors who are also starting processes on behalf of users and they're basing that on Kubernetes with a human in the loop waiting for a pod to start, I'm less concerned about them because I think they'll run into the same pitfalls that we did. We actually started running all these processes on Kubernetes initially, but we found it was easier to abstract away all of that into our own systems than to shape Kubernetes to our will.
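To make the control-loop idea concrete, here is a minimal, self-contained sketch in plain Rust (illustrative names, not Kubernetes code): a declared desired state and a loop that keeps nudging the observed state toward it, so a failed pass is simply retried on the next tick.

```rust
use std::{thread, time::Duration};

struct DesiredState { replicas: u32 }   // what you declare
struct ActualState { replicas: u32 }    // what the world looks like

// One reconcile pass: observe, diff, act. Because the loop runs forever,
// a pass that fails is retried on the next tick - that is what makes
// control loops self-healing.
fn reconcile(desired: &DesiredState, actual: &mut ActualState) {
    if actual.replicas < desired.replicas {
        actual.replicas += 1;           // e.g. start a pod
        println!("started a replica, now at {}", actual.replicas);
    } else if actual.replicas > desired.replicas {
        actual.replicas -= 1;           // e.g. stop a pod
        println!("stopped a replica, now at {}", actual.replicas);
    }
}

fn main() {
    let desired = DesiredState { replicas: 3 };
    let mut actual = ActualState { replicas: 0 };
    loop {
        reconcile(&desired, &mut actual);
        thread::sleep(Duration::from_millis(500));
    }
}
```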

Bart: Now, an adjective often used to describe Kubernetes is complex. This year, celebrating 10 years of Kubernetes, we interviewed over two dozen CNCF ambassadors on KubeFM about their hopes for the future of Kubernetes. Many of them referred to the idea of becoming more boring or reducing complexity, echoing the "software should be boring" manifesto. In another article you wrote, you discussed accidental versus essential or necessary complexity. Can you walk us through what you mean by necessary complexity and how it relates to Kubernetes?

Paul: So, I think this is just a fancy way of saying you should use the right tool for the job. The analogy I like is that most people have a microwave, and every microwave these days has a timer built into it. If you think about the user interface to that timer and the state space of all the possible states it could be in, you get a pretty complex chart. If you were to map out the finite state machine of all those states, it would be complex for a timer. When I'm in an Airbnb and using the microwave, it's not immediately clear to me how to set the timer. It might take a few button presses before I figure it out, unlike an egg timer with a dial. If you're using a microwave to heat food, you need all that state space of the complex microwave UI. But if you're using it as a timer, you just need the dial. I think in a lot of cases, the things people are using tools for are not that complex, and there are simpler abstractions that would work. Maybe those better-abstracted tools exist, or maybe they should exist. I'm encouraging people to either go find them or go build them.

Bart: And if the perception is that Kubernetes is hard, which seems to be fairly widespread, but we can't simplify it further due to necessary complexity, how do other technologies get away with that? If we think of the example of Heroku, how is it so much easier to use Heroku?

Paul: I think one of the things that makes Kubernetes complex is the trade-off between a resilient control loop system and feedback loops. If the system fails to start a pod, it will try again and keep trying. However, this makes feedback loops longer and more complex. For instance, when you run kubectl apply and create a pod, it may appear successful, but in the background, a failure might be occurring. I believe that resilience to failure and feedback loops can be at odds with each other as goals in software.
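As a hedged illustration of that long feedback loop (all names and the image below are placeholders): applying this manifest succeeds immediately, even though the rollout can never complete.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: broken
spec:
  replicas: 1
  selector:
    matchLabels:
      app: broken
  template:
    metadata:
      labels:
        app: broken
    spec:
      containers:
        - name: app
          # This tag doesn't exist, but apply still reports success;
          # the failure only surfaces later, in pod status.
          image: registry.example.com/app:does-not-exist
```

kubectl apply reports deployment.apps/broken created and exits cleanly; the ImagePullBackOff only shows up later, when you inspect pod status or run kubectl rollout status.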

I also think that software with a web UI can make state more discoverable, especially when the UI is well-designed. This is why software like Heroku, Railway, and Google Cloud Run, which are either abstractions on Kubernetes or equivalents to Kubernetes, can do better. They have highly discoverable UIs and short feedback loops.

Bart: So, with this in mind, when should someone choose Heroku or Kubernetes? Or, beyond that, why should anyone use Kubernetes at all?

Paul: So, we narrowed it down to three reasons why I think Kubernetes is right for us. One is that we want to run a number of services and treat Kubernetes as an abstraction over a bunch of machines: instead of saying, "I want this job to run on this machine," I can have a pool of machines and a pool of jobs, and Kubernetes sorts out what runs where. Another is that we want to control it as code. I'm a big proponent of infrastructure as code, and I think it's the right approach; Kubernetes lends itself well to this because it's configured through code. The third is resilience: if a node goes down, we have redundancy, and when we're doing a deploy, it'll do the right thing - it'll keep services running until the new ones are up. I think the value of Kubernetes for a small company like ourselves is in the intersection of those three things. Despite the article being "The Hater's Guide to Kubernetes", I think the "guide to Kubernetes" part is important too. The article is a description of what has worked for us, what we did and didn't use in Kubernetes to make it work at our scale. So, even though it's "The Hater's Guide," we are ultimately using Kubernetes, and this is a guide to using Kubernetes, not just a guide to hating it.
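As a sketch of how those three things show up in practice (all names are placeholders): a single Deployment is infrastructure you check in as code, replicas: 3 buys redundancy, and the scheduler, not you, decides which machines the pods land on.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3                  # redundancy: survives a node going down
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: registry.example.com/example-api:v1   # placeholder image
          ports:
            - containerPort: 8080
```

Rolling updates come with it: change the image tag and re-apply, and Kubernetes keeps the old pods serving until the new ones are ready.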

Bart: Very important clarification, so folks don't get misled. On the flip side of all this, how do you know when not to use Kubernetes? I was part of the Data on Kubernetes Community for three years, so don't hurt my feelings by saying databases or some kind of stateful workloads. But how would you answer that?

Paul: Our take on data is that we don't store state in Kubernetes, which isn't to say that's never good or never scalable. For us, one reason is that as long as you can treat the cluster itself as stateless, with state living outside of Kubernetes, it's a really nice abstraction and assumption for day-to-day management. Especially if something goes wrong, it's nice not to worry about destroying state when destroying a service. It's reassuring to know that if we knock something over, it will right itself and we'll be back.

We've encountered some issues with StatefulSets that can cause deadlocks and other problems. While people do use them and there are ways around these issues, until you've hit these problems in an emergency and worked through them, you may not be aware of the potential pitfalls.

We've found that using blob storage is a brilliant abstraction that we should all be using more of. We also use a traditional relational database, Postgres, which we outsource to Google Cloud SQL, alongside our cluster on Google Kubernetes Engine (GKE). Google does a great job of keeping it up without us having to think about it, and we get a bunch of nice things on top of it.

Generally, I think it comes down to my preference to outsource stateful things if I can to a company that knows how to run it, coordinate backups, and provide a nice UI. At the end of the day, things that are stateful make me sleep better at night if I know they're in good hands, even if it costs a little bit more.

Bart: Fair points. Now, it's no secret that Kubernetes has a vast number of features that cover many different use cases. Its inherent complexity partially comes from the fact that it's standardizing and exposing all those features, giving you the chance to tune all possible knobs. You clearly have strong opinions on when and when not to use Kubernetes, but do you also have opinions on Kubernetes resources like pods, deployments, and StatefulSets?

Paul: In fact, in the article, I try to provide the guide I wish existed when I started dabbling in Kubernetes. I don't want to say a "blessed path," because that makes me sound like I know better than anyone else. However, I do think that certain resource types and paths have a higher level of difficulty in Kubernetes. There's a "happy path" where, if you mainly stick to Deployments, Services, ConfigMaps, Secrets, and CronJobs - the bread-and-butter resource types that are really well supported across clouds and don't interface much with the clouds - you're just running a job, and it's all pretty standard, vanilla stuff. If you can stick to that as much as possible - and there are exceptions, which I discuss in the article - you can be very happy and avoid diving head-first into all of the complexity of Kubernetes.
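To illustrate, staying on that happy path can look like this hedged sketch (placeholder names throughout), pairing two more bread-and-butter resources with the Deployment sketched earlier: a Service routing traffic to the pods and a ConfigMap holding configuration.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-api
spec:
  selector:
    app: example-api            # routes to the Deployment's pods
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-api-config
data:
  LOG_LEVEL: "info"             # consumed by pods as env vars or a file
```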

Bart: Reading your article, there was another statement that stood out quite a bit that I think will make some of our audience members rethink their lives. You actively avoid RBAC resources such as roles and role bindings. Why is that?

Paul: I think I put those in the "cautiously use" category because we do use them a little bit. For example, we don't use cert-manager. We have our own job that is essentially just a single process, no CRDs or anything like that. We just configure it on the command line, and it fetches certificates for us, then stores them in Secrets. So, it needs some RBAC. We occasionally use RBAC for things like that.

For context, we're a team of four. We use GKE's own integration with Google Cloud to gate access to the cluster, so there's some IAM there. We could make it more fine-grained with RBAC, but we just haven't had a need to yet. Really, we only use RBAC for giving deployments themselves - pods themselves - access to Kubernetes resources. In general, I've found that it's nice to assume that, for the most part, deployments are not interacting with the Kubernetes control plane. You separate the application from data and control as much as possible, for as long as possible; it just makes things nicer to manage. I think there's a point of scale where you need RBAC for users, and you probably want to do more interacting with the Kubernetes runtime. However, the happy path is avoiding that as much as possible.
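A hedged sketch of that narrow use of RBAC: a Role that lets a single in-cluster job write certificates into Secrets, bound to that job's service account (all names are placeholders).

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cert-writer
  namespace: default
rules:
  - apiGroups: [""]             # core API group, where Secrets live
    resources: ["secrets"]
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cert-writer
  namespace: default
subjects:
  - kind: ServiceAccount
    name: cert-fetcher          # the cert job's service account
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cert-writer
```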

Bart: Are there any other things that you categorically do not use in your clusters?

Paul: We've avoided operators in general - so, CRDs and operators. There was a point where we went all-in on that. We were writing our own CRDs and our own operator; our whole system itself was a Kubernetes operator. We found that there were enough foot guns with it, and it wasn't actually buying us that much, because we were very focused on fast start times. We wanted things to happen quickly in our cluster, even at the expense of the one-in-a-thousand cases that fail. If we have a failure case, we can retry it, but we're not optimizing for that case.

What we found was that anything that goes through a CRD goes through Raft consensus at least once, possibly multiple times. That does add up. It doesn't add up to much if you're just starting a service that runs for 72 days. But if you're running a service that lives for 15 minutes and has a human being waiting for it to start, it does create noticeable latency to go through etcd and Raft consensus a number of times. We moved ourselves off of operators; other things that used operators were also slow, had long feedback loops, and were not the right fit for us.

We ended up using Caddy instead of cert-manager, since Caddy can fetch its own certs. We also wrote our own tool for fetching certs with ACME DNS-01, for things we couldn't put Caddy in front of, like non-HTTP services. That sounds hard, but it was a couple hundred lines of Rust. We'd already been working on a library for that for some other things, so we had it available. Piece by piece, we could replace these operators with fairly little code.

Another thing we don't use is Helm. I don't like templating code in general. I grew up with PHP as one of my first languages, and I thought we learned back then that string templating is not a good way to do code generation. Consider the evolution from PHP to React: with JSX, syntactic validity is ensured.
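As a sketch of the JSX-style alternative Paul is gesturing at, you can generate manifests as structured values instead of string templates, so the output is well-formed by construction. This assumes the serde_json crate (names are illustrative); Kubernetes accepts JSON manifests as well as YAML.

```rust
use serde_json::{json, Value};

// Build a ConfigMap as data, not as a string template: a typo here is a
// compile or serialization error, not a silently malformed document.
fn config_map(name: &str, log_level: &str) -> Value {
    json!({
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": { "name": name },
        "data": { "LOG_LEVEL": log_level }
    })
}

fn main() {
    let manifest = config_map("example-api-config", "info");
    println!("{}", serde_json::to_string_pretty(&manifest).unwrap());
}
```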

Bart: Just to take it a little further, we had a previous podcast guest who agreed that Helm's design is fundamentally flawed, similar to the example of PHP. I'd like to expand on the point about CRDs, as several guests have mentioned that CRDs are their favorite Kubernetes feature. Could you walk us through the control loop inside the operator and discuss additional downsides that make CRDs not worth using?

Paul: I think there's a bit of both incidental and essential complexity with CRDs. One thing that bit us was running into size limits on how big the CRD definition can be, because it all has to fit into etcd. Some of these issues are more design decisions than essential complexity. None of it was insurmountable, but it was a series of foot guns where we had to ask ourselves, "What are we gaining from this that's worth the cost of all the foot guns we're dealing with?"

Then there's the essential complexity, which gets back to the slower feedback loops. For a lot of things, you just want immediate feedback. I use the analogy of a thermostat versus an on-off switch on a heater. If you have a heater with an on-off switch, you know immediately if it's broken or working. If you have a system with a thermostat, you might set the temperature and nothing happens, which is expected. However, you don't know if the thermostat is broken, if it's just taking time to kick in, or if the system downstream of it is broken.

There are more moving pieces, but what you get for those moving pieces is great - you set the temperature and never have to think about it again. You don't have to actively turn the switch on and off. I think there's value in that essential complexity. Going back to what I said at the beginning, having a genericized resource on a control loop on Raft consensus is a great idea if that's what you need for a system. Kubernetes executes that brilliantly and is a high-quality piece of software.

However, I think people see that and say, "This is going to be the only solution." It's like having only a hammer. For us, cert-manager was more complex than it needed to be. What we needed was just a for loop with a timeout that would write a certificate and private key to a secret.
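A hedged, dependency-free sketch of that "for loop with a timeout" shape. Both helpers are stubs for illustration; a real version would run an ACME DNS-01 order and write the result into a Kubernetes Secret.

```rust
use std::{thread, time::Duration};

struct Cert {
    pem_chain: String,
    private_key: String,
}

// Stub: in reality, run an ACME DNS-01 order for the domain.
fn fetch_certificate(domain: &str) -> Result<Cert, String> {
    Ok(Cert {
        pem_chain: format!("-----BEGIN CERTIFICATE----- ({domain})"),
        private_key: "-----BEGIN PRIVATE KEY-----".to_string(),
    })
}

// Stub: in reality, write the cert material into a Kubernetes Secret.
fn store_in_secret(name: &str, _cert: &Cert) -> Result<(), String> {
    println!("updated secret {name}");
    Ok(())
}

fn main() {
    loop {
        // One pass; a failed pass just gets retried next time around.
        match fetch_certificate("example.com")
            .and_then(|cert| store_in_secret("example-com-tls", &cert))
        {
            Ok(()) => println!("renewal pass ok"),
            Err(e) => eprintln!("renewal failed, retrying later: {e}"),
        }
        thread::sleep(Duration::from_secs(60 * 60 * 24)); // daily check
    }
}
```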

Bart: We have another guest, Ori, who talked about his opinions on network policies, saying that he didn't feel they were the right abstractions. However, he didn't go so far as to say that he thinks it's a good idea to ban them. I guess there's a time and place for everything, but the goal is to minimize complexity whenever possible. Would you consider that to be a fair summary of how you're thinking about it?

Paul: I think that's right. We're trying to minimize complexity. The way I framed my post is that it actually started as documentation of our internal wisdom on what did and didn't work in Kubernetes for us and why. This was triggered by a new hire joining our team: I wanted to ensure that our knowledge wasn't just arbitrary, such as "we don't use Ingress" or "we don't use that." I wanted to explain why we've made certain decisions, what pitfalls we've encountered, and what we've learned from them.

Initially, I didn't intend to write a prescriptive post for others, saying what they should or shouldn't do. However, as I was writing, I realized that this information would have been very helpful for me a couple of years ago. So, I decided to make it a public blog post instead. I didn't frame it as "these parts of Kubernetes are bad." Everything in Kubernetes was added for a reason, and I don't mean to be dismissive of those reasons. However, some of those reasons are relevant to Fortune 500 companies, while others are not relevant to a four-person startup.

I actually wanted to combat the notion that a four-person startup shouldn't be using Kubernetes because it's not built for their scale. My point was that we can be very productive with Kubernetes if we understand where the happy paths are and where the pitfalls lie.

Bart: At the beginning of our conversation, you said Kubernetes is great for running multiple processes, running them redundantly, and thirdly, describing their relationship in code. What would you suggest to someone who doesn't have to run multiple processes or need redundancy?

Paul: For a lot of startups, especially companies doing a traditional SaaS kind of web application with a database, I would first look at infrastructure providers that offer a layer of abstraction above Kubernetes. My go-to is Google Cloud Run. I think it's a fantastic product that's not talked about much. When people think serverless, they think function as a service and Lambda. However, I think the sweet spot for serverless is not having to think about scaling services, but still being able to write a traditional server. This approach offers advantages, such as portability, allowing you to run the server anywhere. Cloud Run hits this abstraction well, as you provide them a server, and they'll scale it down to one and up as needed, putting it behind a load balancer. If you're only doing a stateless service, this works very well. There are also upstarts like Railway and Render, and companies that will do this on your cluster on Kubernetes, like Flight Control, among others. There's a big ecosystem of companies that want to help you if you're a SaaS trying to serve a traditional vanilla web app with a database. Until you need a bunch of services talking to each other or doing complex things like integrating a Docker registry, you probably don't need to go all the way to Kubernetes from day one.

Bart: We've now made it to the point where we get to hear from you about the update, the addendum to the post. And in addition, now that we know the whole story more or less, I just can't help but ask, do you still recommend Kubernetes to all of your competitors?

Paul: I think both points hold: Kubernetes is an excellent piece of software, and I hope our competitors in the specific space we're in use it. The addendum I referenced was about not using Ingress resources. Instead, we use Caddy in front of a load balancer. I stand by that approach, which we used for several years with great success. I received some pushback on this in comments, so to clarify: we run a Caddy server for every deployment. We configure Caddy to support multiple domains and obtain certificates for them automatically, similar to how you would configure an Ingress. This approach has worked well for us, and we still use Caddy to some extent. However, we realized we wanted to use Google Cloud Armor, which integrates well with Ingress and provides rate limiting and other checks, similar to Cloudflare. So we migrated many of our Caddy routes to Ingress a couple of months ago. I'm still hesitant to recommend Ingress, as it's being replaced by the new Gateway API. It was reassuring to hear that no more work will be done on Ingress, though it is deprecated in favor of the Gateway API. The migration was not too painful, and it has worked for us. Unless you're already familiar with Ingress, I would recommend using NGINX or Caddy initially, and then migrating to Ingress later if needed.

Bart: In terms of our closing questions, the first one I'm excited to ask. You're the founder of Jamsocket, but after reading your bio, I discovered that you created interactive reports on ranked-ballot election results, a browser extension for exploring branching conversations, and interactive essays on the Mandelbrot fractal, how to size bets, and the mathematical nuance of American elections. We're getting close to one, so we can probably talk about that. And last but certainly not least, you wrote code notebooks for forensic video analysis, Lindenmayer fractals, optimizing pen-plot routes, and generating pseudo-3D line graphics. Do you sleep, and what motivates you to keep creating?

Paul: I do sleep. I increasingly care about work-life balance and am able to achieve it. One of the things that's helped is being somewhat prolific with my content, though I didn't think of myself that way until you listed it all. I've been around the internet for a while, consistently pushing things to places where they stick around, so things I did over a decade ago can still be found. I think that accumulation over a decade-plus has helped make it seem like I do a lot.

Bart: What motivates you?

Paul: I'm really interested in tools that help people create, explore, and explain things. One thing I liked about data science is the ability of computers to amplify and satisfy curiosity, especially when working with a gigantic data set and asking questions of it, drilling down to find answers. Many of the projects you mentioned are interactive data essays, which is probably my favorite genre. I find it interesting to have a prescriptive path through data while giving people the ability to explore on their own, make their own conclusions, and toggle between different options.

Bart: What's next for you?

Paul: So, Jamsocket is pretty much my 24/7 focus these days as a startup. We're doing a lot of internal work to keep going and make us stand out, even compared to Kubernetes. We want to leverage our opinionated stack to do things that are harder to achieve with Kubernetes. Our focus has been on improving cold starts and authentication, as well as the benefits of having everything networked together; we've vertically integrated the networking all the way to layer seven and the containers. We've got a lot of exciting developments in this area. The core of what we do is open source as a project called Plane, which is at plane.dev. We're continuing to build things in public that way.

Bart: Very good. And if people want to get in touch with you, what's the best way to do so?

Paul: I'm on Twitter or X as PaulGB. I write a newsletter called Browser Tech Digest, which is available at digest.browsertech.com. We also host events under the Browser Tech umbrella. To briefly explain, I think of Browser Tech as the new surface area of browser APIs, including technologies like WebGPU, WebAssembly, and WebSockets. You can also find me on LinkedIn as PaulGB, and if you want to email me, my address is paul@Jamsocket.

Bart: Fantastic. Paul, thank you very much for joining us today at KubeFM. We look forward to seeing what you're doing next and whether that "The Hater's Guide to Kubernetes" might get another addendum. Thanks a lot.

Paul: Thank you, Bart. It was my pleasure.

Bart: Cheers.