From complexity to abstraction: the future of Kubernetes platforms
In this interview, Deepak Goel, Director of Engineering at Nutanix, discusses:
How automated resource management through tools like Horizontal Pod Autoscaler and Vertical Pod Autoscaler can solve the challenge of dynamic resource allocation.
The importance of configuration management through GitOps and how tools like Kyverno and Flux enable better security and deployment controls.
His vision for the next decade of Kubernetes, focusing on abstracting complexity and making it more accessible, similar to how modern cars hide engine complexity from drivers.
Relevant links
Transcription
Bart: So, first question, who are you? What's your role? And who do you work for?
Deepak: Hi, I'm Deepak. I work at Nutanix, where I head the cloud native engineering team. I mainly focus on Kubernetes, specifically our Nutanix Kubernetes Engine (NKE) and Nutanix Data Services for Kubernetes (NDK).
Bart: What are three emerging Kubernetes tools that you're keeping an eye on?
Deepak: I would say the first one is Kyverno. I really like that tool; it enables Policy as Code. One of the big requirements in Kubernetes is putting guardrails in place, and Kyverno, being Kubernetes-native, makes it easy to express those policies as YAML files and Kubernetes objects. That's pretty promising. The second one is Flux. We know how difficult Kubernetes configuration management can be. One way to handle that is through GitOps, and Flux makes it easy to integrate with the rest of the platform. That's fairly promising. The third one is Metal3. Our Nutanix Kubernetes Engine (NKE) platform is Cluster API-based, and Metal3 extends the infrastructure to bare metal, which is where the industry is heading now. As we transition from traditional virtualization to containerization, it's becoming important to have a solution like Metal3 that lets you run Kubernetes on bare metal. That's pretty exciting.
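As an illustration of the kind of guardrail Deepak describes, a minimal Kyverno ClusterPolicy might require every Deployment to carry a team label. The policy name and label key below are hypothetical, and the `Enforce` action assumes a recent Kyverno release:

```yaml
# Minimal Kyverno ClusterPolicy sketch: require a "team" label on Deployments.
# Policy name and label key are illustrative, not from the interview.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce   # reject non-compliant resources instead of only auditing
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "Deployments must carry a 'team' label."
        pattern:
          metadata:
            labels:
              team: "?*"             # any non-empty value
```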
Bart: These are questions based on comments made by previous guests on the podcast. The first one is about automation and resource management. Alexandre expressed that having an automated mechanism is better than enforcing processes. What automation tools or approaches do you recommend for managing Kubernetes resources?
Deepak: One of the big challenges in Kubernetes is sizing. When you deploy your application, the first question that comes to mind is what resources to allocate to it - how much CPU and memory. However, there is no easy way to figure this out because, although these values are defined statically, the actual resource needs are very dynamic. As your application starts handling more load, it might need more CPU and memory. In this case, I would recommend using Kubernetes features like the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA).
The HPA is the Horizontal Pod Autoscaler, which scales your application by creating more pods when it needs to handle more load. The VPA, on the other hand, is the Vertical Pod Autoscaler, which adjusts your pods' resource requests based on the demand of your application. This automates resource management in your Kubernetes cluster and takes away the burden of statically defining these resources.
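To make the HPA part concrete, here is a minimal sketch of an `autoscaling/v2` HorizontalPodAutoscaler that scales on average CPU utilization; the Deployment name, replica bounds, and threshold are illustrative. (The VPA uses a separate `VerticalPodAutoscaler` resource provided by the autoscaler add-on rather than a built-in API.)

```yaml
# HorizontalPodAutoscaler sketch; names and numbers are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU crosses ~70%
```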
Bart: GitOps and platform engineering. One of our guests, Hans, argues that GitOps is an excellent building block for building platforms with a great developer experience. He mentioned the ability to merge, review, and discuss code changes in PRs, and the additional benefit of not having to grant direct cluster permissions. Should all platforms use GitOps? What's your experience?
Deepak: One of the bigger challenges, or missing areas, in Kubernetes is configuration management. Although you have Kubernetes audit logs, they are reactive rather than proactive. When you make a change, especially in your production cluster, you need to ensure that the change does not lead to downtime, as a seemingly unimportant change might bring your application down. This is where GitOps, and Git-based workflows in particular, come in handy: you can review changes, deploy in an automated manner, and keep a history of changes, effectively putting your configuration under version control. This brings the security, guardrails, and automation that help run a Kubernetes cluster in a production environment.
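As a rough sketch of this flow with Flux, the cluster is pointed at a Git repository and told to reconcile a path from it, so every change arrives through a reviewed commit. This assumes Flux v2; the repository URL, branch, and path are placeholders:

```yaml
# Flux source and reconciliation sketch; URL, branch, and path are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform-config
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/platform-config   # hypothetical config repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./apps/production
  prune: true        # remove cluster objects that were deleted from Git
```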
Bart: Regarding multi-tenancy, our guest Artem shared that once you slice your Kubernetes clusters into a multi-tenant environment, you should consider multi-tenancy for every tool you install, such as logging, monitoring, and scaling. What's your advice on building Kubernetes platforms that are shared in an organization?
Deepak: I want to take a step back on multi-tenancy before we move on. There are two types of multi-tenancy: hard multi-tenancy and soft multi-tenancy. Hard multi-tenancy is where you deploy different Kubernetes clusters for different groups because they need a dedicated Kubernetes cluster and cannot share. On the other hand, if you have a team or multiple teams that can share a Kubernetes cluster, then soft multi-tenancy is in play.
I agree that soft multi-tenancy is necessary in certain cases, such as observability and logging stacks, which need to be deployed in a multi-tenant manner. Either you have applications that handle multi-tenancy at the application level, or you use namespaces, which are the multi-tenancy construct in Kubernetes: you can deploy a copy of your application in each namespace, taking advantage of namespace isolation. However, before you go deeper into multi-tenancy, you need to ask yourself whether you are opting for hard multi-tenancy - whether your teams need dedicated Kubernetes clusters - or whether they can share the same underlying Kubernetes cluster and deploy applications in a soft multi-tenant manner.
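For the soft multi-tenancy case, namespaces are typically paired with per-tenant guardrails. A minimal sketch, with a hypothetical team name and illustrative limits, combines a Namespace with a ResourceQuota:

```yaml
# Soft multi-tenancy sketch: one namespace per team plus a quota; values are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
```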
Bart: Kubernetes turned 10 years old this year. What should we expect in the next 10 years?
Deepak: The first 10 years were marked by educating the industry that containers are the de facto way of deploying modern applications. There is now general acceptance of this after a decade. The next decade will be led by more automation: the CNCF survey has shown year after year that Kubernetes is still hard to operate and that there is a skills gap. It feels like we haven't yet reached the right level of abstraction, interfaces, and automation to make it easy to adopt. In the next decade, I believe we will build the right interfaces and automation in Kubernetes, making it an internal implementation detail rather than something as exposed as it is today.
Bart: What's next for you?
Deepak: I am big on building something that I compare to a self-driving car. Just as the car industry has moved from exposing the entire engine to simply asking for your destination, I want Kubernetes to follow a similar path. Today we expose Kubernetes constructs that make it difficult for everyone to understand what they are. I think we need to build abstractions - the right steering wheel, brakes, and accelerator - so that the driver of Kubernetes doesn't need to understand its internals but can still use it for their own benefit. Then we can reach the automation level where it's self-driving, self-operating, and self-healing. That's where I focus more - on making Kubernetes easy to consume by abstracting it away, so we stop talking about Kubernetes.
Bart: How can people get in touch with you?
Deepak: I'm on LinkedIn; anybody can reach out to me there - just search for Deepak Goel, D2iQ. That's the easiest way to find me. I'm most active there.