Advancing multi-tenancy, security, and DX in Kubernetes

Guest:

  • Will Stewart

Learn from William Stewart, Co-Founder & CEO at Northflank, as he delves into the intricacies of modern deployment strategies and the future of application delivery.

In this interview, Will will discuss:

  • The importance of multi-tenancy in Kubernetes for the secure and cost-efficient deployment of untrusted code.

  • The challenge of creating a seamless developer experience by integrating tools like Kata Containers, Istio, and Cilium, and ensuring consistent deployment workflows for both platform and application engineers.

  • The dominance of Kubernetes as the default deployment platform, the trend towards managed services, and the resurgence of bare metal usage for cost optimization.

Transcription

Bart: Who are you? What's your role? Who do you work for?

Will: Hi there, my name is Will Stewart. I'm the CEO and co-founder of Northflank, a self-service developer platform.

Bart: What are three emerging Kubernetes tools that you are keeping an eye on?

Will: We use Kata Containers quite heavily for nested virtualization and secure multi-tenancy, together with Cloud Hypervisor; we're big fans of that project. It's fantastic. We're also big users of Cilium for our network security. We love Cilium's egress gateway support, which allows us to provide egress gateways to our customers. And we use Istio. Istio has been around for a long time, but they're innovating with the ambient service mesh, which lets Northflank solve the problem of the service mesh tax: a lot of our cluster resources are dedicated to Istio sidecars, and we'd love to use a sidecarless service mesh to optimize cost.
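For readers unfamiliar with how Kata Containers slots into a cluster, here is a minimal sketch. The names (`kata`, `untrusted-workload`) and image are illustrative, and the handler name must match whatever runtime is configured in containerd or CRI-O on your nodes:

```yaml
# Register a RuntimeClass that maps to a Kata runtime handler.
# The handler name ("kata" here) must match the runtime configured
# in containerd/CRI-O on the nodes.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# A pod opting into VM-level isolation: its containers run inside
# a lightweight VM rather than sharing the host kernel.
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: kata
  containers:
    - name: app
      image: nginx:1.27
```

This is the standard Kubernetes mechanism for per-pod runtimes, and it is how untrusted code can share nodes with other workloads while keeping a hardware-virtualized boundary.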

Bart: One of our guests, Artem, shared that once you slice your Kubernetes clusters into a multi-tenant environment, you should consider multi-tenancy for every tool you install, such as logging, monitoring, scaling, etc. What's your advice on building Kubernetes platforms that are shared in the organization?

Will: Multi-tenancy is the most important thing for a platform like Northflank. We have a multi-tenant model where people can self-serve and deploy untrusted code to the Northflank platform for our sandbox environments, and we deliver that through Kata Containers, Istio, and Cilium. Ultimately, these are things every team is going to have to do. It's a very complex task, but you have to do it: one for security and two for cost optimization. Multi-tenancy allows you to take a larger cluster and divide it into smaller pieces, almost like you're dividing a pizza. That means you get economies of scale and save money, but of course, it comes with the downside of larger operational overhead. Some of the tools I just mentioned allow you to deliver workloads securely inside those clusters, or you can look at platforms like Northflank, which out of the box will allow you to deliver secure multi-tenancy inside your Kubernetes clusters.
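The "dividing a pizza" model is typically implemented with a namespace per tenant plus guardrails on resources and traffic. A minimal sketch, where `tenant-a` and the quota values are hypothetical:

```yaml
# Cap one tenant's slice of the shared cluster.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    pods: "50"
---
# Default-deny ingress so tenant-a's pods are unreachable from
# other tenants' namespaces unless explicitly allowed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

Note that a NetworkPolicy only takes effect when the cluster's CNI enforces it, which is one reason a policy-capable CNI like Cilium matters in multi-tenant setups.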

Bart: Artem also shared that between a single cluster with multiple environments and dedicated clusters per environment, the latter is easier to manage for a small team. When you share a cluster with multiple teams, should you share a single cluster or offer a dedicated cluster per tenant? What does your choice depend on?

Will: At Northflank, we don't care. If teams want to use multiple clusters between multiple teams, they can do that, or they can use one cluster per team, or a mixed model with multiple teams on a single cluster. To us, it doesn't matter. It's all based on the end user. Do they want cost or security? Do they have compliance reasons? Does one team need to deploy on AWS and another team on Azure? Does one team need to be in Europe and another in Singapore? So it's cost, regional requirements, and cloud provider requirements generally. And it's mostly driven by security: they don't want team A and team B using the same clusters. But some of the tools I mentioned earlier can really reduce that sort of overhead. We have 20,000 engineers deploying on a small number of Kubernetes clusters in our multi-tenant offering. So that's 20,000 engineers on a couple of clusters.

Bart: Another guest, Ori, shared that rushing into solutions without understanding the root cause can lead to fixing symptoms instead of the actual problem. He mentioned the case of network policies and how sometimes the root cause of a problem is a people problem. And the solution lies in addressing that. What is your experience with providing tooling and platforms on Kubernetes to other engineers? What are some of the soft challenges that you faced?

Will: Initially, it's not an engineering challenge. It's a developer experience challenge. How do engineers onboard to the tool? Do they know what it does, how it works, how it integrates? How does it work between an application engineering team and the platform engineering team? Ultimately, the platform engineering team is delivering a product, a self-service product to their own customer. That customer is the application engineer. And the application engineer is delivering a product to the end customer of the business. So ultimately, the experience has to be in tandem. The platform you're building needs to work for your engineers, not your team, but your engineers, because they're the end users. Generally, that experience is different if you're deploying databases or microservices, but it all has to be consistent. Teams want to use GitOps, a UI, Datadog, and Grafana. It's about taking all of these requirements for different software, tooling, and vendors and integrating them into a consistent story. So the teams have a way to consistently deploy workloads with confidence.

Bart: Kubernetes is turning 10 years old this year. What do you expect from the next 10?

Will: I think Kubernetes has already won the argument. Mesos, Rancher's Cattle, Nomad: all of those platforms competed with it, and obviously they're being used less. Today, Kubernetes is the default way to deploy, and managed Kubernetes (GKE, AKS, EKS) has made that even more the case. I think we'll see a lot more workloads moving from on-prem to managed Kubernetes, but we'll also see a lot of workloads moving from public cloud back onto bare metal. There will be more ways to operate and securely and consistently deliver experiences on bare metal, and I think bare metal Kubernetes will see a resurgence, mainly for cost optimization. Generally, Kubernetes will just be there: GA, ready for business, and simple to use. Today, the developer experience is mainly focused on cluster lifecycle management; in the next 10 years, most people will be focusing on application delivery, which is the most important thing. That's what we're doing at Northflank.

Bart: What's next for you?

Will: What's next for Northflank is the ability to deliver a self-deployable control plane. A lot of the businesses at KubeCon today can't phone home, so a single-tenant deployment of our multi-tenant control plane is obviously a concern. We're looking at working with more enterprises and larger organizations to fit their deployment models, whether that's on-prem, hybrid multi-cloud, or deploying the control plane on their own infrastructure. We're mainly in the platform engineering space as a self-service developer platform. As we move up the stack, we're working with more platform engineering teams on the build-versus-buy argument. We don't necessarily see it as that binary: people are leveraging Northflank as an API to deliver their own platforms and provide that self-service developer experience.

Bart: How can people get in touch with you?

Will: We're available at Northflank.com. I'm NorthflankWill on Twitter/X, and my email is [email protected]. Very, very simple.
