Kubernetes Security: from Supply Chain to Pod-to-Pod

Mar 31, 2026

Guest:

  • Rodrigo Bersa

Running Kubernetes in production means dealing with security risks at every layer — from the images you build to the nodes you patch to the pods communicating inside your cluster.

Rodrigo Bersa, Container Specialist Solutions Architect at AWS, breaks down what actually matters when securing Kubernetes workloads — and where most teams get it wrong.

In this interview:

  • Security fundamentals: supply chain security with zero-CVE images, continuous CVE scanning with Amazon Inspector, and keeping secrets out of your cluster with external providers

  • Node vs pod security: why security groups protect nodes but Network Policies and Service Mesh (with mTLS) are needed for east-west pod traffic

  • The next 10 years: Kubernetes becoming increasingly autonomous — self-healing, self-patching, and AI-assisted security remediation

Subscribe to KubeFM Weekly

Get the latest Kubernetes videos delivered to your inbox every week.

Transcription

Bart Farrell: So first things first, who are you? What's your role and where do you work?

Rodrigo Bersa: Hi everyone. My name is Rodrigo Bersa. I'm a Container Specialist Solutions Architect at AWS, but I've been working with Kubernetes since the early days; 2016 was my first experience with it. Then I moved to OpenShift, and then came back to AWS to work with all things EKS and Kubernetes.

Bart Farrell: What are three emerging Kubernetes tools that you're keeping an eye on?

Rodrigo Bersa: So basically, for my kind of work, there are three things I always keep an eye on. The first is Karpenter. Karpenter, for those who don't know, is the AWS node autoscaler. It's really fast and provides a very good experience for scaling nodes in and out. It watches for capacity that's over-provisioned or under-provisioned, helps you consolidate your nodes, and also spins up new nodes very fast. Karpenter is the top one for customers looking for help. The second one would be KRO, the Kubernetes Resource Orchestrator. We all know platform engineering is a thing, and we need a way to manage resources, not just inside Kubernetes, but also the infrastructure resources that our workloads on Kubernetes need in order to work. Maybe you need a database, maybe you need storage outside your cluster. KRO helps you create what we call ResourceGraphDefinitions and provision everything together, Kubernetes and infrastructure-related resources. And you can use controllers like ACK, for example, to provision AWS-based infrastructure resources. The third one is upcoming: Cedar, an extension of role-based access control, or RBAC, for Kubernetes. It allows you to have not just allow permissions, but also deny permissions and conditional permissions. With RBAC, if you don't have a role, you don't have permission to do anything, right? But let's say I'm a cluster admin, and I don't want anyone to change or watch Secrets. We can say, hey, Rodrigo is a cluster admin, but he cannot see any Secrets he doesn't own. That's the power Cedar brings to the table. We are aiming to bring this to Kubernetes 1.36 or 1.37, and it will probably be available on EKS right after.
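The "cluster admin except Secrets" scenario Rodrigo describes could look roughly like this as a Cedar policy. This is an illustrative sketch only; the entity types and action names are assumptions modeled on the experimental cedar-access-control-for-k8s project, and the shipped syntax may differ:

```cedar
// Hypothetical sketch: deny read access to Secrets for every principal,
// even ones that other (permit) policies grant cluster-admin rights to.
forbid (
  principal,
  action in [k8s::Action::"get", k8s::Action::"list", k8s::Action::"watch"],
  resource is k8s::Resource
) when {
  resource.resource == "secrets"
};
```

What makes this pattern possible is that in Cedar a forbid policy always overrides any permit policy, which is exactly the deny semantics that allow-only RBAC lacks.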

Bart Farrell: All right. For developers that are new to Kubernetes, what are the top three security concerns they should be thinking about when running production workloads?

Rodrigo Bersa: Developer-wise, they should always be concerned about securing their supply chain. Make sure the images you build are secure by default, whether you're building images from scratch or using third-party tooling like Chainguard, for example, to get zero-CVE images. That's the first one: keeping your software supply chain secure. Then watch for new CVEs and new patches. Just because you built an image that's secure today doesn't mean there won't be a CVE finding on that image tomorrow, so you need to keep scanning it continuously. On AWS, we have Amazon Inspector, which has a feature called advanced scanning: every 24 hours it runs a scan and brings you results like SBOMs, and says, hey, these images didn't have a CVE in the past, but now you need to be aware of these findings and fix them, so the next version of the image is more secure. The third one is access control. Sixty to seventy percent of all exploits and security threats are based on exposed credentials. Every time you build a production-level workload on Kubernetes, you probably need secrets to access your database or your API endpoints, so you need to make sure those secrets are safe by default. Kubernetes Secrets may not be secure enough because they're just base64-encoded. And if you don't have a platform engineering practice, or a cluster that's fully set up to provide very granular access to everyone, you should store your secrets in a third-party or external secrets provider, like Secrets Manager, where they are fully encrypted. You don't even need to have a Secret inside your Kubernetes cluster; you can mount them as volumes. So unless someone has shell access inside your images, they don't have access to the secrets.
And this ties back to the first point, keeping your image secure: when you build from scratch, make sure there's no shell access in your image, which is the best practice.
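Rodrigo's point that Kubernetes Secrets are "just base64 encoded" is easy to demonstrate: anyone who can read the Secret object can recover the plaintext with a single decode, no key required. A minimal sketch (the secret value here is made up):

```python
import base64

# A value as it would appear in a Secret manifest under `data:`
encoded = "cGFzc3dvcmQxMjM="

# base64 is an encoding, not encryption: decoding needs no key at all
plaintext = base64.b64decode(encoded).decode("utf-8")
print(plaintext)  # password123
```

This is why RBAC access to Secrets, encryption at rest for etcd, or an external provider matters: base64 offers obfuscation, not confidentiality.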

Bart Farrell: What's the operational burden of keeping Kubernetes nodes patched and secure over time?

Rodrigo Bersa: Managing a Kubernetes platform is not easy; we all know that. Even more so at scale, with hundreds or thousands of clusters, and even more in multi-tenant and SaaS environments where you have different requirements and different kinds of tenants and customers. Keeping your underlying nodes patched is a challenge, because if you have hundreds of clusters, you have thousands of nodes, or even tens of thousands. To keep those updated, you need runtime security and runtime protection, you need to scan for malware, and you need to replace your AMIs all the time. And if you use custom images, you have to rebuild new images and push them back. So it's a big operational burden in a large-scale environment. Today we have tooling like Bottlerocket, an open source container-oriented operating system that's secure by default, and things like EKS Auto Mode. EKS Auto Mode patches your nodes automatically every 21 days, because this falls on the AWS side of the shared responsibility model: the responsibility to keep the nodes patched falls under AWS. That's what we're trying to do on the EKS side, remove this operational overhead from customers. The operational burden of keeping nodes patched is quite high, even more so in large-scale environments, and that's what we're trying to do at AWS: make your life easier when managing these Kubernetes clusters.

Bart Farrell: For teams moving from traditional infrastructure to Kubernetes, what security practices can they bring with them versus what needs to change?

Rodrigo Bersa: When you're moving from traditional workloads to Kubernetes, some things won't change on the security side. For example, authentication and authorization: make sure you're granting least privilege, doing perimeter security and infrastructure security, and that includes keeping your nodes patched, having network access control, and making sure only the right people have access to do the right things. That's still the same. But when you move to a containerized workload, or a container-based platform, you need to think about some other things in advance. You need to keep your images secure; we talked about this. Supply chain security is something that needs to be in place when you have a container-based platform. The way you do CI/CD and deployments is also different, as is the lifecycle: managing your cluster, your control plane, and all the activities inside the cluster. And when you go inside the Kubernetes cluster, a lot of things change. Even authentication, authorization, and networking change; they're complementary to standard, traditional infrastructure security. Think about it: with containerized workloads, you'll have hundreds of workloads running on the same instance. You need to make sure they have the right segmentation, so one application doesn't have access to another, and isn't consuming so many resources that it exhausts the other application or causes throttling, things like that. You need to understand this kind of multi-tenant environment, because Kubernetes is made to run multi-tenant platforms, not just single-tenant ones.
So understand your multi-tenant environment inside Kubernetes, understand what changes for authentication and authorization, for network protection, and for your supply chain security. Those are the things that change, and the others are the things that can stay the same.
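One concrete way to address the "one application exhausting another" concern in a shared cluster is a per-namespace ResourceQuota. A minimal sketch; the namespace name and the numbers are made up for illustration:

```yaml
# Hypothetical tenant namespace: cap total CPU, memory, and pod count
# so one tenant cannot starve workloads co-located on the same nodes.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "50"
```

Pairing this with a LimitRange (default requests/limits per container) ensures pods that omit resource settings still count sensibly against the quota.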

Bart Farrell: Can you explain the difference between securing traffic between nodes versus securing traffic between pods, and why that matters?

Rodrigo Bersa: When you think about securing traffic between nodes versus securing traffic between pods, here's the difference. Nodes are pretty easy: a node is a compute boundary. If you block access from one node to another, access is blocked, and you can open specific doors using traditional tools like security groups or firewall rules. But with pod-to-pod communication, like I mentioned in the previous question, pods can run on the same node. Think about 100 pods running on the same node that should not be able to connect to each other. That's where tooling like Network Policies, which control east-west traffic, becomes important. You don't want a front-end application talking directly to a database application running on the same node, right? You want to make sure the network flow is right. Take a front end, a back end, and a database as an example: you want to block access from the front end to the database. And if they run on the same node, you should be concerned about that, because it's really different from running them on separate nodes. You could separate them and make the database run on a different node, but that's not efficient in terms of performance and cost. So you want to co-locate these applications, have the right segmentation inside your node, use the logical separation Kubernetes gives you with namespaces, and let Network Policies control access.
And if you want to take it one step further, you can use tools like a Service Mesh and have Envoy control authentication and authorization at the application level. That won't just control layer-4, pod-to-pod communication; it will also help you control which specific parts of an application another one can access. Envoy gives you this kind of authorization, and it also allows you to have mTLS, or mutual TLS, which fully encrypts pod-to-pod communication.
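The front-end/back-end/database example maps directly onto a NetworkPolicy. A sketch, assuming the pods carry `app:` labels and the database listens on PostgreSQL's port 5432 (both assumptions, not from the interview):

```yaml
# Only pods labeled app=backend may reach the database pods.
# Front-end pods, even on the same node, are denied: once a policy
# selects the database pods, all ingress not explicitly allowed is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-backend-only
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 5432
```

Note that enforcement requires a CNI that implements NetworkPolicy (for example Calico, Cilium, or the network policy agent on EKS); the API object alone does nothing without one.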

Bart Farrell: Kubernetes turned 10 years old a while back. What should we expect in the next 10 years to come?

Rodrigo Bersa: So, when Kubernetes was born 10 years ago, I started working with it, and I learned what we used to call "Kubernetes the hard way," where you had to do everything manually: spin up nodes, create the master nodes, make sure the worker nodes joined the cluster. Then this started to be facilitated and automated with tools like kubeadm, and now we have cloud providers that give you one-click cluster creation. I think that's the direction for Kubernetes, because, as I mentioned, Kubernetes won the battle of being the standard platform for containers, but it's still challenging to manage, even more so in large-scale environments. So I think the direction for Kubernetes is to become more and more autonomous, especially now with all of this GenAI-powered tooling, agents, and MCP servers. Kubernetes will evolve to remove this operational burden and become more and more autonomous, to the point where you just deploy your applications and Kubernetes does everything for you, even security-wise, showing you, hey, these are the things you should be fixing on the security side, whether you have CVEs, and things like that. So autonomous, I think, will be the next thing.

Bart Farrell: What about you? What's next for you?

Rodrigo Bersa: Oh, for me? Okay, I really need to keep up with all this GenAI tooling that's coming out. I remember a couple of KubeCons ago we started to see these trends: a lot of agent integrations, a lot of Kubernetes management through Slack or other chatbots. And now it's become a thing, and I think I'm running far behind, because this technology is evolving so fast that it's hard to keep up and stay up to date. So staying current with generative AI tooling and this new agentic stuff, that's the next step for me.

Bart Farrell: And if people want to get in touch with you, what's the best way to do it?

Rodrigo Bersa: Find me on LinkedIn. My name is Rodrigo Bersa; it's pretty easy. If you search for Bersa on LinkedIn, I'll probably pop up.
