Kubernetes Tools and Supply Chain Security
Mar 2, 2026
Andrew Martin, co-founder of ControlPlane, shares which tools he's tracking in 2026 and why supply chain security is becoming the critical focus for organizations running Kubernetes workloads.
In this interview:
Performance vs flexibility trade-offs: Andrew responds to Yue Yin about why the kube-scheduler's design balances massive telco deployments with hobbyist needs
Supply chain security evolution: From SBOMs and Cosign signatures to continuous validation of security posture against zero-days
Andrew also discusses how open-source worms are proliferating through npm and why CISOs are prioritizing attestation and validation of supply chain metadata at runtime.
Transcription
Bart Farrell: So who are you, what's your role, and where do you work?
Andrew Martin: Hi, I am Andy, founder and CEO at ControlPlane. I've got a background in securing Kubernetes for regulated industries, tinkering with high-performance systems, and trying to break some of the most secure deployments we could get our hands on.
Bart Farrell: All right, good. Now, Andy, what are three Kubernetes tools that you're keeping an eye on?
Andrew Martin: I'm really interested in nvkind. That is NVIDIA's fork of kind that allows us to do local development against GPUs, especially for building out the reference security architectures for MCP and agent-to-agent, and the integrations we're doing with Flux to support those. We've found ourselves using it quite a lot. Great tool; I appreciate NVIDIA's fork. Then obviously FluxCD itself. We are doing a hell of a lot of development at the moment: we're introducing the ResourceSets API under the Flux Operator, which reduces the amount of YAML you have to maintain. It essentially tames the combinatorial matrix explosion of multiple versions across different environments. And finally, we're really interested in Coroot, a new eBPF observability tool that runs as a single container on each node. It will give you a view not only of individual SRE-focused performance metrics, but they've also integrated a Flux deployment view, so you can see resources, you can see Helm releases, and correlate across a cluster: "Did this release just burn down my infrastructure?"
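For readers unfamiliar with the ResourceSets API Andrew mentions, a minimal sketch of a Flux Operator ResourceSet might look like the following. The application name, namespaces, and inputs are illustrative assumptions, not details from the interview.

```yaml
# Illustrative sketch of a Flux Operator ResourceSet (fluxcd.controlplane.io/v1).
# The names and inputs below are hypothetical placeholders.
apiVersion: fluxcd.controlplane.io/v1
kind: ResourceSet
metadata:
  name: podinfo
  namespace: flux-system
spec:
  # Each input entry stamps out one copy of every templated resource below,
  # collapsing the per-environment YAML matrix into a single definition.
  inputs:
    - env: staging
      replicas: "1"
    - env: production
      replicas: "3"
  resources:
    - apiVersion: v1
      kind: Namespace
      metadata:
        name: podinfo-<< inputs.env >>
```

Instead of maintaining near-identical manifests per environment, the matrix of versions and environments lives in `spec.inputs`, which is the YAML reduction Andrew describes.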
Bart Farrell: I want to get your feedback on something that was mentioned in one of our podcasts. Our podcast guest, Yue Yin, was speaking about Gödel and Katalyst achieving 60% utilization, and Yue shared that, "We think the kube-scheduler could approach higher performance levels with further optimizations. However, there will be inherent trade-offs because the kube-scheduler prioritizes flexibility, extensibility, and portability." What's your perspective on this performance-versus-flexibility trade-off in Kubernetes?
Andrew Martin: I think about what we had before Kubernetes, which was Mesos with pluggable schedulers: you could use Marathon, and you could actually run Kubernetes on top of it as well. That was very confusing, knowledge wasn't portable between jobs, and there was a lot of fine-tuning. Kubernetes strikes an incredibly delicate balance between a telco running tens of thousands of nodes and a hobbyist who just wants to stand up a cluster and get building. When we look at some of the very large training and inference orchestrators, we've got things like Armada from G-Research. They take a different view and perform scheduling with tighter bin packing because they have so many nodes to deal with. You've also got some very interesting approaches from places like OpenAI, which will just grow their individual pods by sticking more and more containers in there and scheduling with some systemd magic. So there's an entire spectrum of things that can be done. From our perspective, the uptime of a system, its availability, is one of the most important things: if it's not up, it doesn't matter whether it's secure. So having that scheduler, and ultimately using it for uniformity across as many workloads as possible, makes our lives very easy. It's also very predictable. And the way that Flux integrates with that scheduler is sympathetic to how dependencies between objects are created. So a little bit of overlay and consistency in the usage of the scheduler, essentially the API call ordering, means it is hugely generalized for a massive range of different workloads.
Bart Farrell: Andy, looking towards 2026, what are your expectations? What topic are you most interested in exploring further in the Kubernetes world?
Andrew Martin: One of the things that we have believed for a long time is that the management of open source as it is ingested into organizations is paramount. The supply chain has become the sharp end of the wedge, and we're now seeing these rolling worms again, reminiscent of the MySpace Samy worm, or going all the way back to five-and-a-quarter-inch disks carrying boot sector viruses. What we're seeing now is this proliferation through npm especially, but across open source generally. So as things are ingested into an organization, we advocate for, and have run very large projects around, signing everything, SBOM generation, and attestation at the point the software is built and admitted into a Kubernetes cluster. We've done the continuous integration; we reuse that metadata and generate more. The verification of signatures, not only of the Cosign artifacts to ensure that what arrives is the same thing that was sent, but of all the compositional attestations relating to the SBOM and the transitive dependency tree, which is not always considered, at admission time into Kubernetes, and then continuously, to validate that security posture against zero-days, is something we've seen ticking up slowly over the past few years. At this point, the proliferation of open-source worms brings that right to the fore of a CISO's mind, and we expect to see that low-level attestation and validation of supply chain security metadata at runtime become a staple of Kubernetes security tooling this year.
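As one concrete, purely illustrative way to enforce the admission-time signature checks Andrew describes, a policy engine such as Kyverno can require a valid Cosign signature before a pod is admitted. Kyverno is not named in the interview, and the registry path and public key below are hypothetical placeholders.

```yaml
# Hypothetical Kyverno policy: reject pods whose images lack a valid
# Cosign signature from the organization's key. Registry glob and key
# material are placeholders, not values from the interview.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-cosign-signature
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-image-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <your Cosign public key>
                      -----END PUBLIC KEY-----
```

The same `verifyImages` mechanism can also check in-toto attestations (such as an SPDX SBOM predicate) rather than only the image signature, which maps to the compositional attestation checks Andrew mentions.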
Bart Farrell: What's next for you, Andy?
Andrew Martin: What's next for me? I am contributing to our advanced AI security training course, a five-day course with AI capture-the-flag exercises, agent-to-agent metadata, and signing of the supply chain for the models themselves: tampering, backdooring, all of the red team activities that we need to enhance the blue team and ultimately run AI safely and securely. Really excited for that; it will be launching in Q1 next year. We also have another two training courses coming off the back of that, one at governance level and one at practitioner level, and these support not only our customers but, more widely, our partners in deploying AI safely and securely.
Bart Farrell: How can people get in touch with you?
Andrew Martin: We are at control-plane.io. I am all over the internet as Sublimino. Do not mistake me for the furniture designer who appears to have Shanghaied my name.