From Helm to AI-Driven Kubernetes Ops

May 8, 2026

Guest:

  • Devin Allen

If you're trying to run Kubernetes in regulated environments, the hard part is rarely just shipping manifests. You also need to think about networking, compliance boundaries, automation safety, and how much context engineers and AI systems can actually trust.

Devin Allen, Cloud Infrastructure Engineer at Game Plan Tech, explains how those tradeoffs show up in real work.

In this interview:

  • How teams use tools like Argo CD, Octopus Deploy, and Helm to improve delivery and visibility

  • Why networking, observability, and platform knowledge still matter even with managed Kubernetes

  • Where AI agents can help in cluster operations, and where guardrails and human judgment still matter most

Subscribe to KubeFM Weekly

Get the latest Kubernetes videos delivered to your inbox every week.

Transcription

Bart Farrell: So first things first, who are you, what's your role, and where do you work?

Devin Allen: My name is Devin Allen. I work for Game Plan Tech. It's a software company that also delivers capabilities for GCP, bringing organizations into GCP, mainly organizations that are looking for IL-2, IL-4, or IL-5 type compliance.

Bart Farrell: What are three emerging Kubernetes tools that you're keeping an eye on?

Devin Allen: Some of the ones I keep an eye on are Argo CD and Octopus Deploy, tools that use DevOps practices to deliver software into an organization in an automated fashion, so you have more visibility into what's happening and why something is failing. I really enjoy that side of things.
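For readers unfamiliar with the GitOps model Devin describes, a minimal Argo CD Application resource looks roughly like this. This is an illustrative sketch only; the repository URL, paths, and names are placeholders, not anything from the interview:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app            # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/example-repo.git  # placeholder repo
    targetRevision: main
    path: deploy/helm          # assumed location of a Helm chart in the repo
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    automated:
      prune: true              # delete resources removed from Git
      selfHeal: true           # revert manual drift in the cluster
```

Because the desired state lives in Git and Argo CD continuously reconciles against it, the UI shows exactly which resources are out of sync and why, which is the visibility into failures that Devin mentions.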

Bart Farrell: Managed Kubernetes gets judged on the things you only notice when they fail: upgrade safety, identity, networking, and cost controls. Which default matters most to you, and which one is still hardest to get right?

Devin Allen: I'd say observability is always a key one, along with networking. If either of those goes wrong, you can have a lot of complications, or maybe have something invasive inside the cluster. And it's always tricky to make sure it's done right.

Bart Farrell: Regarding networking, people often say it's the trickiest or most painful part of Kubernetes. Do you agree? And if not, what is the most painful part?

Devin Allen: I would say I agree. It's very tricky. But for the most part, once you have a model set up, what makes it tricky is where you're deploying your workloads and how you want to organize them and have them communicate with each other. And networking in general is just a tricky subject, because you're dealing with a bunch of numbers and trying to figure out where everything is going. When you have a complication or an error, tracking it down can sometimes feel obscure.
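The "how workloads communicate with each other" problem Devin describes is often expressed as a NetworkPolicy. A minimal sketch, with hypothetical namespace and labels, might allow only one tier of an application to reach another:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api   # hypothetical policy name
  namespace: example            # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: api                  # the policy applies to api pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend     # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

This is also where the debugging obscurity comes from: a denied connection just times out, so tracing a failure back to a selector mismatch in a policy like this takes deliberate effort.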

Bart Farrell: Internal platform teams often say developers should not need to know Kubernetes. Do you agree or do teams build better systems when application engineers still understand some of the platform underneath them?

Devin Allen: There are two sides to this, right? There's the idea that, even with a platform, developers should just be able to push into it and have it run. But I think all the knowledge you have is going to be for your good, and if you understand how something works, you might be able to build something that's even more performant. I don't know that you should have to, but you should always be curious and have a good relationship with knowledge, so that you can build a better platform or a better application. At the same time, a developer shouldn't need to deep dive into things; they should be able to trust the platform they're developing on, push their software, and know that it's going to work.

Bart Farrell: There is growing interest in using AI systems to operate Kubernetes rather than just generate YAML. Which cluster operations would you hand over first? And which ones are too risky to automate?

Devin Allen: I think what AI is really good at is deploying configurations, for networking or things like that. I used an agent to rewrite my entire home network, and it was interesting to learn from and see how it operated. What it's not really good at is really complicated tasks that require knowing all the context of how your architecture works. When you can bring that knowledge and context to the AI, it can perform better. The tricky part is knowing all the different configurations of how a Helm chart needs to be deployed, or how to create that Helm chart for the specific platform it's being deployed into. So AI is really performant and can do really well, but you still have to safeguard it, look at what it's changing, and know how it's going to affect your environment.
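One common way to implement the safeguarding Devin describes is to scope an AI agent's cluster access with standard Kubernetes RBAC, so the agent can observe the cluster but not mutate it. This is a sketch under that assumption; the role, service account, and namespace names are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ai-agent-readonly       # hypothetical role for an AI agent
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments", "events"]
    verbs: ["get", "list", "watch"]   # observe only, no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ai-agent-readonly
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ai-agent-readonly
subjects:
  - kind: ServiceAccount
    name: ai-agent              # placeholder service account the agent runs as
    namespace: agents           # placeholder namespace
```

Under a binding like this, the agent can diagnose and propose changes, but any actual mutation still has to go through a human or a reviewed pipeline.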

Bart Farrell: Some of the folks you work with are in highly regulated industries, where they're subject to lots of security compliance protocols that perhaps aren't as strictly enforced in other areas. For people who might think open source isn't cut out for this, or that open source technologies can't be trusted, what conversations do you have with the stakeholders you're working with to ensure that, yes, these technologies are in fact safe, and yes, they can be used responsibly?

Devin Allen: Open source is definitely a perfect model for secure environments, because you have more visibility into the code, so you know what's happening inside. There have been instances where something malicious has gotten in, but those have been resolved pretty quickly, and a lot of that was because of the open source nature of it. I forget the name of the actual application that had a problem, but it was discovered by one of the maintainers simply from a few milliseconds of difference in the build time. By having that many eyes on something, you're able to discover and see what's going on inside the code. I think open source is a perfect platform for secure environments and should be the standard for anything that's going to be put into secure areas.

Bart Farrell: Two things I hadn't heard about until today, Kubernetes and an A10 Warthog. Tell me about your experience there.

Devin Allen: I worked as a civilian on the A-10 OFP team, and my team was in charge of building automation software for real hardware-in-the-loop safety-of-flight testing of the A-10. That testing took 30-plus working days to complete, with two engineers clicking buttons and following a manual of steps to test these procedures and make sure the OFP was safe for the test pilot to go fly. As we started developing technology to automate some of this testing, there was a lot of resistance; some people even wondered why we were wasting time on it. We just had this idea that we could take it from 30 working days down to a week. On our first implementation, we were able to get it done within 14 days, and that was with problems and other things along the way. Eventually we got it down even further; I think they're now completing these safety flight tests in about six days. The really interesting part was that it's always the pilots who have to okay that they're good to fly on the new OFP, and the pilots said they never felt safer, because we had shown them everything we were doing. That's what led me into Kubernetes and Docker containers. We built everything on Python, and it ran on a single server inside the SIL, so if anything went wrong, the usual IT fix was to re-image the computer, and we would lose all of our libraries and dependencies, and our program wouldn't work. So we started looking into how we could containerize and run it that way, so it would be easier and faster for us to get back up and running again.
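The recovery property Devin is after can be sketched as an ordinary Deployment: once the Python tooling is in a container image, Kubernetes restarts it automatically instead of anyone re-imaging a server. The names, image, and health endpoint below are placeholders, not details from the A-10 project:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-automation         # hypothetical name for the tooling
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-automation
  template:
    metadata:
      labels:
        app: test-automation
    spec:
      containers:
        - name: runner
          image: registry.example.com/test-automation:1.0  # placeholder image
          livenessProbe:
            httpGet:
              path: /healthz    # assumed health endpoint
              port: 8000
          # Libraries and dependencies are baked into the image, so a
          # crashed pod is simply restarted with everything it needs,
          # rather than being rebuilt by hand after a re-image.
```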

Bart Farrell: Kubernetes turned 10 years old almost two years ago. Here we are at Google Cloud Next hearing a lot about the future, talking about AI a lot. What do you expect to happen in the next 10 years regarding Kubernetes?

Devin Allen: Kubernetes is going to become the de facto standard. It already is, but with AI agents and Kubernetes in the mix, you're going to start seeing a higher degree of networking capability, and not just actual networking, but the ability to communicate between agents, because the context and the data will be where they need to be. A tricky part with AI is having that information readily available to the AI and controlled in a way that's safe and protected. Kubernetes already offers the ability to self-heal in ways that are unprecedented, and securely putting AI into the mix can improve that even further. I think it's going to free people to spend more time on the problems that truly matter. I've heard principles talked about like "DevOps is dead" or "Agile is dead," and maybe there's some truth to it, but I don't think it's the whole story. As AI gets into the mix, there have to be some changes in how we operate. Maybe it's just that we go through iteration processes faster now, but you still have to have that human connection. So instead of a two-week agile cycle, maybe you're just meeting up every day, working on your applications, and then meeting back again to discuss what's happening as human beings, as we figure out how to use these AI agents to the best of our abilities. That will open up time for us to deep dive into even more challenging problems. I think that's what the future is going to be: engineers discovering even better solutions to challenging problems, because AI is able to handle some of the mundane stuff and we're able to get that iteration and feedback loop going faster.
Anytime you improve that feedback loop, you see innovation happen a lot faster.

Bart Farrell: Now, Devin, what's next for you?

Devin Allen: That's a good question. I usually just take things as they come. I tend to be a problem solver, so what's next for me is finding out what the next problem is that needs to be solved, and how I can leverage the tools currently available to me to solve it.
