Bart Farrell: So first things first, who are you, what's your role, and where do you work?
Andrew Block: My name is Andy Block. I'm a distinguished architect at Red Hat. I'm a maintainer of several CNCF projects, Helm, ModelPack, SOPS, and ORAS.
Bart Farrell: Where does code quality break down first when changes span application code, Kubernetes configs, and CI pipelines in the same pull request?
Andrew Block: It's difficult because it's very hard to separate configuration from code from testing, you name it. Each is a different type of content with a different purpose. In most cases, your Helm charts are going to be manifests. I don't want them combined with the actual Helm development code, for example, and vice versa.
Bart Farrell: Fair point. Taking Helm to the next level, since you're very active in that project: why do YAML and Helm changes consistently escape the same scrutiny as application code? And what does that cost teams in production?
Andrew Block: Well, a lot of times it comes down to the reusability of tools. There are a lot more tools out there for general software, while YAML is only a subsection of software and artifacts. You have programming languages like Golang, Java, Python, with a larger community that's able to develop tooling around them. And in many cases, it has to do with enterprise adoption. These programming languages and practices are adopted in enterprise organizations, which are going to be willing to pay for the funding, the software, et cetera, to make sure their development and production systems are safe.
Bart Farrell: Andy, AI writes more of the code going into Kubernetes deployments. What does good governance of that code actually look like?
Andrew Block: Well, first of all, I always say, make sure you have that human in the loop. AI can do a lot of great things. It's moved mountains thus far, but especially in the community, we're already seeing it become a toil on maintainers because of the additional scrutiny they have to apply to every contribution: is it an actual, purposeful contribution? So in many ways it's causing burnout for some of the maintainers because of the burden that has been put on them.
Bart Farrell: And where are engineering teams getting AI code review right today and where are they still flying blind?
Andrew Block: A lot of it comes down to leveraging the practices and patterns that have come out around things like Spec Kit and skills. Teams are able to reuse a lot of best practices and share them, and they're doing great work with that. Where teams are still flying blind is developers, whether older or newer, you name it, who are just going ahead and blindly using and trusting AI without giving it a second opinion, without someone actually exercising the tool's output instead of just assuming the tool is correct.
Bart Farrell: What would it take for you to trust an AI recommendation on a production-bound infrastructure change? What's the bar?
Andrew Block: For me, it comes down to, I'm like a chef. I'm going to taste it. Is it going to work in my environment? I'm going to test it to the maximum. I'm going to give it different variations, different use cases. And if it plays out and actually tests out, okay, it's going to go into my production system. But I'm always going to be that last line of defense.
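That "taste it" bar can be sketched in code: run the AI's suggestion through several variants before trusting it. Everything here is hypothetical; `suggested_retry_delay` stands in for whatever the AI actually generated.

```python
# A minimal sketch of "taste before you trust": exercise an AI-suggested
# function against different variations and use cases before production.
# `suggested_retry_delay` is a hypothetical stand-in for AI-generated code.

def suggested_retry_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """AI-suggested exponential backoff with a cap (illustrative only)."""
    return min(cap, base * (2 ** attempt))

def taste_test(fn) -> list:
    """Feed the suggestion several variants and collect any failures."""
    failures = []
    if fn(0) <= 0:
        failures.append("first attempt must wait a positive amount")
    if fn(3) < fn(1):
        failures.append("delay should not shrink as attempts grow")
    if fn(100) > 30.0:
        failures.append("delay must respect the cap")
    return failures

print(taste_test(suggested_retry_delay))  # → [] (all variants pass)
```

Only if the list comes back empty, i.e. every variant plays out, does the change move toward production, and even then a human stays as the last line of defense.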
Bart Farrell: Kubernetes at this point is almost 12 years old. We celebrated its 10th anniversary almost two years ago, in June. So it's over a decade old and still accelerating. What does the next era look like for teams trying to maintain code quality at that scale?
Andrew Block: Kubernetes has certainly changed a lot in the last 12 years. I mean, 12 years ago, just deploying manifests was a challenge. If you look at the challenge and burden teams have today, it's only going to get easier, because we're going to create abstractions that simplify it. It's going to take time to understand what works well and what doesn't, but very much like Kubernetes itself, we're going to learn the best practices that do work well, and we're going to make our lives easier with tooling.
Bart Farrell: And what are you focused on building or solving next?
Andrew Block: For me, it comes down to how can we use AI effectively? How can we use it productively? And most importantly, how can we use it securely and safely? I have a lot of work in the security space from some of my prior work in the community. I want to take those lessons learned, and the other assets that are part of the community today, and apply them to today's AI and tomorrow's AI.
Bart Farrell: That's great. And just as a bit of a bonus question on security: it seems like security stakeholders have broadened out nowadays. Beyond the DevSecOps team, for anyone thinking about running AI tools in Kubernetes, or using AI in general, what are basic security measures that simply can't be overlooked in 2026?
Andrew Block: Zero trust. Zero trust patterns, zero trust methodologies. Assume your AI can be nefarious. Assume the worst, but hope for the best, and do what you can to protect yourself. Give it the least amount of permissions. Give it only what you are expecting it to use and what you want it to use.
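The least-privilege idea above can be sketched concretely. The snippet below builds a Kubernetes RBAC Role (the standard `rbac.authorization.k8s.io/v1` shape) that lets a hypothetical AI agent only read Deployments in one namespace, rather than granting broad write access; the role and namespace names are illustrative assumptions.

```python
# Sketch of least privilege for an AI agent in Kubernetes: an RBAC Role
# scoped to read-only access on Deployments in a single namespace.
# The names ("ai-agent-readonly", "staging") are hypothetical examples.
import json

def least_privilege_role(namespace: str) -> dict:
    """Build a Role granting only what we expect the agent to use."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": "ai-agent-readonly", "namespace": namespace},
        "rules": [
            {
                "apiGroups": ["apps"],
                "resources": ["deployments"],
                # Deliberately no create/update/delete verbs: assume the
                # agent could be nefarious and scope it to read-only.
                "verbs": ["get", "list", "watch"],
            }
        ],
    }

print(json.dumps(least_privilege_role("staging"), indent=2))
```

Binding this Role to the agent's ServiceAccount via a RoleBinding, instead of handing it cluster-admin, is the "assume the worst, hope for the best" posture in practice.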
Bart Farrell: Where can people follow your work or get in touch with you if they want to have another conversation?
Andrew Block: Reach me on LinkedIn, or on various Slack workspaces like CNCF and Kubernetes. Feel free to reach out; I'm happy to have a conversation.