Kubernetes in practice: multi-tenancy, bare metal, and AI
Dec 5, 2025

Guest:

  • Brock Mowry

In this interview, Brock Mowry, CTO at Tintri, discusses:

  • Multi-tenancy trade-offs: Why dedicated clusters per tenant are often easier to manage than complex RBAC configurations in shared clusters, especially for service providers needing quick, templated deployments

  • Bare metal vs. cloud considerations: How cloud providers handle complexity behind the scenes with dedicated engineering teams, while on-premises deployments require specialized talent but can offer better long-term cost economics

  • Strategic approach to AI workloads: The importance of defining clear outcomes before investing in AI projects to avoid expensive GPU resources sitting idle without delivering targeted business value

Transcription

Bart: Welcome to the show. Could you introduce yourself: who you are, what your role is, and where you work?

Brock: My name is Brock Mowry. I am the CTO of Tintri by DDN.

Bart: Are there any emerging Kubernetes tools or projects you're keeping an eye on?

Brock: One project I'm definitely interested in is KubeVirt. We came out of the virtualization space, and I really like the idea of running virtualization inside of containers. I've been watching that project for quite a while, and I've seen it's been adopted by some big names. I'm curious to see what direction that's going to go and what its future trajectory looks like.
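For readers unfamiliar with KubeVirt, the project Brock mentions extends Kubernetes with a `VirtualMachine` custom resource so VMs are scheduled and managed alongside pods. A minimal sketch of such a manifest (the name, labels, and container-disk image below are illustrative, not from the interview):

```yaml
# Hypothetical KubeVirt VirtualMachine definition: a VM managed
# by Kubernetes like any other workload.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: false          # start/stop via `virtctl start demo-vm`
  template:
    metadata:
      labels:
        kubevirt.io/vm: demo-vm
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:        # VM image shipped as a container image
            image: quay.io/containerdisks/fedora:latest
```

Applying this with `kubectl apply -f` (on a cluster where KubeVirt is installed) creates a VM object that the KubeVirt controllers turn into a running qemu process inside a pod — the "virtualization inside of containers" idea Brock describes.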

Bart: One of our podcast guests, Eric, stated, "Kubernetes is just Linux." How do you interpret this statement, and what are its implications for Kubernetes users?

Brock: Somebody in my past actually said that Kubernetes is more of the data center operating system, which I really agree with. I think that's a great way to look at it. Linux is definitely at its core, with constructs built up from there. I definitely agree that it is just Linux, and you are a better Kubernetes user if you are well-versed in Linux itself.

Bart: Another guest, Artem, shared that between a single cluster with multiple environments and dedicated clusters per environment, the latter is easier to manage for a small team. When sharing a cluster with multiple teams, should you share a single cluster or offer a dedicated cluster per tenant? What does your choice depend on?

Brock: One of the things I've seen when working with Kubernetes clusters is that single-cluster multi-tenancy requires a lot of granular work within RBAC and other areas to prevent exposing tenant A's resources to tenant B. As a service provider needing to provision a product quickly and simply, templatizing the Kubernetes environment with proper configurations and deploying an individual Kubernetes cluster per tenant is the easiest way to manage it. However, this approach may not cover all the detailed requirements you need.
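To make the "granular RBAC work" concrete: in a shared cluster, each tenant typically gets a namespace plus a namespace-scoped Role and RoleBinding so their users can only touch their own resources. A minimal sketch, with hypothetical tenant and group names:

```yaml
# Hypothetical per-tenant isolation in a shared cluster:
# namespace "tenant-a", with access limited to that namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-a-dev
  namespace: tenant-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-dev-binding
  namespace: tenant-a
subjects:
  - kind: Group
    name: tenant-a-developers   # comes from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-a-dev
  apiGroup: rbac.authorization.k8s.io
```

Multiply this by every tenant, add NetworkPolicies, ResourceQuotas, and admission controls, and the per-tenant-cluster alternative Brock describes starts to look attractive: the same template, stamped out once per tenant, with isolation for free.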

Bart: Our guest, Mathias, believes that on-premises deployments require proper education and attention, especially regarding managing on-prem architecture versus cloud architecture. After spending a few months building an on-prem Kubernetes cluster, he shared his advice. What's your experience with bare metal clusters and how does that differ from using Kubernetes in the cloud? What would you have liked to know before starting Kubernetes on bare metal?

Brock: Kubernetes on bare metal can be a complex adventure. I did some time at a hardware-based Kubernetes manufacturer, so I got to see this quite a bit. There's a lot that happens in cloud environments under the covers that you don't see. Cloud providers have fleets of engineers that run these products every day, managing them from start to finish. That's something an enterprise doesn't necessarily have on hand. They need to find specific talent to run these types of platforms on-premises.

Each architecture has its own pros and cons. A quick, easy way to get apps up is definitely in the cloud. But if you're looking at total cost of ownership over a longer application lifecycle, running it on-premises can sometimes be a better alternative.

Bart: Kubernetes turned 10 years old last year. What can we expect in the next 10 years? More job applications requiring 11 years of Kubernetes experience?

Brock: No, in all honesty, this environment is growing and iterating rapidly. I'm looking forward to the things I'm not expecting.

Bart: You work in the data management space. We're here at KubeCon, and we're hearing a lot about AI. For people actually doing this in their day-to-day, what are things they should keep in mind? What are things catching your attention when we're talking about running AI workloads on Kubernetes and making sure you're getting value out of the entire process?

Brock: We see many AI projects that don't have a targeted end value. They're pure experimentation, yet a lot of money gets invested in them. If you're going to engage in this process, make sure you know your desired outcome so you can focus the journey on that outcome, rather than just spending money on GPUs that end up sitting idle.

Bart: And what's next for Brock?

Brock: I am actually going to London for another convention. I'm getting ready to do some international traveling and looking forward to talking tech across the world.

Bart: If people want to get in touch with Brock Mowry, what's the best way to do that?

Brock: I work in the cloud. Tintri has all of its social media channels under the Tintri name, and everything is on Tintri.com. We try to keep the website as up to date as possible.
