Bart Farrell: So first things first, who are you? What's your role and where do you work?
Joel Vasallo: Hey, my name is Joel Vasallo. I'm the Senior Director of Platform Engineering at the Aspen Group. We oversee a lot of large brands like Aspen Dental, Clear Choice, Lovett, WellNow, and MyChapter. I've been working in this space for about 10-plus years now, and I've gone from VMs all the way to Kubernetes and, I guess, now AI. So here we are.
Bart Farrell: Now, what are three emerging Kubernetes tools that you're keeping an eye on?
Joel Vasallo: You know, honestly, I'm going to be a little bit all over the place on this one. I'll start with delivery. I think the folks with Argo are doing some great work; it's become the de facto standard for how to deploy to Kubernetes. On top of that, I've seen this open source project known as Kargo, which creates a way of doing progressive delivery of artifacts, making it a little more visual and abstracting away some of Argo's complexity. From a software delivery perspective, that's great. Another couple of open source CNCF tools: Kagent and also Agent Gateway. Those two have become the growing standard for how to do agent routing and LLM routing, because honestly, that's where a lot of us in platform engineering and cloud infrastructure are now, trying to figure out, hey, how do we route these requests? This isn't "let's spin up an ingress and just route slash foo to bar" anymore, right? So we're definitely talking about how we structure these calls and, more importantly, how we get visibility into them. I equate this a lot to when service mesh came about with Istio: we had a challenge where a bunch of microservice developers at the time weren't adding metrics, logs, and traces, so the platforms started adding them. These agent platforms now also expose basic metrics, so in lieu of a perfectly instrumented agent or application, you at least get some level of visibility, and we as a platform team can at least say, hey, it's not working, right? So I'm trying to think of a good third one for Kubernetes. I named two in the last one, so.
Bart Farrell: That's okay. Two is fine.
Joel Vasallo: You know, I think that's probably a good start nowadays. And obviously, here I am at the Google conference, so I'm definitely learning a lot about Google's platform for delivering to Kubernetes with Vertex AI and their frameworks like ADK, but I'll give a shout-out to Kagent here as well, since it's creating a framework for how to deploy. Again, I love vendor platform tools, stuff from Google, stuff from other vendors and platforms, but if it's open standards and it can target where you're at, it helps you at the end of the day to figure out the right place to deploy. So one of the things I like about Kagent is that, with all the things I mentioned, A2A and ADK and all the Google AI frameworks, it lets you develop but still have a standardized way to deploy. And if you do it right, the tools I mentioned earlier, like Agent Gateway, will also play back into that. So those are my three tools. Again, a little bit all over the place on the spot, but I think that's a good list.
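The stage-promotion idea behind Kargo that Joel describes can be sketched roughly as follows. This is a toy Python illustration of promoting one artifact through environments only after the previous stage has verified it; the stage names and functions are invented for the example and are not Kargo's actual API:

```python
# Illustrative sketch of stage-based progressive delivery: an artifact
# moves dev -> staging -> prod, and each promotion requires the
# previous stage to have verified that exact artifact.

STAGES = ["dev", "staging", "prod"]

def promote(artifact, deployed, verify):
    """Promote `artifact` to the next stage it is eligible for.

    `deployed` maps stage name -> currently deployed artifact;
    `verify(stage, artifact)` reports whether a stage has validated
    the artifact. Returns the stage promoted to, or None if blocked.
    """
    for i, stage in enumerate(STAGES):
        if deployed.get(stage) == artifact:
            continue  # already running here, look further downstream
        # The first stage needs no predecessor; later stages require
        # the previous stage to be running and to have verified it.
        if i == 0 or (deployed.get(STAGES[i - 1]) == artifact
                      and verify(STAGES[i - 1], artifact)):
            deployed[stage] = artifact
            return stage
        return None  # predecessor hasn't verified it yet

deployed = {}
always_ok = lambda stage, artifact: True
promote("myapp:1.2.3", deployed, always_ok)  # lands in dev
promote("myapp:1.2.3", deployed, always_ok)  # promoted to staging
promote("myapp:1.2.3", deployed, always_ok)  # promoted to prod
```

In a real setup the `verify` step would be the gate (tests, health checks, a manual approval); the point is that promotion is explicit and ordered rather than a single deploy to everywhere.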
Bart Farrell: Internal platform teams often say developers should not need to know Kubernetes. Do you agree or do teams build better systems when application engineers still understand some of the platform underneath them?
Joel Vasallo: I'm a little torn on this one. First, I'll reference a quote I heard from Kelsey Hightower many years ago. He was at a conference and said, hey, engineers, who here likes YAML? And no one raised their hand. See, DevOps people? No one cares. It's all abstraction language. So part of me is like, look, yes, you definitely need to know the house that you're building in. You have to have some expertise. You have to know what a pod is. You have to know what a CrashLoopBackOff is, right? But at the end of the day, especially if you look at the direction of the industry and all these tools, should you need to know it, or should you rather have a platform that gives you best practices, gives you recommendations, gives you insight into it? So I'm halfway torn: I think it's wise for platform teams, and even app teams, to know it, but it shouldn't be a requirement, right? If people want to learn it, come look at the platform. Send me pull requests; I love free work, right? Come support the platform that we as a company or an organization run. But to say, oh man, she's such a great .NET developer, but she doesn't know Kubernetes? It always comes with a catch-22. So in my opinion, know the platform, but don't let it be the only thing, especially since these tools are making it extremely easy. Empower the developer, but don't let it be the deciding factor, in my opinion.
Bart Farrell: Managed Kubernetes services keep promising to remove operational burden. At what point does abstraction stop being helpful because your team can no longer explain or debug what happened in production?
Joel Vasallo: I think you have to give enough. Going back to what I said: yes, the platform should be abstracted, but the things developers are familiar with should be accessible. Oftentimes I've seen logs locked away in platforms developers don't have access to. If a developer knows how to set up logs and metrics on their own infrastructure and their own platforms, that same thing should be accessible in production, staging, or whatever development environment. Don't take away tools from developers who know what they're using. So I'd say it's a little bit of both: abstract away the hard stuff they may not understand. If it's a long stack trace and you know that stack trace is basically saying the image is out of date and you have to use the new version, just tell them "image out of date." It makes it a lot easier. Maybe even give them a recommendation on how to fix the error. But when it comes to core logging or things like that, if a developer is also tasked with writing that stuff, make sure that at the end of the day they have control over what they're doing, right?
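The "translate the stack trace, but don't hide the raw logs" idea Joel describes could look something like this minimal sketch. The error patterns, messages, and recommendations here are invented for illustration, not from any real platform:

```python
# Toy error translator: map known failure patterns to a plain-language
# summary plus a suggested fix, and fall back to the raw log so
# nothing is ever hidden from the developer.
import re

KNOWN_ERRORS = [
    (re.compile(r"ImagePullBackOff|manifest .* not found"),
     ("Image out of date or missing",
      "Check the image tag and push a new version")),
    (re.compile(r"CrashLoopBackOff"),
     ("App keeps crashing on startup",
      "Check the container logs for the failing process")),
]

def explain(raw_log: str):
    """Return (summary, recommendation) for a known error; otherwise
    return the raw log untouched with no recommendation."""
    for pattern, (summary, fix) in KNOWN_ERRORS:
        if pattern.search(raw_log):
            return summary, fix
    return raw_log, "No recommendation available"

print(explain("Back-off pulling image: manifest v1.2 not found"))
```

The fallback branch is the important design choice: translation is a convenience layered on top of the real logs, not a replacement for them.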
Bart Farrell: Our podcast guest Molly believes many people initially think they need their own cluster because they are scared of multi-tenancy, but they learned that the operational overhead of maintaining multiple clusters is significant. What is your experience with teams being scared of multi-tenancy versus dealing with operational overhead?
Joel Vasallo: I still remember, it was a few Cloud Nexts ago, and I'm not going to call anyone out, but I remember someone being so proud of managing a thousand clusters, and I was like, a thousand clusters? When I went to speak with them afterwards, I asked, why? And it was, because we don't want that multi-tenancy problem; we can't trust team A and team B. I'd argue that if it's done right, you should be building a cluster with purpose, and it should be done intentionally because of the operational overhead mentioned. If you're scared of it, look, you have the right instinct, because you don't want some random app A compromising app B. That's where I'd say you have to focus on the standards of how things get into the cluster. For example, most of our developers don't have direct admin over the clusters, but they certainly have access to go look, debug, and do their day-to-day. But when it comes to how things land in that environment, there are things that have to be done and checked before we even start talking about the deploy. This could be as simple as: let's run a QA test; let's verify the packages with some sort of SBOM to make sure everything in there is an accounted-for dependency. Maybe it's also coming up with a compromise: hey, developers, you can deploy to your own namespace, but you can't expose anything without this additional step. Maybe it's a checkbox, maybe it's an automation. Again, I'm not trying to put up an iron wall around what a developer can do, but can we come up with an approach where developers and engineers are still productive, and we're also not managing 20,000 clusters? I mean, that's crazy. How do you even manage the IPs for that? So there's a lot of complexity. It's not just "let's give everyone a cluster."
I think it's a fine line between deliberate clusters, knowing when to fragment them, and how much work you want to incur.
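The compromise Joel describes, developers can look, debug, and deploy in their own namespace but can't expose anything without an extra step, maps naturally onto a namespace-scoped Kubernetes RBAC Role. Here is a small Python sketch that builds such a Role manifest; the role name, team name, and exact rule set are illustrative assumptions, not any real organization's policy:

```python
# Sketch of a namespace-scoped developer Role: read access for
# debugging (pods, logs, events) and deploy rights in the team's own
# namespace, but deliberately no services or ingresses, so exposing
# traffic has to go through the platform's additional step.

def developer_role(namespace: str, team: str) -> dict:
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": f"{team}-developer", "namespace": namespace},
        "rules": [
            {   # read-only debugging access
                "apiGroups": [""],
                "resources": ["pods", "pods/log", "events"],
                "verbs": ["get", "list", "watch"],
            },
            {   # deploy workloads into their own namespace
                "apiGroups": ["apps"],
                "resources": ["deployments"],
                "verbs": ["get", "list", "create", "update", "patch"],
            },
            # Intentionally absent: "services", "ingresses" — exposure
            # is gated behind the platform's extra checkbox/automation.
        ],
    }

role = developer_role("team-a", "team-a")
```

Because a Role (unlike a ClusterRole) is bound to a single namespace, team A's permissions can't touch team B's workloads, which is the multi-tenancy trust boundary in question.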
Bart Farrell: Kubernetes turned 10 years old about two years ago. What can we expect in the next 10 years?
Joel Vasallo: Ooh, I mean, I think as a platform, Kubernetes solved one of the hardest challenges. Let's rewind the clock back 10 years. We were all probably making VMs, maybe still by hand, where we didn't have things like Terraform. We didn't have things like GCP and the automation around it. We certainly didn't have the explosion of AI that we have nowadays. Kubernetes solved running applications at scale well, and it was opinionated enough that it gave structured guidelines to teams on how to structure their applications, right? No more "one VM is broken but the other five are working"; you had to be very deliberate with your automation to ensure that what you deployed was consistent across the board. Looking past that, running apps nowadays is, fortunately, not the hardest thing in the world. We have managed offerings, like I said, from the cloud providers. I think the biggest thing now is going to be routing and service accessibility. How do I scale past just a cluster? How do I scale to the right version of my app, more so than just v1 and v2? We're seeing model inference workloads, and Kubernetes has an initiative right now, the Inference Gateway, I believe it is, that's definitely showing how I can have my apps inside of Kubernetes but routed differently. We're not talking about a canary or an A/B test; we're talking about deliberate versions that are structured in a way where this one is a very large model that can do more processing, and this one is something quick that can give you a faster response. So I think Kubernetes has to evolve into that, and it's doing a great job.
I mean, I would recommend anyone who's debating "is Kubernetes going to scale for AI?" to go to some of the CNCF and Linux Foundation events. It's been a hot topic for the past two, arguably three, years. And I think we as end users need to evolve Kubernetes into that state. Because at the end of the day, agents and AI apps, if you do it right, are just containers. Maybe it's a zip file in a container or whatever, but if you do it right, it's applications that you have to deploy. The challenge is you just have a lot of them, or you have to make them extremely accessible. So two things, I guess, in closing: one, evolving Kubernetes for AI, which is already happening right now; and two, the accessibility of getting things into the cluster faster is going to be the next thing. How do we get past just a service? How do we start talking about AI apps? How do we start talking about experimentation a little more natively in the platform? That's where I think Kubernetes is going over the next, I'd say, five years, if not 10. We'll see. AI is going to drive a lot of change here in the next few years.
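The routing idea Joel attributes to efforts like the Inference Gateway, sending a request to a deliberately chosen model variant rather than splitting traffic by canary percentage, can be illustrated with a toy Python router. The backend names, their properties, and the heuristic are all invented for this sketch:

```python
# Toy model-aware router: instead of "90% to v1, 10% to v2", pick a
# backend per request based on what the request actually needs.

BACKENDS = {
    "small-fast": {"max_tokens": 512, "relative_cost": 1},
    "large-accurate": {"max_tokens": 8192, "relative_cost": 10},
}

def route(prompt: str, needs_reasoning: bool) -> str:
    """Send short, simple prompts to the fast model; long or
    reasoning-heavy prompts to the large, more capable model."""
    if needs_reasoning or len(prompt.split()) > 256:
        return "large-accurate"
    return "small-fast"

print(route("summarize this sentence", needs_reasoning=False))  # small-fast
print(route("prove this theorem step by step", needs_reasoning=True))  # large-accurate
```

A real gateway would make this decision from request metadata and expose metrics per backend, which is exactly the visibility point Joel raises earlier about Agent Gateway.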
Bart Farrell: What's next for you, Joel?
Joel Vasallo: Well, look, at the Aspen Group, we're trying to figure out how to get AI into medical, right? It's not easy. We certainly don't want to give the wrong diagnosis, so there are actually some creative, less harmful ways of doing it, and there's a lot of experimentation. We have over 1,500, almost 2,000 offices if you include all the brands, from pets all the way to humans, from Botox all the way to dentures, right? We have a wide variety of data. Personally, I'm going to be diving back into the GDE program, the Google Developer Experts program. I've been a member for about two years. I'm working on getting more public speaking events and getting out in the community more, just sharing some of the expertise. I'm pretty candid; you can read my blog articles. Sorry if you don't like the opinions. But you know what? I love conversation. I love being wrong. I'm not going to be perfect. But that's how we as a team, as a technology org, as an open source community learn and grow from one another: through conversation and honestly just sharing what we know.
Bart Farrell: So if people want to get in touch, you mentioned your blog, but what's the best way to do that?
Joel Vasallo: My blog, I think, has my LinkedIn on there. You can find me on X; it's just justjoelv. You can find me on LinkedIn, Joel Vasallo. If you Google me, I'm sure you'll hopefully find me; that's a good test of how good my socials are. But I'd say that's probably the best way: check out my blog, it has my contact info there. And if you have any questions, I'd love to chat. If you're ever in Chicago, I also run the local developer community there. Come on out; we'd love to host you if you have anything you want to share and pitch. We're an open community.