Bart Farrell: So who are you, what's your role, and where do you work?
Shani Adadi Kazaz: I'm Shani. I work at AWS as an Application Modernization Specialist, focusing on Kubernetes and serverless.
Bart Farrell: So what three emerging Kubernetes tools are you keeping an eye on?
Shani Adadi Kazaz: I think the first is kro, which is a very cool project that AWS started and that the other big cloud providers have since joined. The second is Argo CD, for sure. It's the basis: everyone needs to use something like Argo CD. And the third is ACK, which lets you control everything from Kubernetes, and I think that's a very good way to manage your resources.
Bart Farrell: What challenges do organizations face when managing Kubernetes deployments across multiple clusters?
Shani Adadi Kazaz: I think the first one is probably maintenance. It's a general challenge in Kubernetes, right? Look at this event, it's so huge. Kubernetes moves super fast, so maintenance is an ongoing challenge for sure. The second one, I think, is infrastructure management. You have a whole ecosystem of open-source tools, right? And along with the strength and the advantages that open source gives you, you also need to manage everything by yourself. The third one is the tools. You have so many tools that you need to master, not only Kubernetes but the whole ecosystem, observability and everything. So, different tools for different aspects of your application. And the last one is inconsistency. Even though you can use the same template for the same workload, if you don't put the right tools in place, you might see inconsistency across your clusters. You launch everything from a template, but one team changes it a bit, and another team changes something else, just tiny bits. And if you have hundreds of workloads, you now need to manage all of these slightly different workloads. It's a real challenge.
Bart Farrell: How has GitOps changed the way teams deploy applications to Kubernetes? And what operational overhead does it typically create?
Shani Adadi Kazaz: I guess GitOps is the answer to a lot of those challenges. Basically, GitOps means that you use one source of truth, which is your Git repo, right? Everything that you change or build is going to be in your Git repo, the magic happens behind the scenes, and your production looks exactly like your Git repo. Now, if you are managing everything from the same Git repo, it means you can see in one place what your production looks like, right? And you have an easy way to roll back if something happens. So this solves a lot of things. It solves the inconsistency, and it solves how I manage multiple clusters in one place. But it also creates a challenge, because to make this magic work, you need tools such as Argo CD or Flux, right? So you need another tool to control all of this. And these are open-source tools, so you need to take care of the scaling, and you need to take care of the upgrades. And how do you deploy it? Do you deploy Argo CD in one cluster that manages the deployments to the other clusters, or do you run an Argo CD in every one of your clusters? If you do that, and you have, I don't know, 200 clusters, you now need to manage 200 Argo CDs, right? But if you want to do it from one cluster, you need to take care of all the connectivity and networking yourself, which is definitely not easy. So it creates another challenge.
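To make the "Git repo as the single source of truth" idea concrete, a minimal Argo CD Application manifest might look like the sketch below. This is an illustration only: the application name, repo URL, and paths are hypothetical.

```yaml
# Hypothetical Argo CD Application: the cluster continuously syncs to Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/my-app-config.git  # hypothetical repo
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With `selfHeal` enabled, any manual change on the cluster is reverted to match Git, which is exactly what addresses the inconsistency problem described above.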
Bart Farrell: What does day-two operations really mean for Kubernetes? And why do teams often underestimate this complexity?
Shani Adadi Kazaz: Day-two operations is basically everything you do after you launch your application, right? And it's a challenge. I think it's the top challenge I see customers facing. It blew my mind when I started with Kubernetes: every single customer said something about the maintenance. Kubernetes lets you run so fast. A minor version is released every four months and is supported for only 14 months, which means you need to upgrade your Kubernetes clusters all the time. And Kubernetes doesn't work by itself. You have a whole ecosystem of open-source tools that you need to upgrade as well. And what about the add-ons? And what about the other tools and resources that you're using? A modern application is not only Kubernetes; it's maybe also an RDS database, an S3 bucket, or a queue. You need to maintain all of this. We see customers that struggle a lot and have to do this all the time. In some cases, the platform team even becomes the bottleneck again, because they're just drowning under maintenance tasks and can't actually deploy new workloads.
Bart Farrell: AWS recently announced EKS capabilities. Can you explain what problem this solves for organizations running Kubernetes at scale?
Shani Adadi Kazaz: Yes, sure. Capabilities is a really cool feature, in my opinion. We launched it at the last re:Invent. At launch, we took three main open-source projects that we think our customers will benefit from the most.

The first one is managed Argo CD. It means we manage the infrastructure for you and take care of the upgrades, security patches, and so on. But it also gives you a lot of cool features out of the box. For example, if you want to deploy Argo CD in one cluster, you get the connectivity to the other clusters out of the box. It's cross-cluster, cross-VPC, cross-account, and you don't need to build it yourself, which can be a challenge.

The second one is ACK, an open-source project that lets you control AWS cloud resources from Kubernetes. You can deploy and manage an S3 bucket or an EKS cluster the same way you deploy your pods, and you get to enjoy the self-healing mechanism that Kubernetes has. It's a really powerful tool, and then you don't need to manage your Kubernetes in one place and your cloud resources in infrastructure as code, for example.

And the third one is kro, the Kubernetes Resource Orchestrator. kro lets you manage a set of resources as one unit. In one YAML file, you define different resources that act together, and kro manages the dependencies between them and passes the parameters between them when you launch them. Kubernetes then manages everything, so you're enjoying the self-healing mechanism. It also gives you the ability to give developers a really simplified YAML file. Instead of taking care of all the different configurations, they have a really simplified YAML where they can just choose whatever they need, and you can create a very complex architecture using kro.
You can define what kro calls RGDs, resource graph definitions, which are different groups of resources, and you can nest resource graph definitions inside each other. So you can build a very complex architecture, and the developers can configure that complex architecture through a very simple interface.
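A minimal sketch of the kro and ACK combination described above: a hypothetical ResourceGraphDefinition that groups a Deployment together with an ACK-managed S3 bucket as one unit. All names, the container image, and field values are illustrative, and it assumes kro and the ACK S3 controller are installed on the cluster.

```yaml
# Hypothetical kro RGD: one unit made of an ACK S3 bucket and a Deployment.
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-app
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebApp                      # the simplified API developers will use
    spec:
      name: string
      replicas: integer | default=2
  resources:
    - id: bucket                      # managed via the ACK S3 controller
      template:
        apiVersion: s3.services.k8s.aws/v1alpha1
        kind: Bucket
        metadata:
          name: ${schema.spec.name}-bucket
        spec:
          name: ${schema.spec.name}-assets-example  # hypothetical, must be globally unique
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: app
                  image: public.ecr.aws/nginx/nginx:latest  # hypothetical image
                  env:
                    - name: BUCKET_NAME
                      value: ${bucket.spec.name}  # kro passes parameters between resources
```

A developer then only sees the simplified YAML that the RGD's schema defines:

```yaml
apiVersion: kro.run/v1alpha1
kind: WebApp
metadata:
  name: checkout
spec:
  name: checkout
  replicas: 3
```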
Bart Farrell: Kubernetes turned 10 years old a while back. What can we expect in the next 10 years to come?
Shani Adadi Kazaz: I guess AI will enter Kubernetes massively and, hopefully, simplify things. I hope that maintenance will not be a problem anymore, and that with Kubernetes we can just do whatever is needed in a very simple way.
Bart Farrell: What's next for you?
Shani Adadi Kazaz: I want to explore more how GenAI and AI can enter Kubernetes and do the work for us, but also how we manage all of this, right? Because we cannot let it run by itself. How do we create order in all the mess that it can also create? So this is what I'm focusing on right now.
Bart Farrell: And how can people get in touch with you?
Shani Adadi Kazaz: You can reach out to me via LinkedIn; I would be happy to talk with you. You can also join the workshops that we are running in multiple locations, and virtually as well. You can go into the EKS console and see the different workshops we are holding. And please do reach out to me directly through LinkedIn. I would be happy to have a talk.