Not Every Problem Needs Kubernetes

Host:

  • Bart Farrell

Guest:

  • Danyl Novhorodov

This episode is brought to you by Testkube—where teams run millions of performance tests in real Kubernetes infrastructure. From air-gapped environments to massive scale deployments, orchestrate every testing tool in one platform. Check it out at testkube.io

Danyl Novhorodov, a veteran .NET engineer and architect at Eneco, presents his controversial thesis that 90% of teams don't actually need Kubernetes. He walks through practical decision-making frameworks, explores powerful alternatives like BEAM runtimes and Actor models, and explains why starting with modular monoliths often beats premature microservices adoption.

You will learn:

  • The COST decision framework - How to evaluate infrastructure choices based on Complexity, Ownership, Skills, and Time rather than industry hype

  • Platform engineering vs. managed services - How to honestly assess whether your team can compete with AWS, Azure, and Google's managed container platforms

  • Evolutionary architecture approach - Why modular monoliths with clear boundaries often provide better foundations than distributed systems from day one

Transcription

Bart: Does every team really need Kubernetes? Or are we paying a complexity tax that shouldn't be necessary? In this episode of KubeFM, we sit down with Danyl, a veteran .NET engineer and architect at Eneco, to pressure-test his thesis that not every problem needs Kubernetes.

Danyl walks through his COST model (Complexity, Ownership, Skills, and Time), explains why a modular monolith often beats premature microservices, and shows how platform engineering changes the build-versus-buy calculus. We also explore alternatives such as Dapr, Argo CD, and Actor/BEAM runtimes (Erlang, Elixir, Orleans, Akka.NET) for resilience without orchestration sprawl. Plus, the real edge cases where multi-region and hybrid deployments justify Kubernetes.

If you're making 2025 infrastructure choices, this episode will definitely sharpen your decision tree. Special thanks to Testkube for sponsoring today's episode. Need to run tests in air-gapped environments? Testkube works completely offline with your private registries and restricted infrastructure. Whether you're in government, healthcare, or finance, you can orchestrate all your testing tools: performance, API, and browser tests, with no external dependencies. Certificate-based auth, private NPM registries, enterprise OAuth—it's all supported. Your compliance requirements are finally met. Learn more at testkube.io.

Now, let's get into the episode. Hi, Danyl. Welcome to KubeFM. What are three emerging Kubernetes tools that you are keeping an eye on?

Danyl: I'm glad to be here. I'm not a Kubernetes expert, but an engineer who knows how to build, deploy, and run apps in Kubernetes—like writing YAML deployments and Helm charts. In this conversation, I'll primarily share perspectives from an engineer or architect's viewpoint.

Tools that improve developer experience are of particular interest to me. One such tool is Dapr (D-A-P-R), which is part of the CNCF. Another is .NET Aspire, which is related to the .NET stack I work with. Introduced alongside .NET 8, it's a developer experience and application composition layer focused on building, wiring, and running distributed apps.

I'll also mention Argo CD, a tool I used when deploying apps in Kubernetes.

Bart: Could you give us a quick introduction about what you do and where you work?

Danyl: I've been a software professional for more than 20 years. During my career, my roles ranged from software engineer to lead developer, technical lead, and software architect. Early in my career, I started as a C++ engineer but later switched to .NET. I've worked in different companies from startups and scale-ups to big enterprises, across various industries including automotive, telecom, agriculture, hospitality, and banking. Currently, I'm employed by Eneco, a producer of natural gas and supplier of electricity and heat in the Netherlands. I'm working on re-architecting some business-critical code and making it cloud-native friendly.

Bart: Fantastic. Since you mentioned cloud-native, how did you get into cloud-native?

Danyl: Around 2013, I was working for a startup building a social network for house property owners—a local analog of Nextdoor. It was quite an ambitious project. We were running workloads on a local VPS hosting provider, mainly Windows servers. Azure was emerging at that time, and we started using basic Azure functionality and services like running websites, mobile services, Azure API Management, and so on, eventually migrating more workloads to the cloud.

This was my first experience with cloud. We didn't have a lot of documentation and guidelines on how to build cloud-native applications back then. I believe the cloud-native term itself was introduced much later. So it was a trial and error approach, experimentation, and it was a lot of fun.

Bart: Very good. This ecosystem of cloud-native and Kubernetes moves quite quickly. How do you keep up to date? Is it through blogs, videos, official documentation? For example, when you want to learn about Dapr or Argo, where are the places that you go? I ask because we also have a monthly content analysis report where we track different trends about what people are looking at and what resources are most helpful. So in your case, how do you stay up to date with all the changes happening in these technologies?

Danyl: I read a lot of blogs from various sources. From time to time, I follow CNCF blogs, but as I mentioned, I'm not a Kubernetes expert. I also keep an eye on the Technology Radar from ThoughtWorks, where you can see emerging and trending technologies, and those that probably need to be retired. There are quite a lot of technical books on different topics. These are my primary sources.

Bart: If you could go back in time and share one career tip with your younger self, what would it be?

Danyl: Spend more time learning basics and foundation. Most of the things in our industry are not new and were invented in the 50s, 60s, 70s, and 80s—operating systems, compilers, network protocols, database internals. What changes is the packaging, the syntax, and the hype. Mastering the boring fundamentals is often the most differentiating thing you can do in a world obsessed with shiny new tools.

Bart: As part of our monthly content discovery, we found an article that you wrote titled "Not Every Problem Needs Kubernetes". The article claims that 90% of teams don't need Kubernetes. Let that sink in. That could be controversial. Let's start with some context: How did Kubernetes become such a dominant force in container orchestration?

Danyl: Docker popularized containers around 2013, making them usable for everyone. Containers solved the "it works on my machine" problem by bundling code and dependencies. However, a new challenge emerged: how do you run hundreds or thousands of containers in production?

Several orchestration tools emerged to solve scheduling, service discovery, and scaling problems, including Docker Swarm, Mesos, and Kubernetes. Kubernetes stood out because it was open source, extensible, and backed by Google, and its design had been battle-tested in Borg, Google's internal cluster management tool.

Google open-sourced its Borg-like orchestrator at the right time, and support from CNCF and cloud providers created a snowball effect. This effectively ended the orchestration wars, making Kubernetes the default choice.

Bart: There's an argument that Kubernetes adoption is often driven by buzzwords and resume building rather than actual technical needs. What patterns have you observed in organizations when they're choosing Kubernetes?

Danyl: Kubernetes often enters the room not through a clearly defined problem but through aspiration. Teams want to level up, be cloud native, or match what they think big players do. In many cases, this decision is politically or emotionally driven, not technically.

Some examples I've seen include early-stage startups with five or ten developers and a single .NET server spending weeks containerizing everything and creating Helm charts to deploy something that could run on a $20 VPS or Azure App Service. The ops overhead is huge and essentially slows down development.

Another example is a mid-sized enterprise where the decision came from the top down after a few engineers attended KubeCon. Suddenly, a small .NET monolith got split up and shipped into a Kubernetes cluster, and no one knew how to debug anything anymore. Deployment pipelines became fragile, and the cost of development and onboarding increased.

I also worked for a media company that handled hundreds of video streams daily. On the surface, it sounds like a textbook use case for Kubernetes—with elasticity, batch workloads, and hardware scheduling. They did benefit in a few areas, such as services handling traffic spikes during elections or sports events. However, most media workloads like transcoding or packaging were offloaded to external services like AWS or Azure. The team's actual job was orchestration, coordinating APIs, metadata, and availability.

Even in this seemingly valid case, Kubernetes was partially justified but slightly over-engineered for most workflows. Of course, there are valid use cases for Kubernetes in companies with hundreds or millions of customers and often spiky, unpredictable usage patterns.

Bart: The article suggests that most teams end up with a system that is bloated and fragile when using Kubernetes. Is this a fair characterization, or does it reflect poor implementation rather than inherent platform issues?

Danyl: Kubernetes itself is not the culprit. The platform is powerful and flexible, but with that power comes enormous complexity. Most teams underestimate what it takes to own it responsibly. Kubernetes can be clean, resilient, and efficient, but only in the hands of a team with a deep platform understanding, clear boundaries between infrastructure and application responsibilities, operational discipline, security, CI/CD, and other critical practices. Without these, it's like giving a Formula One car to someone who just got their driving license. The platform isn't broken; it's often misapplied.

Bart: The article mentions that without proper developer platforms built on top, Kubernetes becomes hostile to developers, requiring service meshes like Istio or runtime layers like Dapr. Is this additional complexity inevitable, or are there ways to keep Kubernetes developer-friendly without these layers?

Danyl: Kubernetes complexity is not inevitable, but without intentional design, complexity is the default. Out of the box, Kubernetes is an infrastructure tool, not a developer experience. If we want developers to be productive on Kubernetes without introducing Dapr, Istio, or an entire platform engineering organization, we need to build a lightweight developer experience layer.

At minimum, this should include simple deployment tooling because kubectl is not enough. It should have clear abstractions that hide YAML as much as possible (avoiding complex Helm configurations). It should have built-in observability like logging, tracing, and metrics out of the box, preferably without requiring developers to configure Prometheus or complex rules.

The platform should be easy to roll back and provide preview environments. It's not developer-friendly unless it's debuggable and reversible. With these elements in place, Kubernetes becomes tolerable, maybe even smooth for developers. Without them, it could be a jungle of YAML, role-based access errors, and constant guesswork.
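To make the shape of that layer concrete, here is a minimal sketch in Go of a wrapper that hides the YAML behind a single command. The command name, the hard-coded two-replica manifest, and the lack of validation are all simplifications of what a real internal tool would provide; only `kubectl apply -f -` is standard.

```go
// A tiny illustration of a "lightweight developer experience layer":
// developers run `deploy <name> <image>` and never see the YAML.
// This is a sketch, not a real platform; actual internal tools add
// validation, rollbacks, and preview environments.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"text/template"
)

const manifest = `apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Name}}
spec:
  replicas: 2
  selector:
    matchLabels: {app: {{.Name}}}
  template:
    metadata:
      labels: {app: {{.Name}}}
    spec:
      containers:
        - name: {{.Name}}
          image: {{.Image}}
`

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: deploy <name> <image>")
		os.Exit(1)
	}

	// Render the manifest from the two arguments the developer provided.
	var buf strings.Builder
	tmpl := template.Must(template.New("deploy").Parse(manifest))
	if err := tmpl.Execute(&buf, struct{ Name, Image string }{os.Args[1], os.Args[2]}); err != nil {
		panic(err)
	}

	// Hand the generated manifest to kubectl; the YAML never touches the developer.
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(buf.String())
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "deploy failed:", err)
		os.Exit(1)
	}
}
```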

Regarding service meshes like Istio and Linkerd, and runtime layers like Dapr: they solve real problems like traffic routing, retries, telemetry, and circuit breakers. However, these solutions are only necessary at a certain level of complexity. You probably don't need them if you have a handful of internal APIs, basic network topology, and simple routing.

But if you're running dozens of microservices with zero trust requirements, blue-green rollouts, and multi-tenant traffic policies, service meshes can simplify things by centralizing complexity that would otherwise be scattered across application code. However, these layers are not free—they require operational expertise, upgrade and compatibility management, and debugging skills. Unless you truly need these features, it's often better to delay adoption and avoid adding unnecessary complexity just to appear cloud-native.
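As a concrete illustration of the plumbing a mesh can take off your hands, here is a rough sketch (plain Go standard library, hypothetical URL) of the retry-with-backoff logic that otherwise tends to be copied into every service:

```go
// Without a mesh, every service ends up carrying retry/backoff code like this.
// A mesh (or a runtime like Dapr) moves this policy out of application code.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// getWithRetry is hand-rolled retry with exponential backoff, the kind of
// plumbing a service mesh can express declaratively instead.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	backoff := 100 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // success or a non-retryable client error
		}
		if err != nil {
			lastErr = err
		} else {
			resp.Body.Close()
			lastErr = errors.New("server error: " + resp.Status)
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	// Hypothetical endpoint, just to exercise the helper.
	if _, err := getWithRetry("http://localhost:8080/health", 3); err != nil {
		fmt.Println(err)
	}
}
```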

Bart: There's an interesting point about BEAM, referring to Erlang and Elixir, and Actor models as alternatives for fault tolerance and scalability without orchestration complexity. How do you see these architectural patterns comparing to the Kubernetes approach?

Danyl: Right. This is a hugely under-discussed area. While Kubernetes has become a default tool for orchestration, it's not the only or even the most elegant way to achieve it. Actor models like Akka, Orleans, Proto.actor, and BEAM-based runtimes (such as Elixir and Erlang) often offer a radically different paradigm that builds fault tolerance into the application layer itself.

Both Kubernetes and actor-based models aim to solve problems like resilience, fault tolerance, and scalability, but they approach these from opposite ends. Kubernetes is heavyweight, yet appealing because it's platform- and language-agnostic. It provides a unified control plane for compute, storage, and networking, and is backed by major cloud providers and ecosystems.

However, Kubernetes achieves reliability through external orchestration. It uses probes, sidecars, nodes, and rescheduling, while the application itself remains unaware of its own lifecycle. This complexity comes at a cost—lots of plumbing, YAML configurations, and operational overhead.

In contrast, BEAM runtimes like Erlang, Elixir, and other Actor model systems offer intrinsic resilience. Crashing is expected and even embraced. Supervisors automatically restart failed processes. Processes are isolated and extremely lightweight. You get distributed messaging, concurrency, and fault recovery without needing an orchestrator.

When building apps in Elixir or Erlang, fault tolerance isn't an add-on—it's the foundation. Similarly, .NET's Orleans or Akka.NET provide virtual actors with concepts like location transparency and automatic lifecycle management. These systems scale horizontally without thinking in terms of pods or deployments—it's just actors and messages.
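To illustrate the supervision idea in a language-neutral way, here is a toy sketch in plain Go: a supervisor restarts a worker whenever it crashes. It only shows the shape of the pattern; BEAM, Orleans, and Akka.NET add process isolation, mailboxes, and distribution on top of it.

```go
// Minimal "let it crash" sketch: a parent watches a worker and restarts it
// on failure instead of letting the failure take down the application.
package main

import (
	"fmt"
	"time"
)

// worker simulates a process that may crash (panic) while handling messages.
func worker(id int, msgs <-chan string) {
	for m := range msgs {
		if m == "boom" {
			panic("simulated crash")
		}
		fmt.Printf("worker %d handled %q\n", id, m)
	}
}

// supervise restarts the worker whenever it panics.
func supervise(id int, msgs <-chan string) {
	for {
		func() {
			defer func() {
				if r := recover(); r != nil {
					fmt.Printf("worker %d crashed (%v), restarting\n", id, r)
				}
			}()
			worker(id, msgs)
		}()
		time.Sleep(100 * time.Millisecond) // simple restart backoff
	}
}

func main() {
	msgs := make(chan string)
	go supervise(1, msgs)

	for _, m := range []string{"hello", "boom", "world"} {
		msgs <- m // "boom" crashes the worker; the supervisor brings it back
	}
	time.Sleep(time.Second)
}
```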

Too many teams jump into Kubernetes for scalability and resilience without realizing these could be achieved with less complexity by choosing the right runtime model. You don't always need a cluster of nodes; sometimes you just need a runtime that knows how to take care of itself, like BEAM.

Kubernetes is a general-purpose tool, while the Actor Model is purpose-built to solve these exact problems. If you can afford to go all-in on these paradigms, Actor Models will often get you there faster and cheaper, with less configuration. You write high-level code, and the code knows how to handle itself.

Bart: The article critiques teams that claim to have hundreds of microservices but actually have distributed monoliths with tightly coupled services. It raises the question of how teams can honestly evaluate whether their architecture truly benefits from Kubernetes orchestration.

Danyl: Many teams proudly claim they have 300 or 400 microservices, but under the hood, it's often a spaghetti mess of services that can't be deployed independently, fail together, or know too much about each other's internal state. That's not really microservices; it's a distributed monolith, and Kubernetes won't fix this problem. It might even make it worse by giving just enough scaffolding to pretend it's working until it doesn't.

Real microservice architecture has certain properties: services should be independently deployable without coordination and owned by distinct teams with clear domain boundaries. They should communicate over well-defined APIs or events, not internal contracts. They can fail independently without cascading impact, use separate storage (or at least separate schemas) to reduce coupling, and be tested and versioned in isolation.

If all of this is true, you're on the right track of building microservices. Otherwise, you're probably not. Kubernetes is great at managing complexity, but it won't save you from complexity you invent yourself.

Bart: Platform engineering is presented as essential for Kubernetes success, with the article suggesting that most organizations can't compete with managed services from major cloud providers like AWS, Azure, and Google Cloud. What's your perspective on build versus buy for developer platforms?

Danyl: Platform engineering is essential for Kubernetes to succeed in any meaningful and sustainable way. However, most companies are not in a position to build a reliable platform, nor should they try. Building an internal developer platform on top of Kubernetes is not just about writing YAML templates and adding a CI pipeline. It requires a dedicated team of platform engineers with strong infrastructure-as-code practices, a GitOps culture, and consistent monitoring, logging, tracing, and alerting across all services.

The platform should support an onboarding and training model, as well as incident response. This is a real engineering effort that requires at least three to five experienced engineers dedicated to platform work. Otherwise, the internal platform is likely to become a patchwork of half-solutions that developers quietly resent.

An uncomfortable truth is that cloud providers have already built great platforms and will do it better, cheaper, and more securely than 95% of internal teams. For example, Azure App Services, AWS App Runner, and Google Cloud Run offer managed deployment, auto-scaling, zero-cluster operations, and easy rollback. Platforms like Backstage, Humanitec, and others provide self-service infrastructure without starting from scratch.

My take is that if you're not ready to treat your platform like a product with a complete roadmap, support, and lifecycle management, you probably should not build one.

Bart: The legitimate use cases mentioned include multi-region redundancy and hybrid cloud environments. Are these requirements becoming more common? Are they still edge cases for most organizations?

Danyl: Multi-region and hybrid cloud are often thrown around as justifications for Kubernetes. For most organizations, these are still edge cases. A real need emerges when you are legally or contractually required to keep data in specific regions (for example, under GDPR data residency rules), or when you're serving latency-sensitive workloads across continents, like low-latency gaming or live trading. You might also be running workloads on-prem due to hardware requirements or legacy integrations.

In these cases, Kubernetes might help. However, you're not adopting Kubernetes because you are multi-region; you're adopting it despite the complexity of multi-region infrastructure. Kubernetes helps orchestrate some of the challenges, but most of the pain is still yours to own. It doesn't solve multi-region problems completely. You still need to build things like global DNS, traffic routing, data replication strategies, and cross-region service discovery.

For most companies, multi-region infrastructure is unnecessary. But if you genuinely need it, then proceed.

Bart: There's a provocative statement that DevOps is not a role, but a practice, in the context of needing dedicated Kubernetes teams. So how should organizations approach the skills and team structure needed for Kubernetes?

Danyl: DevOps is a practice, not a job title. This doesn't mean you don't need dedicated people doing operational work. DevOps as a movement is about collaboration, automation, and shared responsibility, breaking silos between developers and operations. The idea is to empower teams to build, test, and deploy on their own.

Kubernetes doesn't magically make this happen. In some cases, it reintroduces silos because the complexity is so high that only a few people can manage it. Ironically, you end up needing a dedicated SRE team or platform team, even if everyone is doing DevOps.

Instead of asking who should own DevOps, we should ask how to structure teams so that product teams can ship safely, quickly, and autonomously, without drowning in complex configurations. Sometimes the answer could be Kubernetes with a platform team, sometimes it's app services and a DevSecOps pipeline, and sometimes it's best not to use Kubernetes at all. It depends.

Bart: The article suggests starting with monoliths on VMs with autoscaling before considering Kubernetes. This seems to go against the microservices-first trend. What's your take on this evolutionary approach to architecture?

Danyl: A modular monolith as the first architectural step isn't nostalgia or fear of scale; it's about simplicity and evolutionary design. It's about giving your system room to grow without forcing you to maintain 15 distributed services before your product has even gained traction.

Why start with a monolith? Because it gives you the best of both worlds: the deployment simplicity of a monolith and the logical separation of microservices. It's important to build the system in a modular way, so that refactoring and extracting services later stays feasible. You can transition to microservices much more easily if your architecture is clean internally.

You can extract modules into services later, deploy features independently within one artifact, and keep observability, debugging, and testing straightforward. This approach lets you delay the operational burden of distributed systems until you are truly ready for the load and scale. A well-designed modular monolith teaches good boundaries before you distribute those boundaries across a network.
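A small sketch of what such a boundary can look like in code (the `BillingService` and `OrderModule` names are hypothetical): the caller depends only on an interface, so the module can start in-process and be extracted behind a network client later without touching its consumers.

```go
// Modular-monolith boundary sketch: orders depends on a contract, not on
// billing internals, so billing can later become its own service.
package main

import "fmt"

// BillingService is the module boundary: a well-defined contract, no shared internals.
type BillingService interface {
	Charge(customerID string, cents int64) error
}

// inProcessBilling is today's implementation, living inside the monolith.
type inProcessBilling struct{}

func (inProcessBilling) Charge(customerID string, cents int64) error {
	fmt.Printf("charged %s %d cents (in-process)\n", customerID, cents)
	return nil
}

// OrderModule only knows about the interface, not the implementation.
type OrderModule struct {
	billing BillingService
}

func (o OrderModule) Checkout(customerID string) error {
	// ...order logic elided...
	return o.billing.Charge(customerID, 4200)
}

func main() {
	orders := OrderModule{billing: inProcessBilling{}}
	_ = orders.Checkout("cust-123")
	// Later, inProcessBilling can be swapped for a client that calls an
	// extracted billing service over the network, with no change to OrderModule.
}
```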

Consider extracting microservices when a module is on a hot path and scaling disproportionately (for example, a pricing engine or an encoding process), or when you hit team-level friction where multiple groups interfere with the same code and deployments, or when you need isolation and deployment independence.

Do this based on real signals, not hype. Too many teams chase microservices early because it sounds cool and they want to mimic big tech, thinking it's the only modern way to scale. But they often lack the engineers to support multiple deployments, proper observability, tracing, and alerting.

Without a real product-market fit, you might end up building a distributed monolith that is tightly coupled with fragile integration points. The key is to get the structure right, keep deployment simple, and evolve based on real needs, not external pressure.

Bart: Container services like AWS Fargate and Azure Container Apps are positioned as a middle ground between traditional deployment and full Kubernetes orchestration. How do you evaluate when these solutions are sufficient versus needing full orchestration?

Danyl: Services like AWS Fargate or Azure Container Apps represent a sweet spot for most teams. They provide containerization benefits like isolation and packaging without the pain of managing the Kubernetes control plane or worrying about node pools, ingress controllers, and cluster upgrades. These services are often sufficient for 80 to 90% of use cases.

You can stick with these tools when you have stateless services, APIs, or background jobs, and need simple autoscaling while wanting to move fast and deploy without infrastructure concerns. They work best when your app fits the cloud provider's opinionated runtime, with no need for advanced scheduling or sidecars, and you're focused on cost efficiency and developer velocity.

For example, the team I worked with deployed over 10 services across environments using Azure Container Apps and Dapr for lightweight messaging. They didn't need full Kubernetes and shipped features weeks faster with fewer operational headaches. Kubernetes should be an exception, not the starting point. For most teams, managed container services are more than enough.
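For a sense of how lightweight that messaging can be, here is a rough Go sketch against Dapr's standard pub/sub HTTP API on the local sidecar; the component name and topic are hypothetical, and in a real setup the broker behind them is configured in the Dapr component, not in code.

```go
// The app talks plain HTTP to its local Dapr sidecar (default port 3500),
// which handles the actual broker (Service Bus, Redis, ...). The component
// name "orders-pubsub" and topic "order-created" are illustrative only.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	payload := []byte(`{"orderId": "123", "status": "created"}`)

	// Publish an event via Dapr's /v1.0/publish/{pubsub}/{topic} route.
	resp, err := http.Post(
		"http://localhost:3500/v1.0/publish/orders-pubsub/order-created",
		"application/json",
		bytes.NewReader(payload),
	)
	if err != nil {
		fmt.Println("publish failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("publish status:", resp.Status)
}
```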

Bart: The article implies that Kubernetes adoption is often driven by resume building or following trends rather than solving real problems. How can technical leaders make more objective infrastructure decisions?

Danyl: Too many infrastructure decisions start with "everyone's using Kubernetes, we should too" instead of asking what problem we are trying to solve and whether Kubernetes is the best fit. As technical leaders, our job isn't to chase innovation, but to protect focus and simplicity. That means filtering out the hype and making deliberate, grounded choices.

We should ask: What is the actual job we need this tool to do, and can that job be done with simpler or existing tools? We can use the COST model, which stands for Complexity, Ownership, Skills, and Time. For complexity, ask what new cognitive or system complexity it adds. For ownership, consider who will maintain it after the initial excitement fades. For skills, do we have the necessary in-house expertise, or will we become dependent on a hero engineer? For time, what is the ramp-up time to value, and can we afford it?

One of the healthiest habits leaders can cultivate is running small, scoped experiments. Try Kubernetes for one non-critical service first, and measure not just performance, but deployment velocity, developer happiness, and maintenance overhead. Run the same service on a managed container platform like AWS Fargate or Azure Container Apps and compare. Pay attention to the undocumented glue code and tribal knowledge required to run it in Kubernetes—that's the hidden cost.

Some strategies for separating hype from value:

  • Challenge buzzword fluency by asking people to describe technical choices in plain terms; if the case for Kubernetes can't be made without acronyms, the case probably isn't strong

  • Involve product teams and ask what they need to ship faster; if they don't mention custom pod affinity rules, that tells you something

  • Hold technical justification reviews before adopting any tool; the team must answer what pain is being solved and what happens if the tool isn't adopted

If Kubernetes is the right tool, great. But if it's not, don't be afraid to choose boring, reliable, and well-understood technology.

Bart: While the article is critical, Kubernetes has become the de facto standard for container orchestration. What are the genuine innovations and benefits that Kubernetes has brought to the industry, even if it is overused?

Danyl: Kubernetes has brought massive, undeniable innovation to the industry, even if it's been over-applied in places it doesn't belong. We should give credit where it's due. Before Kubernetes, we had a wild west of do-it-yourself scripts, hand-rolled bash deployments, and other challenges.

What Kubernetes gave the industry is a common vocabulary: pods, services, deployments, volumes, etc. It provided a standard API for interacting with clusters and a modular model that allows extensibility via CRDs, operators, and controllers. This created a level of portability and predictability across teams, clouds, and companies that simply didn't exist before. It's the reason we now expect a containerized app to just work across environments.

Kubernetes embraced the idea of immutable deployments—build once and run everywhere. It introduced declarative infrastructure, version control with Git as a source of truth, tools like Argo CD, GitOps culture, and self-healing behavior through controllers and reconciliation loops. These ideas weren't new, but Kubernetes turned them into practical defaults. From this came modern practices like progressive delivery, infrastructure drift detection and correction, and GitOps tools.
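A stripped-down sketch of that reconciliation idea, in Go: a loop compares declared desired state with observed state and acts to converge them. Real controllers work against the Kubernetes API with informers and work queues; this only shows the shape of the loop.

```go
// Reconciliation-loop sketch: observe, compare with the declared spec, act.
package main

import (
	"fmt"
	"time"
)

func main() {
	desired := 3  // replicas declared in the manifest (the source of truth)
	observed := 1 // replicas actually running right now

	// Real controllers loop forever and react to events; five ticks suffice here.
	for i := 0; i < 5; i++ {
		if observed < desired {
			fmt.Printf("observed %d replicas, want %d: starting one\n", observed, desired)
			observed++ // the "act" step: converge actual state toward desired
		} else {
			fmt.Println("in sync, nothing to do") // self-healing at rest
		}
		time.Sleep(100 * time.Millisecond)
	}
}
```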

Interestingly, Kubernetes also succeeded by highlighting its own complexity. This pressure led to the rise of cloud provider solutions like Azure Container Apps, Google Cloud Run, AWS Fargate, and lightweight Kubernetes platforms like K3s, MicroK8s, and serverless Kubernetes platforms. Abstractions like Dapr, Knative, and others appeared as responses to Kubernetes' complexity, helping users navigate its challenges.

My argument isn't that Kubernetes is bad, but that it's a powerful tool that should be used with intent, not by default. Kubernetes set a bar that others wanted to meet and exceed, but in a more user-friendly way.

Bart: Last year we celebrated 10 years of Kubernetes, and we've asked quite a few people about what they expect to happen in the future. Looking forward, do you see the pendulum swinging back towards simpler solutions, or will the complexity of Kubernetes eventually be abstracted away enough to make it accessible to all teams?

Danyl: I see this pendulum swinging back towards simplification, and it's already happening. The industry is starting to collectively admit that Kubernetes is powerful, but do we really need all of this for most workloads?

We are seeing a shift from raw orchestration power towards better developer experience, higher-level abstractions, and just enough infrastructure. Platforms like Azure Container Apps, Google Cloud Run, and AWS equivalents like App Runner are taking the 80/20 of Kubernetes and baking it into a zero-ops experience. You bring the container, and they handle scaling, networking, certificates, and routing.

Other tools, like internal developer platforms (Backstage, Humanitec), are abstracting Kubernetes away. They offer so-called golden paths: templates and self-service UIs so that developers never even touch configuration YAML. Projects like Pulumi and Dagger, and lightweight Kubernetes runtimes like K3s and MicroK8s, are also emerging.

I think we'll still be using Kubernetes under the hood, but most developers won't know or care. It will be part of platform-as-a-service with opinionated defaults and tight feedback loops. We won't stop using Kubernetes; we'll stop talking about it. It's more about having less Kubernetes in your face—using abstractions and tooling that are developer-friendly without needing to deep dive and understand everything behind it.

Bart: One counter argument to the article's thesis is that Kubernetes represents necessary complexity, that the problems it solves are inherently difficult and simpler solutions are often incomplete. Do you see this complexity as fundamental to the problem space, or will we eventually see tools that can abstract it away while maintaining the same capabilities?

Danyl: Distributed systems are inherently complex because you need to deal with partial failures, network partitions, scheduling, container scaling, and security. There is no silver bullet. In this sense, Kubernetes isn't introducing complexity; it's surfacing and managing it. However, it exposes a firehose of knobs and levers that most teams aren't equipped to handle.

Kubernetes tries to abstract away some essential complexity, like scheduling and auto-healing, but it also introduces a lot of accidental complexity—YAML, CRDs, operators, Helm versus Kustomize, Kargo, and so on. The danger is that teams adopting Kubernetes thinking they are getting simplicity are actually trading one class of complexity for another that is probably harder to debug.

Can we abstract Kubernetes without losing power? We are getting closer, and emerging patterns show promise. Big platforms like Azure Container Apps and AWS App Runner hide the cluster entirely while offering auto-scaling, metrics, and networking. These services are Kubernetes-backed but users never touch YAML.

Internal platforms can build paved roads and self-service portals that surface just enough control without exposing every Kubernetes knob. Tools like Crossplane let teams orchestrate declaratively across clouds without handwriting manifests. You can abstract complexity but can't delete it.

Kubernetes solves hard problems, but that doesn't mean you have those problems. Its complexity is often justified at certain scales, but premature exposure to that complexity hurts teams. The future isn't about dumbing down; it's about designing platforms that let us grow into complexity only when we actually need it. Smart tools, sensible defaults, and fewer footguns will win—not less Kubernetes, but less Kubernetes in your face.

Bart: With that in mind, what's next for Kubernetes, what's next for you?

Danyl: I continue working on migrating legacy applications to cloud-native, refining our architecture to align with team topologies and related streams. On my side projects, I really want to find time and dive deeper into the beautiful world of Elixir and Erlang, and explore the BEAM actor model to understand its true power of fault tolerance and scalability.

Bart: Very good. The topic you chose for today may cause a little bit of controversy. I'm sure you've already experienced people with opinions. But whether people disagree or agree, what's the best way to get in touch with you?

Danyl: I'm on LinkedIn, so feel free to drop me a message. I'm often checking it, which will probably be the best way to reach me.

Bart: Thank you very much.