Optimize the Kubernetes dev experience by creating silos

Host:

  • Bart Farrell

Guest:

  • Michael Levan

Michael Levan explains how specialized teams and smart abstractions can lead to better outcomes. Drawing from cognitive science and his experience in platform engineering, Michael presents practical strategies for building effective engineering organizations.

You will learn:

  • Why specialized teams (or "silos") can improve productivity and why the real enemy is ego, not specialization.

  • How to use Internal Developer Platforms (IDPs) and abstractions to empower teams without requiring everyone to be a Kubernetes expert.

  • How to balance specialization and collaboration using platform engineering practices and smart abstractions.

  • Practical strategies for managing cognitive load in engineering teams and why not everyone needs to know YAML.

Transcription

Bart: In this episode of KubeFM, I got a chance to speak to Michael Levan, who is an engineer, content creator, and trainer. Michael is also not afraid to share his opinions, and in this conversation, we will spend a lot of time looking at his ideas about how silos, which have long been thought to be an enemy for organizations, can actually be a good thing. Michael brings a fresh, unapologetic perspective that you definitely will not want to miss. In this episode, you'll learn about the surprising reasons why silos might actually help your teams thrive, how to cut through the chaos of Kubernetes tools with smart abstractions, and why not everyone needs to be a Kubernetes expert or even touch YAML. If you're building platforms, looking for clarity in today's noisy tech world, or just here for some creative, candid career advice, stick around. Michael's insights might change how you see engineering, and they will almost certainly change how you think about silos.

This episode of KubeFM is sponsored by TestKube. TestKube is the ultimate testing control plane for Kubernetes, designed to streamline your testing workflow and empower QA and DevOps teams. It leverages Kubernetes to run any testing tool at scale, whether manually, automatically, or integrated into your CI/CD pipeline. By decoupling test configuration and execution from CI/CD, TestKube provides a single pane of glass for advanced test troubleshooting, centralized reporting, and efficient debugging. With TestKube, you can ensure consistent reporting and scale your testing effortlessly. Ready to level up your testing game? Check out TestKube today.

Now, let's get into the episode. Hi, Michael. Welcome to KubeFM. What are three emerging Kubernetes tools that you're keeping an eye on?

Michael: Good question. I would say anything in the GitOps realm, primarily Argo CD. Anything in the resource optimization realm, such as Karpenter, is also notable. There are many cloud-based solutions, like Virtual Machine Scale Sets, built into Azure, AWS, and GCP. I would also highlight anything in the self-service realm, including internal developer platforms or portals, like Backstage. Port is another solid option, offering a free version and an enterprise version. It's a really solid Internal Developer Platform (IDP). So, I would say those three areas are of particular interest.

Bart: Great. Now, for people who don't know you, can you tell us a little bit more about what you do and who you work for?

Michael: So what do I do? That's a great question. I do a little bit of engineering and a lot of product work, which is a relatively new focus for me, probably only a couple of months old. My background has always been in engineering, spanning from the infrastructure side to the development side. I started my career in systems administration, then moved to software development, and eventually found myself in the middle, in what you might call the SRE, DevOps, or platform engineering realm. I enjoy incorporating both development and infrastructure knowledge, and I'm particularly interested in the theory, practice, and implementation of distributed systems.

Bart: And how did you get into cloud native in the first place?

Michael: Honestly, for me, it was probably more luck than anything else. I've always been a bit of an overachiever, comfortable working seven days a week and 12-13 hour days, diving into what I wanted to learn. That's just what I enjoy personally - being in the hustle. Because of that, I found myself in startups, which always work with the latest and greatest technologies. When cloud hit, I was working on that, and with Kubernetes, I started in 2016 at a 20-person startup. At the time, we were trying to figure out whether to go with Mesos, Docker Swarm, or Kubernetes. We were always on the latest and greatest because of being in startup land.

Bart: And what were you before cloud native?

Michael: I would say definitely more systems administration on the Windows side. Cloud brought me more into the Linux realm, but my career started out managing Active Directory, Exchange Server, Office 365, and every type of Windows server you can think of, from file servers to print servers and everything in between. So that's really where I got my start. The MCSA, MCSE land, all that good stuff.

Bart: And the Kubernetes and Cloud Native ecosystem moves very quickly. What works best for you to stay up to date? Is it blogs, podcasts, or videos? What's your choice?

Michael: I would say D, all of the above. I consume a lot of different knowledge. Going back to how I like to work and stay busy, I'm the type of person who likes to go seven days a week. In the morning, I get up and read for about an hour before hitting the gym. It's always an engineering book or a psychology book to help me improve my mind for studies and work. For me, it's just a matter of constantly going and never really taking a break. I don't get burned out easily, although sometimes I do. My advice isn't for everybody, because I know some people don't work that way. That's just the way I do it.

Bart: If you could go back in time and give yourself one career tip, what would it be?

Michael: I would definitely say continue to believe in what you know to be right and true. Don't confuse this with being self-centered, thinking you're always right, and not considering others' opinions. What I mean is, don't be a yes person. Show your side of things, but make sure you can back it up. I have a rule for myself: I don't speak unless I'm certain I'm correct. If I'm not sure, I say something like, "I'm not 100% sure about this," or "I could be wrong." I always make it clear whether I know I'm right or there's a possibility I'm not. That's the best advice I would give myself and continue to follow.

Bart: With that in mind, we can address the elephant in the room right away. You wrote an article that says organizational silos are good. That point might be quite controversial in today's DevOps or platform engineering world. What made you take this stance?

Michael: I know you've been in engineering for a long time as well. If we go back to the early 2000s, or 10, 15 years ago, there was a time when everybody was doing their own thing. You had sysadmins doing their thing, front-end folks doing their thing, back-end folks doing their thing, QA folks doing their thing, etc. Things were tossed over the brick wall, so to speak, and part of that caused some confusion. Again, I'm very into cognitive science and psychology. One of my hobbies is understanding how the human mind works. I'm really into it. I think what ended up happening was people said, "We need to break down these walls and let people work together, and we'll get better results." The reality is that this probably breaks the notion of DevOps as well, because DevOps came out not to be a title but to be this whole "let's break the silos" thing. We didn't need to break silos; we just needed to break egos. Egos were what kept getting everybody in each other's way.

The example is a dev would throw code over the fence and say, "Hey, you figure out how to run it." Why would they do that? I don't think they did it because they woke up that morning saying to themselves, "I want to be a bad person today. I'm going to make this person's life incredibly difficult." No, they had an ego. They're like, "My stuff works. It's my code. I know it works. It's not working for you, you figure it out." It's an ego thing, not a silo thing. I think what silos brought was the ability to be really, really good at what you did, to be really focused on what you did. But because we broke the notion of silos, what ended up happening? Well, everybody had to do a little bit of everything. Nobody could focus on one thing. Nobody could become an expert, or as close to one as possible, in what they do. And therefore, we're now living in the world that we're living in now, which is nobody knows what to do. Everybody's confused. There are too many tools. Nobody knows where to start. And this isn't just entry-level engineers either. I'm not talking about people that are just graduating with CS degrees or people that are 18, 19 and just kind of breaking into tech without college. I'm talking about people that are senior and principal level as well. I mean, everybody's like, "What tool do I use? Where do I start? What do I learn? All of this stuff seems important. I have to know everything." And in my opinion, I think that's because we got this notion in our head that if we break the silos, or in other words, if everybody does a little bit of everything, all our problems are going to go away. And again, my opinion, right? I think it broke more than it fixed.

Bart: Keeping that in mind, but before we go any further, could you help us understand what you mean by team silos in this context? Because I think some people might have different definitions, so it's helpful to have a common framework.

Michael: I feel like we kind of made up what "silo" means in tech. The definition of a silo is pretty much a trench. A trench is something you're in with a team of people, like in a military context. You're in the trenches with people, and we hear that a lot - "we're in the trenches" - with a particular team, maybe a front-end team, a back-end team, the Windows team, the Linux team, the Security Operation Center (SOC), the pen testers, or the red team. These are all team silos, and that's how I think about it - being in the trenches.

Bart: And it's interesting because traditionally, organizational silos describe teams that have no idea what other teams are doing, and usually, it has nothing to do with expertise. How do you reconcile these different interpretations?

Michael: It's a good question. I think it goes back to the psychology of the human mind. The definitions you gave are 100% correct. What makes them accurate is someone in a leadership team, middle management, or a director/VP who says, "This is our definition." Now, is that opinion or fact? I don't know. The lines between what's opinionated and what's factual can blur. To combat this, I think it's best to focus on the underlying concept rather than the title. If there's a better title for it, feel free to let me know. I'm open to it. It's less about the title and more about the underlying substance.

Bart: For context, over the past decade, we've been taught that sharing responsibilities is good, which is why we have DevOps combining Dev and Ops. Your point seems to be addressing that team silos are good. How do these seemingly opposing views reconcile and share the same space?

Michael: My whole take on it is that sharing is caring, and open communication is key. Teams should be able to collaborate and communicate effectively. However, the issue arises when we expect everyone to do everything. The majority of the human mind isn't built that way. For instance, I don't believe in multitasking. By definition, you cannot do two things at 100% capacity at the same time. This is because of how the brain works, as explained by psychology.

Similarly, in our work, if you're expecting people to be really good at writing a microservice or a decoupled application, and then suddenly ask them to figure out an entire scaling strategy, HA, and DR for your data center, it's unrealistic. You can't be 100% good at all of it at the same time. That's not how the human mind works. Some people may defy this, but they typically take a chunk of something, do it at 100% for a period, and then move on to the next thing.

They're never doing multiple things at the same time because, by definition, you cannot do that at 100% capacity. However, you can chunk it out, as long as you have the mental capacity to hold all that information in your head and compartmentalize it in your brain.

Regarding opposing arguments, I think the current state of tech is a good indicator. The number one thing people are talking about is how to learn and figure things out. They're overwhelmed by the numerous options and don't know what to focus on. This is a psychology thing – when you put multiple options in front of someone, their brain can almost short circuit. They don't want to pick anything because there are too many options. But if you put a couple of options in front of someone, they'll be able to differentiate which one is the most important.

Bart: Well, that's fair. One thing that's used very often, particularly when talking about Kubernetes, is the idea of abstraction. Can you talk about how you view abstraction in this context and why abstractions might be important?

Michael: So, I think this definitely goes back to everything we're talking about, which is that not everybody can be good at everything, which is why platform engineering is so important to me. Platform engineering, if it's done right, could be called something different in a couple of months or years, but we're calling it platform engineering for now. If I go back seven or eight years ago, my title at the time was Senior Principal Cloud or Infrastructure Engineer. I liked writing code, so that's what I did. The dev team came over and said, "Hey, we want to use the AWS SDK or the command line tool, but there are a lot of options and everybody's overwhelmed. Nobody knows which direction to go in. Is there any way to make this kind of easier for us?" I said, "Sure thing." So, I built a command line tool wrapped around the AWS SDK that only exposed the couple of commands that they needed. That way, they weren't overwhelmed; they only had the commands, flags, and sub-commands that they needed, and they were able to go on about their day. To me, that's a level of abstraction that takes what the person needs and gives it to them in a logical way. That, to me, is platform engineering.

Now, that could be an Internal Developer Platform (IDP) as well; it could be whatever you want it to be, as long as you keep the notion in your head that this team over here needs to be able to do a thing, but they're not experts in it, and nor should they be, because they don't need to be. They need to be experts in what they're doing, hence silos. I would actually play devil's advocate against myself, as I usually do (I argue with myself in my head often), and say that platform engineering, by definition, gives you the ability to create better silos. It takes the need for everybody to know everything and puts it on the platform engineers, essentially, to know a little bit of everything. A team dedicated to knowing a little bit of everything makes the lives of everybody else easier, so everybody else can continue to be experts, or as close to experts as possible, in their space, hence creating silos.
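
To make that concrete, here is a minimal sketch of the kind of thin wrapper Michael describes: a small CLI that exposes only the couple of operations a dev team actually needs, while the full AWS SDK stays hidden behind it. The tool name (devctl), the bucket, the region, and the two subcommands are hypothetical and purely illustrative, not anything from the episode.

```python
#!/usr/bin/env python3
"""devctl: a deliberately tiny wrapper around the AWS SDK (illustrative sketch)."""
import argparse

import boto3  # the full AWS SDK stays hidden behind two subcommands

REGION = "us-east-1"       # assumed default region
BUCKET = "team-artifacts"  # assumed bucket the dev team cares about


def push(args: argparse.Namespace) -> None:
    """Upload a local file to the team bucket (wraps s3.upload_file)."""
    s3 = boto3.client("s3", region_name=REGION)
    s3.upload_file(args.file, BUCKET, args.file)
    print(f"uploaded {args.file} to s3://{BUCKET}/{args.file}")


def ls(_args: argparse.Namespace) -> None:
    """List what is already in the team bucket (wraps s3.list_objects_v2)."""
    s3 = boto3.client("s3", region_name=REGION)
    resp = s3.list_objects_v2(Bucket=BUCKET)
    for obj in resp.get("Contents", []):
        print(obj["Key"])


def main() -> None:
    parser = argparse.ArgumentParser(prog="devctl")
    sub = parser.add_subparsers(required=True)

    p_push = sub.add_parser("push", help="upload an artifact")
    p_push.add_argument("file")
    p_push.set_defaults(func=push)

    p_ls = sub.add_parser("ls", help="list artifacts")
    p_ls.set_defaults(func=ls)

    args = parser.parse_args()
    args.func(args)


if __name__ == "__main__":
    main()
```

Instead of learning the whole AWS surface area, the dev team learns two commands, something like `devctl push build.zip` and `devctl ls`, and the platform team owns everything behind them.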

Bart: And with that in mind, in terms of specialization and knowledge, you made an interesting statement in the article that not everyone needs to be Kubernetes experts. This might sound great for developers in our audience who would say, "One less thing to learn." But could this also create the challenge where a single person becomes the expert and also a single point of failure or a bottleneck? Everybody knows that there's a risk in thinking this way: if everybody knows there is "the Kubernetes person," but when that person's gone, no one knows how to deploy anymore. What do you think about that?

Michael: I'm sure you can attest to this as well in your career. Early on, there was always the "king of the castle" - someone who held the keys to everything. This is not ideal, which is why I think we should have teams instead of one person doing it all. The platform engineering team should be a manageable size, like the "pizza pie team size" - five to eight people. If you have more than that, you may need to create a new team. For example, you might have a front-end platform engineering team and a back-end platform engineering team.

I totally agree that there should never be one person who knows it all. However, in a startup, it's common to have one person who knows everything. In that case, it's crucial that they document everything properly. I believe technical documentation is more important than implementation. Anyone can implement anything - write code, run a CI/CD pipeline, etc. - but you need to build repeatable processes for everyone to use.

This comes back to psychology, where someone might be working at a startup and likes their job security. They might be the "wizard" of their workplace and want to maintain that status. Unfortunately, this is a people problem, not a technology problem.

Bart: A previous guest, Hans, mentioned that instead of shifting left, we should be shifting down. His arguments generally align with yours, as he mentioned that practices such as GitOps can foster the types of interactions and abstractions that you describe. Have you seen similar patterns in your experience? And if so, are there any tools you've seen promoting shifting down?

Michael: I'm not a salesperson and I don't want to promote anything. I was self-employed for a very long time, but now I'm working at a company that's building exactly what you just said. The reason I stopped being self-employed to go work at this company is to build a solution to the issues we're discussing. There is a solution out there, and it's the place I'm working at, but I don't want to promote anything.

Bart: Fair enough. But you did mention Argo at the start of our conversation. Could Argo be an example of that as well?

Michael: I think so. Here's the thing about the overall abstraction piece. Argo CD gives you the ability to have a central source of truth, which is in your repository or multiple repositories, and it's a bunch of YAML configurations. So, anything that changes in there gets deployed to wherever you're running. Now, that solves part of the problem, but it doesn't solve all the problems. Because there's always a pro and a con. That solves that problem, but now you need people who are really good at writing, managing, and configuring Kubernetes manifests. So, you have tons and tons of YAML all over the place, which is another problem in itself. It's like all these tools that currently exist - they solve 20, 30% of the problem, but almost none solve 100% of the problem.

Bart: In terms of other tools that might be out there when thinking about abstraction and additional solutions for Kubernetes infrastructure work, could you maybe touch on what you've seen with serverless solutions like ACI or EKS with Fargate?

Michael: To answer the question, I'd like to address the problems that need to be solved to get this abstraction going in the right direction. The first piece is that we have multiple layers: the underlying platform, which could be Kubernetes, VMs, bare metal, serverless, or others. The goal of any company is to deploy and run software in a performant way. To achieve this, you need good networking, underlying infrastructure, and clusters. However, what is the value proposition here? The value proposition is that the software needs to be deployed, running, and performing as expected.

For example, what do you not need to know to achieve this? You shouldn't need to think about the underlying platforms. The platforms need to be there, whether you're using serverless or Kubernetes, but you shouldn't have to manage a cluster. The cluster needs to be there, but you could think about it in a blue-green fashion, where clusters are running, and if one goes away, a new one comes up magically due to automation.

The next piece is platform capabilities. If you're running in Kubernetes, what do you need to know? Currently, you need to know YAML, which is a configuration language. However, YAML is different everywhere it runs, based on the platform it's running on. You need to know a lot of YAML, but do you actually need to know that to get the product out into the wild? Do you need that for the entire SDLC process? Do you need that to measure the performance of the product being deployed? No, you don't.

So, let's remove the need for YAML and give an easier method of configuring applications. YAML configurations are key-value pairs, such as specifying replicas, container images, or NGINX versions. Let's give people an easier way to provide key-value pairs.

Finally, we need to simplify the method of configuration. We have multiple IAC platforms and YAML-based platforms, and Ansible YAML looks different from Kubernetes YAML. We need to make this simpler. We need a command-line tool and a UI. If you like doing things programmatically, configure it in the CLI. If you like to do things from a visual perspective, do it in the UI. If we tweak these things, the majority of confusion will go away, and we'll focus on deploying the application, running it in a specific way to be as performant as possible.
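
As a rough sketch of the kind of simplification Michael is describing, here the developer supplies only a few key-value pairs (name, image, replicas) and a platform helper renders the full Kubernetes Deployment manifest for them. The function name and defaults are assumptions for illustration, not any particular vendor's tooling.

```python
import yaml  # PyYAML, used only to emit the manifest


def render_deployment(name: str, image: str, replicas: int = 2) -> str:
    """Turn three key-value pairs into a full Kubernetes Deployment manifest.

    Hypothetical platform helper: the developer never touches the nested
    YAML structure below; they only supply name, image, and replicas.
    """
    manifest = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
    return yaml.safe_dump(manifest, sort_keys=False)


if __name__ == "__main__":
    # The caller's entire "configuration" is these three values.
    print(render_deployment("storefront", "nginx:1.27", replicas=3))
```

The same key-value pairs could just as easily feed a UI form; the point is that the nested YAML stays the platform team's problem rather than the developer's.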

Now, how do we get there? It's a long road, but it's doable. It takes a while and a good team. I'm literally working on this as we speak, and we're 95% of the way there.

Bart: Taking Fargate or Google Cloud Run as examples, could we say that we're just trading one problem for another if we get easier abstractions but end up locked into their platforms? Is there a way for us to avoid this and have our cake and eat it too?

Michael: Absolutely. You're a hundred percent correct. If we're using Google Cloud Run, it's Borg underneath the hood. Similarly, if we're using ECS, it's serverless underneath the hood. You're using EKS with Fargate profiles, which give you the ability to not have to worry about the EC2 instances. You're using Azure Container Apps or Azure Container Instances. You're using Kubernetes running on bare metal, maybe with a couple of Ubuntu boxes bootstrapped with Kubeadm. We're just shifting blame, essentially.

What I'm saying is, do we even have to care? Let's say you need an application to be running, performant, up, and highly scalable. It needs to be always running and scalable based on load. For example, if you have an e-commerce website, Cyber Monday is going to be one of your busiest days. In the grand scheme of things, does it really matter where it's running? We've already proved that it doesn't, because, for example, AKS gives you the ability to do ACI bursting, which means you don't spin up new worker nodes. You deploy the pods to ACI. So, we've already proved it really doesn't matter where your application is running. Who cares? That's how we should be thinking about it. The underlying infrastructure is just shifting blame. Realistically, who cares? We shouldn't care. We should just care about the scalability of it. We should care about the ability of this cluster to go down and a new one to come up. Whatever you want to call it, cluster, server, it doesn't matter. We shouldn't be managing it or shifting the blame from one product to another. It doesn't make any sense.

Bart: Platform engineering has exploded in popularity in the last couple of years, partly because it means different things to different people, but also because a segment of people recognize themselves in that role. How do you define an Internal Developer Platform (IDP), and what should these teams do?

Michael: The way I think about platform engineering is customer service. As a platform engineer, you are 100% in a customer service role and 100% a product engineer, running a product. You should be thinking about product and customer service because the whole idea of platform engineering is to make the lives of other teams easier. You're building internal products for your engineers internally to make their lives easier, so they don't have to learn and worry about all these different tools and become experts in a million and one things – we already know that's not going to work.

My definition of platform engineering is the ability to create internal tools, based on software, for other teams to make their lives easier. For example, if a development team says, "Hey, we need something like a GitOps solution," we should be able to give them a GitOps solution without them having to know what's underneath the hood. Is it Flux? Is it Argo CD? Is it Weave GitOps? We should give them an easy way to do it, and that's it. We manage it as the platform engineers.
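
A minimal sketch, assuming a Python-based platform helper, of what "give them a GitOps solution without them knowing what's underneath the hood" could look like: the dev team provides a repo, a path, and a namespace, and the helper happens to emit an Argo CD Application manifest, though it could just as easily target Flux. The function name and values are hypothetical.

```python
import yaml  # PyYAML, only used to emit the manifest the platform team applies


def enable_gitops(app_name: str, repo_url: str, path: str, namespace: str) -> str:
    """Hypothetical platform-team helper.

    The dev team asks for "a GitOps solution" with four values; what comes back
    here is an Argo CD Application manifest, but the caller never needs to know
    which engine the platform team chose.
    """
    application = {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": app_name, "namespace": "argocd"},
        "spec": {
            "project": "default",
            "source": {"repoURL": repo_url, "path": path, "targetRevision": "HEAD"},
            "destination": {
                "server": "https://kubernetes.default.svc",
                "namespace": namespace,
            },
            "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
        },
    }
    return yaml.safe_dump(application, sort_keys=False)


if __name__ == "__main__":
    print(enable_gitops(
        app_name="storefront",
        repo_url="https://example.com/team/app-config.git",
        path="overlays/prod",
        namespace="storefront",
    ))
```

If the platform team later swaps Argo CD for Flux, the dev team's side of this call does not change, which is the whole point of the abstraction.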

Bart: Is this platform engineering team essentially in charge of tuning the balance between organizational silos and Internal Developer Platform (IDP) abstractions?

Michael: The whole idea here is to give people just enough abstraction to do their job without being robotic. It's a lot of customer service. For example, as an Internal Developer Platform (IDP) engineer building an internal tool set to make the lives of front-end engineers easier, you need to talk to these engineers, understand how they're feeling, and understand their pain. Then, you need to figure out how to resolve that pain for them. Interestingly, platform engineering is more about sales and customer service than it is about engineering. It's about understanding pain and doing your best to fix it.

Bart: One of the things you mentioned previously is the pressure for people to have knowledge about absolutely everything, the idea of being a Jack of all trades. We've reached a point where there is sometimes knowledge overload or expectations that people can be experts about everything. This is unsustainable. Do you see this trend continuing or is there a more realistic path forward? What are your thoughts on that?

Michael: You know how people care about cybersecurity and understand its importance, but they don't take it seriously until something goes wrong. I think the same thing is happening with what we're dealing with right now. There will come a time when people feel so much pain from this that the world will shift. This is a people problem, not a product problem. There will be a time when everybody feels pain and feels it at the same time. Then, naturally, we will shift and do things differently. This is pretty normal. The difference between humans and animals is that humans are self-aware, while animals are not. Humans can change, so they can be blamed for their actions. Animals can only be blamed for their training. One thing that's similar between animals and humans is that we're pack animals. Humans are meant to be with others, not alone. That's why solitary confinement is considered one of the worst punishments in prison. What ends up happening in society is that everybody feels the same way, and that's when a shift happens. I think we're there. I think people are starting to put their foot down and say, "This is too much. We need a better way to do things. We need the next thing." We are now fully prepared for the next thing, and I believe we will get there very soon.

Bart: We've covered a lot in this conversation, but it all started with the topic of organizational silos. If both team silos and abstractions are good, how would you recommend balancing the two to support developers in a typical organization? I imagine there are a lot of companies building Internal Developer Platform (IDP) teams, and your advice could help them avoid mistakes and build a more robust foundation.

Michael: So, it's a good question because my answer really doesn't have anything to do with the tech. It all comes back to the psychology of the human mind. If you want to do silos, abstraction, and make sure every team has what they need, the best way to do that is to learn and understand psychology and the human mind. Every leader in an organization, whether you're in H.R., a technology executive, or a CMO, should learn about psychology and cognitive science. Understand how the human mind works. If you focus on that and don't worry about the tech, you'll be in a very good place. It's really just a people thing.

Bart: You clearly have a lot of strong opinions, and it's healthy that you express them in the way that you do and share them. One opinion you shared recently was that open source is going to take a nosedive. Can you share your opinion with our listeners and tell them how you think open source will evolve in the coming years?

Michael: For sure. This is something that I didn't make up; it's just what we've seen statistically. Enterprise tools are popular, then open source tools become popular, and the cycle repeats. We saw this with Steve Ballmer, the former CEO of Microsoft, who once said Linux was the devil, but years later, everyone was using open source. In 2019, VCs were throwing money at open source tools, but now it's really difficult for them to get funding.

This is a statistic we've seen for 30, 40, 50 years: open source popularity goes up and down, and enterprise tooling follows the opposite trend. I think we're just in the dip of open source right now. For a product to move forward, people need to make money. Unless someone wants to build every single tool without getting paid, the people building open source tools need to make a living.

Typically, open source tools offer support plans, but nobody buys them, so the product isn't making any money. I think what people really want when they ask for open source is a freemium model. When someone says a tool needs to be open source, my first question is, "How many open source projects do you contribute code to on a daily basis?" If you're not contributing to open source, you don't need open source; you need free.

I think the freemium model and free trial model will come back in light of this. Now that we're moving into the enterprise loop of tools, products will focus on Product-led Growth (PLG), which is a bottom-up strategy. You make the product sell itself, and people get value from it in the first five to 15 seconds. Top-down is the sales approach, where you have to enter your name, email, and wait for someone to get back to you. Nobody wants that.

We want to go bottom-up, giving people a free method of using the tool while allowing the vendor to make money. The reason we need to do this is that VCs aren't throwing money at open source tools like they used to. The best way for an open source tool to thrive and succeed is to be built under or bought by an enterprise. We're seeing IBM, Cisco, and Microsoft acquiring open source tools, which are now under the umbrella of a company that's making money. Because that company is making money, they can fund the open source project. I think we're in the enterprise upswing now.

Bart: Now you mentioned this previously, but you recently changed jobs from being an independent contractor to being a full-time employee. I read your thread on X and the reasoning behind it, and I respect your change. As an independent creator, I've been working as a freelancer for the last eight years, and it's a rewarding yet tricky job that requires wearing many hats. Can you share any tips for other engineers who want to share their knowledge with others and perhaps create a business out of it?

Michael: I wanted to shut down my self-employment because I felt that the industry had changed. In the past, independent professionals like ourselves would receive calls from vendors or companies seeking our expertise because of our skills. Now, it seems that people are getting calls based on their social media presence, such as the number of likes, comments, or views they receive. Personally, I prefer to be recognized for my abilities rather than my online popularity. I want to provide value based on my expertise, not my social media following.

My biggest piece of advice is to understand what you're getting into and where you want to provide value. There are many people who create value in different ways. For example, Network Chuck is a successful tech YouTuber with millions of subscribers. His content is more entertaining than hardcore engineering, which is intentional because his audience is looking for an exciting introduction to tech. He creates value for himself by catering to that audience. Similarly, you need to understand what value means to you and recognize that it may not be the same for everyone. Know yourself at your core and focus on providing value in a way that aligns with your strengths and goals.

Bart: Fantastic, that's great advice. Now, in your case, Michael, what's next for you?

Michael: I'm really obsessed with Product-led Growth (PLG) right now. I want to be able to sell it. I've spent 99% of my career writing code and building cool stuff. Now, I want to think about how a product or vendor can show their value, and more importantly, how they can get that value out into the wild. This is the hardest thing right now, as there are many vendors and tools trying to figure out the same thing: how to get people to try their product, care about it, and get excited about it. These are the number one problems that every single vendor has, and that's exciting to me. I'm thinking that if I could figure out a way to help everybody show the value in their product, because there's value in every product - otherwise, the founders wouldn't have created it. Even if that value is only 3% of importance compared to what everybody else is trying to do, that 3% is drastically important, and you have to be able to show people that 3%. Product, specifically Product-led Growth (PLG), is really interesting to me. I want to be able to get people to show value in a product right away. I think that's really exciting stuff, and I think that's the best way I can provide value: by showing people how to provide value.

Bart: And for people that would like to get in touch with you, what's the best way to do so?

Michael: Definitely LinkedIn. I'm still active on X, but I often forget to post there, so I post on LinkedIn two or three times a day. I'm always on LinkedIn, so it's definitely the best place to contact me.

Bart: Fantastic. It's what worked for us, so I can definitely recommend it to others. Michael, thank you for your time, for sharing your knowledge, and for not being afraid to share opinions that may ruffle some feathers. It's healthy to get these conversations out in the open. I hope our paths cross soon, and I'll see you at a CNCF or Kubernetes event. I know you're very active in speaking, so best of luck in all your future endeavors.

Michael: Thank you too.

Bart: Take care.