Bart Farrell: AI agents are starting to do some weird stuff in Kubernetes. They're not just writing code, they're starting to run the cluster. And that raises a pretty uncomfortable question: if an agent can deploy your infrastructure, what happens to tools like Helm? Because we've talked about this before on KubeFM. Helm works, but it also turns into this giant mess of templates, overrides, and YAML that nobody really wants to touch. And in this episode of KubeFM, we go one step further. We talk to Mike Solomon, who looked at a 50,000-line Helm setup and decided to throw it out completely. No charts, no templates, just a Markdown spec and an AI agent that figures out the rest: debugging, patching, even fixing its own mistakes. So the real question is, are we moving from infrastructure as code to infrastructure handled by agents? This episode of KubeFM is sponsored by LearnKube. Since 2017, LearnKube has provided Kubernetes training for engineers all over the world. Courses are given in person and online, to groups as well as individuals. They're instructor-led, 60% practical and 40% theoretical, and students have access to the course materials for the rest of their lives. For more information about how you can level up your Kubernetes skills, go to learnkube.com. Now, let's get into the episode with Mike. What three emerging Kubernetes tools are you keeping an eye on?
Mike Solomon: It's tough for me to say "emerging" because some of them are kind of old, but I'm definitely keeping an eye on them. One that we're using heavily internally is Argo, which of course a lot of people know for CI/CD. But one thing that I find teams use less, and that is quite relevant these days, is workflow automation. A lot of the parallels that I make in the article, and that I'm making generally these days, are to a broader movement to re-architect many different stacks, not just Kubernetes, towards AI-native workflows or AI-native stacks in general. One important aspect of that is workflows: things that proceed step by step and can wait an indeterminate amount of time for a result. Argo has always been good at that, but a lot of the contexts I was in previously used it for CI/CD. AIATELLA uses it heavily for workflows, and I see a lot of things going in that direction. Another tool that's also old, but that I see being used in a really novel way, and that is even more foundational, is vCluster, which in the past was used to create a sort of virtual private cloud within a Kubernetes cluster. Now it's heavily used, and I've been using it, for GPU sharing, which is more and more important with AI workloads, and certainly for us. And then maybe one other that I haven't used, but that I find interesting, is Karpenter with a K, just because I've heard it can save a lot on Kubernetes resource use. And the more we have microagents spinning up, the more interesting it becomes to make sure they don't use too many resources when a Kubernetes cluster is deployed.
Bart Farrell: How did you get into Cloud Native?
Mike Solomon: Maybe one thing to know is that I'm always hacking on many different projects. That article specifically talked about a tool called Autodock, which no longer exists, but it was built in the context of a project that very much still exists and is incredible: AIATELLA, a radiology tool that helps doctors and other clinicians speed up aortic measurements and be more accurate when measuring different regions of the aorta. It can detect a wide range of anomalies, from aneurysms to malfunctioning iliacs, really any reason you would get an aortic scan, and it looks at indicators like maximum and minimum diameter in 12 separate regions. So the article was about two things, but the use case was that particular one, which is kind of incredible. Literally a life-saving project that's heavily using AI.
Bart Farrell: The Kubernetes ecosystem moves quite quickly. It's no secret. How do you keep up to date with all the changes going on in the Kubernetes and cloud-native ecosystem?
Mike Solomon: Maybe one thing to state is that I'm not a particularly experienced or fluent Kubernetes developer, historically speaking. A lot of my knowledge is new, and a lot of it was formed in the post-agentic world, meaning over the past year. But I would say the best way I keep updated is Context7, which I know is maybe an answer that a lot of developers have these days. Context7 is an MCP server that has increasingly more of the world's developer documentation. When I need something in Kubernetes, the first place I reach for is Context7, and most Kubernetes projects of import have documentation there. So it winds up being on demand, based on need. It might not have bleeding-edge stuff, but because it's opt-in, and I think it has crawlers as well, it does a pretty good job. For example, one case off the top of my head where Context7 helped me out was with webhooks and Argo. Basically, when I need something, Context7 is able to step in and give me a hand. I would strongly recommend that any Kubernetes developer, or any developer, use Context7 for discovering new helpful tools.
Bart Farrell: If you could go back in time and share one career tip with your younger self, what would it be?
Mike Solomon: Ah, well, I'm 43 right now, so there are many varieties of my younger self. But maybe one tip is: love the project. When I started my career, I started in open source, and I absolutely loved the project I was working on. Then, as I started getting hired for paid gigs and doing more paid development work, there were many projects that I loved, but also ones that I loved less. And it was very clear to me that my intrinsic motivation mattered a lot. That might not be the case for other folks; different things keep different people showing up and doing good work. But for me, it was very much based on who was part of the project. That doesn't mean you can find that open source energy in every professional setting, but there are many that recreate something like it. So I would tell my younger self to try to bottle up what you have from the open source world and bring that sense of community and belonging into every setting in the professional world.
Bart Farrell: As part of our monthly content discovery, we found an article that you wrote titled Using Claude Code to Pilot Kubernetes on Autodock. So the following questions that we have are designed to explore these topics further. For almost a decade, Helm has been the standard way to package and deploy applications on Kubernetes. You write templates, define values, and render the final YAML. Most teams treat Helm charts as a source of truth for how their infrastructure is configured. What does a typical Helm-based deployment workflow look like? And why has it become so deeply embedded in the Kubernetes ecosystem?
Mike Solomon: Definitely. So we were using Helm with AIATELLA until we weren't, so I can definitely describe what it looked like there. There are many analogies for Helm; for example, Helm is sort of like crates or NPM, but the core idea is being able to pull in different packages. One really classic example is that a lot of teams don't necessarily think about logging as a first-class citizen immediately, but of course that becomes very important. Then you start hearing about Loki and Grafana and maybe Prometheus and Tempo, and you want some way to pull these into your project. When you start googling around, you'll see that these and many other tools have community-maintained Helm charts. So Helm allows you to create a manifest of all the things your application needs, and also to create a manifest for your application, meaning that if it's going to be consumed by somebody else, they can pull it in using a Helm chart. Helm allows for a great deal of composability, and also a great deal of overrides. When you invoke something using Helm, you can pass an arbitrary number of parameters that trickle down to literally every nook and cranny of the deployment. So it's extremely flexible, and that goes with the general Kubernetes ethos of being able to deploy onto many different targets. Other package managers might be a little more opaque; if you're using crates or NPM, you're rarely passing flags to them, but they're designed for ecosystems that are much more hermetic. So Helm is great because it preserves the openness of Kubernetes while having the composability and expressivity of a tier-one package management system.
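To make that override mechanism concrete, here is a minimal sketch of the pattern Mike describes, using the community Loki chart as an example; the release name and values shown are illustrative, not taken from AIATELLA's setup:

```shell
# Add the community-maintained Grafana chart repo and install Loki.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Inline overrides with --set trickle down into the rendered manifests...
helm install loki grafana/loki \
  --namespace logging --create-namespace \
  --set loki.auth_enabled=false

# ...or, for anything non-trivial, a values file applied on upgrade:
helm upgrade loki grafana/loki -n logging -f values-override.yaml
```

This composability plus override flexibility is exactly what makes Helm powerful, and also what lets the "gargantuan" configurations discussed next accumulate.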
Bart Farrell: That repeatability is exactly what made Helm so popular in the first place. But at AIATELLA, you recently split your code base into two repos, one for the FDA regulated ML code, one for infrastructure and deployment. The infrastructure repo inherited a significant amount of Helm configuration. What did that code base actually look like when you opened the hood?
Mike Solomon: Oh, there's a before-and-after story in here, and that's what the article's about. The spoiler alert is that it has zero Helm right now. But when you opened it up, how can I put it, the Helm configuration had become gargantuan, and it was pulling in other Helm charts that themselves were out of date. For example, at a certain point, on the infrastructure we were using, the Loki Helm chart didn't work. So we wound up patching it in all sorts of esoteric ways just to get it off the ground, and then checking that into version control. What we realized very fast, and this is one of the core reasons we went down the road I talked about in the article, is that there was no good semantic layer in Helm. There's expressive Helm, where you write the thing that you want, and there's emergency Helm, where you write the thing you have to write to get the deployment over the line. It's the same YAML either way, but one is your intent and one is the thing you wish you didn't have to write. The only way to differentiate between the two is comments, and those are comments that you'll write for a rainy day, but that probably nobody will read. So when you opened it up, given the complexity of this application, and it's a very complex application, it just had far too much Helm. The other part is that because we were developing quite fast, most of it was machine-written by different agents.
And that led to a huge understandability problem. Helm is, of course, very valuable when you're the one authoring it, but when you're not and it's opaque, it runs into the same problem that vibe-coded code does: it requires a level of auditing and review that in certain cases might be even more painstaking and time-consuming than just writing it by hand in the first place.
Bart Farrell: That sounds like a code base where the cost of making a change is almost as high as the cost of getting it wrong. The infra repo needs to be nimble. Hospitals have unique setups, and the FDA-regulated side has six-month review cycles you can't afford to block on. How do you approach this problem?
Mike Solomon: So a couple of things. One, and this is independent of any of the infrastructure choices we made: splitting the repos gave us a lot of freedom to set up the infrastructure repo the way we wanted, meaning it's not going to be FDA regulated. We have a lot more freedom there, and that gave us the freedom to experiment. And I think the freedom to experiment is what led us to the solution we wanted. Going back to what I said in the previous answer, we knew that the declarative Helm, as it was being written, was extremely fragile. And as you say, it's absolutely correct that the cost of making a change was as much as the cost of fixing it when it broke. So instead, and this is where Autodock came in at the time (we wound up getting rid of Autodock, but we're still very much using a method inspired by it), what we did is just describe our intent. The idea is that Helm is supposed to pull together a lot of things, and it does that quite quickly, of course. But we don't have any time-based urgency, meaning we're fine if the deployment takes half an hour or an hour to spin things up, because realistically every hospital is going to take days; it's not just the deployment, it's also liaising with the people who work there. So it was extremely important for us to have that split, to be able to experiment with deployment strategies in general. And that liberty to experiment is what led us to the agentic route. If it had been more tightly coupled, I think we would have felt the regulators' eyes on us and probably wouldn't have been as emboldened to experiment in the first place.
Bart Farrell: So you have an on-demand cloud environment and an AI coding agent that execute commands on it. Walk us through what happened when you asked Claude to bring up Argo workflows. Where did things start going sideways?
Mike Solomon: Sure, for sure. One thing is that this question has a bit of history in it, because what happened depends on the models you're using. I think that when the coding history of 2025 is written, a lot of it is going to be about multiple groups and movements discovering over time that things branded as coding agents could actually do a lot more than write code. That story has played itself out in numerous ways, probably the most famous of which right now is OpenClaw. But there are many ways this discovery played out in different communities. So the first answer to your question is that we didn't realize it could do it at all. The first thing we had the agents doing was proposing. Let's say you have a Kubernetes cluster up and running and your Helm chart is 98% correct, but you're running into one thing that doesn't work, for example GPU sharing, or some small thing where the agent could either edit the Helm chart and reapply, or just go in and issue a command correctly with kubectl or something like that. One thing you'd find in May or June 2025 is that sometimes it would propose issuing these raw Kubernetes commands directly, because the context window is so small that it doesn't remember it was supposed to be editing a Helm chart, but it's able to issue those commands just fine, and of course, from its training data, it knows what those commands are. So it's offering to do these things for you, and pretty soon you realize that if you just let it go, figure out the problem, and troubleshoot, it will get it right. If it's not able to locate your NVIDIA GPU and you just ask it to power through, in ten minutes you will have your GPU set up and consumed.
And then what you realize is that you've arrived at the outcome that you wanted without having it edit a single line of Helm code. And then what that leads you to is, I wonder if this could just be expressed in a Markdown document where you say the thing that you wanted to achieve and just trust that its training data is good enough and the scope is well-defined enough that it will get to the same or better outcome than the one that's specified in the Helm chart.
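As a concrete illustration of that idea, a minimal intent spec might look something like the following. This is a hypothetical sketch of the pattern, not AIATELLA's actual document:

```markdown
# Deployment intent: inference cluster

## Goal
Bring up Argo Workflows on the existing cluster and make the
NVIDIA GPU schedulable by workflow pods.

## Constraints
- TLS is terminated outside the cluster; do not configure TLS inside it.
- If you are missing credentials at any point, STOP and ask a human.

## Done when
- `kubectl get pods -n argo` shows the argo-server and workflow
  controller Running.
- A test workflow that requests one GPU completes successfully.
```

The bet is exactly as described above: the agent's training data plus a well-scoped goal gets you to the same outcome the Helm chart encoded, without maintaining the chart.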
Bart Farrell: Two things trying to terminate TLS on the same machine is the kind of conflict that rarely shows up in documentation, because most guides assume a clean environment. How did Claude figure out what was going on, and how did it get past it?
Mike Solomon: Really good question. What was happening is that Argo itself, within Kubernetes, was trying to terminate TLS, and Caddy was also running on the box on the outside, the same way Nginx would, and was also trying to terminate TLS. So you had TLS double-dipping, and it just led to confusion. At first, maybe one thing is that I had the advantage of knowing exactly what was going on in the box, because I knew the internals of Autodock. I was able to see that it was really struggling with HTTPS and hitting the walls that traditionally come up when you have a TLS-based issue. So I knew there was stuff outside the Kubernetes infrastructure that could be creating a problem, and I asked the agent to zoom out, look outside, and just look at the system logs in general. When it did, it was able to find the TLS problem. And what that led me to think is: OK, the issue with Autodock, and this is why we got rid of it, is that in a way it's doing too much stuff. It's batteries-included in a way that actually saps the agent's ability to do its work rather than adding to it. So realistically, the best possible outcome, and fast forward to today, this is what we have, is that from the beginning we give the agent the agency to spin up the box it's going to be using, and to keep in persistent memory the process it took to spin up that box. That way it knows exactly what's on the box when it's being spun up. We've chosen a default image that is extremely threadbare precisely so that it doesn't run into these unexpected walls, which means that once it zooms into Kubernetes, it can make correct assumptions about what's going on without the external environment on the bare metal getting in its way.
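For readers who haven't hit this, the usual fix for TLS double-dipping is to terminate TLS in exactly one place. A minimal sketch, assuming Caddy owns TLS on the host and the Argo server is switched to plain HTTP (the hostname is illustrative):

```
# Caddyfile (illustrative): Caddy terminates TLS for the domain and
# proxies plain HTTP to the Argo server on its default port 2746.
# The Argo server itself is started with `argo server --secure=false`
# so it stops trying to terminate TLS a second time.
argo.example.com {
    reverse_proxy localhost:2746
}
```

With one TLS owner, the "walls that come up with TLS-based issues" that Mike describes (certificate mismatches, redirect loops, protocol errors) generally disappear.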
Bart Farrell: With the port conflict resolved, Argo still needed to be configured to work within this environment. TLS is handled externally, so the server's own defaults don't quite fit. Patching Kubernetes resources via kubectl JSON patches is notoriously finicky, even for experienced engineers. What did Claude run into?
Mike Solomon: Claude ran into two things. Well, a couple, three things. One is exactly what you just said: it's extraordinarily finicky. Two is that Claude does not give up, meaning it just patches until it gets it right, and sometimes it gets it wrong; that's important to know as well. And three is that you can drastically reduce the surface area by choosing the correct abstractions up front. A lot of times when Claude was patching things, it made the same mistakes any experienced engineer would. What I realized is that it would keep going until it got it right, but then I didn't necessarily know what it did. Even when I read through the system logs and could reverse engineer how it got to the solution, sometimes I didn't think it was the right one. Sometimes I was wrong in thinking that, and sometimes I was right. But even more importantly, there was no persistent memory layer recording how it arrived at those conclusions. There was no audit trail internally in our team, which meant that the next time we ran into that problem, we might choose the same answer or a different one. So we started keeping runbooks of the correct JSON patches that are always needed for our particular infrastructure. For example, sometimes we're on GCP, sometimes we're on Azure, sometimes we're on a box already running k3s, and sometimes we have a completely clean slate. In all those cases, the patches are going to be a little different. So what we developed in Markdown is a vocabulary of things that might need to be done. We hedge. And then as it's deploying, it can dip into any of those as its internal knowledge base. And then there's the third thing I said, about making sure to choose the right abstraction.
So, most healthcare things are workflow-based and deal with something like a PACS system that is pushing data to one place, while you're writing results and extracting and pulling them to another place. If you buy into a workflow-based abstraction, which is what we did, then you're no longer patching Kubernetes with JSON nearly as much as you're patching Argo workflows. That's a much more limited surface area with a much more constrained set of primitives, which means the LLM has a better chance of reaching a correct solution, and, most importantly, one that you can understand. So I would encourage anybody doing this to really brainstorm up front about what you want the primitive set to be.
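As an example of the kind of runbook entry being described, here is the finicky-but-reusable shape of a kubectl JSON patch. The target and args are illustrative of reconfiguring an Argo server for externally terminated TLS, not copied from AIATELLA's runbooks:

```shell
# RFC 6902 JSON patch: replace the argo-server container args so it
# serves plain HTTP (TLS is terminated outside the cluster).
# Paths are positional ("containers/0"), which is exactly why these
# patches are finicky and worth writing down once they work.
kubectl -n argo patch deployment argo-server --type=json -p='[
  {
    "op": "replace",
    "path": "/spec/template/spec/containers/0/args",
    "value": ["server", "--secure=false"]
  }
]'
```

Codifying a working patch like this into a Markdown runbook is what turns an agent's one-off trial-and-error into a repeatable step.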
Bart Farrell: Once Argo was up and the UI was exposed, you moved on to deploying your ML workload as a workflow. The container image lives in a private registry, which means the cluster needs credentials it doesn't have yet. How did Claude handle that?
Mike Solomon: There's this really interesting feeling, that I think a lot of developers have, about what it means to have a human in the loop. It goes all the way to the comedic and maybe dystopian, where you have these rent-a-human websites where OpenClaw will rent a human in some country to do some act for you because it can't do it itself. Credentials are one of the classic and maybe earliest places in 2025 where this came up: the LLM would run into a problem, realize it didn't have credentials, and then, instead of asking you for them, try to solve the problem without credentials, which would lead it to, for example, create entire mocks or just bypass things. One bad situation we got into was realizing that the LLM was creating, in production code, a bunch of cop-outs that we absolutely didn't want, essentially branches for a state where there weren't credentials, whereas in production we would never run without credentials. What we realized is that the default, at least in mid-2025, was for the LLM to treat a lack of credentials as an engineering problem, as opposed to a let-me-pull-the-human-in problem. So we were really bullish and aggressive with our prompt engineering, writing stuff in all caps at any point where we believed it would be better for it to pull us in and ask for credentials. And that's exactly what's happened. Fast forward to now, and this is after the point the article was written: we have it doing this for Xray on Jira, for services like Socket.dev, all those things. It's not trying to cop out or route around; rather, we know how to proactively tell the LLM the sharp edges it will run into, and when and where to pull us in to get credentials and make sure they're sufficiently secure and audited.
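Coming back to the registry question specifically: the human-in-the-loop step the agent should stop for is roughly the standard imagePullSecret dance. A minimal sketch with placeholder values (the registry host, namespace, and secret name are all hypothetical):

```shell
# A human supplies the actual registry credentials; the agent should
# stop and ask for them rather than mock around their absence.
kubectl -n argo create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username="$REGISTRY_USER" \
  --docker-password="$REGISTRY_TOKEN"

# Pod specs (or Argo workflow templates) then reference the secret:
#   spec:
#     imagePullSecrets:
#       - name: regcred
```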
Bart Farrell: Your inference engine also needs an ingestion endpoint. External systems need to send data that triggers a workflow. Wiring up event-driven workflows in Kubernetes involves multiple moving parts. How did that come together?
Mike Solomon: Maybe this gets to a general ethos we've applied throughout the project, but it started with the answer to this question. We were coming from a logic of REST servers, essentially FastAPI, and the idea, even before using something like Argo, was to build a FastAPI server that you could ping. It essentially acted as a receiver of webhooks that would kick something off and then write something back, and all of that happened over HTTP. What we realized is two things. One is that Argo has extremely rich event-based expressiveness. The second is that sometimes the best thing you can do is keep it really dumb, for lack of a better word: the smallest, easiest possible surface area. So rather than having some sophisticated exchange mechanism, our container literally has a single, unique API: it's a Docker image with a directory called i for input and a directory called o for output. You kick it off, it reads what's in i, and it writes to o. All the magic, of course, happens inside, with the actual machine learning algorithms that are running. But what that simplicity allowed us, and more importantly the LLM, to do was have a crystal-clear contract about what is going on. There is never any doubt about the one and only way this Docker image works. That meant that bolting on event-based workflows, for example having file sniffers, taking a bunch of DICOM files and zipping them up, all sorts of things we try all the time, integrating with PACS systems, all that stuff, went through this very simple contract of an interface with a Docker image. Keeping that contract simple means the LLM has one less concern to worry about and can focus on the integration layer. And we try to take that simplicity to all the choke points in the setup.
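To picture how that i/o contract slots into a workflow engine, here is a hedged sketch of an Argo Workflow step around such a container; the image name, volume names, and paths are hypothetical, not AIATELLA's actual manifests:

```yaml
# Illustrative Argo Workflow: the container's entire contract is
# "read from /i, write to /o". Event triggers, file sniffers, and
# PACS integration all stay in the workflow layer around it.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: inference-
spec:
  entrypoint: run-model
  volumes:
    - name: scan-data              # populated upstream with DICOM input
      persistentVolumeClaim:
        claimName: scan-data-pvc
  templates:
    - name: run-model
      container:
        image: registry.example.com/aorta-model:1.0   # hypothetical
        volumeMounts:
          - name: scan-data
            mountPath: /i
            subPath: input
          - name: scan-data
            mountPath: /o
            subPath: output
```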
Bart Farrell: So at this point, everything works, but it was built through trial and error across a long session. Claude fumbled, debugged, patched, and eventually got it over the line. If you had to do it again on a fresh box tomorrow, you'd be starting from scratch. How do you capture all of that hard-won knowledge so it's not lost?
Mike Solomon: So a couple of things. One thing a lot of developers should know is that Claude and all agents, or most agents, keep their full chat history somewhere, usually in their internals. And they have a certain amount of built-in memory that allows them to draw on significant experiences, though of course they then have to determine what is and isn't significant. For us, we let it fail, and let ourselves fail, as much as necessary, then scanned those failures and codified them into a knowledge book that just goes into Markdown. So basically we have skills. One skill is our clinical trial skill, which literally says how to run the clinical trial end to end. We would just run it, and every time it would run into a new, different set of errors, and those errors would go into the Markdown as optional asides. We don't want to make it too prescriptive, as if it will definitely run into those errors; it's more "if you run into this, try this; if you run into that, try that." Then we rinse and repeat. One really nice thing is that on one day, four colleagues ran the clinical trial in parallel. A lot of it is just interacting with the agent, meaning the agent asks you questions. We use Obra's Superpowers, which is very good for setting up plan-based workflows where an agent asks you questions. So it often opts into using Superpowers, uses that skill to ask you questions and take stock of the answers, and then goes autonomously until it runs into a wall. We also give it extremely explicit stop criteria so that it doesn't paper over anything or solve anything we don't want it to solve. So to answer your question: that failure is something we introspect, codify into the Markdown, and then try again. The fixed point of that system is when you run it, it works, and you don't need to add anything. Then you're done.
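A skill file with those "optional asides" might be shaped like this; the trial steps and failure modes here are invented for illustration, not taken from AIATELLA's skills:

```markdown
# Skill: run the clinical trial end to end

## Steps
1. Confirm the target environment (GCP, Azure, or an existing k3s box).
2. Bring up Argo Workflows and verify the GPU is schedulable.
3. Run the inference workflow against the trial dataset.

## If you run into trouble (optional asides, not guaranteed to occur)
- If the GPU is not visible to pods, check the device plugin first;
  do not edit the workflow spec.
- If image pulls fail with 401, STOP and ask a human for registry
  credentials. Never mock or bypass authentication.

## Stop criteria
- Stop and report if any step fails twice in a row.
```

Each failed run adds another aside, and the fixed point Mike describes is reached when a run completes without producing new ones.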
Bart Farrell: A specification is only as good as its reproducibility. You can have a beautiful document, but if it doesn't actually work when you hand it to a fresh environment, it's just documentation. What happened when you tested that?
Mike Solomon: It broke in ways we didn't expect, and there's a whole process of hardening. One thing we're working on right now, for example: our clinical trials are going to be on Azure, whereas most of our development happened on GCP for a variety of reasons. So we're porting it over to Azure, which surfaces issues we didn't have in Google Cloud. The most important thing is having a meta-skill set up, usually in a CLAUDE.md or AGENTS.md file, where you describe the dialectic workflow that produces these documents. It's like rehearsal in the performing arts: by doing it, you improve the muscle memory of the entire system. At that point it resembles a performance or an athletic activity, where the most important thing is regular repetition in a variety of circumstances.
Bart Farrell: Helm charts can also include comments explaining why things are configured a certain way. Someone could argue that better documented Helm charts would achieve a similar result. What's structurally different about having the reasoning live alongside the instructions in a single file?
Mike Solomon: It's a really good question. One aside I would make is that it's gotten so big now that we actually don't have it living in a single file. We use a convention that's emerged in the community where we split it into multiple files and have top matter in the Markdown announce which aspects of the file might be pertinent to an LLM doing a given task; that way we don't pollute context windows. I say that because right now we have a multi-file setup. But the question is, of course, still valid: why use this rather than a Helm chart with documentation? There, I would say that as soon as you commit anything to a Helm chart, you're writing something that is very rich with intent, because even though it's YAML, it's machine code; it's read by Helm and executed essentially as an instruction. We wanted maximum flexibility in our instruction set, and given that the surface area of stuff Helm could handle for us would be low, we decided it wasn't necessary. The other thing I'd say is that by working in Markdown, you can be 100% intent-driven. In Helm, even though you could have comments that do that, it gets laborious and tiresome to write the intent above every single line, and of course some lines are self-explanatory. So we found we didn't need it. Another thing to note is that agents will generate Helm files, meaning that if you have good Markdown, an agent will generate a Helm file from the Markdown. If the Markdown is underspecified, it will ask you more questions and then inject those answers back into the Markdown. So it's actually trivial for an agent to create ad hoc Helm charts. And this, I would say, points towards an agentic world that a lot of folks are imagining where, for example, there's no app store anymore because apps are created ad hoc.
And here I could argue that there are no pre-authored Helm charts because the Helm chart is created ad hoc, and it feels like we're moving in a similar direction.
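The multi-file top matter convention mentioned above might look something like this; the field names and values are a hypothetical sketch of the pattern, not AIATELLA's actual schema:

```markdown
---
title: Patching argo-server for external TLS termination
relevant-when: deploying behind Caddy or Nginx on bare metal
targets: [gcp, azure, k3s]
---

# External TLS termination

If TLS is terminated outside the cluster, run the Argo server without
its own TLS and do not configure certificates inside Kubernetes.
```

An agent scans only the top matter of each file, pulling the body into context just for the tasks it matches.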
Bart Farrell: If an agent can deploy dev and staging environments from a Markdown file, the obvious next question is production. You mentioned you've already used Claude for production deployments. How do you think about the trust boundary there?
Mike Solomon: What we try to anchor on, as best as possible, is: does this screw up more or less than a human being on our team would? And there, maybe the important thing to say is that, of course, we're building medical devices. Even though the infrastructure part is outside the FDA-regulated part, it's still vital to deliver accurate results to clinicians. For example, we're using it to produce alternative file formats like DICOM SR, which are made outside the official software, not inside it. All of these outputs are still reviewed by physicians, so you can't screw it up. So we have a really honest, periodic internal discussion about what the accuracy rate is and what kind of anti-hallucination patterns we can put in place. Internally, we deem it production infrastructure when we get a highly reproducible result, which means the Markdown is specified well enough that, in the happy-path case, it is as deterministic as written software. Of course, the LLM is predicting the next token, so it will veer in different directions in its reasoning, but if the guardrails are good, then, like a marble going down a funnel, it will end up at the point you want it to end up. That's the test we're applying.
Bart Farrell: You said something in the article that I think a lot of engineers are quietly thinking. You wrote, and I'm paraphrasing, being a developer was nice while it lasted. Is Claude replacing you at your job?
Mike Solomon: It's really tough. It's something that we joke about internally on that project, AIATELLA. The short answer is yes. And the long answer is: what does it even mean to have a job and be employed in that sector in 2026? I think that the way a lot of companies are moving is to think of themselves as collectives of thinkers that spend a lot of time coming up with specs and then making sure that the specs lead to reproducible outcomes. And I mentioned OpenClaw before; a lot of times now, people are treating OpenClaw essentially as a full backend. It's not going to serve a result as fast as a REST server, but for a lot of cases, it's fast enough. That collective requires fewer people than it used to, and one of the unfortunate things you see is that it results in a reshuffling of the cards. Block, for example, a company of 10,000 people, laid off 4,000. Different people can debate the wisdom of a choice like that, but the CEO released a very clear statement about his thinking with respect to doing that, and whether he was right or wrong, time will tell. But you can't fault him for not having thought about it. Obviously, this is a painful decision that he and other people are making, and as I said, time will tell if it was the correct one.
Bart Farrell: So of course, there's these economic ramifications that are quite serious and important that have to be taken into account.
Mike Solomon: And I would say that in a smaller company like AIATELLA, I don't think anybody's job is going anywhere anytime soon. You'd have to ask our CTO and CEO about that. But what I can say is that the work has evolved into much less grinding out code and much more conversation about the thing that we want to build. And I think that conversational approach to building anything is what's going to persist going forward.
Bart Farrell: Mike, what's next for you?
Mike Solomon: I'm always wearing multiple hats and juggling multiple projects, as I think a lot of people who are builders are. I was a founder in another life and enjoy wearing that hat from time to time, and I love the projects that I'm working on. In this particular context, we're talking about AIATELLA, which is a fantastic team, so concretely, one major thing that's next for me is continuing to work with that team. Another thing that's next for me is parental leave, which will be quite nice and different from hacking on code. But maybe one thing I'll mention is a project that I've started with a few friends that brings traditional coding and DevOps methods, of which, of course, what we're talking about, Kubernetes deployment, is one, into industries that haven't been touched by AI yet. So, for example, laundromats, industrial cleaning, funeral parlors, car washes: places that haven't necessarily adopted these workflows for a variety of reasons. And to me, that seems fascinating: the idea of acquainting oneself with these more traditional industries and being very respectful and mindful of the knowledge that they've accrued over sometimes hundreds of years of operating, then considering that this is a unique historic moment where a very powerful technology is being born, and asking to what extent, if any, that technology can help those industries, and perhaps to what extent the technology can create new industries. So it's an exciting time to be alive in that regard.
Bart Farrell: And if people want to get in touch with you, what's the best way to do that?
Mike Solomon: GitHub's great. I'm mikesol on GitHub, and I'm always hacking on open source projects. Right now I have a fun project called MVFM where I'm trying to re-implement the entire TypeScript ecosystem in TypeScript, so that's fun; I like doing these boil-the-ocean things. You can always make a pull request there or raise an issue to say hi. That's probably the best way. Otherwise, I'm @stronglynormal on Twitter, named after strong normal form in functional programming and languages, and you can always at me there. So those are two quick ways.
Bart Farrell: Thanks so much for joining. Hope our paths will cross again in the future. Take care.
Mike Solomon: Absolutely. Thank you very much, Bart.