My pipelines from GitLab Commit to ArgoCD got beaten by FTP

Host:

  • Bart Farrell

Guest:

  • David Pech

A sophisticated GitLab CI/CD pipeline integrated with Argo CD was ultimately rejected in favor of simple FTP deployment, offering crucial insights into the real barriers facing cloud-native adoption in traditional organizations.

David Pech, Staff Cloud Ops Engineer at Wrike and holder of all CNCF certifications, shares his experience supporting a PHP team after a company merger. He details how he built a complete cloud-native platform with Kubernetes, Helm charts, and GitOps workflows, only to see it fail against cultural and organizational resistance despite its technical superiority.

You will learn:

  • The hidden costs of sophisticated tooling: how GitOps pipelines with many moving parts can create trust issues when developers lose local control and must rely on remote processes they don't understand

  • Cultural factors that trump technical benefits: why customer expectations, existing Windows-based infrastructure, and team readiness matter more than the elegance of your Kubernetes solution

  • Practical strategies for incremental adoption: the importance of starting small, building in-house operational expertise, and ensuring management advocacy at all levels before attempting cloud-native transformations

Transcription

Bart: In this episode of KubeFM, we're back with our previous guest, David Pech. This time, we're unpacking a real-world DevOps scenario where a complex GitLab CI/CD pipeline integrated with Argo CD was outpaced by a simple FTP deployment. We'll walk through what went wrong, from container builds and Helm chart delays to Argo CD sync bottlenecks, and explore why sometimes operational simplicity still wins.

The story highlights issues like image version mismatches, lagging reconciliations, and the overhead of declarative GitOps automation. We'll also touch on the trade-offs between visibility, auditability, and raw deployment speed. Stay tuned as we look at the technical friction behind modern Kubernetes delivery. David also shared some great insights about how the adoption of new technologies works in his home country of the Czech Republic.

This episode of KubeFM is sponsored by LearnK8s. Since 2017, LearnK8s has provided training all over the world to help Kubernetes engineers level up. Courses are instructor-led, delivered online as well as in person, to individuals or groups, and students keep access to the course materials for life. For more information, check out learnk8s.io.

Now, let's get into the episode. David, welcome back to KubeFM. It's great to have you with us. How are you doing today?

David: Hello. It's nice to be here. I'm doing great. The weather is nice.

Bart: Now, what are three emerging Kubernetes tools that you're keeping an eye on?

David: Recently, I've been installing Kyverno in production. I like it a lot, and I was surprised how easy it is. The next one is CloudNativePG. I'm watching it closely, and recently there was an interesting split-brain bug whose handling might elevate the project to another level, which is perfect from my perspective. Also, I recently discovered that kubectl, the CLI typically used for Kubernetes, has many more keyboard shortcuts and options than I was aware of. It feels like learning Vim again, at least partially. There are always fascinating projects in the Kubernetes ecosystem to spend time with.
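Part of why Kyverno installs feel easy is that policies are ordinary Kubernetes resources. As an illustration (my own example, not one from the episode), a minimal ClusterPolicy that rejects mutable `:latest` image tags looks like this:

```yaml
# Illustrative Kyverno policy: reject Pods whose containers use ":latest"
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce   # block, rather than just audit
  rules:
    - name: require-pinned-image-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Container images must use a pinned tag, not ':latest'."
        pattern:
          spec:
            containers:
              - image: "!*:latest"
```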

Bart: Good to see that you're busy and very active. We had you on the podcast before, but just for a quick introduction, can you tell our audience about what you do and where you work?

David: Currently, I work at Wrike. I work with infrastructure in a role that combines being an infrastructure specialist and a Java developer. I help the team connect through JDBC libraries to Postgres databases, Kafka services, RabbitMQ, and other services. The company is large enough to have its own set of libraries. Typically, when onboarding an application, you can get a starting template and libraries that solve complex problems effectively, essentially for free. From an infrastructure perspective, this is a very welcome approach.

Bart: And I understand that you are quite certified when it comes to CNCF certifications. Can you tell me about that?

David: I recently earned a new badge, the Golden Kubestronaut, for holding every CNCF certification. I'm not sure if folks are familiar with it yet, but at this point I have all the CNCF certifications, and it was an interesting journey. I'm still a little unsure whether this is useful in the trenches, because most of those certifications are multiple-choice exams, and I can imagine that for many employers they're not that significant. On the other hand, the hands-on Kubernetes certifications like CKA and CKS are very relevant. I'm actually looking to get re-certified in some of them soon, as it's a welcome refresher on the Kubernetes features added over the past three years.

Bart: Very good. Since you mentioned that you recently installed Kyverno in production as one of the tools you're keeping an eye on, I understand there's also a Kyverno certification that came out recently.

David: Yes, there is a certification, and I can definitely conclude that it was useful. Typically, my process is to go over the documentation of the project. For Kyverno especially, I would say it's very welcoming, easy to understand, and has a lot of options and examples. The learning process for the certification was very useful to me.

Bart: And obviously, you're quite into cloud-native, given the amount of time you spent certifying and also considering getting recertified. How did your cloud-native journey start?

David: Originally, a few years back, I started with AWS, mostly around Lambdas. We are still running some Scala services as Lambdas. There are challenges when running a typical Java application that is slow to start and bulky from a memory perspective, requiring a lot of tooling.

We use AWS Aurora, which was another precursor to cloud native for me. Even when consuming it as a managed service, you can easily clone an instance and experiment on the copy without affecting production. From my perspective, that was an early, day-zero lesson in cloud-native thinking.

My actual journey started with Docker and Docker Compose. Through Docker and basic containers, we used Docker Swarm for our first applications, which was very welcome. To be honest, at the time—maybe five or six years ago—I preferred Docker Swarm over Kubernetes because of its simplicity. Eventually, we hit some limitations and were forced to move to Kubernetes.

I did many certifications and extensive research. When I started, I felt I didn't understand Kubernetes. I began reading books, and after about 20 books, I'm still not entirely confident that I fully understand Kubernetes. However, my experience is significantly better than it used to be. It's a space where there's always something new to learn, something that can challenge you or potentially cause issues in production that need tuning.

Currently, I work in a company running infrastructure in several US data centers on-premises with numerous technologies. From all those certifications and processes, I can effectively apply about 60% to my daily job, which I find fantastic.

Bart: Very good. Taking a step back from when you got into Cloud Native, what technologies were you using in the past?

David: I started as a typical Java developer. In the past, I began with Visual Basic 6, which probably not many people remember. I was very keen to switch to .NET version 1.0, not .NET Core. I've had some experience in management, which was a bit of a detour for me in software engineering. Currently, I believe infrastructure is the most interesting place to be because the problems, at least at my scale and in the companies I know, seem very interesting and are still growing.

Bart: You mentioned a couple of things already in terms of books and the certification process, but what works best for you to stay up to date with all the changes in the Kubernetes and cloud native ecosystem?

David: Given my interests, I'm not able to keep track of all technologies. I focus on up to five, especially core Kubernetes. I mentioned projects like CloudNativePG; I closely follow their changelogs on LinkedIn and GitHub to see what's been fixed, what new features are available, and what they're considering.

My technology tracking mostly happens through conference attendance. I particularly enjoy Cloud Native Rejekts, and KubeCon makes a good annual event. These conferences offer talks for diverse audiences, so I can always try something new. Recently, I attended a session on gang scheduling, which seems significant for machine-learning workloads at scale, an area I'm not very familiar with but find interesting.

Bart: If you could go back and share one career tip with your younger self, what would it be?

David: Sometimes I was too hard on myself. I remember the time when I was running Gentoo Linux. I don't know if many people are familiar with it, but one of its properties is that you compile everything on your own machine. It was very time-consuming to compile the whole kernel and every possible library with each update, but at the same time, I learned a lot. In my next life, I think I could skip that experience and save myself a lot of time.

Bart: With that being said, we want to jump into our core focus of today, which is part of our monthly content discovery. We found an article series: "All my DevOps pipelines from GitLab commit to Argo CD got beaten by FTP." You encountered an interesting challenge when supporting a PHP team after a company merger. Can you tell us about the contrasting approaches to cloud technologies between the two companies that collided in this situation?

David: First, I need to provide a disclaimer that I was involved in this. From my opinions, you can probably clearly see which company I'm associated with. I want to emphasize that this is not meant to be a criticism. These are different approaches, and I think both are valid. They have their place and time in the industry. Even though I prefer one approach, I'm not criticizing the other as invalid because I have used both in the past and can understand each perspective.

Let's say we have company A and company B. Company A is the larger entity in the merger, and company B is being merged into it. Company A has a legacy PHP application with no framework, focused on small applications and fast delivery. They have many clients who want quick implementations. Typically, they deploy through FTP—zipping or tarballing everything and transferring it to an IT person who can deploy it on Windows IIS (Internet Information Services).

The companies differ almost entirely in their approaches. Company B runs on Kubernetes, uses Argo CD, follows GitOps principles, and has extensive GitLab CI/CD integration. While company A handles many small projects with limited support, company B focuses on a few long-term projects with strict SLAs.

Their payment models also differ. Company A is paid for one-off implementations, while company B has long-term SLA contracts where incident management matters significantly. Their customer bases vary too: company A serves small to medium-sized clients with limited experience, whereas company B works with large enterprises.

Even their cultural approaches contrast—company A prefers on-site work, while company B operates mostly remotely. And somehow, someone thought merging these two teams was a smart idea.

Bart: Now, one of the aspects that stands out the most in your story is how company A's approach contrasted with modern cloud-native principles. Could you elaborate on their legacy PHP development approach and how it compared to containerized application delivery?

David: Okay, so what did Company A have at the time? They had a custom PHP application without any framework. They couldn't introduce something like Laravel or Symfony—everything from top to bottom was custom-built. There was one person in charge who was the architect, and others were not challenging the design.

As it was a homegrown solution, it had several layers with customizations that were not well thought through. The base was meant to be MVC, but in the views, you could also trigger logic, which was very problematic. It was not object-oriented at all, but based on functions. On one hand, this made the flow easier to understand compared to many Java applications. However, the logic was heavily dependent on function existence. You could override functions, but you couldn't wrap functions inside each other, and there were other limitations.
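To make that function-existence mechanism concrete, here is a hypothetical sketch (file and function names invented) of how such a core allows overriding but not wrapping: whichever definition loads first wins, and the override has no way to call the original.

```php
<?php
// customizations/render_price.php (loaded before the core)
function render_price(float $amount): string {
    return number_format($amount, 2) . ' CZK (incl. VAT)';
}

// core/render_price.php: the core only defines the function if no
// customization has already claimed the name
if (!function_exists('render_price')) {
    function render_price(float $amount): string {
        return number_format($amount, 2) . ' CZK';
    }
}

echo render_price(199.0); // prints "199.00 CZK (incl. VAT)", the override wins
```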

Copying and pasting worked fine initially, but as the layers evolved, the extensions became difficult to maintain. Typically, the customer was provided with extra components on top of the core, which they called customizations or extensions. Essentially, these were added PHP files that overrode something in the core, similar to old-school C-style symbol overriding.

A significant aspect was that they didn't access the database directly. The entire company was based on a large ERP system, which was the primary data perspective. This framework queried through a custom API (not REST) without touching the database. Customers claimed the main advantage of the ERP system was having a single database to manage everything from accounting to manufacturing to e-commerce. Technically, this was not easy to implement.

The PHP application needed a Redis caching layer. Everything pulled from the API was cached in a similar manner to the API's design. This added another layer of complexity, requiring a deep understanding of how the API worked. The API was not designed around the website's needs but followed a typical CRUD model.
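A hypothetical cache-aside sketch of that layer, assuming the phpredis extension (the ERP client below is an invented stand-in for the custom, non-REST API):

```php
<?php
// Invented stand-in for the custom ERP API client described above
function erp_api_get(string $path): array {
    return ['path' => $path, 'price' => 199.0]; // stubbed response
}

// Cache-aside: return the Redis copy if present, otherwise query the
// ERP API and cache the result with a short TTL, keyed like the API call.
function fetch_product(Redis $redis, int $id): array {
    $key = "erp:product:$id";
    $cached = $redis->get($key);
    if ($cached !== false) {
        return json_decode($cached, true); // cache hit
    }
    $product = erp_api_get("products/$id");
    $redis->setEx($key, 300, json_encode($product)); // keep for 5 minutes
    return $product;
}

$redis = new Redis();
$redis->connect('redis', 6379); // hypothetical Redis host
var_dump(fetch_product($redis, 42));
```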

Importantly, the team lacked operational experience. Once they created the application, they essentially "threw it over the wall" to another team for running it. They had very limited feedback on the application's performance. Given that customers were typically small, there was little motivation to tune the system. From an observability perspective, there was practically no monitoring—just some error logging in Postgres.

Bart: From a cloud-native perspective, there were several concerning practices in what you described. What specific challenges did you identify in company A's approach that would make Kubernetes adoption difficult?

David: There are many challenges. It is difficult for customers to understand what is happening. In the traditional approach, where updates come as zip patches containing a few files to replace in the installation, it's hard to imagine a customer adopting something as sophisticated as Kubernetes in their own environment.

If you have a product with increased complexity and no automated testing, this is typically a no-go for most companies nowadays. If a team is not willing to take operational feedback, I don't understand why they would need a more complex, self-healing application environment.

Many of these problems arise from a lack of operational perspective. As a developer, things are much simpler compared to understanding what's happening in production. This approach doesn't scale. For example, the company had annual upgrades that required manual updates for customer-specific customizations. Each year, they noticed the update process was taking longer. While the company was also growing and gaining more customers, the process was not scaling as efficiently as expected.

Bart: Now, you mentioned that company A had attempted to use Docker, but it seems they misunderstood fundamental container concepts that are crucial in Kubernetes environments. What specific container anti-patterns did you observe in their implementation?

David: Typically, when adopting containers, organizations initially mimic their virtual machine approach. They create a Docker image with everything inside—PHP, Apache, Redis, Elasticsearch—all running in a single container. As building these images becomes complex, developers realize they need a smarter approach.

Given some limitations with mounting volumes in Docker on Windows, they developed an FTP solution. They bundled an FTP server inside the container, allowing developers to upload and work with files through FTP or mount it within Windows.

This approach was very disconnected from best practices and made the container resemble a typical virtual machine more than a lightweight, efficient container. It seemed that even the developers didn't fully understand the fundamental differences between containers and virtual machines.
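By contrast, the usual corrective is one process per container, with code mounted for local work instead of an FTP server baked into the image. A hypothetical docker-compose.yml sketch (service names and images are illustrative):

```yaml
# One process per container; Redis (and likewise Elasticsearch) run as
# separate services instead of living inside the application image.
services:
  web:
    image: php:8.2-apache
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html   # code is bind-mounted, not uploaded over FTP
    environment:
      REDIS_HOST: redis
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
```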

Bart: Given these challenges, you created a proof of concept with Kubernetes and cloud-native tooling. Can you walk us through the solution, particularly how you used Helm charts, GitLab CI, and Argo CD to create a proper cloud-native platform?

David: I asked myself, "How can I help, and where do people typically waste time?" Every manual step in this pipeline is redundant. The end goal was to commit and do nothing else, with everything built automatically in the GitLab CI/CD pipeline.

We needed to build pipelines in the GitLab repository for building containers. The service was split into several containers, with all dependencies built as separate containers. I added steps to check code quality, such as introducing PHP linting to track warnings.
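For illustration, a minimal .gitlab-ci.yml along those lines might have a lint stage followed by per-component image builds; the job names, paths, and images here are my own assumptions, not taken from the episode:

```yaml
# Hypothetical sketch: lint the PHP sources, then build and push one
# container image per component using GitLab's predefined CI variables.
stages:
  - lint
  - build

php-lint:
  stage: lint
  image: php:8.2-cli
  script:
    # php -l syntax-checks each file and fails the job on errors
    - find src/ -name '*.php' -print0 | xargs -0 -n1 php -l

build-app-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/app:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE/app:$CI_COMMIT_SHORT_SHA"
```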

We also needed to discuss configuration because previously it was baked into a PHP file; we moved it to environment variables. The Kubernetes installation was done through Helm charts, which are probably the gold standard by now, and Argo CD was the deployment tool.

I also created separate Git repositories that Argo CD monitored, so a YAML version bump there was all it took to deploy a change to the Kubernetes cluster. From the developer's perspective, after committing PHP code, they would get a URL in about two minutes showing their changes deployed to a completely separate environment. This is now a standard approach.
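A hedged sketch of what one entry in such an environment repository might look like as an Argo CD Application (repository URL, names, and namespaces are illustrative assumptions):

```yaml
# Hypothetical Argo CD Application: Argo CD watches the environment repo
# and syncs the Helm chart whenever a committed YAML version changes.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: php-app-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/platform/php-app-env.git
    targetRevision: main
    path: charts/php-app
  destination:
    server: https://kubernetes.default.svc
    namespace: php-app-dev
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```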

Bart: After your initial Kubernetes prototype, you expanded the solution to handle customizations. How did you leverage cloud-native concepts like GitOps and CI/CD pipelines to manage the complexities of customer-specific deployments?

David: So, the core component I was talking about did not evolve much, as I explained, because the team was typically doing customizations for specific customers and spending most of its time on them. Once I had the application core running, I was able to build a Docker image specific to each customer, with those customizations layered on top.

You can imagine I started from a Dockerfile that took the core image and added the customer's extra PHP files as a layer. That ran through the GitLab CI/CD pipeline, produced a commit for Argo CD to pick up, and landed in a dedicated environment in the Kubernetes cluster. Continuing from the previous discussion, you could get around 20 different URLs, each specific to a customer.
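The per-customer image described here could be as small as a two-line Dockerfile; the registry, image name, and paths below are hypothetical:

```dockerfile
# Hypothetical per-customer build: take the shared core image and layer
# the customer's PHP customizations on top of it.
FROM registry.example.com/php-core:1.4.2

# Files here shadow functions in the core via the override mechanism
COPY customizations/ /var/www/html/customizations/
```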

I should mention that the core had a single git repository, and each customer typically had its own repository. You could commit to either one and get the result of updated core and customer repositories in various environments. It was very smooth. One surprising benefit was that you could see how some customer environments might not start after a code change.

Bart: Despite creating what many in the Kubernetes community would probably consider a technically superior solution, it wasn't ultimately adopted. What resistance did you encounter from developers when trying to transition them to container-based workflows?

David: This is the failure part of my story, to be honest. When I created everything, the company B team adopted it right away. They didn't see much benefit because they thought it was a standard they already had in other projects. However, company A was struggling to adopt it.

When I discussed this in depth, I understood that, for example, they don't use the Docker image much. They still have a PHP installation with debugging on their own laptops, with each laptop having a custom, unique environment. I realized they didn't trust Docker at that point.

I understood that I was trying to break into some kind of prison. The team was already declining Docker images before I entered the picture. Now I'm bringing something more advanced and complicated that might have value. There were no complaints about the end value itself.

The issues were that it was hard to understand, had too many moving parts, took too long to build and test an image, and required losing local control. Developers needed to trust that something good was happening remotely, even when they were responsible for handing over patches to the customer.

Psychologically, it was difficult to switch—similar to transitioning from a small team to a larger one. If you don't trust the supporting team, it just can't work this way. I was trying to onboard the developers from company A to this model, but I was not successful.

Bart: Apart from, or we could say beyond, developer resistance, what management and customer factors complicated the adoption of your Kubernetes platform and cloud-native delivery approach?

David: I was not familiar with company A's approaches or past decisions, which might have been viable. For example, I've multiple times wanted to break the logic of running website applications on Windows servers—due to my bad personal experience—and move them to Linux.

The company was running mostly on Microsoft tooling, and it was very hard to onboard to something else. They even made their customers follow this principle. For example, with technologies like Kubernetes and Docker that are built on Linux (even though some Windows versions exist), the benefits are not the same. If you try to advocate for a different stack, you already have a problem from day one.

Now imagine that company A had literally no cloud footprint. In this century, that's hard to understand. I even heard a story about their server room in a city hit by flooding, where they actually had to haul the servers out of the building when it flooded.

If you make customer implementations your primary success metric and evaluate the team long-term on this single metric, this is what you get: a team very performant in deployments but not focused on tracking the number of bugs produced with new upgrades or patches. Unless there's significant customer feedback, the bug count isn't considered a significant metric.

This was very much aligned with the company's culture. While this might sound critical, I emphasize that this could be a viable approach, and I mean no disrespect. When entering a merger from an inferior position, it becomes easy to understand who is in charge.

Bart: The article title mentions FTP beating GitLab commit to Argo CD. In your view, was this truly about the technical superiority of Kubernetes and cloud-native tools, or were there other factors that were more significant?

David: It was mostly a cultural and organizational issue. I would say the technical part was not significant—probably not even 10% of the problem. If your customers are already onboarded on a different technology, breaking that is hard and can take years. If you want to offer them a better solution, it's not necessarily about immediately switching from Windows server to Linux.

Even when you introduce a new version or try to promote it, it requires learning multiple cloud-native technologies. Though these skills might be more valuable for future employment, you might not implement them in your current role. Any change requires significant effort and advocacy. When you encounter an environment operating at completely different levels, transforming it can take years.

You need to decide if the effort is worth it and whether you can influence things from your position. Alternatively, you can merge and join the company's existing trajectory. Both approaches are viable.

In the Czech Republic, technical nuances don't matter much, even from a customer perspective. Customers typically have limited budgets compared to the US. Software as a service is not prevalent, and they often compare solutions from major companies like Apple and Google to custom solutions from small teams. It's a very different experience when you have a limited team building a product that has been developing for a decade.

Bart: It's good to know that the Czech Republic is hype-proof. People are being sensible about how they approach technologies. Looking back at this experience, what key lessons would you share with platform engineers who might face similar situations when trying to introduce Kubernetes and cloud-native practices to organizations running legacy systems?

David: The team needs to be ready and willing to learn and move. To implement change, the team must first understand that there is a problem to solve. If the team feels confident that they don't have any issues and are satisfied with the status quo, there's no motivation for change.

Secondly, management needs to support the change and advocate for it at all levels. If you're trying to introduce a change and management makes it voluntary, you become a "second-class citizen." Why would anyone change if they don't see a problem, and their manager isn't pushing or advocating for the solution?

If you approach your manager and they respond with a lukewarm "that's nice," it's not sufficient. You need to first sell the idea to them and get them onboarded. They must help with the effort; otherwise, the initiative can be easily abandoned.

It's also essential to check if your customers are ready for the solution, especially if you're not running a software-as-a-service platform where you can directly operate and push the product. Determining whether customers can operate the solution is a critical "go/no-go" decision.

Any change requires significant effort. From my experience, if you don't advocate for change at all these levels, you're not guaranteed to succeed—whether you're switching an operating system version or introducing a new platform.

Bart: You closed your article with an example about a top accounting software company that's still using 32-bit solutions despite customer needs for more memory. How does this relate to your cloud migration experience? And what broader pattern does it highlight about the challenges of modernizing to cloud-native architectures?

David: Maybe just to get back to the example: the company with the 32-bit solution has a hard limit. Opening a large Excel file can take around three gigabytes of memory, and a 32-bit process simply cannot address more than that, so you are not able to get past it. It is very interesting because when you open a zipped file, as most are nowadays, you have no idea how large it will be in memory. It's typically unexpected, and people are not aware of it when it happens.

If you reach the support of this company, they can clearly say, "Yes, this is the core of the problem. We have this limitation and are not planning to do anything about it because our software is built this way. We can help you with some tooling, for example, to split large files, but that's it."

I think this approach is very valid if you are retiring in a few years and would like your company to go out of business soon. If you can clearly communicate this to your customers and they are fine with it, why not? But this very well illustrates how hard transitions can be, even at basic levels.

You can imagine: why don't they compile the software for 64 bits? They simply decided not to do it. It's not just about the technical part; it's also about customer expectations and whether they are mostly satisfied. Of course, any change in major areas is very expensive for a software company.

If you have outdated solutions—and we all do in our daily jobs—with certain limitations, and the company clearly states its boundaries, I'm completely fine with that. I don't mean every company needs to modernize or bring something new. You can plan to switch technologies or products in five years to something more modern with different approaches.

When I get back to the discussion about the Czech Republic, you can't compare it to Google or Apple, especially here. Many ERP systems are custom-built, perhaps from the '90s. A large software house might have no more than 100 engineers handling everything: accounting, sales, manufacturing, e-commerce, reporting—you name it. They've likely modernized several times but are still far from the user experience of modern solutions, and that's fine.

Some products will die out, and new ones might emerge, but that's how it is. Customers don't need to be angry about this fact. The major message is: you don't need to move to cloud-native architecture if you decide not to. If you are transparent and it brings benefits, why not consider it?

Bart: I really like that. It's the idea of "start with the end in mind," the famous line about setting objectives: based on the objectives you have, what systems will you need to achieve them? That should be transparent and come down to an honest choice. I think there's a lot to be said for that in a world with so much hype, where people sometimes feel pressured to use something that might not be answering a real question.

I also think you've done an interesting roundabout promotion for the Czech Republic as a country we should be more interested in, particularly in terms of how technical decisions are made. Vendors might have a better approach to responding to issues and having meaningful conversations. I was lucky enough to be in the Czech Republic last year and hope to return again this year.

To finish up, for those working in environments resistant to Kubernetes and cloud native adoption, what practical advice would you give for making incremental progress with containerization while respecting existing constraints?

David: Start small. This is absolutely number one, and you need to have some level of expertise in-house. These are the two most vital parts of the message. We can imagine that building any form of container in any technology is the first step. Then you need to understand how it gets run in any form of environment.

People tend to see some shortcuts—like spinning up a GKE cluster with autopilot and just throwing some YAML into Kubernetes—but that just doesn't work. Once you've encountered the first problem, you have very limited debugging experience. Even if you've hired a fancy consultant and are paying a lot of money, it doesn't work.

People typically think it's easy to run an application on a Linux server. Every IT person with an operations background can do it. But they greatly underestimate how much harder it is to run it inside containers, whether in Docker Swarm, Docker Compose, or Kubernetes. It is very different.

The operational parts need to be in-house, at least 80%. If you are not willing to make this investment and transition in mental model, don't go there. Just stay where you are, or maybe pay someone to completely outsource this part.

One key takeaway is that without operational feedback or experience, what rational motivation would a team have to improve in these areas? Why would you introduce observability tools if observing isn't part of anyone's job? Without that feedback loop, a lot of these technologies are completely redundant.

In the Czech Republic, many companies develop software but are often far from operating it. So if you are not willing to progress to the next stages, don't onboard or build it.

Bart: Speaking of doing things, the last time we talked you were expecting a baby. How's this round of parenting going?

David: I need to shout out to my wife; without her help, I would be nowhere. I have two healthy girls. One is currently six months old, and she has a nice voice, especially during the night. That's all I can say.

Bart: What's next for you?

David: Currently, I might start my first open source project that would tackle Postgres configuration. It might be called pgautoconf, but I'm not sure if I'm willing to start it just yet. Stay tuned, and hopefully there will be some development soon.

Bart: What's the best way for people to get in touch with you?

David: LinkedIn. Just ping me there; I'm there frequently.

Bart: That's what worked for us. That being said, it was wonderful having you back for a second episode of KubeFM. I'm wishing you nothing but the best with both your personal and professional projects, and I'm looking forward to hearing what you decide to do with this Postgres project. Thank you so much for joining us, David. Much appreciated.

David: Much appreciated. Thank you very much.

Bart: All right, cheers.

David: Goodbye.