How we are managing a container platform with Kubernetes

Host:

  • Bart Farrell

Guest:

  • Ángel Barrera Sánchez

In this KubeFM episode, Ángel Barrera discusses Adidas' strategic shift to a GitOps-based container platform management system, initiated in May 2022, and its impact on their global infrastructure.

You will learn:

  • The initial state and challenges: Understand the complexities and inefficiencies of Adidas' pre-GitOps infrastructure.

  • The transition process: Explore the steps and strategies used to migrate to a GitOps-based system, including tool changes and planning.

  • Technical advantages: Learn about the benefits of the pull mechanism, unified configuration, and improved visibility into cluster states.

  • Developer and business feedback: Gain insights into the feedback from developers and the business side, and how they were convinced to invest in the migration.

Transcription

Bart: In this episode of KubeFM, I got a chance to speak to Angel Barrera, who is a senior platform engineer at Adidas. We spoke about Adidas' journey in transforming its container platform management through a strategic shift to a GitOps-based setup. This was initiated in May 2022. This move marked a crucial evolution in handling Adidas' extensive global infrastructure, which spans multiple continents and supports thousands of developers. Before GitOps, managing configurations across dozens of clusters was time-consuming and complex, often leading to deployment delays. The new approach aims to streamline operations, enhance resilience, and improve visibility into configuration statuses both globally and locally. Check out this episode as we dive into how this transition has reshaped Adidas' technological landscape, driving efficiency and maintaining its leadership in the fashion industry. Stay tuned! We unpack the details and challenges of this monumental shift and prepare for the next phase of innovation at Adidas. This episode of KubeFM is sponsored by our friends at ControlPlane. CNCF Flux serves as the backbone of modern cloud-native, GitOps-powered application continuous delivery. Through its declarative approach, automation, state reconciliation, and seamless Kubernetes integration, Flux is adopted by thousands of organizations worldwide. However, regulated industries have additional security and compliance needs when operating open-source software at scale. This led ControlPlane to hire Flux core maintainers and build Enterprise for Flux, an enterprise-grade distribution equipped with minimal, distroless images, build assurance, streamlined vulnerability management for all Flux instances, and 24x7 365 enterprise support. ControlPlane and Enterprise for Flux ensure the longevity and sustainability of CNCF Flux while providing tangible value to regulated industries worldwide. All right, Angel, thank you so much for joining us today. 
Could you tell me about three Kubernetes tools that are catching your attention?

Ángel: Sure. The one that I'm looking at most closely these days is Virtual Cluster. This is a technology I was always curious about, especially around multi-tenancy models in Kubernetes. We all know about namespaces in Kubernetes as a way to segregate a cluster somehow, but with Virtual Cluster you can create control planes on top of a control plane. This is pretty interesting for some use cases, like A/B testing different Kubernetes versions, or giving different users or tenants the possibility to spin up a newer Kubernetes release to test their applications, or the other way around: if they have a super old version of some component that needs a specific Kubernetes release, they can spin up a virtual cluster just for that use case and unblock the main cluster's upgrade process. So Virtual Cluster is something that I'm keeping an eye on. I think it's really interesting. The second one would be Crossplane. Crossplane is really interesting in the infrastructure area because it tries to fill the gap between platform users and platform owners. Platform users can request infrastructure components, databases, networking attachments, or whatever from the platform owners in a declarative way, and platform owners can create these kinds of modules that are plug-and-play and reusable. Crossplane is something I already tested two years ago. It was pretty rough at that time, with some problems with etcd and all the CRDs, but I have a really good friend working at Upbound, the company behind Crossplane, and I know it has evolved a lot over time. The third one I would say is service mesh, whatever service mesh it may be, that can plug in eBPF programs to interact with packets. For some use cases that could be really interesting. One of them could be observability and tracing, which could run at the kernel level instead of at the container or application level. So for me, these three technologies are really interesting these days.

Bart: Can you tell us a little bit about your background, what you do, who you are, and where you work?

Ángel: Absolutely. I'm Ángel Barrera. I live in Madrid, Spain, and I'm currently working at Adidas, a super big retailer known by everyone in the world. My role there is senior platform engineer, level two; I don't know how better to explain it. In the end, my role is to be an enabler for different developers. We have around a thousand developers working across the globe, so you can imagine how difficult it is to manage the container platform at Adidas. It's super challenging. So yeah, I'm a platform engineer working there, enabling different teams.

Bart: If I may ask, when did your job title become platform engineer? And what were you before that?

Ángel: At Adidas, I just joined as a platform engineer. Eventually, I got the senior keyword as a prefix. Before joining Adidas, I was working at a startup where I was the man who did everything. Now, I'm a platform engineer at Adidas.

Bart: And how did you get into Cloud Native?

Ángel: That was nice. I don't remember exactly when it was, but I guess it was 2016 or 2017, or even earlier. I was working for one of the largest banks in Spain. At that point in time, I was a Java architect, a Java expert, working on the backend architecture for the bank. We were transitioning from SOAP web endpoints to JSON and RESTful technologies, and at the same time we were dockerizing this architecture to make it compatible with a container orchestrator. That is where I found Kubernetes, OpenShift, and these technologies, along with the cloud. AWS was my first cloud, let's say. I was part of the decision, for the platform of the bank, about what to use: whether it was OpenShift, Kubernetes, or DC/OS from Mesosphere at that time. It was really nice discovering the cloud, all its capabilities, and the container orchestrator. I knew Docker before because, as a developer, we were used to using Docker every single day. But for continuous deployment and cloud native, it started at that time, when I was a Java architect working in a super large bank in Spain, a long time ago.

Bart: Did it help you jump-start your cloud-native career?

Ángel: Yes. We built a project, a SaaS, with a friend of mine, Paul Rosselló, who is working at Giant Swarm, a super nice company providing managed services and managing these clusters. We met a lot of people at that time. I remember those days with a lot of love and passion because we built something that was super interesting and super challenging. It was like namespace-as-a-service, as a SaaS. We had to build a hard multi-tenancy model on Kubernetes when there was none before. We met people from Google, AWS, IBM, and Docker. It was really interesting, and different companies were also interested in our model. They were right, unfortunately: the model of sharing a namespace in another Kubernetes cluster was not the right one, for multiple reasons. The winner was to create this multi-tenancy model based on different clusters. It was nice to fight against giants at that time.

Bart: Kubernetes Ecosystem moves very quickly. How do you keep up to date with all the changes that are going on? What resources work best for you? Are they blogs, YouTube videos? How do you stay on top of things?

Ángel: That's a good question. The CNCF ecosystem and everything around it moves super fast and changes a lot, so it's hard to keep track of everything. What I do is follow some key accounts on Twitter, or X, whatever it's called now. For example, Learnk8s, and some CTOs from different companies, like Darren Shepherd from Rancher and Henning Jacobs from Zalando. They're nice to follow. Also, Sysdig used to write a "What's new in Kubernetes 1.x" blog post covering the key content of each release. The Kubernetes release notes contain all the information, but they're probably too much for regular users; if you want to get to the important bits, Sysdig writes a nice recap of what's new in each Kubernetes release. So, Twitter mostly, sometimes LinkedIn, but LinkedIn is more for job seeking. And blog posts: Sysdig, plus companies like Uber or Lyft also have nice technology blogs. It's pretty nice to read them.

Bart: If you could go back in time and share one career tip with your younger self, what would it be?

Ángel: So I'd say push as much as you can while you are young. In your 20s or 30s, you should push as much as you can and pursue your goals because as you grow both in your family and job, you will get more responsibilities, meaning you cannot do all that you were doing before. Push as much as you can when you are young because your brain is not the same in your 20s or 30s. This is probably relatable to me because I feel exhausted after 11 years working on this. But in the end, it's really rewarding when you see the performance improving in your company or in a product that you're building. It's really rewarding, but it's also challenging to keep up to date and maintain this level as you grow older.

Bart: All right. So for the main topic of today, as part of our monthly content discovery, we found this article that you wrote titled, How We Are Managing a Container Platform. The next questions will be taking a closer look at this. This is the first time we'll be mentioning a specific date on the podcast, but something happened on May 10th, 2022. It's a very specific date, but one that you're very clearly attached to. What happened?

Ángel: No, I mean, it's specific, that's true, but that is the date of the first commit we made on the GitOps repository, when we decided to go all-in on the GitOps approach. Before going all-in, we were testing it out, trying different approaches and different project structures, before saying, "Okay, this is what we want, let's start doing it in a proper, professional way." And that was the day, the 10th of May 2022. It's really specific, but that first commit says, "Okay, this is the bare minimum project structure," and we've kept committing changes to that repo ever since.

Bart: Let's take a step back and explore the status quo before you started the migration. Could you describe the infrastructure and practices before investing in GitOps?

Ángel: So... this starts in, I mean, 2018. Two people were in charge of creating the container platform for Adidas, and fortunately those two people had the right ideas in mind, the right mindset. What was that right mindset? They thought about having everything as code in different repositories. At that point in time, there was no Flux, no GitOps, no multi-cluster configuration tooling. They had to develop something to solve the problem of managing many different clusters for a large company, with all the constraints that come with it. They started on-prem with five to ten clusters, so it was not that many, all on-premises. They created a configuration structure with one Git repository per cluster and one centralized Git repository hosting the common configuration for all the clusters. They had different branches for different purposes, like the configuration for different components: for example, the ingress was in a branch, or in the shared configuration repository they had different environments like development, production, and so on. Then Jenkins ran pipelines that merged the configuration and applied it to the actual clusters. And that worked fine. Kudos to the team, because in 2018 there was nothing better, to be honest, and it kept working until 2022. That was the project and configuration structure, how we were managing clusters at that time: two people doing their best with the right mindset, developing this mechanism of merging configuration and applying it to the clusters.
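The merging mechanism Ángel describes can be pictured with a short sketch. This is purely illustrative: the repository layout, keys, and values are invented, not Adidas' actual configuration.

```python
# Sketch of the pre-GitOps flow: a shared repo holds common config,
# each cluster repo holds overrides, and a pipeline merges the two
# before pushing the result to that one cluster. Names are illustrative.

def merge(shared: dict, cluster: dict) -> dict:
    """Recursively overlay cluster-specific values on shared defaults."""
    result = dict(shared)
    for key, value in cluster.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

shared_repo = {"ingress": {"replicas": 2, "class": "nginx"}, "env": "development"}
cluster_repo = {"ingress": {"replicas": 4}, "env": "production"}

rendered = merge(shared_repo, cluster_repo)
print(rendered)
# A push-based pipeline would now apply `rendered` to one cluster;
# with dozens of clusters, this merge-and-apply ran once per cluster.
```

The key point is that the merged result only existed transiently inside the pipeline, which is why it was hard to know exactly what had been applied.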

Bart: And in terms of the deployment strategy before the migration, was it all GitOps based?

Ángel: No, no, no. As I mentioned, it was a push model: Jenkins was responsible for merging the configuration and applying it to the clusters, so it was not GitOps-based. But since the team at that time had the right mindset and all the configuration was already in code, it was easy enough to take it and create the project structure for GitOps. At the end of the day, everything was already as code; it's the deployment process that is now radically different.

Bart: I imagine the process grew organically to this point rather than being architected from the very beginning.

Ángel: Exactly. So eventually, as I mentioned, the team started with on-prem with five to ten clusters, I don't remember exactly, but then AWS came on the scene at Adidas. We had the possibility to spread these huge clusters into smaller ones across the globe. Instead of having just clusters installed in Germany at the headquarters of Adidas, now we have clusters all over the world, from America, Europe, China, and other regions. Instead of having 5 to 10 super big clusters, now we have, I don't know the exact numbers, but less than 100 clusters spread across the world. The configuration was working fine with this lower number of clusters, but when you have this amount of clusters, it becomes unmanageable. We discovered that we had to switch, and I joined the company in 2021. I realized that we were struggling a little bit with the cluster configuration, so I talked to R&D about exploring this alternative.

Bart: So what happened next? Did you gather around a whiteboard to sketch what the deployment process would look like? What were your plans?

Ángel: I sat down with one person at the beginning of this cluster configuration journey. With her, I set out the requirements and constraints we had to have in place for this GitOps approach. For example, it's in the blog post, but Adidas is a global company, and you have to take care of many region-specific things, so configuration has to be easily overridable. I remember one use case: China. The New Year in China is completely different from the rest of the world, so you could expect China to have more restrictions or different freeze periods at certain times while the rest of the world doesn't have that freeze, and vice versa. We had to be flexible enough with the GitOps approach, as we were with the legacy cluster management process, to have this overridable configuration in place. So we sat down together, set the requirements and the features we had to have in GitOps as we had in the legacy process, drew up a plan, and set some KPIs to measure the success of the migration. What we wanted to avoid from the legacy approach was opening 40 or 50 PRs whenever you had to change something on every single cluster. With the legacy approach, as I explained before, we had a repository per cluster; if you had to change something on every single cluster, you would probably need to open 50 PRs, each with a pull request review. That's a massive amount of time, not only for reviewing but also for opening those PRs. We also wanted to avoid human errors while creating PRs, because we are humans and we fail: we could miss a comma or a dot and mess it up. We wanted to avoid that as much as possible, because the process was not scalable, and we wanted to make it scalable. So we set KPIs like: the number of PRs has to be reduced, and the number of misconfigurations has to be reduced.

Another problem we had before was that Jenkins was not powerful enough to run 50 pipelines at the same time, leading to misconfigurations in certain clusters. If you had a maintenance window where you could only push configuration during a 13-hour slot and the Jenkins queue was full, your pipeline would get stuck until it was outside the maintenance window. We had a lot of misconfigurations here and there, leading to operational time spent resynchronizing manually. Imagine doing that on dozens of clusters. It was not ideal, let's say. We wanted to improve all those metrics.
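The region-specific freeze periods mentioned above can be sketched as a small overridable calendar check: a global freeze list plus per-region additions, so one region can diverge without touching the others. All dates and region names here are invented for illustration, not Adidas' real calendar.

```python
from datetime import date

# Illustrative only: global freeze windows plus per-region overrides,
# so a region like China can add its own freeze (e.g. around Chinese
# New Year) without changing every other region's configuration.
GLOBAL_FREEZES = [(date(2024, 12, 24), date(2024, 12, 26))]
REGIONAL_FREEZES = {
    "china": [(date(2024, 2, 9), date(2024, 2, 16))],  # hypothetical window
}

def deploy_allowed(region: str, day: date) -> bool:
    """True if no global or regional freeze window covers this day."""
    windows = GLOBAL_FREEZES + REGIONAL_FREEZES.get(region, [])
    return not any(start <= day <= end for start, end in windows)

print(deploy_allowed("china", date(2024, 2, 10)))   # frozen in China only
print(deploy_allowed("europe", date(2024, 2, 10)))  # fine elsewhere
```

The same layering idea (shared defaults, regional overrides) applies to any other setting, not just freeze windows.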

Bart: Sitting down and establishing these metrics, creating a plan, a lot of things can look good on paper, but it's a different thing when it's put into practice. When putting these ideas into practice, did you change tools? Did you get rid of the infrastructure repo? What was that process like?

Ángel: Sure. First, we got rid of Jenkins for deploying the configuration. We still use Jenkins to validate the configuration, to check that everything looks good, and it outputs the diff between the configuration we want to apply and the configuration already applied on the cluster. So we still have Jenkins, but only for the CI part, the integration part, not the deployment part. We got rid of it by creating a migration plan, and the migration was painful, but now everything works fine, so we are super happy about that. New clusters were created with Flux from the start, but we had to migrate the old clusters to this GitOps approach. That was delicate because you don't want to cause any disruption, but it went pretty smoothly. At the end of the day, everything was in Helm charts, so if you know the state of the Helm chart and the values being used... As I mentioned, everything was already as code at that point; only the deployment mechanism was different, so it was easy to switch from a push model to a pull model with Flux and get everything in sync. It's always risky, because you don't want to mess up production or impact the business, but it went super smooth.
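The CI diff step can be pictured with a toy comparison of two configuration snapshots. In the real setup, Jenkins compares the desired configuration against what is live on the cluster; this stand-in uses invented keys and versions.

```python
# Minimal sketch of the "show the diff in CI" idea: compare the applied
# snapshot against the desired one and report what would change.
# Keys and versions are invented for illustration.

def config_diff(applied: dict, desired: dict) -> dict:
    changes = {}
    for key in sorted(set(applied) | set(desired)):
        if applied.get(key) != desired.get(key):
            changes[key] = {"from": applied.get(key), "to": desired.get(key)}
    return changes

applied = {"ingress-replicas": 2, "prometheus-version": "2.48.0"}
desired = {"ingress-replicas": 2, "prometheus-version": "2.51.0"}

for key, change in config_diff(applied, desired).items():
    print(f"{key}: {change['from']} -> {change['to']}")
```

Posting exactly this kind of diff on the pull request is what lets reviewers see precisely what will land on the cluster before merging.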

Bart: What are the advantages of the pull mechanism that you transitioned to?

Ángel: Absolutely. The benefits we unlocked are incredible. For example, one key benefit we have now that we didn't have before is the possibility to check, in an easy way, what is going to be applied. Before, as I mentioned, we had this mechanism of merging configuration from different repositories in a super ad hoc way, and it wasn't easy to understand what was going to be applied. You would open a PR with a few changes, but you were not 100% sure whether that was what was going to be applied, or something else, because maybe a previous pipeline hadn't passed. That's just a simple example. Now, with a specific pull request, we have the specific diff that is going to be applied, because the CI runs an integration check comparing what is going to be applied with what is on the cluster. This is one benefit we have now that we didn't have before, and it was key to selling this migration to our managers. Something else we gained was confidence in the process: we no longer rely on Jenkins running all the pipelines to apply changes to the clusters. Instead, all the clusters pull the configuration from their Git repository, which is much faster and much more reliable, and whenever there is a synchronization issue, we get a heads-up. We have alerting in place now that we didn't have before. We also have a dedicated dashboard that shows the container platform state in a super simple way, with red indicating that something is out of sync and green indicating everything is fine. So it was really worth the effort of moving to this GitOps approach. In the end, standardizing, creating standard processes with upstream tooling, makes everything easier than using something super specific to your business. Having Kustomize, having Flux, having tools with a lot of resources around them is much easier than building your own: building your own operators, building everything, then exposing the metrics somewhere, building the dashboards. That's a lot of effort for something that is already done. This is something we improved a lot by standardizing our way of working.
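The pull model Ángel credits with these benefits can be sketched as a tiny reconciliation loop: each cluster fetches its desired state and converges toward it, emitting events that alerting and dashboards could surface. Everything here is illustrative, not Flux's actual implementation.

```python
# Toy reconciliation loop illustrating the pull model: the cluster itself
# fetches the desired state and converges toward it, instead of a central
# CI server pushing changes. Component names/versions are invented.

def fetch_desired_state() -> dict:
    # Stand-in for "clone/pull the GitOps repository".
    return {"ingress": "v1.10", "monitoring": "v2.51"}

def reconcile(live: dict) -> tuple[dict, list[str]]:
    desired = fetch_desired_state()
    events = []
    for component, version in desired.items():
        if live.get(component) != version:
            live[component] = version  # stand-in for "apply the manifest"
            events.append(f"synced {component} to {version}")
    return live, events

live_state = {"ingress": "v1.9"}
live_state, events = reconcile(live_state)
print(events)  # the drift events an alert or dashboard would surface
```

Because the loop runs continuously, a second pass with no drift produces no events, which is exactly the green state the dashboard shows.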

Bart: A unified configuration also makes it easy to inspect the state of all clusters at once.

Ángel: Absolutely. Our managers and stakeholders are happier than ever now, because they know upfront what is going to be changed, and they are more confident in our teams than before. Before, we had some conflicts, let's say, because we wanted to deploy something, due to a component upgrade, a system-critical update or whatever, but we were not fully confident ourselves, because we couldn't fully rely on Jenkins and all that stuff. We couldn't transmit confidence to the stakeholders, so they wouldn't allow us to deploy these changes whenever we wanted. Now that we have full visibility on what is going to be changed, when it is going to be changed, and how it is going to be changed, we can go to the stakeholders and say, "Okay, this change is going to affect this, this, and that, in this way, at this time, and it's going to affect only this region," or whatever. It's really easy for the stakeholders to understand what is going to be changed and what the impact on their business will be. Something else we built around this is a changelog. As in any other large company, everything has to be audited; everything has to be registered in some ticketing system. In our case it's Jira, but it could be any other. Everything has to be correlated and linked to the project management setup. We have this changelog saying, "Okay, at this time, on this day, this person created this PR, or merged this PR into main, and it applied this change." It contains links to the PRs with the diffs, so everything is now auditable, something we lacked in the past. This is only possible if you have the right tools in place, with standard processes and tooling. For sure, the business and security sides are happier than ever with our current setup. They understand what is going to be changed, and we are 100 percent confident in what is going to be changed.

Bart: Now, what you've described up until now is a better deployment process that unlocks several benefits for the team, but as we say, the devil is in the details. We haven't talked about what happens in those clusters and how they are affected. Can you guide us through how you provision those clusters and how they're plugged into the GitOps process?

Ángel: It's pretty straightforward, to be honest. What we do is create a managed cluster, from GKE or EKS or whatever provider; we are using one, but it doesn't matter here. We create this cluster from an API call, and then we pull some secrets, in this case Git repository secrets, to access the Git repository. We host Bitbucket privately, so we use private tokens to read the cluster configuration. We inject these secrets into the cluster and install Flux with the usual commands: flux install and so on. We create the two main objects, which are the GitRepository and the Kustomization, and eventually the cluster gets in sync. That's mainly it. It's true that we have room for improvement. For example, to attach a cluster to the Adidas network, we had to run some things manually, but we are working on that these days to close this gap. It's something that still needs to be standardized; as I mentioned, having everything standard makes things much easier. We are working on that, and I'm confident we will have it done in the coming weeks or months.
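The two Flux objects Ángel names are the GitRepository and Kustomization custom resources. The sketch below builds equivalent manifests as plain dicts: the kinds and API groups are Flux's real ones, but the URL, names, branch, and path are placeholders, not Adidas' actual setup.

```python
import json

# Sketch of the two Flux objects created at bootstrap. GitRepository tells
# Flux where to pull configuration from (using the injected credentials);
# Kustomization tells it which path in that repo to reconcile.
git_repository = {
    "apiVersion": "source.toolkit.fluxcd.io/v1",
    "kind": "GitRepository",
    "metadata": {"name": "cluster-config", "namespace": "flux-system"},
    "spec": {
        "interval": "1m",
        "url": "https://bitbucket.example.com/platform/cluster-config.git",
        "secretRef": {"name": "git-credentials"},  # the injected secret
        "ref": {"branch": "main"},
    },
}

kustomization = {
    "apiVersion": "kustomize.toolkit.fluxcd.io/v1",
    "kind": "Kustomization",
    "metadata": {"name": "cluster-config", "namespace": "flux-system"},
    "spec": {
        "interval": "5m",
        "sourceRef": {"kind": "GitRepository", "name": "cluster-config"},
        "path": "./clusters/example",  # hypothetical per-cluster path
        "prune": True,
    },
}

print(json.dumps(kustomization, indent=2))
```

Once both objects exist, the in-cluster controllers take over and the cluster pulls itself into sync, which is the "that's mainly it" step above.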

Bart: And what about your multi-tenancy model? Can you talk about that and if it has changed since you migrated?

Ángel: Not really. I mean... we have always had two different multi-tenancy models, depending on the use case. On one side we have power users, if we can call them that, who get full control of a dedicated cluster. Specific expert teams, like the monitoring team or the API team, may require cluster-level access, for example if they want to install a new controller that requires cluster-admin permissions before pushing it globally to the different clusters. They can request a full cluster for themselves. Then we have the most common scenario, which is the developers of the different applications at Adidas. They just get access to a namespace: we create the role bindings and everything needed to grant access to that specific namespace. They have access to different namespaces, not at the cluster level, just at the namespace level. So that didn't change with the new cluster configuration. At the end of the day, they don't care about cluster configuration at all; they just care about deploying their stuff. For sure, we can influence these teams to use GitOps, but they will deploy one way or another. In the end, they are responsible for their deployments and their applications, so it's up to them to follow our guidelines and recommendations, or just deploy manually from their laptops, and that's all.
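The namespace-level model can be sketched as generating a RoleBinding per tenant namespace, for instance against Kubernetes' built-in edit ClusterRole. The group and namespace names are invented; the episode does not describe Adidas' actual role definitions.

```python
# Sketch of namespace-level tenancy: each tenant group gets a RoleBinding
# in its own namespace (here to the built-in "edit" ClusterRole), never
# cluster-wide access. Group and namespace names are invented.

def tenant_role_binding(namespace: str, group: str) -> dict:
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": f"{group}-edit", "namespace": namespace},
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "ClusterRole",
            "name": "edit",  # grants namespaced permissions only
        },
        "subjects": [
            {"apiGroup": "rbac.authorization.k8s.io", "kind": "Group", "name": group}
        ],
    }

binding = tenant_role_binding("team-checkout", "checkout-developers")
print(binding["metadata"])
```

Because the binding lives in a single namespace, the same generator can stamp out one binding per tenant without ever granting cluster-level rights.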

Bart: It's been two years since the date that we started out with, May 10th. What's the developer feedback been like after using the platform to deploy to production?

Ángel: If we talk about developers, I mean internal developers and us as a team, because at the end of the day our users, our tenants, don't really see what we are doing with the cluster management. But if we are talking about us, and also other platform teams, absolutely, they are happy. One sign that they are happy is that they want to be onboarded onto our cluster management. I think this is a common pattern in large companies. We are part of platform engineering, and in platform engineering we have different teams, like the monitoring team, the API team, the security team, and they have realized how well our cluster management works. They want to be onboarded onto it to deploy their stuff. For example, if they want to roll out a new version of Prometheus per cluster: before, they had a deployment strategy like any other developer in the company, so they could choose whatever they wanted. They realized that our way of working is more flexible, scalable, and reliable, which is key in this company. So they are being onboarded onto the GitOps repository and are using the same mechanics we use for cluster management. They are creating PRs, reviewing the changes, and following the same processes we follow, because in the end it makes sense. It makes a lot of sense to standardize all the processes and the ways of working across the different platforms.

Bart: At an organizational level, we often talk about getting all stakeholders aligned, discussing the technical and business aspects. What about the business side throughout this whole story? Did they buy into the migration, and was it difficult to convince them to invest in it? What was that like?

Ángel: It was clear that we had an issue. We had some outages here and there because of misconfiguration, so we had to change, that was clear, and we were already trying different approaches. And we thought about it cleverly. As I mentioned, if you had to perform a change on 50 repositories at once, you also had to review those 50 PRs with two people, because we follow the four-eyes principle. If you count the hours to create and review the PRs, at around 15 minutes per person per PR, that's 30 minutes per PR, multiplied by 50 PRs: about 25 hours, more or less. Twenty-five hours to review one change, and 25 hours of people's work is a lot of money. If you sell it that way to a manager, saying, "Okay, I will reduce the cost from 25 hours of work to 30 minutes," it's a good selling point. But not only that. The money is important and the effort is important, but we were also making the process more reliable, in a way that everyone knows what is going to be applied and when, what the diff is going to be, and everything is auditable, and we gained flexibility around the whole process. It was easy to sell: first, because we had an issue that caused some outages due to configuration not being applied correctly; and second, because we would optimize the process, increase its reliability, and gain more visibility into what is going on in the container platform. Before, it was kind of a blind scenario where you pushed a button and just hoped everything would work. Now it's easier to understand what is going on. So yeah, they bought the solution pretty quickly.
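Ángel's back-of-the-envelope estimate works out as follows, using the numbers quoted in the episode: roughly 50 per-cluster PRs, two reviewers under the four-eyes principle, about 15 minutes per person.

```python
# The cost estimate behind the pitch to management: one change fans out
# to ~50 per-cluster PRs, each handled by two people at ~15 minutes each,
# versus a single PR in the unified GitOps repository.

prs = 50
people_per_pr = 2          # four-eyes principle
minutes_per_person = 15

legacy_hours = prs * people_per_pr * minutes_per_person / 60
gitops_hours = 1 * people_per_pr * minutes_per_person / 60  # one PR total

print(f"legacy: {legacy_hours} hours, gitops: {gitops_hours} hours")
```

That 25-hours-to-30-minutes reduction per fleet-wide change is the "good selling point" mentioned above.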

Bart: Very good. Congratulations. And what would you do differently if you had to go back in time and do it all over again?

Ángel: As I mentioned, we still lack some automation at the infrastructure level. During this GitOps journey we played around with Crossplane to automate the attachment to the network, but at that point in time Crossplane was not that mature; it was causing some issues with etcd in the control plane. We put this integration, this automation, on hold until Crossplane was mature enough and performing better. If I had to go back in time, I would find other solutions to attach the clusters to the network and to create the other surrounding infrastructure. Right now we have to create some things manually that I don't want to do anymore, because in the end there are solutions out there; we just need to integrate them into the current GitOps and cluster management approach. It's possible, and it's ongoing. I know the team is working on that, but it's really challenging, because you have to have everything in place before creating the Crossplane modules, and we have to deal with different departments and teams to have everything ready to integrate with Crossplane. That takes time, and that's why I said that if I could go back in time, I would push a little bit harder for this initiative.

Bart: Now we're going to get to the very controversial part of the interview. We want to wrap up with a little bit more about you, Ángel, as a person. You're currently working fully remotely, and you did so in your previous roles as well. Do you see yourself going back to the office in the future?

Ángel: Never say never, okay? That's the first rule. But it's something that I don't think is going to happen anymore. At least in tech roles where you can perform your role remotely, I don't see the benefit of going to the office every single day. For sure, going to the office could bring some benefits. I'm not saying that not going to the office is the best option, but it has to be beneficial for both the company and you. If you are performing better in the office but lack some work-life balance... It's hard. But you have to find the balance. It depends a lot on the company because it's not the same for a large company that used to work in the office 100% of the time as it is for a startup that was born remotely and didn't have an office to go to. It's also different having hybrid models where you are pushed to go to the office. It's already a controversial topic, but I'll say that working remotely gives you the work-life balance that everyone deserves. That's true, but you also have to provide the same or even more value to the company. If that is not happening... The company can push you to go to the office for whatever reason. And this is what you have to find, right? For sure, when you are applying to a position, you have to ask these kinds of details. You have to ask if the people working in the company used to work remotely, or if they work remotely by default, or if they just go to the office to meet as a team and then... Work remotely the rest of the year. So, as a TL;DR, going to the office has to make sense for both the company and you as a professional. But going to the office just for the sake of being there is nonsense in 2024. I would say that.

Bart: Very good. Strongly agree. What's next for you?

Ángel: What's next for me? I'm still an individual contributor at Adidas. Before joining Adidas, I played the role of engineering manager or tech lead in different positions, and I miss that part: growing people and managing different projects. I love tech, for sure; that's why I don't intend to leave the tech field at all. But I miss the engineering manager role in my life. Currently at Adidas I'm an individual contributor, and the next step would be a manager position. I'm not sure if it's going to be this year, next year, or maybe two years from now. But I miss that role because I love to teach people. I love to grow people, follow up with them, and keep track of the project, while also designing and implementing different features. Leading a team to build a product is something that I miss a lot.

Bart: I also understand that, on a personal level, your family has grown?

Ángel: My family has grown a little bit. I have a baby, 10 months old already, and I'm teaching him a little bit about life. It's amazing how fast they grow. I love him a lot; I cannot imagine my life without him anymore. It's amazing.

Bart: And I'm sure in no time, he'll be learning GitOps and multi-tenancy and all the things we spoke about today.

Ángel: I already have the Kubernetes books for kids.

Bart: Very good. If people want to get in touch with you, what's the best way to do so?

Ángel: I'd say Twitter, probably. X, whatever it's called now.

Bart: All right, Ángel. Thank you very much. It's been a pleasure.

Ángel: Absolutely. Thank you.

Bart: All right. Take care.