Troubleshooting a validation webhook all the way down to the kernel

Host:

  • Bart Farrell

Guest:

  • Alex Movergan

This episode is sponsored by Learnk8s — become an expert in Kubernetes

How hard could it be to debug a network issue where pod connections time out?

It could take weeks if you are (un)fortunate like Alex.

But Alex and his team didn't despair and found strength in adversity while learning several Kubernetes networking and kubespray lessons.

In this KubeFM episode, you'll follow their journey and learn:

  • How a simple "connection refused" error led to debugging kernel syscalls.

  • How MetalLB works and how it uses dynamic admission webhooks.

  • How Calico works and assigns a range of IP addresses to pods (and what you should watch out for).

  • How to use tcpdump and strace to debug network traffic.

And as a bonus, Alex shared his knowledge of onboarding engineers and how to perfect the process.

Spoiler alert: this episode goes into a great level of (networking) detail, but the solution turned out to be very simple.

Transcription

Bart: In this episode of KubeFM, I got a chance to speak to Alex, who's a DevOps manager. He had the experience of onboarding some DevOps engineers and, in the process, noticed something going wrong, which led to debugging a validation webhook all the way down to the kernel itself. To unpack that process, we'll be hearing from Alex about everything that went into troubleshooting and making corrections in order to build best practices. That being said, we'd also like to say thanks to our sponsor, Learnk8s, which is helping engineers all over the world on their journey towards Kubernetes expertise. The courses are 60% practical and 40% theoretical: very much hands-on, instructor-led courses that can be done either in person or remotely, from the privacy and comfort of your home. You'll have access to all the material for the rest of your life, so you can squeeze out all that wonderful Kubernetes knowledge and make the most of it. If you want to know more, check out Learnk8s.io and see if it's something that can help you level up in your Kubernetes career and accelerate your journey. But now, let's get into the episode with Alex. All right, Alex, if you had to install three new tools on a brand new Kubernetes cluster, which tools would they be?

Alex: All right, yeah, I saw this one coming. Well, the one thing that we use a lot for managing many Kubernetes clusters is Flux. We started adopting it when it was Flux 1; now there is Flux 2 and we're in the migration stage. We even baked it into the Terraform module that we use for Kubernetes provisioning. So that would be my tool number one, for an infrastructure-as-code approach to Kubernetes management. The Prometheus stack, I guess, would be number two, because we need to know what's going on inside the cluster. And tool number three is Kyverno. That's something we adopted recently as a replacement for Pod Security Policies, and because we want our Kubernetes clusters to be secure.
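For readers who haven't used Kyverno: it replaces Pod Security Policies with declarative policies enforced through an admission webhook. Here's a minimal sketch of a validating policy, loosely modeled on the require-labels example from the Kyverno docs; the policy name, label, and message are illustrative only, not Altanar's actual policies.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label            # illustrative name
spec:
  validationFailureAction: Enforce    # reject non-compliant resources
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label 'team' is required on every Pod."
        pattern:
          metadata:
            labels:
              team: "?*"              # any non-empty value
```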

Bart: Now I have to ask the devil's advocate question. Why not Argo? Why not OpenTelemetry? And why not [OPA](https://www.openpolicyagent.org/)?

Alex: Very good question. So, Argo versus Flux: we did an evaluation of both tools and tried to onboard the one that was the best fit for us. Argo takes a more application-centric approach, while Flux is more infrastructure-oriented. The way we do our Kubernetes clusters (and we have another big article coming on that subject) is that we have a repository with our infrastructure pieces. That would be your default network policies, the ones that are unique to us and that we know we should have everywhere. There are also templated applications: for instance, if we have Postgres that we prepare in a certain way, we want it to be the same across the organization, so we install that Helm chart with predefined values from this one centralized place. We have this repository where you can basically almost drag and drop the tools and the stuff you want to install into your cluster, and you can also change them in one place and have that propagated to all your clusters. We have tens of Kubernetes clusters inside Altanar, so that helps us manage the infrastructure aspect. Then, when it comes to the application aspect, we found it more convenient to deliver applications using CI/CD systems, so the application is installed via Helm from our CI/CD system. It doesn't mean we wouldn't onboard Argo in the future, but so far there has been no strong need in this area. As for OpenTelemetry, I don't think it's a choice you need to make, one or the other. It fits perfectly together with Prometheus and Jaeger and all these observability tools, but the application has to be ready for it as well. It is becoming part of the cloud native standard, and I think this is what monitoring is slowly transforming into: if you have very good observability, you basically don't need monitoring as a standalone, separate layer, because it comes together with that observability. OpenTelemetry is a great tool for that, but it also requires the application to support all these tracing headers and be able to do that. And what was the third one you mentioned?
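As a rough illustration of that centralized setup (the chart, values, and names below are hypothetical, not Altanar's actual repository), a Flux 2 HelmRelease lets you pin a shared chart with predefined values and propagate the same definition to every cluster:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: postgres                  # illustrative
  namespace: databases
spec:
  interval: 10m
  chart:
    spec:
      chart: postgresql
      version: "13.x"             # illustrative version pin
      sourceRef:
        kind: HelmRepository
        name: shared-charts       # the centralized chart source
        namespace: flux-system
  values:                         # organization-wide defaults
    metrics:
      enabled: true               # e.g. so Prometheus scrapes it everywhere
```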

Bart: No worries, very good answers so far. So yeah, you chose Kyverno and not OPA, Open Policy Agent.

Alex: Honestly, I don't know why, because that wasn't a choice I made personally. That was something I entrusted to one of our senior tech leads, and I know he did the analysis of why we should choose this product or that one. Together with the security team, they decided Kyverno was the best fit. But I can't say exactly why.

Bart: Oh, that's okay. We'll bring him on for a future podcast, no worries. So now, just to get a little bit more of your background: tell me who you are, where you're based, and the kind of work that you're doing.

Alex: Yeah. Originally, I'm from Siberia, from Russia, and I started my career in technical support. It was a very interesting product that helps people manage their hosting. And since hosting includes a lot of parts, such as DNS, mail servers, databases, Apache, Nginx, PHP, and all this, I didn't only have to deal with the product itself but with a bunch of open source tools. And I love troubleshooting, deeply. Maybe partially because it gives you a quick sense of closure: you find the problem, you solve it, and you're happy, rather than having a project that might last a few months. So that's where I started my career. Then I moved to Malta, and that's where I'm located nowadays. Malta is a little island in the Mediterranean; not many people know where it is. It's close to Sicily. I'm now leading a team of DevOps engineers: I started as the second DevOps engineer in the whole company, and we've grown to 15 people under my supervision. So that's quite a challenge now.

Bart: Yeah, sounds like it. And what was it like for you getting into cloud native, starting with Kubernetes? When did that happen? And what were the circumstances that led to that?

Alex: All right, yeah, that's a very good question. I joined the company when it was in a sort of post-startup stage. It was already about 60 people, it was financially stable, and there was a product that was already generating money, based on .NET, running on Windows VMs with IIS, and I came to build the whole automation around it. Being a Linux engineer to the core of my nature, running a bunch of Windows servers was not something I expected to do, but that's where I learned Ansible and the basic DevOps approach, and that was the first iteration. Meanwhile, while we were doing this automation and bringing the product that was in production up to modern standards, the dev team redeveloped the whole product on .NET Core, and thanks to Microsoft making .NET Core able to run on Linux, that's when we realized we could actually try to containerize the application and put it on Kubernetes. There were a few products in parallel, and we started big from day one: the first installation of our sportsbook was already based on three different Kubernetes clusters. The reason is that the infrastructure is hybrid: one part runs on VMs on our servers in our data center, and for the other part we utilize Google's presence basically all around the world, so we have a few Kubernetes clusters dedicated to the front-end and caching database applications hosted in Google.

Bart: And it's an ecosystem that moves quite quickly. How do you stay up to date with Kubernetes, cloud-native technologies?

Alex: Well, one part is being open-minded: travel, go to Kubernetes conferences. I'm finally traveling to KubeCon in Paris next year, and I've been to a couple of smaller ones in Germany. That's how you get to know about all the new technologies. Reading Medium and other internet sources is another thing. And I guess being curious is the key point: it's one thing to know about a new technology, and another when you just can't sleep on it, so you go and try it.

Bart: I really, really like that. We've recorded quite a few episodes, but really emphasizing that part about mentality is a great one. With that in mind, if you could go back to when you started using Kubernetes, is there any advice you would give to your previous self? Things that would be good to focus on, the order in which to learn things, anything you would share with your previous self when you were first starting out with Kubernetes?

Alex: Well, the best way to learn something is to try it. I still believe that. Reading and watching videos and getting to understand the whole concept is a very good thing, but going there and trying it, on dev first and eventually on production, is what makes you really feel the product, not just understand the theory behind it. So I guess: don't be afraid, just go and try it. That's the advice I would give. The only hard part early on was getting into containers after being a sysadmin guy who likes to have root and manage VMs. With a container, you can't really log in that easily, and whatever you change gets reverted as soon as it's restarted. That was a new world for me at the beginning.

Bart: And I think, like you said, the importance of being open-minded and willing to try stuff is really valuable advice that a lot of people can benefit from, as we're an industry that moves so quickly: what you're doing today will probably change in a few years' time, so being open to that change matters. With that in mind, about your role as a DevOps manager: one of the things a lot of people struggle with when they get into a management role is having to leave aside the technical aspects. Do you find that you still have time to code? Or are you spending most of your time approving people's holiday requests, or requests for a new laptop or an SSD? Do you still have time to focus on the technical side and code?

Alex: Well, luckily, my team has very, very high technical standards and requirements to be part of it, so I just can't let it go, because one or two years would pass and I'd be out of the game. I have to stay up to date with all the technologies and at least have an idea of how they work, try them somewhere, try to deploy them myself, or, if we're in a critical moment, be able to go and troubleshoot. So I'm trying to stay on the technical side as much as I can. Maybe that's also something that makes me not a very good manager per se when it comes to managing people, but yes, I'm still there, I'm still in the game, and I still do some technical stuff myself. What also helps me is having some pet projects, some side projects. For instance, recently everybody is talking about eBPF, and I've been evaluating that area, especially Cilium, especially after Google announced that Cilium is soon going to be the default network layer for Kubernetes on Google Cloud. I enjoy that a lot.

Bart: Good. And like you said, a fantastic way to stay up to date is to put those things into practice, whether in a sandbox environment or a personal project. Going back to the part about management: something that's both technical and also about managing people is onboarding. Onboarding is very often not done well, and as a result it's very costly, because of not investing the time and energy in making sure people know what they need to know and that they're focusing on the right things rather than on things that aren't their concern. What's the process you use for onboarding a developer?

Alex: So yeah, that's something we're still developing and improving with every new engineer, because every new engineer who hasn't seen our stuff before always finds things that can be done better. So I'm always very open to feedback from newbies for their first one or two months. Another good practice that I've carried from all the places I worked before is: before you go and support something, or before you go and help build something that's already there, try to build it in your sandbox from scratch. As I said, hands-on experience is the best way to understand how it works, and it also gives you a lot of practice with the tools and standards we use. Our product on its own is not a simple one: it has more than 100 different microservices, different data streams, and a lot of external dependencies, so the onboarding process takes about six months for an engineer to feel comfortable. But the very first onboarding we do is in a sandbox, with a small project. The engineer has to follow the guidelines and build it from scratch: use Terraform to deploy a cluster, kubespray to install Kubernetes on top of it, and then Helm: create a pipeline and build the application into a Helm chart. It gives the engineer a chance to try all the tools we use internally and to troubleshoot any issues they encounter during the process, so when they come to the real battlefield, they're already armed and have an idea of where to go and what to look for.

Bart: I like that. You talked about some of the tools, but whether they're deploying locally or on a remote cluster, which tools do they need installed to understand the incidents they're facing? What are they getting hands-on experience with?

Alex: The first thing they get is basically an endpoint to vSphere, where they can go and deploy VMs using our Terraform module. So that's them using Terraform. Then Terraform invokes kubespray under the hood to install Kubernetes on top of that. Ideally, they also pre-configure the Flux repository, so that as soon as the cluster is there, Flux is installed and all the system components are pulled and installed on the cluster. That often fails as well if you pre-configure it incorrectly or specify the wrong values, so that's another layer of fun and troubleshooting. When the cluster is up and running, the next stage is taking an application, putting it in Docker, and creating a Helm chart for it. The Helm chart is a separate story: we use templates for templates, basically. We have a centralized template repository for all the Helm primitives, like Deployment, Service, and Ingress, and then we reuse them across many charts. We can still redefine each of the primitives if required, though in most cases it isn't, so learning how that is organized inside the company is another thing they do. Then they go to the CI/CD system and try to build their application using our CI/CD templates and make a deployment. As soon as they see the "hello world" in the cluster they've just deployed, we can say the very first stage of onboarding is complete. We want them to write a README and document all the steps and how they did them, and then there's a team review of the work the engineer did.

Bart: Depending on someone's level of experience, this could be somewhat challenging. Is this something new team members are completing? Like you said, within the six-month period, what have you noticed there?

Alex: Well, their task is scoped for two weeks.

Bart: Okay, all right. So do they complete it just fine, without any major hiccups?

Alex: Well, I guess we don't really take complete newbies. We have a program where we take students from the university, and we don't expect them to do this as part of their onboarding; it's actually part of their internship, before we make a decision to hire them. Usually it takes them about two to three months to complete the same setup tasks, and that's where I help them and explain every step. We often touch on parts that are directly cloud-native related: what a cloud native application is, how you would manage your database migrations, whether you would use an ORM or manually written SQL queries, why you would do that, when it's good, when it's bad. So we do small things, but we try to look at them from a big enterprise perspective: you are one engineer here and you did it this way, but if there were ten of you, you would all have done the same thing in the same way, because that's how it's structured. Basically, that's what they do.

Bart: It sounds like a great model, and also that you're using them to bounce ideas off. From what you mentioned in the article, it helped you become aware of things you weren't necessarily noticing before. Can you tell us about the issue you noticed, and how it wasn't perhaps a usual mistake, but something more unusual?

Alex: So, basically, using Kubernetes and administrating Kubernetes are two different jobs, and I think that's true for most technologies: as long as they work, people don't really know how they work inside and how they're structured, unless they absolutely have to go and study it. For us, Kubernetes, even though we run it on VMs and deploy it with kubespray, tends to work most of the time, so this particular issue was a learning curve even for experienced engineers. What happened looked like the usual network troubleshooting thing: something is timing out, or the connection is refused, or something is not listening on the right port. That's a very common issue; I've seen it many times, and I even have an algorithm in my mind for how to tackle it step by step. As I explained at the beginning of the article, that's basically what we tried to do. The first step in troubleshooting any issue is to try to reproduce it. For those who read the article, a small reminder: the problem was with a webhook, a validation webhook that the kube-apiserver couldn't call, and because it was part of bootstrapping the cluster, the automation would fail at that step. So that's where we started digging. Since it was a test cluster, the first thing you do is remove all the network policies, because 99% of the time that's what causes the problem. In this specific case it did not help, and that's where I got a little bit interested in the case: after removing the network policies, the kube-apiserver would still time out calling the webhook, but locally the webhook was available just fine. My favorite tools for troubleshooting often come down to tcpdump and strace, and there's a fantastic container called netshoot that has all the things you need for network troubleshooting. So that's how we started: try to reproduce the issue. The webhook is working locally but not from the API server. The best way to mimic the whole path is to call it from the same container, but due to security practices, the kube-apiserver container has nothing but the kube-apiserver binary in it; you can't really even log into this container. So what we did was add a sidecar to it. It's a little bit different from a normal sidecar, because the kube-apiserver container is not a normal container: we had to change the YAML manifest on the master node. And then we faced the same behavior: we couldn't call the URL from that container either.
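To picture the trick: the kube-apiserver is a static pod, so the practical way to attach a sidecar is to edit its manifest on the control-plane node and let the kubelet recreate it. A minimal sketch, assuming the usual manifest location (/etc/kubernetes/manifests/kube-apiserver.yaml); the image tag is illustrative, and the real manifest's flags and mounts are elided:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true                    # the apiserver pod uses the host network
  containers:
    - name: kube-apiserver
      image: registry.k8s.io/kube-apiserver:v1.28.0   # version illustrative
      # ...existing command, flags, and volume mounts stay as they are...
    - name: debug                      # the added sidecar
      image: nicolaka/netshoot         # ships tcpdump, curl, dig, and friends
      command: ["sleep", "infinity"]   # keep it running for interactive use
```

Because both containers live in the same pod, the sidecar shares the apiserver's network namespace, so a curl from it follows exactly the path the apiserver's webhook call takes.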

Bart: One thing we have here in the notes is the MetalLB controller. Can you talk about that in terms of the debugging process: how you realized and confirmed that it wasn't the issue?

Alex: Speaking a little bit about the webhook and what it was for: it was a webhook for MetalLB. We use MetalLB on the on-prem cluster as an addition to our ingress; it helps us create internal load balancers without having to have an external load balancer providing that layer. I'm not going to explain how MetalLB works in full, but I think it's a pretty cool tool for use cases like ours. In order to install it, there's a webhook that needs to be called, and that's what was failing. You know there are two types of admission webhooks, validating and mutating. This was a validating webhook, which is called when a specific object is created: we were trying to create an IP address pool for MetalLB, and because that object belongs to MetalLB, that's where the webhook was failing. We tried to reproduce the issue: we created a sidecar with the netshoot container and faced the same problem, that we couldn't connect to the webhook from that container. What's special about the kube-apiserver is that this specific container is also attached to the host network, so we thought maybe that was the clue. We tried removing that, but it didn't really change much. The next step was to see where the traffic was getting lost, so we launched tcpdump in the same container, and we could see that when we ran the request, the packets were going out but nothing was coming back, which is why it was timing out. Then we did the same on the other side: we deployed a sidecar into the MetalLB pod and launched tcpdump there. And that's where the interesting part starts, because there we saw not only the incoming traffic but also the replies going back. That's not something I've often seen: usually, if traffic is lost in the middle, you go and debug there; it must be a firewall or something. But here the traffic reached the application, the application sent the packet back, and the packet never arrived. So clearly something was messed up on the Kubernetes networking side. That took us a while to brainstorm; we had a lunch break over it. And I wasn't the only one troubleshooting it; there were three other guys with much better networking knowledge than I have. Then we just started playing around: instead of trying to curl the webhook, we looked at what the IP addresses of the containers were and tried to ping them, and that's where we got another clue. I think we tried to ping from MetalLB, or from the kube-apiserver, and the ping command failed, saying that the IP address we specified was incorrect, even though it was absolutely correct, an address from the right IP range. Then we had to dig really deep into whether something was wrong with curl in this container, and that's where strace came to help us. We actually had to go down into the network functions in the kernel that curl used to send the packet. curl and ping were both failing in the same way, saying that the IP address was not correct. That's when we started looking at the networking layer and trying to figure out how Calico works on a Kubernetes cluster. That was quite interesting. We knew that Calico takes your pod allocation range and assigns a small chunk of it to every node. And even though it was a sandbox, it had quite a few nodes; I think it had seven, together with the master.
And we realized that the network range we had allocated wouldn't really be enough to cover all seven nodes. That's where we thought we were on the right track. So we started comparing the subnets assigned to the different worker nodes; I'd have to look at the article, I don't remember whether it was /26 or /25. Basically, what happened was that due to the incorrect IP address allocation, some nodes had overlapping IP ranges. The IP address that the kube-apiserver pod had was from a range that, on the node with the webhook container, was marked as a blackhole. So when the container tried to reply to traffic coming from that IP address, everything went into the blackhole: as far as that node was concerned, it wasn't a valid IP address. Then we went even deeper and realized that we had assigned one network range and one mask for the nodes, but in reality a different one, I think a bigger one, had been created in Kubernetes. That's how we worked back to the root cause of how this happened, and, as it usually does, it came down to a very simple answer: we didn't use the correct variable in our code. kubespray, instead of using the ranges we provided in the code (I think we were providing /24), was picking up the default one, /25, from the Ansible values for kubespray.
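To make the arithmetic concrete, a hedged sketch with hypothetical numbers (the episode mentions /24 and /25 blocks and a seven-node cluster, but not the exact pod CIDR):

```bash
# Hypothetical numbers for illustration only.
# A /23 pod CIDR holds 2^(32-23) = 512 addresses;
# a /25 per-node block holds 2^(32-25) = 128 addresses.
pod_cidr_prefix=23
node_block_prefix=25
echo $(( (1 << (32 - pod_cidr_prefix)) / (1 << (32 - node_block_prefix)) ))
# -> 4 blocks: enough for only four of the seven nodes, so allocations
#    overlap, and a node's blackhole route for "its" block swallows
#    replies addressed to a duplicate IP living on another node.
```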

Bart: Good. Now, one question I want to cover: why is there no shell in the Kubernetes API server container?

Alex: Well, because you don't need one; you're never supposed to log in. That's for security reasons. The best container is the one that has a single binary and nothing else in it. And it doesn't run as root, either.

Bart: And that's why, as you mentioned previously, the curl command failed.

Alex: It wasn't even there. You can't run it from the container; you can't even log in. If you try to run bash, bash is not there; sh is not there. But when we spin up a sidecar container that has bash and curl and everything in it, it still runs in the same namespaces in Linux terms, with the same IP address and the same networking stack. So it's as close as you can get to simulating what is actually happening inside the kube-apiserver container.
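A quick sketch of what that looks like in practice; the pod name and webhook address are illustrative, and the `debug` container refers to the hypothetical sidecar from the manifest sketch above:

```bash
# Exec into a distroless kube-apiserver container fails: the image
# ships no shell at all (error text abbreviated).
kubectl -n kube-system exec -it kube-apiserver-master-1 -- sh
#   exec: "sh": executable file not found in $PATH

# The sidecar shares the pod's network namespace, so from there you
# can replay the exact call the apiserver makes:
kubectl -n kube-system exec -it kube-apiserver-master-1 -c debug -- \
  curl -vk https://10.233.64.10:443/validate
```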

Bart: For the people following along from home, can you explain a little bit about what tcpdump is and how it can help when curl commands are failing?

Alex: Well, the best-known analog of tcpdump that more people are familiar with would be Wireshark. That's something you can use to analyze different types of network traffic, Wi-Fi or normal TCP, and it has a lot of protocol-specific parsers that make it easier to see what's inside each packet. tcpdump is the same kind of thing: it helps you capture network traffic on Linux, and then you can analyze it directly or dump it to a file and analyze it with Wireshark. It's one of the essential troubleshooting tools when it comes to networking.
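For reference, a hedged sketch of the kind of commands used in this story; the interface choice, IPs, and ports are illustrative:

```bash
# Watch traffic to the webhook from the caller's side (-nn skips
# DNS/port name resolution, -i any captures on all interfaces):
tcpdump -i any -nn host 10.233.64.10 and port 443

# Or write the capture to a file for later analysis in Wireshark:
tcpdump -i any -nn -w webhook.pcap host 10.233.64.10 and port 443

# Trace only the network-related syscalls a failing curl makes,
# following child processes; this is where a bad address surfaces:
strace -f -e trace=network curl -vk https://10.233.64.10:443/validate
```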

Bart: Right. In your case, you started out debugging the webhook and ended up tracing syscalls all the way down to the kernel. Did the wrong argument lead to more findings along the way?

Alex: Well, it definitely helped us understand better how Kubernetes networking is done, how the network routes are formed on each node, and how Kubernetes manages the whole pod network range and splits it between the worker nodes and the master node. That was quite an interesting experience.

Bart: And in terms of tools or practices that could have helped to prevent this, what are things that you've extracted as learnings?

Alex: That's a very good question. I can't really say there was one obvious mistake. On one side, you can't really validate every single argument of a third-party tool such as kubespray before you use it; even though that would be the right approach, it would be very time-consuming. On the other side, you do need to know the tools you use. It's very cool when you take a quick-start tutorial, try it, and it works, and you go on from there, but it's all fine only as long as it works. The minute it breaks is when the knowledge becomes essential. We not only got to know the Kubernetes network layer better, but we also had to dig deep into kubespray and see how the Ansible code is structured there. And I might be mistaken, but perhaps we even submitted a pull request to have this case covered better, or something like that; at least that was part of the plan. We try to contribute to open source as much as we can, especially when we benefit from it ourselves, as I think everybody should.
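For context, a hedged sketch of the kind of kubespray inventory settings involved; kube_pods_subnet and kube_network_node_prefix are real kubespray variables, but the values below are illustrative, and the episode doesn't name the exact variable that was mispassed:

```yaml
# group_vars/k8s_cluster/k8s-cluster.yml (kubespray inventory)
kube_pods_subnet: 10.233.64.0/18   # illustrative pod CIDR
kube_network_node_prefix: 24       # per-node pod block size; if your
                                   # wrapper passes the wrong variable
                                   # name, kubespray silently falls
                                   # back to its default
```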

Bart: I mean, it's a nice contribution, to open up the conversation and see what other folks might have to offer, and also to see whether other people have gone through similar challenges. This was quite a journey, and I wouldn't say it's a typical onboarding experience, to say the least. Were there any steps you decided to take afterwards to improve the next onboarding?

Alex: Well, actually, no, because our Kubernetes installation didn't really change much. We did improve the part of our onboarding guide that covers how you should choose the network for your cluster, but other than that, not much. I still believe it's a very good exercise, including issues like this one: when you do everything by instruction and it just works, you don't really learn anything except how to follow steps. You learn most when something does not work and you have to troubleshoot and dig deep.

Bart: I like that: giving people the opportunity to learn something greater than just following instructions, which is what most of us were taught to do since we were children. Is there any advice you would give to folks out there who are creating or reviewing their onboarding processes?

Alex: Pick for feedback review your onboarding guides with every new hire because that's that's feedback that helps to make it better and it's most valuable i think it's a great point not just in terms of onboarding but also for overall experience as an employee is that your organization is listening to you and wants to know about how they can improve as establishing you know trust and respect towards employees that they have you know insights to offer that can be beneficial for everybody so that's a good way to open up that dialogue you

Bart: What was the reaction that you received to the article that you wrote? What did people have to say? Was there anyone who had experienced something similar? Did anybody disagree? What kind of feedback did you get?

Alex: I still feel a little bit like the article doesn't really deserve that much attention. I was glad to share our technical insight and what we learned, and to provide some pieces of code. And I did get some feedback: a friend of mine who now lives in the US tagged me and said it was one of the best things he'd read in a few years about technology, Kubernetes, and troubleshooting. Well, maybe he should just read more interesting stuff and my article is not that good, but I'm glad he enjoyed it anyway.

Bart: That's good. I mean, if it helps one person, then mission accomplished.

Alex: As for why we actually write articles: it all started as an HR initiative, to be honest, and I was very skeptical about it. Why would I spend my time writing articles? But then I realized it was fair enough, when we had an interview with an engineer who applied to us just because he had read one of our articles. He said, "I want to work with guys like that." That was an eye-opening moment for me, and that's when I decided we have to continue sharing the knowledge we have. Then, surprisingly, our articles hit the very top positions in Google, without us even specifying the company we work at; I still don't understand how it works. And even though my article is published on Medium as well as on our corporate blog, the one that comes up first when you search for webhook troubleshooting in Kubernetes is the one from our corporate website. So it's also very beneficial for the company and for the brand.

Bart: Absolutely. And it's understandable that in the beginning some of these initiatives might seem questionable: "I don't really want to do this because it takes time to write these things." A lot of people don't understand that. A lot of what we're doing with this podcast is celebrating the efforts that were made for those things to happen. Do you think you'll be writing more articles in the future?

Alex: Yes, definitely. We already have a few of them in the works. The Flux 2 migration is one that's coming, because it turned out to be way more difficult and time-consuming than we thought. It's not very technical, but it may help people onboard it themselves, especially if they have many clusters and many things they use Flux for. Another one is coming about MongoDB, because we use it a lot, and not in the standard way as a simple database: we use it as a data cache layer. We have a replica set in our data center where we write data, and then it gets replicated across the globe to the US, Europe, and South America. So when people call our API, they get data from the closest point of presence, and it's not static data that we could cache with a CDN; it's a sort of data CDN that we had to invent. This MongoDB also runs in Kubernetes, with local SSDs in RAID. I think it's quite interesting know-how: how you can configure that and what benefits you can get from this specific configuration. So I think it's a use case worth sharing with the world.

Bart: That's good. And like you said, there's real value in sharing knowledge and getting those things out in the open. If people want to get in touch with you, what's the best way to do so?

Alex: That's a very good question. LinkedIn, I guess, would be the one. It's very easy to find me because my surname is very unique; I don't know of anyone else in the world with the same surname. Very convenient: you'll get either me or my relatives.

Bart: Okay. And can I just ask, so your surname, is it Siberian or what's the origin?

Alex: The origin is Jewish. Yeah. It all goes back to the Second World War times: there were Jewish villages back then in the territory of Ukraine, and when the war started, my grandfather had to move his family to Siberia to survive. And that's how I ended up being born there.

Bart: Okay, all right. But yeah, like I said, it's definitely the first time I've seen this surname, and I'll let you know if for whatever reason I encounter somebody with the same one, in which case I imagine there would be very few degrees of separation between you and them. Very nice to meet you, Alex. I really enjoyed hearing your perspective, and I think a lot of other folks will too. Keep up the amazing work and we'll be in touch.

Alex: Thank you. Thank you very much, Bart.

Bart: All right. Cheers.