Running K3s on Tractors
Mar 11, 2026
Running Kubernetes in the cloud is well-documented. But what happens when your cluster is mounted on a tractor, powered by its battery, and operating in an orchard with unreliable connectivity?
Jordan Karapanagiotis, Software Engineer at Aurea Imaging, shares how his team replaced a monolithic Docker setup with K3s to run AI inference on NVIDIA Jetson devices at the edge — and what broke along the way.
In this interview:
Why K3s was a better fit than ROS and C++ for edge computing in agriculture
How hard power cuts from a tractor's engine affect cluster recovery and pod state
What happens to networking when your device connects to a mobile hotspot instead of a stable network
Practical advice for teams starting with Kubernetes at the edge: understand the networking layer and how the cluster interacts with the OS
Keep it simple, learn the networking, and design for the unexpected — your cluster isn't in a data centre.
Transcription
Bart Farrell: Jordan, welcome to KubeFM. For people who don't know you, can you tell us about who you are, what you do, and where you work?
Jordan Karapanagiotis: Sure, thanks Bart. I'm Jordan, originally from Greece. I currently live in Amsterdam and I'm working for an agritech startup named Aurea Imaging. Basically, we are deploying edge devices to orchards that do all the AI inference on the edge. We process most of the data there, we do the heavy lifting there, then we send the data to the cloud and visualize it in some sort of portal that gives insights to the grower. That's more or less in a nutshell.
Bart Farrell: Okay. Not your average job, but an interesting one nevertheless. In terms of Kubernetes tools that you've worked with or that you find interesting, is there anything you'd like to share about that?
Jordan Karapanagiotis: Actually, we came across K3s, this lightweight distribution of Kubernetes, about three years ago at KubeCon. We saw a use case similar to ours. It wasn't in the AgriTech space, but it was still quite similar in terms of the application. At the same time, we were thinking about redesigning our architecture. To give you a bit of context, before that we based our architecture on a single Docker container with one Python service — sort of microservices built with Python's multiprocessing library. But we didn't have enough orchestration, we didn't have control over our app, and we didn't know the state of our app. Our system also had constraints that aren't really typical of cloud-native servers, so K3s looked like an ideal way to use Kubernetes in such an application. We eventually tried it and built a proof of concept. Of course, there was quite a learning curve for the team, as we weren't all Kubernetes experts. But the alternative would have been ROS, the Robot Operating System, plus C++ and all these embedded tools that the team didn't really know either. So it played out really well in the end. We split our code base properly into microservices, set up a K3s single-node cluster, and deployed basically all of our applications there. That has worked pretty well until now — it's the third year we're using this architecture. It's easy to scale, easy to debug, easy to identify issues and troubleshoot what's going wrong, to take the logs and see which service a bug comes from. It's pretty easy to identify the root cause of issues.
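As a sketch of the setup Jordan describes, each microservice on a single-node K3s cluster can run as its own Deployment. The manifest below is a hypothetical illustration — the names, image, and GPU resource request are assumptions, not Aurea Imaging's actual configuration:

```yaml
# Hypothetical inference microservice on a single-node K3s cluster.
# All names and the image are illustrative, not the team's real setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inference-service
  template:
    metadata:
      labels:
        app: inference-service
    spec:
      containers:
        - name: inference
          image: registry.example.com/inference:latest
          resources:
            limits:
              nvidia.com/gpu: 1   # Jetson GPU, exposed via NVIDIA's device plugin
```

Splitting a multiprocessing monolith along these lines is what enables the per-service debugging mentioned above: `kubectl logs deploy/inference-service` isolates one service's output instead of a shared container's interleaved stdout.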
Bart Farrell: On the subject of upgrades and change risk: Kubernetes upgrades get a lot of attention, but changes underneath the cluster can still be scary. When something changes at the node or OS level, how confident do you feel that it won't break workloads?
Jordan Karapanagiotis: Well, there's always a chance that something might break, of course. Recently, we've been experimenting a little with different OS versions. We use NVIDIA's JetPack, which is based on Ubuntu. Because we're quite dependent on NVIDIA's ecosystem, we have to keep up to date with the industry standards and use the latest tools for all the GPU-enabled inference that we do. So upgrading NVIDIA's toolkit also means upgrading the OS. The cluster remains the same, of course, and our application has to keep running. Apart from some routine upgrades to the application, we haven't seen many problems. The main issues come when something changes in the networking, which is a bit trickier. That's related to our application not being a typical cloud server: networking changes quite often. It can be that we connect our device to a hotspot and things completely break. So we have to make our Kubernetes cluster robust against that.
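K3s exposes server options for pinning the cluster to a known interface and address, which is one way to keep a hotspot connection on a secondary interface from moving the node IP out from under the cluster. A minimal sketch of `/etc/rancher/k3s/config.yaml` — the interface name, IP, and hostname here are placeholders, not the team's real values:

```yaml
# /etc/rancher/k3s/config.yaml — pin K3s to a stable interface so a
# hotspot appearing on another interface doesn't change the node IP.
node-ip: 10.0.0.2          # placeholder: the device's fixed internal address
flannel-iface: eth0        # placeholder: bind the flannel CNI to one interface
tls-san:
  - tractor-edge.local     # placeholder: extra SAN on the API server cert
```

Without something like this, K3s auto-detects the node IP from the default route, so a new uplink can silently change what the kubelet and CNI bind to.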
Bart Farrell: And on the subject of running Kubernetes outside the cloud: more teams are running Kubernetes in places that don't look like a hyperscaler. If you've worked with clusters outside the cloud, what feels harder or more fragile compared to managed cloud environments?
Jordan Karapanagiotis: The main difference compared to classic cloud applications is that our device is connected to a tractor — to the tractor's battery. That's where it gets its power from. So our main shutdown mode is a hard power cut, basically, when the driver switches off the engine. And that's the tricky part. That's, again, specific to our application, but I think it's quite a challenge we have to deal with. How does the cluster handle these shutdowns? How do the pods get back to their initial state? And networking, of course, can be tricky sometimes — when IPs change, when DNS priorities change. So there are some edge cases that we have to take into account.
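For the hard-power-cut scenario, a common pattern — sketched here under assumptions, not taken from Aurea Imaging's actual manifests — is to lean on the kubelet's restart semantics and keep any state that must survive a cut on the host disk rather than in ephemeral pod storage:

```yaml
# Hypothetical pod spec for recovering from hard power cuts.
# Names, image, port, and paths are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: edge-worker
spec:
  restartPolicy: Always          # kubelet restarts the container when power returns
  containers:
    - name: worker
      image: registry.example.com/worker:latest
      startupProbe:              # allow time to reload state after an unclean boot
        httpGet:
          path: /healthz
          port: 8080
        failureThreshold: 30
        periodSeconds: 5
      volumeMounts:
        - name: state
          mountPath: /var/lib/worker
  volumes:
    - name: state
      hostPath:                  # survives pod and node restarts, unlike emptyDir
        path: /var/lib/worker
        type: DirectoryOrCreate
```

The design point is that after an unclean shutdown the containers restart from scratch, so anything the service needs to "connect back to its initial state" has to live on a volume that outlasts the pod, and probes should tolerate a slower-than-usual recovery.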
Bart Farrell: And just as a piece of practical advice for people that are thinking about, you know, working with Kubernetes at the edge, are there any things that you would say, you know, knowing what you know now, you would definitely recommend if you're starting out with this, these are some good things to keep in mind?
Jordan Karapanagiotis: Start simple. Of course, as with every software advice. Build a simple cluster, understand a bit of the networking layer of Kubernetes. I think that was the most tricky part for us because basically everything else is pretty well documented. You can have access to all the API documentation of Kubernetes. Pretty standard components. If you have worked with it in the cloud, I think it's pretty standard. It's mostly about how it plays with its surroundings, how it's bound to the network of the OS. And basically the most important thing is how does your cluster interact with the system, with the OS.
Bart Farrell: And what's next for you, Jordan?
Jordan Karapanagiotis: Next for us as a company, we're not focusing so much on the edge side at the moment. We're mostly focusing on scaling our data pipeline and creating a data lake. Our traffic spikes always occur in spring, as we are heavily dependent on seasonality — orchards in the Netherlands usually bloom in April. We're trying to streamline that process so it's much smoother for the customer and they get results faster. So we're basically redesigning the cloud side, still using Kubernetes there, but not in the same way as on the edge.
Bart Farrell: And if people want to get in touch with you, Jordan, what's the best way to do that?
Jordan Karapanagiotis: Reach out to me through LinkedIn at Aurea Imaging. I can also send you my email if needed. So everyone can find me through Aurea Imaging. We're like 20-25 people working there. On LinkedIn you will find my name, my face.
