Intuit Announces NumaFlow: Stream Processing in Any Language
Feb 5, 2026
Stream processing platforms like Apache Flink, Spark Streaming, and Kafka Streams are powerful but JVM-focused and not designed for Kubernetes.
Running them at scale means dealing with resiliency issues during node rotations, manual autoscaling, and limited language support.
NumaFlow is Intuit's answer: a Kubernetes-native stream processing platform that lets you write pipelines in any language (Go, Python, etc.), handles autoscaling automatically, and treats pods as first-class citizens.
Built by the team behind Argo Workflows, NumaFlow fills a gap that Argo couldn't: continuous stream processing instead of fire-and-forget jobs. Version 1.7 adds distributed throttling and map support in MonoVertex for simpler event-driven workloads.
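To make "Kubernetes-native" concrete, here is a hedged sketch of what a NumaFlow pipeline looks like as a custom resource, modeled on the project's quick-start example. Treat the exact field values (the generator rate, the built-in `cat` UDF, the `log` sink) as illustrative; consult the NumaFlow docs for the authoritative spec.

```yaml
apiVersion: numaflow.numaproj.io/v1alpha1
kind: Pipeline
metadata:
  name: simple-pipeline
spec:
  vertices:
    - name: in
      source:
        generator:          # built-in test source that synthesizes events
          rpu: 5
          duration: 1s
    - name: process
      udf:
        builtin:
          name: cat         # pass-through UDF; replace with your own container
    - name: out
      sink:
        log: {}             # write results to the pod log
  edges:                    # wire the vertices into a dataflow graph
    - from: in
      to: process
    - from: process
      to: out
```

Each vertex runs as its own set of pods, which is what lets the platform scale and reschedule them independently.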
Transcription
Bart Farrell: Who are you, what's your role, and where do you work?
Sriharsha Yayi: My name is Sri. I work as a staff product manager on the Intuit Kubernetes platform team.
Derek Wang: My name is Derek Wang. I'm a core contributor to NumaFlow and work for Intuit.
Bart Farrell: What news do you want to share with us today?
Sriharsha Yayi: Today, we are going to introduce NumaFlow. It's a new open source project from the Intuit platform team. We are also the creators of Argo. Some of our learnings from Argo Workflows led us to create NumaFlow as a new project, primarily focusing on real-time data processing that is native to Kubernetes.
Derek Wang: So, we just released the 1.7 version, which brings distributed throttling and the map feature in MonoVertex.
Bart Farrell: What specific challenges does NumaFlow address?
Sriharsha Yayi: So, some of our learnings from Argo were that people actually wanted to do streaming workloads. As we extended beyond the use cases of workflows, we started learning that a lot of platform engineers, ML engineers, and others were really interested in doing stream processing that is more native to Kubernetes. That was a challenge with the existing platforms out there, like Apache Flink or some of the other stream processing platforms. These were some of the key pain points we were aiming to solve with NumaFlow as a platform.
Derek Wang: Something to add to this: all the developers behind this project have an Argo Workflows developer background. And there's a common question in the Argo community: "Hey, can we use Argo Workflows to do streaming data processing?" The answer is no, because Argo Workflows is fire-and-forget. Once the job is done, the pods are terminated, so there's no way to do stream processing. That's why we started the project; we wanted to provide a lightweight, Kubernetes-native approach to stream processing for our customers.
Bart Farrell: How does this announcement change the landscape based on what existed before?
Sriharsha Yayi: So, like Derek mentioned, our journey started with Argo Workflows. But when we actually started diving deeper into the stream processing problem statements we had at hand, I could break them down into multiple parts. If you pick the persona of application developers, they want stream processing that is very lightweight and much easier than the existing platforms out there. They want to write stream processing in Go, or maybe any other language they're interested in, and that was not possible with the existing platforms. That's one. Second, if you talk to ML platform teams or ML engineers, they also wanted to do stream processing, or do inference on streaming data; these are the use cases they tend to work with. But the existing stream processing platforms were always geared towards data engineers: JVM-focused, and completely Java-focused. What they wanted was something more native to Python that can definitely work on Kubernetes too. Similarly, if you talk to the platform engineers who have been running these stream processing platforms at scale, they always had resiliency challenges whenever node rotations happened, and autoscaling was one of the key pain points platform engineers always used to complain about with the existing platforms. So, NumaFlow really changes things across the board: it allows you to create stream processing jobs in any language you like, autoscaling is taken care of for you, and it is native to Kubernetes, which is something never seen before in the industry.
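The "any language" model described above comes down to user-defined functions: the platform delivers each event to your code and routes whatever messages you return. The sketch below is a dependency-free Python illustration of that map-style contract, using a hypothetical IoT temperature-alert transform (the real SDKs, such as pynumaflow for Python, define their own classes and register the handler with a server running in the vertex's container; names here are invented for illustration).

```python
import json
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Message:
    """One output event; keys let downstream vertices partition work."""
    value: bytes
    keys: List[str] = field(default_factory=list)


def temperature_mapper(keys: List[str], value: bytes) -> List[Message]:
    """Map-style UDF: parse an IoT reading and flag overheating devices.

    Returning an empty list drops the event, so a map UDF can also filter.
    """
    event = json.loads(value)
    if event["temp_c"] < 80:
        return []  # normal reading: drop it
    alert = {"device": event["device"], "temp_c": event["temp_c"], "alert": "overheat"}
    return [Message(value=json.dumps(alert).encode(), keys=[event["device"]])]


def run_pipeline(mapper: Callable[[List[str], bytes], List[Message]],
                 events: List[bytes]) -> List[Message]:
    """Stand-in for the platform: feed each incoming event through the UDF."""
    out: List[Message] = []
    for raw in events:
        out.extend(mapper([], raw))
    return out
```

On the real platform, the same function body would run behind a gRPC server inside the vertex's pods, and the autoscaler would add or remove those pods based on pending load.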
Bart Farrell: Where does NumaFlow fit into the cloud native landscape?
Sriharsha Yayi: So, you can primarily compare NumaFlow with the likes of Apache Flink, Spark Streaming, or Kafka Streams, which operate in the stream processing world. But NumaFlow is Kubernetes-native, while the others are not.
Bart Farrell: Can you break down the pricing for NumaFlow?
Sriharsha Yayi: That's a really good question. In short, I can definitely say that NumaFlow is all open source; it's free of cost. Anyone who is interested in trying out a Kubernetes-native stream processing platform can check out our GitHub repository and try it out as they wish.
Bart Farrell: What are the advantages that NumaFlow has when comparing it to other solutions that are in the market?
Sriharsha Yayi: So, let's start with some of the use cases where either Intuit or the community is using NumaFlow, right? I can break it down into multiple use cases. One is simple event processing. The second is real-time data analytics or stream processing. And lastly, doing some kind of inference on streaming data. At Intuit, we have been covering all of these use cases across the board. When it comes to the community, we are also seeing a lot of interaction, especially on the stream processing use cases and on inference on streaming data. One example I can pick is IoT data processing: whenever people receive events in real time, they want to process all of that information so that they can do some kind of predictive maintenance. We have also seen use cases where teams wanted to do edge data processing, where they want these pipelines deployed for low-resource, low-footprint data processing needs; they can definitely use NumaFlow there as well. So, we have seen community use cases across the board, from huge real-time data analytics to small, low-footprint data processing on the edge.
Bart Farrell: Looking ahead, what developments can our audience anticipate from NumaFlow?
Derek Wang: In terms of the future, we actually just released the 1.7 version, which brings the awesome feature of distributed throttling. We also now support the map feature in MonoVertex, which makes it easier to use MonoVertex for simple mapping capabilities.
Sriharsha Yayi: And moving on to the future roadmap: lately, we are seeing a lot of use cases around agent development. One thing specific to agent development is that some LLM calls generally take a long time, so people want to use NumaFlow pipelines to do inference on streaming data or run asynchronous agents. Event-driven agents is what we would call them. We have some exciting features coming in that specific space, so definitely look forward to some of the things we are going to release very soon.
Bart Farrell: If people want to know more about NumaFlow, what's the best way to check it out?
Sriharsha Yayi: You can definitely find us on GitHub, and if you are interested in connecting with the community or any of us, you can join our Slack channel too; the link is on our GitHub repository. If any of you are interested in discussing use cases or want to connect with us, my name is Sri. You can join the Slack channel and reach out to me, and you can also reach out to Derek in the community as well.
Derek Wang: Yeah. You're more than welcome to make contributions to the project. It could be a bug fix; it could be documentation. Just let us know and reach out to us if you have any questions.
