Building a Carbon and Price-Aware Kubernetes Scheduler
Host:
- Bart Farrell
This episode is brought to you by Testkube—the ultimate Continuous Testing Platform for Cloud Native applications. Scale fast, test continuously, and ship confidently. Check it out at testkube.io
Data centers consume over 4% of global electricity, and that number is projected to triple in the next few years due to AI workloads.
Dave Masselink, founder of Compute Gardener, discusses how he built a Kubernetes scheduler that makes scheduling decisions based on real-time carbon intensity data from power grids.
You will learn:
- How carbon-aware scheduling works: using real-time grid data to shift workloads to periods when electricity generation has lower carbon intensity, without changing energy consumption
- Technical implementation details: building custom Kubernetes schedulers with the scheduler plugin framework, including pre-filter and filter stages for carbon and time-of-use pricing optimization
- Energy measurement strategies: approaches for tracking power consumption across CPUs, memory, and GPUs
Transcription
Bart: Data centers already account for over 4% of global electricity consumption, and the number is rising. In this episode of KubeFM, Dave Masselink, the founder of Compute Gardener, explores how Kubernetes is being used not just to scale workloads, but to scale them more responsibly. He dives into practical strategies for reducing energy consumption across clusters, including CPU power capping, workload scheduling based on energy profiles, and leveraging Kubernetes Node Feature Discovery (NFD) to better match applications to hardware.
Dave also unpacks the importance of platform observability, describing how his team builds telemetry pipelines with tools like Prometheus and Grafana to track sustainability metrics in real time. He walks us through what it means to manage infrastructure with a carbon-aware mindset and how Kubernetes native tools can help operators optimize performance while minimizing environmental impact.
We're excited to partner with Testkube for today's episode. Testing applications in Kubernetes doesn't have to be complicated or slow. Testkube is your go-to platform for cloud-native continuous testing. With Testkube, you can effortlessly orchestrate, scale, and automate tests right within Kubernetes, catching issues early and deploying with confidence. Try it out today and see why engineering teams trust Testkube to keep their testing simple and scalable. Visit testkube.io to get started.
Now, let's get into the episode. Dave, welcome to KubeFM. What are three emerging Kubernetes tools that you're keeping an eye on?
Dave: Thanks so much for having me, Bart. I'm looking forward to this discussion.
The first project I'd point to is the Kepler project. It's an acronym for Kubernetes Efficient Power Level Exporter—a Prometheus exporter for tracking power consumption down to the container level. We'll talk more later about why this is especially interesting and relevant. They just had a big ground-up rewrite and new release, so I'm trying to get caught up.
Another project is CloudNativePG, which is a cloud-native approach to PostgreSQL. Throughout my career as a software engineer, I've deployed PostgreSQL databases dozens of times, including a few times in Kubernetes with StatefulSets. I appreciate the patterns CloudNativePG is bringing to simplify things in our complex Kubernetes stack.
The last project I'd mention is KubeEdge. As its name suggests, it's about edge computing. At a high level, it provides a cohesive view across cloud and edge nodes, synchronizing metadata and giving you a unified cluster perspective. I'm just starting to dig into what it can provide, but it seems pretty interesting.
Bart: One of the things we're interested in is content; that's how we found you, through the blog you wrote. When it comes to leveling up, getting hands-on experience, and getting your questions answered about these tools, what resources do you use to learn? Are they blogs, podcasts, or videos? We recently took a look at Data on Kubernetes, which is a recurring theme in our monthly content analysis, and a lot of times people struggle to find resources about that specifically. Of course, you can go to the official documentation, you can watch talks from KubeCon, but what works best for you, Dave?
Dave: It's an all-of-the-above strategy. Definitely lots of talks from KubeCon and other conferences. It is really hard to keep up, honestly. It's a very fast-moving space, and I'll usually feel like I need to deep dive into something to really learn it. Even after just a week or two with my head down, looking at one thing in particular, I'll look back up and the landscape has shifted a little bit. There are new projects gaining traction and new patterns people are adopting.
Trying to keep on top of things, besides documentation for whatever you're working on and Stack Overflow questions and answers, the CNCF Slack community is one place I try to hang out, especially for projects like Kepler, where a lot of discussion happens. The New Stack has some great content, and even XDA Developers—though more small-scale and home-lab-focused—has been a great inspiration for weekend projects.
Bart: Good. Now, we've talked about the things that you're interested in, but for people who don't know you, can you just give us an intro about who you are and what you do?
Dave: Hi everyone, I'm Dave Masselink. I'm a software engineer and, as of earlier this year, working full-time on a startup project called Compute Gardener. Previously, I'd worked at Google's X-Labs, Intuit (the financial software company), and Tesla SolarCity years back.
Bart: Fantastic. How did you get into Cloud Native? How did that journey start?
Dave: It was a journey. It started for me back in the 2016-2017 timeframe when I was at SolarCity, which was then acquired by Tesla. I was in the energy side of the company, on the grid systems team. We were starting to deploy infrastructure that supported our solar inverters and other field systems for our customers. We were beginning to see infrastructure as code patterns. My colleagues were working with Mesosphere, Terraform, and Kubernetes. That was where I first came across Kubernetes.
As luck would have it, in the summer of 2018, I was part of the largest layoff round to that point in Tesla's history. I was one of the 13% let go in June 2018. But as is often the case with these things, it was a blessing in disguise. It allowed me to have an intensive job search and career focus, ensuring I was moving in the directions I wanted and keeping my technical skills sharp in a new domain.
In particular, I ended up at Intuit, the financial software company, and really dove into cloud native there. I joined a team as a front-end engineer, mostly building internal developer platform tools to make it easier for Intuit developers to get up and running quickly with a project. At the time, I didn't realize how many great resources I had around me. The Argo team, including Jesse and Alex, was right next to me. That was where I first got deeply into the cloud native world.
Bart: And what were you doing before getting into cloud native? What kind of stuff were you working on?
Dave: I briefly mentioned that the front-end web stack is where I've mostly been working throughout my career. I started off in embedded systems and fortunately got pulled up the stack: I would help somebody instrument a solar field and get the data into a database, and they would then ask, "How do we see it? How do we access it? Can we visualize it?" I had to learn on the fly how to do those things and found that the front-end space—building React and Angular apps, HTML and CSS—is an awesome place to be. It's a highly leverageable skill set, and outside of the cloud native world, it's probably the most fast-moving space I've been a part of.
Bart: And now, if you could go back in time and give yourself one piece of career advice, what would it be? And maybe we want to keep in mind you've got a startup now, too.
Dave: If I could go back in time, I'd probably do something more important than give myself career advice. But if I could only tell myself one thing, I would advise not fearing, or being hesitant about, a circuitous career—a career that's not a straight line.
When I first started my career, I was afraid of deviating from the path. I feared that jumping off the career ladder would ruin my prospects if I didn't keep taking step after step upward. I've definitely learned a lot since then.
A straight-line career isn't all there is, though. Many of the most interesting projects I've seen have come from people with wildly varying perspectives gained through lateral career moves. Even people who took long stretches off for teaching or family care, and then jumped back in 10 years later, bring a new perspective you can't even imagine.
I've tried to be less and less afraid of that non-linear path.
Bart: So, we're looking at an article you wrote titled "Building a Carbon and Price-Aware Kubernetes Scheduler". Data centers are becoming a significant part of our global energy consumption. Can you help us understand the scale of the environmental challenge that led you to build the Compute Gardener scheduler?
Dave: Absolutely. The best estimates are that about 4% of electricity, whether you're looking globally or in the US, goes to the cloud. This includes classic data centers, smaller on-premises data centers, and network infrastructure.
The US has had some great achievements in energy efficiency in the last few decades, with total energy consumption holding flat or even shrinking. However, we're at a different moment in time with generative AI: all of those previous gains are starting to be lost as energy demands grow.
The projections suggest the current 4% will triple to around 12% in the next three to five years, according to the US government. This increase is not because the rest of the energy pie is shrinking; everything is growing. The potential addition of all these emissions made me realize we have to take action. There's nobody tasked with fully optimizing the electric grid, which means we all must contribute, and we need tools that empower and enable us.
My pursuit of Compute Gardener started as a side project. I had come across the Electricity Maps API, from a company in Denmark that aggregates and provides carbon intensity data, and I installed a new power meter in my house to track real-time power use and the related carbon emissions. I was quickly surprised to learn that, at least in California, carbon intensity swings wildly. That discovery motivated me to explore the low-hanging fruit in energy efficiency.
Bart: The Compute Gardener Scheduler integrates carbon awareness into Kubernetes scheduling decisions. How does this actually work at a high level for people who might not be familiar with it?
Dave: Using a strategy that is relevant outside the Kubernetes context, there are a couple of different shifting strategies: time-based shifting (temporal shifting) and space-based shifting (spatial shifting). The time-based approach is what we've focused on so far. The goal is to shift a workload to a cleaner time or place for execution. I'll be able to tell you more about exactly what Compute Gardener does as we continue talking.
Bart: And you mentioned using real-time carbon intensity data from external APIs. What exactly is carbon intensity, and how do you measure it for different power grids? You could potentially use the Electricity Maps API to retrieve this information.
Dave: It's a metric that attempts to capture how clean the grid's generation mix is, across all resources contributing at a moment in time. Roughly speaking, it indicates whether the grid is very clean or very dirty. The unit carbon intensity uses is grams of carbon dioxide per kilowatt hour: for each unit of energy, how many grams of carbon dioxide get emitted.
I was very surprised as I dug into this and saw some of the data in my region, Northern California (CAISO). In the middle of a sunny day, it's probably about 50 grams of CO2 per kilowatt hour. In the middle of the night, I was seeing values of 250, 300, even 350 grams.
This is not a signal that the utility passes through for you to act on, but it is a reality nonetheless. It motivated me to build tools that make it easier for folks to use this signal.
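To make the unit concrete, emissions are simply energy multiplied by carbon intensity. For a hypothetical job drawing 2 kWh (an illustrative size, not a figure from the episode), the CAISO swing above works out to:

$$\text{emissions} = E \times I$$

$$2\ \text{kWh} \times 50\ \text{gCO}_2/\text{kWh} = 100\ \text{gCO}_2 \quad \text{(sunny midday)}$$

$$2\ \text{kWh} \times 300\ \text{gCO}_2/\text{kWh} = 600\ \text{gCO}_2 \quad \text{(overnight)}$$

Same job, same energy, six times the emissions. That gap is exactly what time-shifting exploits.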
Bart: Now, beyond carbon awareness, you've also built in time-of-use pricing optimization. How does this time-of-use pricing work and why is it important for Kubernetes workloads?
Dave: Time of use is not important for all workloads. Let me quickly explain what time of use is: It's a pattern or program implemented by many utilities that allows you to opt into a different rate structure. For example, electricity might normally cost 15 cents per kilowatt hour on their standard rate. When you opt into time of use rates, during peak windows you pay more for electricity, but during off-peak windows, you pay less.
To use rough numbers: during peak windows, you might pay 20 cents per kilowatt hour, and during off-peak times, only 10 cents. This rate setup is most common in smaller commercial and residential contexts and isn't always relevant in data center environments. Often, data centers procure large amounts of energy upfront through power purchase agreements, so while the power still contributes to emissions, the costing structure differs.
At Compute Gardener, we work with this fixed schedule by delaying jobs that arrive during peak windows until off-peak times. We track the energy used and can estimate money saved. This approach is especially relevant for on-premises, smaller-scale, home lab contexts, but it's becoming increasingly applicable in cloud environments. Some cloud providers are now focusing on advertising the carbon intensity makeup of their energy.
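As a worked example with those rough rates, take a deferrable job that draws 1 kW for 5 hours (an illustrative workload, not one from the episode):

$$5\ \text{kWh} \times \$0.20/\text{kWh} = \$1.00 \quad \text{(peak)}$$

$$5\ \text{kWh} \times \$0.10/\text{kWh} = \$0.50 \quad \text{(off-peak)}$$

Delaying the job to an off-peak window halves its electricity cost without changing the work done.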
Bart: Let's talk a little bit more about the technical implementation. You built this as a Kubernetes Scheduler plugin using the scheduler framework. What were the key design decisions that you made?
Dave: Probably more mistakes than decisions. When I first started, after I'd roughly figured out what the project should be, I discovered other people had explored similar things in the past. It wasn't surprising—there were white papers and academic papers from a few years ago that were pretty close to what I thought should be done, using the scheduler extender concept.
I started there but quickly learned that the extender is an older pattern that doesn't work as well as the scheduler plugin framework. The plugin framework is really interesting. While it sounds like just add-ons and ancillary things, it isn't: everything you appreciate about the Kubernetes scheduler—its ability to do resource-constraint scheduling, network-constraint scheduling—is implemented as individual plugins. That makes it easy to build your own scheduler, pull in what you want, and implement just the differentiating parts you care about.
The scheduler framework has a dozen different extension points along the scheduling pipeline where you can implement plugins; we needed to implement two. The first is the pre-filter stage, where you consider a pod in isolation against some global conditions. This is when you first hear about the pod that's just been created and can check those conditions, like whether you're in a time-of-use peak window or a high carbon intensity period. If so, you wait.
The filter stage is different. Here, you're considering nodes and which node to bind the pod to. You get a function called with a node-pod pair. If we have data about nodes' hardware and the efficiency of that hardware, we can attempt to place the job into more efficient hardware at this stage. These are the two most important stages we needed to implement.
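As a rough illustration of those two extension points, here is a minimal sketch of a carbon-aware plugin against the scheduler plugin framework. To be clear, this is not Compute Gardener's actual code: the plugin name, threshold wiring, and node label are hypothetical, and the framework's Go signatures shift a little between Kubernetes versions.

```go
package carbonaware

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// CarbonAware is an illustrative plugin: it delays pods while grid carbon
// intensity is above a threshold (PreFilter) and skips nodes flagged as
// inefficient (Filter). All names and helpers here are hypothetical.
type CarbonAware struct {
	threshold float64        // gCO2/kWh above which pods should wait
	intensity func() float64 // reads a cached, periodically refreshed grid value
}

var (
	_ framework.PreFilterPlugin = &CarbonAware{}
	_ framework.FilterPlugin    = &CarbonAware{}
)

func (c *CarbonAware) Name() string { return "CarbonAware" }

// PreFilter considers the pod in isolation against global conditions:
// if the grid is currently too dirty, mark the pod unschedulable so the
// scheduler retries it later instead of binding it now.
func (c *CarbonAware) PreFilter(ctx context.Context, state *framework.CycleState, pod *v1.Pod) (*framework.PreFilterResult, *framework.Status) {
	if now := c.intensity(); now > c.threshold {
		return nil, framework.NewStatus(framework.Unschedulable,
			fmt.Sprintf("carbon intensity %.0f gCO2/kWh above threshold %.0f", now, c.threshold))
	}
	return nil, nil // a nil status means success
}

func (c *CarbonAware) PreFilterExtensions() framework.PreFilterExtensions { return nil }

// Filter runs once per (pod, node) pair and can steer work toward more
// efficient hardware, for example via Node Feature Discovery labels.
func (c *CarbonAware) Filter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
	if nodeInfo.Node().Labels["example.com/power-efficiency"] == "low" { // hypothetical label
		return framework.NewStatus(framework.Unschedulable, "node marked low power efficiency")
	}
	return nil
}
```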
Bart: I think one of the biggest challenges must be accurately estimating energy consumption for different workloads. How do you model power usage across CPUs, memory, and GPUs?
Dave: It is one of the most difficult parts. But before talking about that, I'll say that it is not necessary to track energy in order to have a real-world positive impact on carbon emissions. You can pick up this open-source scheduler with no tracking whatsoever, and as long as jobs are being shifted to cleaner times, it will have a net benefit for everyone. It's just that you can't prove it to anybody for purposes of recognition or anything else.
In order to track and prove it, which we also want to do, we need to do some energy tracking, and it differs across the components of compute. GPUs are such energy-intensive devices that energy was considered from the ground up. For instance, NVIDIA has a piece of software called DCGM (Data Center GPU Manager) that gives a pretty good estimate of full-card power, not just the core. You can apply a power usage effectiveness (PUE) ratio on top of that and have a pretty good estimate of what the GPU is pulling.
It's different for CPU and memory. The history of computing is such that real-time power features weren't built in from the beginning. For those, we have to do something a little different: we have a model that maps what we do have visibility over, CPU utilization and memory use, into energy usage. That's some math and estimation.
Since the blog post was published, we have started integrating Kepler as a data source for power as well. Thus far, it's looking like a nicely accurate approach, although there's a fair bit of overhead, so we're not sure everybody will want it. We'll probably support both a standardized community exporter like Kepler and our own model, which is based on the Green Software Foundation's Software Carbon Intensity (SCI) methodology.
Our model goes a bit further by taking dynamic CPU frequencies into account. Rather than a purely linear model, where you take the power at 100% utilization and the power at idle and draw a straight line between them (which is honestly a pretty good estimate), you can get a little closer using a power-law estimate. It sounds more complex than it is: it just says the relationship isn't a line but a power curve. That's what our model does, and we're building out support for more advanced exporters as well.
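In symbols, with utilization $u \in [0, 1]$, the two model shapes Dave contrasts look roughly like this (the exponent $\alpha$ is a fitted parameter; this is a sketch of the approach, not published Compute Gardener coefficients):

$$P_{\text{linear}}(u) = P_{\text{idle}} + (P_{\text{max}} - P_{\text{idle}})\,u$$

$$P_{\text{power law}}(u) = P_{\text{idle}} + (P_{\text{max}} - P_{\text{idle}})\,u^{\alpha}$$

Multiplying the estimated power by a PUE factor then accounts for facility overhead, just as in the GPU case above.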
Bart: Given that performance is critical for any scheduler, how did you ensure the carbon and price checks don't add significant latency to scheduling decisions?
Dave: Latency and performance are critical considerations for a scheduler. We put careful thought into taking everything out of the scheduler's critical path. What you don't want is to say, "We just heard about a pod, do we want to schedule it?" and then call an external API in real time, which could take too long or fail the request.
Instead, we have background processes that periodically check and cache the necessary data, making it always accessible for the scheduler's decision point. We've been able to reduce the P95 latency to less than 100 milliseconds, and we believe this is a solid foundation to continue improving from.
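A minimal sketch of that pattern, assuming a hypothetical fetch function that wraps the external carbon intensity API:

```go
package carbonaware

import (
	"context"
	"sync/atomic"
	"time"
)

// intensityCache refreshes grid carbon intensity in the background so the
// scheduler's hot path only ever reads a cached value and never calls the
// external API directly.
type intensityCache struct {
	value atomic.Value // holds the latest float64 reading
}

// run polls the API on a fixed interval. On error it keeps serving the
// last good value rather than failing scheduling decisions.
func (c *intensityCache) run(ctx context.Context, fetch func(context.Context) (float64, error), every time.Duration) {
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for {
		if v, err := fetch(ctx); err == nil {
			c.value.Store(v)
		}
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}

// Current is what the scheduling stages call: a lock-free memory read that
// adds effectively no latency to the decision point.
func (c *intensityCache) Current() float64 {
	v, ok := c.value.Load().(float64)
	if !ok {
		return 0 // no data yet; callers can treat this as "don't block pods"
	}
	return v
}
```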
Bart: And for developers who are interested in using Compute Gardener, how do they configure their pods to be carbon aware? What control do they have over the scheduling behavior?
Dave: One of the goals was to make it quite simple to get started. Once the scheduler is deployed inside the cluster with a Helm install, only one line needs to change in a pod or job spec: the scheduler name. When you see a scheduler name in pod specs today, it's almost always the default Kubernetes scheduler, "default-scheduler". But if you specify a different scheduler, Kubernetes will honor that and send the job to the deployment backing that scheduler.
This approach is very much opt-in. No compute jobs that aren't specifically focused on or don't care about the scheduler will ever be affected. Only jobs that explicitly opt-in will be touched.
In terms of developer control, after adding the scheduler-name line, you can also override the default settings. The default carbon intensity threshold is 150 grams per kilowatt hour, and the default max delay is 24 hours. For a semi-critical job that doesn't need to run immediately but should run relatively soon, you can adjust these parameters, for example choosing a higher carbon intensity threshold and reducing the max delay from 24 to 12 hours, using custom annotations on the pod or job.
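A sketch of what that opt-in can look like, expressed here with client-go types; the scheduler name and annotation keys are illustrative placeholders (the project's chart and docs define the real ones), and only the schedulerName field is strictly required:

```go
package main

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newDeferrableJob builds a batch job that opts into the carbon-aware
// scheduler and overrides the defaults discussed above. The scheduler
// name and annotation keys are hypothetical placeholders.
func newDeferrableJob() *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "nightly-report"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Annotations: map[string]string{
						// Raise the intensity threshold above the 150 g default
						// and shorten the max delay from 24h to 12h.
						"compute-gardener.example.com/carbon-intensity-threshold": "200",
						"compute-gardener.example.com/max-delay":                  "12h",
					},
				},
				Spec: corev1.PodSpec{
					// The one required change: name a scheduler other than
					// the default "default-scheduler".
					SchedulerName: "compute-gardener-scheduler",
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "report",
						Image: "example.com/report:latest",
					}},
				},
			},
		},
	}
}
```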
Bart: One thing that's important to keep in mind is that this is not necessarily just an initiative about wanting greener energy for the sake of greener energy, but actually looking at results. You mentioned some very impressive projected results, 50% carbon reduction for deferrable workloads. Can you break down exactly where these savings come from?
Dave: Absolutely. At the top, as with anything like this, "your mileage may vary" is very important to say. The reason is that the carbon emission arbitrage opportunity exists because of wild swings on my local grid. This isn't exactly the case everywhere, especially as we have good renewable penetration throughout the day. I want to point out that not everybody in the world can install this and immediately see 50% carbon emission savings.
To drive the distinction home: there are no energy savings here, at least in the example we're discussing with 50% carbon emission savings. The exact same amount of energy was consumed either way; the same jobs ran and took the same amount of time. The difference is when we ran them and the grid's carbon intensity at that time.
In the California ISO region, there are wild swings—five to seven times over the course of the day—in what you could be emitting per unit of energy. Our test setup was a regular cron job, hourly or more frequent, that ran benchmark jobs. During high-intensity periods, the scheduler would delay them until we got below the carbon intensity threshold.
Because of our grid's wild swings, we achieved savings in the high 50s. While 50% is impressive, this was due to our specific setup. It's not a completely unique pattern for jobs, but it isn't universal either, so it's worth pointing that out.
Bart: Energy visibility seems to be a key benefit beyond just optimization. What insights have users discovered about their workloads that surprise them?
Dave: They often don't have a baseline to be surprised against. This is something nobody's been tracking in cloud compute. When I go back to a team and say, "This service is using this many kilowatt hours per month, and here's what the emissions look like; this namespace is doing such-and-such," they initially look at me and say, "Okay, let's try to bring that down." It's less a surprise about where they're starting from; now they have a number to track and optimize.
What surprises me most—although it really shouldn't, since we are humans and social competitive animals—is something I've experienced a few times. When I share with a partner company what their service, team, and emissions and energy use look like, they quickly want to know, "What about that team over there? How do we compare?" It's a good-natured, competitive cross-team behavior that benefits us all.
Bart: Looking at the broader ecosystem in terms of cloud-native technologies, how does your approach compare to other carbon-aware computing initiatives like KEDA (Kubernetes Event-Driven Autoscaling), a CNCF project, or cloud provider sustainability features?
Dave: Really quickly, I'll talk about cloud providers first, because I think it's a little distinct. Cloud providers, AWS as an example, a few years back put a mechanism in their billing systems where you can look at the likely emissions impact of your usage. This is always after the fact, from the last billing period, so it's not something you can operationalize for immediate decisions. But I'm really glad they have it and provide any insight at all to their customers.
The other thing is when it comes to cloud provider solutions, that's a single cloud provider. It works for some people. Some people are AWS ride or die, while others want to be cloud agnostic and want tools that work across platforms. Our goal is to provide a solution whether you have some compute in AWS, some in GCP, some on-prem—you'll want one place to look at all of that.
In terms of other tools in the space, at a high level, I think we're all rowing in the same direction with complementary tools. KEDA, as mentioned, is about autoscaling and can use the carbon intensity signal to trigger scaling decisions. There's another project called kube-green that does something different: it has a mechanism for snapshotting running pods and putting them to sleep.
Their use case is for workloads that only interact with infrastructure during working hours (Monday through Friday). Why should you pay for infrastructure when it's not in use? They make it simple to define schedules where everything you opt into goes to sleep during certain stretches.
These are different approaches and strategies, but the most important thing is they can all work together. The cumulative savings stack, so you don't have to pick one tool or approach—pick many.
Bart: So you're actively seeking production validation partners. What kind of organizations or workloads would benefit most from deploying the scheduler?
Dave: We are looking for teams on the small to medium scale that we can best support. We're seeking forward-thinking organizations with a hybrid cloud setup, having infrastructure in both public cloud providers and on-premise environments. The on-premise infrastructure is particularly valuable as it's easier to validate energy use. Accurately estimating energy consumption will be a relevant portion of our partnership, and having on-prem hardware would be a good approach to start with.
Bart: The roadmap also mentions exciting developments like carbon credits integration and cross-cluster scheduling. Could you paint a picture of where carbon-aware computing is headed?
Dave: Let's see. I'll take cross-cluster scheduling first, as we touched on it before. We've really focused on temporal scheduling so far, but we're starting to shift, because there's a whole additional layer of carbon savings potentially available if you can move a job from one cluster or cluster region to another that's much cleaner.
However, the reason why we didn't focus on that initially is that most compute doesn't operate in a vacuum. It operates on data, and you need that data there for the compute to occur. When you're inside a cluster and just saying, "Let's wait till later," you let the data sit there. But if you have to move the job to another cluster and you don't already have the data there, you can have time delays related to that, perhaps egress fees in the worst case, and other things that start to make you question whether it's even worth making the shift in the first place. These are the thorny questions and corner cases we're working through right now.
Beyond that, carbon credits are probably one of the potential programs in our future that I'm most excited about. I'm an energy nerd and have been tracking carbon credits and marketplaces for a while. For those who might not be as aware, the most classic example of a carbon credit is where somebody will go and plant many acres of trees and say these will be a carbon sink for the next 30 years. They are then given carbon credits by marketplace authorities that they can buy and sell.
The reason it comes into play for Compute Gardener is that once we have visibility over what our customers are doing to mitigate carbon emissions—knowing what job came in, its intensity level, when it ran, and how much energy it consumed—we'll have the telemetry needed to validate actions for carbon credits.
As a software engineer, I won't be able to build that program alone. We'll definitely need to bring in policy and legal experts in the near future. But I'm very excited about it. I think it will shift thinking a bit. Right now, software engineers, DevOps engineers, platform engineers—whatever we're calling ourselves these days—are expensive, and our teams are cost centers in organizational speak.
I've been in enough high-level organizational conversations to know that if a team is bringing in some money, even a little, it changes people's thinking. It's not just a cost center anymore, but potentially a revenue center. It's not an offsetting revenue stream, but one that could potentially grow and get people thinking about the "green" they actually care about—and I'm rubbing my fingers together as if there's a dollar bill here.
Bart: I've never done that before in a podcast.
Dave: Just to explain what I was doing there, since not everybody is watching: we often have to tie carbon savings back to dollars-and-cents savings where possible. It's not always straightforward, because Compute Gardener is focused on limiting carbon emissions rather than limiting energy use; often the same amount of energy is consumed. Where we can introduce a new dimension of potential revenue, that's what we're trying to provide for our customers.
Bart: For teams interested in contributing or adopting Compute Gardener, what's the best way to get started? And what should they know before diving in?
Dave: Start small if you want. Making any change to infrastructure can be scary and risky. Start with your least important job running on your least important dev cluster and try it out. Let Compute Gardener run for a little bit of time. Make sure that you understand what it's doing and why it's holding jobs back at certain times and why it's not at others.
But really, I would say: jump in. We have a simple Helm chart for the install process. The only requirement is an Electricity Maps API key, which is free for a single grid region, so you won't need to pay any money.
Beyond conversations in GitHub, I'm always up to talk with folks personally over the internet about this topic and hear what tools they're using that are similar or different. We're all trying to figure out that optimal stacking as we empower ourselves to make these changes.
Bart: I've got one bonus question: Currently, we're both in California, but I'm normally based in Europe. In terms of what you're working on with Compute Gardener, what considerations are being made for the European market when it comes to sustainability, efficiency, green energy, and things of that nature to make Compute Gardener more scalable? Have you interacted with users in Europe and heard their concerns to contrast with things that are more focused on the U.S. market?
Dave: We have not worked with many customers in Europe yet. That being said, especially in the green software and clean software communities, Europe is leading the way; the US is behind in terms of participation and real steps forward. Even though I'm less familiar with the energy and cloud markets in Europe, those are often the more forward-thinking teams and places, without the political question marks around whether something should be done. I would love to work with potential European partners.
Bart: Fantastic. I met folks from LeafCloud a few years ago. So I've been hearing about what they've been doing. Also working with some folks in the UK that are interested in this topic. So it's something worth exploring further. Dave, what's next for you?
Dave: Focusing on Compute Gardener, I'm trying to gain traction with the project, find forward-thinking teams to partner with, and implement some of these patterns. I want to start proving the value of it. Beyond that, I'm personally planning to attend KubeCon this year in Atlanta. It will somewhat surprisingly be my first time ever, so I'm really excited about that. Hopefully, I'll get to meet you in person, Bart, and lots of the listeners.
Bart: Likewise. For people who want to get in touch with Dave, what's the best way to do that?
Dave: LinkedIn works, or the GitHub project page. We'll probably link both of those, plus the Compute Gardener website. Any of those should work.
Bart: Absolutely. Dave, thanks so much for joining us today. I look forward to speaking with you soon and seeing you on the ground in Atlanta. Take care.
Dave: Thanks, Bart. See you later.