GroundCover announces MCP server for deep observability investigations
GroundCover has launched its MCP (Model Context Protocol) server, enabling developers to use AI agents for sophisticated observability investigations directly within their existing development environments.
Shahar Azulay shared compelling insights about how their eBPF-first architecture combined with a bring-your-own-cloud model creates a unique advantage in the AI era—customers retain full ownership of their telemetry data while accessing 5-10x more observability context than traditional SaaS vendors allow.
The key innovation lies in reversing the traditional investigation process: instead of drowning AI agents in raw log data, GroundCover provides structured insights and anomaly patterns first, enabling AI to perform intelligent multi-step investigations that can execute 20 queries in 30 seconds to reach root causes that might take humans hours to discover.
Transcription
Bart: So, first of all, for people who don't know you, who are you, what's your role, and where do you work?
Shahar: I'm Shahar, and I'm the CEO and co-founder of GroundCover.
Bart: What would you like to share with us today?
Shahar: GroundCover is building an observability solution for cloud-native environments. We are constantly trying to build a platform that makes data as accessible as possible. At the base of GroundCover are deep data correlations and root cause analysis built into the platform. As part of that, and given how quickly the development world is moving today, GroundCover is releasing its MCP server, which allows users to stay in their current environment and investigate data as they would with every other aspect of the developer lifecycle, but using GroundCover's rich and powerful data.
Bart: For people who might not be familiar, what is MCP, and what does it allow you to do?
Shahar: MCP is a protocol designed to allow LLMs to access data in a more structured way. You can use tools like GroundCover to help an AI agent perform multi-step investigations that require deeper research, rather than just a simple prompt and response.
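At its core, MCP lets a client discover a server's tools and then call them with structured arguments, rather than exchanging free-form text. The following is a minimal stdlib-only sketch of that request/response shape, not GroundCover's actual server or the official MCP SDK; the tool name `get_log_anomalies` and its payload are hypothetical.

```python
import json

# Hypothetical tool registry. An MCP client first discovers tools
# ("tools/list"), then invokes one ("tools/call") with structured
# arguments -- structured access instead of free-form prompting.
TOOLS = {
    "get_log_anomalies": {
        "description": "Return anomalous log patterns for a service",
        "handler": lambda args: [{"pattern": "timeout connecting to db", "count": 42}],
    },
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC-style request to a registered tool."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = [{"name": n, "description": t["description"]}
                  for n, t in TOOLS.items()]
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"].get("arguments", {}))
    else:
        result = {"error": "unknown method"}
    return json.dumps({"id": req["id"], "result": result})
```

Because every call is structured, an agent can chain them: list the tools, pick one based on its description, and feed one call's result into the next, which is what enables the multi-step investigations described here.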
Observability is about waking up in the morning, opening the logs page, performing a query, and investigating when something seems off. A talented SRE finds that succinct piece of information from huge volumes of data by developing dashboards that help them find the needle in the haystack.
MCP allows regular developers or DevOps professionals to be as proficient as an SRE. It helps them use APIs that GroundCover has built for its users, now made more granular for AI users. Instead of just staring at the log screen looking for anomalies, AI can help augment their investigation capabilities.
This approach levels up the investigative abilities of most people. You don't have to be an expert in observability or understand the entire application stack of an organization. AI can help augment your knowledge and ability to investigate, with GroundCover providing the data foundation for these investigations.
Bart: And to dive into the technical side, how does the MCP server handle the unique telemetry challenges of LLM-based workloads, especially when dealing with high-dimensional token streams and non-deterministic outputs?
Shahar: One of the key questions is context, particularly in context windows for LLMs. Logs are an extreme example, with organizations pushing dozens of terabytes of logs per day. The common assumption would be that an AI engine would fail, drowning in data with no way around it.
The sophistication lies in not treating an AI user the same as a regular user. We're accustomed to users who can learn query languages, become proficient, and navigate UIs to investigate data. Instead, it's our responsibility to create APIs that make data accessible through succinct, correlated endpoints.
For example, with logs, we can create log patterns and log anomalies that an AI agent can use to obtain more concise data initially. The agent can then query deeper insights as needed. Traditionally, when opening a log screen in GroundCover, Datadog, or Grafana, users see dozens of logs and start searching.
The right approach is to reverse this process: first provide insights and give the agent something to work with. The AI can then deduce context and use multiple APIs more intelligently than a standard developer, determining when to query raw data and dive into thousands of tokens to understand what actually happened. Ideally, this deep data exploration occurs last in the process if designed correctly.
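The log-pattern idea described above can be sketched simply: collapse the variable parts of each log line so that similar lines share a template, then surface the most frequent templates as the concise first view an agent sees. This is an illustrative stand-in for whatever clustering GroundCover actually uses; the masking rules here are assumptions.

```python
import re
from collections import Counter

def log_pattern(line: str) -> str:
    """Collapse variable parts (hex ids, numbers) so similar logs share a template."""
    line = re.sub(r"0x[0-9a-f]+", "<HEX>", line)
    return re.sub(r"\d+", "<N>", line)

def summarize(lines, top=3):
    """Return the most frequent patterns -- the concise view an agent sees first."""
    counts = Counter(log_pattern(line) for line in lines)
    return counts.most_common(top)
```

A handful of templates with counts fits easily in a context window, whereas the underlying terabytes of raw logs never could; the agent only drills into raw lines for the one template that looks anomalous.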
Bart: What mechanisms does the MCP server provide for tracing and debugging inference latency across GPU, model server, and upstream API layers, especially in distributed LLM architectures?
Shahar: GroundCover is not yet covering GPU natively, but we are planning to expand and allow people to investigate GPU performance. Our solution can cross-section information between infrastructure metrics, application tracing, application logs, and infrastructure events, providing a deep drill-down by creating context with an agent that knows the next query.
When people use GroundCover MCP, they could potentially reach the same root cause, but the key differences are speed, efficiency, and system usability. As a user, you might have to navigate through six different screens, query languages, and intersections to diagnose an issue. In contrast, an AI agent can perform 20 queries in 30 seconds and reach the root cause.
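The multi-step investigation Shahar describes can be pictured as a loop: start from structured insights, and only drill into detailed data for the narrowed-down suspect. The fetch functions below are hypothetical stand-ins for MCP tool calls with hard-coded sample data; the point is the ordering of the steps, not the data itself.

```python
# Stand-ins for MCP tool calls (hypothetical names and payloads).
def get_anomalies(service):
    return [{"pattern": "connection refused", "spike": 8.5}]

def get_traces(service, pattern):
    return [{"span": "db.query", "error": True, "upstream": "payments-db"}]

def investigate(service):
    """Insights first, drill-down last: the reversed process described above."""
    steps = []
    for anomaly in get_anomalies(service):                     # step 1: concise insights
        steps.append(("anomaly", anomaly["pattern"]))
        for trace in get_traces(service, anomaly["pattern"]):  # step 2: targeted drill-down
            if trace["error"]:
                steps.append(("root_cause", trace["upstream"]))
                return steps
    return steps
```

A human would perform each of these hops through a different screen and query language; an agent just issues the calls back to back, which is where the speed difference comes from.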
We are focused on monitoring infrastructure more deeply and work with many AI companies that have these needs. We're moving towards LLM observability, not just allowing MCP or AI to query data, but also monitoring AI usage, which is becoming increasingly prevalent. We have many developments coming in the next few months.
Bart: Can you walk us through how MCP integrates eBPF-based observability and application-level metrics for LLMs? Specifically, how does it correlate system-level signals with model-level behaviors like drift or degraded output quality?
Shahar: One of the things we focus on is allowing you to get information about the observability of your system. You can use GroundCover as a tool in your toolchain. If you're investigating specific issues and want to examine the performance of your models, you'll have more signals that can help you intersect different data points—such as information coming from MCP servers within your application.
We want to ensure GroundCover is a complementary tool in your workflow. When you're working in Cursor, even next to your code base, you can ask questions using tools like your file system and GroundCover. We hope people can create more complex monitoring use cases, always using GroundCover to augment the data they're using to monitor their performance or LLM performance.
Bart: Before the MCP server, how were your customers handling these kinds of investigations, and what does MCP change about that workflow?
Shahar: Before that, companies were already starting to automate queries from GroundCover, knowing that fetching data from different sources would help them figure out problems. We saw this with people building complex alerting flows to gather information and provide context specific to their needs.
MCP is saying two things. First, we want to keep you where you're at if you want to stay there. We're not forcing you into a magic wand AI bar in our UI saying, "Search for something." If you're building code in this century and want to work with other tools, we'll be there as an extension of your AI tooling.
Second, it's much more flexible for whatever workflow you want to build. If you want to bring in context data and do that agilely with an AI agent, you don't have to think about building these workflows into the GroundCover UI. You can do it on the fly, as these models can.
This simplifies use cases that customers might not have been able to do before, while keeping them closer to where they care about—sometimes in the GroundCover UI, sometimes in their terminal or code base. It's about being more native to their stack.
Bart: Is the MCP Server open source and part of the CNCF landscape?
Shahar: The MCP server is basically a protocol, or an approach. Our specific product is open source, but we are using the same mechanism that Anthropic released because we want to ensure we're in that toolchain. We'll continue doing that with other standards as they emerge. The landscape is changing so quickly that it's hard to predict what will happen in two months. But we owe it to our users to stay up to date and ensure that whenever an open source standard is released in the coming months to solidify some of this, GroundCover will be there to help users utilize their data.
We provide the eBPF sensor, infrastructure, and mechanisms to collect extensive context and data. We want to make sure they utilize it appropriately, especially in the AI area. We have the advantage of collecting significant data with eBPF, storing it close to the customer through our bring-your-own-cloud architecture. We want to leverage that, and AI is going to be a significant part of that advantage.
Bart: And for people who might not be familiar, what's GroundCover's business model?
Shahar: GroundCover is monitoring your stack in any cloud-native environment. Kubernetes specifically is where we shine, but we're a great option for any modern cloud-native stack. Our model is unique. We're built on an architecture called Bring Your Own Cloud. We're not a pure SaaS vendor, but a perfect mix between an on-premise data plane and a SaaS UI or control plane.
This allows customers to enjoy a managed experience, accessing the UI without worrying about managing their observability stack—because they have a day job. At the same time, we're storing data on-premise, securely and privately. GroundCover isn't ingesting all of the customer's data, which differs from the entire observability landscape's typical approach.
As a result, we've created a different pricing model: a single SKU product priced by the number of hosts the customer is using. This creates a more predictable and fair pricing model that customers really appreciate.
Bart: I know you mentioned some of your competitors, like Datadog, earlier, but could you go over who your main competitors are and what differentiates GroundCover from them, particularly with the MCP server in mind?
Shahar: GroundCover sees the main platform players in the market as our competitors: Dynatrace, New Relic, Grafana Cloud, and Datadog. These are the players that GroundCover ultimately tries to replace. They're built on the classical SaaS model, storing all your data on the vendor's premises and charging you for data volumes in a very unpredictable, complicated way.
GroundCover is unique in two ways. First, we use eBPF, a very strong method to collect data. We're not the only ones in the market using it, but we are the only ones that are eBPF-first. We build our entire experience, including APM and deep application insights, on top of eBPF. Onboarding is basically frictionless—it takes about 60 seconds to install GroundCover and get infrastructure monitoring to APM, all in one place.
Second, we manage and store all this data on your cloud premises. If you care about privacy, data security, or don't want to pay for data volumes, if you're already limiting the data you send to Datadog or New Relic because you can't afford it, or deactivating components because it doesn't make sense from an ROI perspective—this is exactly where GroundCover comes in.
We have the firehose of data from eBPF and know how to store it cost-effectively on your premises, not charging you for that. You can have 5x, 10x more data, and in the AI era, that's exactly what you need: more data, more context, more correlation, more structured data in the same place.
Currently, people have multiple data verticals in different SaaS vendors. The data isn't practically their own. Sometimes they even pay for API queries on top of this data because it's stored remotely. It's hard to use tools like AI to cross-section the data, query it over the network, and understand what's going on.
This is where GroundCover has a significant advantage. With our bring-your-own-cloud approach, you own the data. It runs in your cloud premises, in your VPC. You can integrate whatever you want, query it directly with an MCP server. We'll help you do this without cost or network wiring limitations.
If you want to get in touch, groundcover.com is the best way. We have a playground at play.groundcover.com where you can check out the product. We have detailed docs about our MCP server and a blog on our website. Try the product with a free trial by installing or playing with the demo, and reach out in our Slack community—there's a link on the top right of our website. We're here to support you.
Bart: Perfect. Shahar, always a pleasure talking to you. Congratulations on the launch, and I look forward to hearing about it soon. Take care.
Shahar: Take care. Cheers.