Traefik Labs announces innovations in API management and Kubernetes networking

Guest:

  • Sudeep Goswami

Traefik Labs announces a comprehensive API runtime infrastructure that streamlines how organizations manage and expose their APIs. This transforms the traditional weeks-long deployment process into minutes.

The solution offers a unique three-tier journey from open-source proxy to full API management, complete with AI gateway capabilities and mock API infrastructure.

What makes this particularly compelling is their cloud-native, GitOps-first approach that includes automatic service discovery, distributed rate limiting, and certificate management — all designed to be fully declarative and code-based, setting it apart from competitors.

Read the announcement

Transcription

Bart: The host is Bart Farrell. The speaker is Sudeep Goswami (works for Traefik Labs).

Sudeep: I'm Sudeep Goswami. I'm the CEO of Traefik Labs.

Bart: Sudeep, what would you like to share with us today?

Sudeep: We have a lot of exciting things that we've been building over the past few months, and I'd love to talk to you about that. Traefik Labs has been around for a few years, starting with the open-source project Traefik Proxy, which became extremely popular with over 3.3 billion downloads and over 50,000 GitHub stars. We have been busy building an API runtime infrastructure with which we can expose any API securely and share it with developers and other API consumers, and that has many different parts to it that we can go through during the rest of the conversation.

Bart: In terms of the problems or the pain points that you're tackling, could you explain a little bit more about that? What are the things that you're helping engineers make their lives easier?

Sudeep: In a nutshell, what we do is allow DevOps and platform engineers, as well as software developers, to expose their microservices and APIs in a matter of minutes, which is a process that typically takes weeks or sometimes months to set up from an infrastructure perspective.

Bart: In terms of the before and after: what was the context leading up to this announcement, and what does the situation look like going forward?

Sudeep: Let me take a step back and walk you through the capabilities of our API runtime infrastructure. First and foremost, it starts with our ability to get installed in a Kubernetes-native environment and auto-discover all the services that are running. This gives our operators, specifically DevOps and platform engineers, full visibility into every microservice running in that environment. From there, they can pick and choose which ones they want to expose and how, whether they want to expose it just as a service or put an API wrapper around it and expose it to the outside world. We don't just stop there. We can provide additional layers of authentication and authorization using a common open standard framework like OIDC. We can also provide distributed rate limiting and distributed certificate management, making it easier for them to have a scalable infrastructure to expose those services or APIs.
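
To make that layering concrete, here is a minimal sketch, in Python rather than Traefik's actual Go implementation, of the concept: a gateway that checks a bearer token and rate-limits each client before forwarding requests to a discovered upstream service. The upstream URL, limits, and token handling are hypothetical placeholders; a real deployment would validate the JWT against the OIDC issuer and share rate-limit state across gateway instances.

```python
# Illustrative sketch only: an "API wrapper" in front of a discovered service,
# adding a bearer-token check and a naive per-client rate limit.
import time

import requests
from flask import Flask, Response, request

app = Flask(__name__)
UPSTREAM = "http://orders.internal:8080"   # hypothetical auto-discovered service
RATE_LIMIT = 5                             # requests per window, per client
WINDOW_SECONDS = 1.0
_hits: dict[str, list[float]] = {}         # in-memory counters; not distributed


def allowed(client: str) -> bool:
    """Sliding-window rate limit; a real gateway shares this state across nodes."""
    now = time.monotonic()
    hits = [t for t in _hits.get(client, []) if now - t < WINDOW_SECONDS]
    if len(hits) >= RATE_LIMIT:
        _hits[client] = hits
        return False
    hits.append(now)
    _hits[client] = hits
    return True


@app.route("/", defaults={"path": ""}, methods=["GET", "POST"])
@app.route("/<path:path>", methods=["GET", "POST"])
def proxy(path: str) -> Response:
    token = request.headers.get("Authorization", "")
    # Placeholder check: real OIDC validation verifies the JWT against the issuer's keys.
    if not token.startswith("Bearer "):
        return Response("missing or invalid token", status=401)
    client = request.remote_addr or "unknown"
    if not allowed(client):
        return Response("rate limit exceeded", status=429)
    upstream = requests.request(
        request.method,
        f"{UPSTREAM}/{path}",
        headers={"Authorization": token},
        data=request.get_data(),
        timeout=5,
    )
    return Response(upstream.content, status=upstream.status_code)


if __name__ == "__main__":
    app.run(port=8000)
```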

If customers or users need API lifecycle management, we provide a rich experience for both API publishers and consumers. On the API publisher side, they care about the ongoing lifecycle management of those APIs: change management, versioning, observability, and granular access control and security. On the other hand, API consumers want to easily see all the APIs they have access to, or perhaps request access to, and have an easy way to consume those APIs, usually through an intuitive developer portal that lets them view everything, test the API, and get instructions on how to call the APIs with a JWT token or API key.
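
On the consumer side, that flow typically ends in a plain HTTP call carrying the credentials issued through the portal. A hypothetical sketch, assuming a placeholder URL, token, and API-key header name:

```python
# Hypothetical example of consuming an API discovered in a developer portal.
# The URL, token, and key below are placeholders, not real Traefik endpoints.
import requests

API_URL = "https://api.example.com/orders/v1/orders"
JWT_TOKEN = "eyJ..."          # issued via the portal's OIDC flow (placeholder)
API_KEY = "demo-api-key"      # or a plain API key, depending on the access plan

response = requests.get(
    API_URL,
    headers={
        "Authorization": f"Bearer {JWT_TOKEN}",
        "X-API-Key": API_KEY,  # header name varies by gateway configuration
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())
```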

We also provide a full mocking infrastructure that mimics the production API infrastructure, giving developers an easy way to bring in an OpenAPI specification file and letting our infrastructure do the rest: it creates mock APIs against that OpenAPI spec, deploys them on infrastructure including API gateways and developer portals, and provides a link where other developers can log in, view the mock APIs, and consume them. That really accelerates the development velocity of different teams working together.
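
The core idea behind mocking from an OpenAPI document can be sketched in a few lines: read the spec, register each declared path, and return the example response it contains. This is only a conceptual illustration, assuming a local openapi.json file with inline examples; the managed mocking infrastructure described above handles the gateway and portal deployment for you.

```python
# Conceptual sketch: serve canned responses for the GET paths an OpenAPI 3.x
# document declares. File name and example layout are assumptions.
import json

from flask import Flask, jsonify

app = Flask(__name__)

with open("openapi.json") as f:          # hypothetical OpenAPI document
    spec = json.load(f)

for path, operations in spec.get("paths", {}).items():
    get_op = operations.get("get")
    if not get_op:
        continue
    # Pull the JSON example for the 200 response, if the spec provides one.
    example = (
        get_op.get("responses", {})
        .get("200", {})
        .get("content", {})
        .get("application/json", {})
        .get("example", {"message": "no example in spec"})
    )
    # Convert OpenAPI path templates like /orders/{id} to Flask's /orders/<id>.
    flask_path = path.replace("{", "<").replace("}", ">")

    def handler(example=example, **kwargs):
        return jsonify(example)

    app.add_url_rule(flask_path, endpoint=path, view_func=handler)

if __name__ == "__main__":
    app.run(port=8081)
```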

Lastly, we are launching an AI gateway, because you can't have a discussion about APIs without considering the AI side of the equation; AI can be, and increasingly is, consumed as an API. Our API infrastructure can easily handle the AI traffic passing through an organization, turning any AI endpoint into an API that you can manage through its lifecycle: you can version it, do change management on it, and observe it, while on the back end it is simply an API call to an LLM. In a nutshell, at Traefik Labs we're trying to make it extremely easy for users to publish and consume APIs, whether that API fronts a microservice, an application, or an AI model.
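
From the caller's perspective, the point is that an AI endpoint is just another HTTP API, so it can be versioned, secured, and observed like any other. A hypothetical example, assuming an OpenAI-compatible chat-completions endpoint exposed behind a placeholder gateway URL:

```python
# Illustration only: calling an LLM through a gateway looks like any other API call.
# The gateway URL, model name, and key are placeholders; the request body follows
# the common OpenAI-compatible chat-completions shape.
import requests

GATEWAY_URL = "https://ai-gateway.example.com/v1/chat/completions"
API_KEY = "placeholder-key"   # credential issued by the gateway, not the LLM vendor

payload = {
    "model": "gpt-4o-mini",   # whichever backing model the gateway exposes
    "messages": [{"role": "user", "content": "Summarize our API catalog."}],
}

response = requests.post(
    GATEWAY_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```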

Bart: With this in mind, is this open source and part of the CNCF landscape?

Sudeep: Part of this is open source, making it easy for people to get started on this journey. We talk about a three-part journey. The first part is an application proxy, which allows you to auto-discover services and expose them easily to the outside world. It has routing, load-balancing, and auto-discovery capabilities, all of which are fully declarative. This means users can configure everything using a GitOps operating model. Being cloud-native and Kubernetes-native, you can declare everything as code and use GitOps. This part is open source, specifically Traefik Proxy.

When users want to enable additional capabilities through an API gateway, such as authentication, authorization, rate limiting, certificate management, or integration with third-party services like HashiCorp Vault or Azure Key Vault, that's where they need the paid functionality of an API gateway. We have made this journey seamless. Starting from the open source, users just need to execute three Helm chart commands, which takes less than a minute. We supply them with a license key, and they execute these commands to upgrade in place, preserving all previous configurations.

The third step of this journey is to go from an API gateway to API management. We have made this journey seamless as well, with just one Helm chart command. Within 30 seconds, you have all the capabilities of the API management platform, including API versioning, API security, API observability, and a developer portal. It is an upgrade on top of the API gateway, and it can be further customized with standards like the OpenAPI specification.

Bart: And what is Traefik Labs' business model?

Sudeep: It's a software subscription. The pricing model is an annual subscription, and you can subscribe annually or for multiple years; you pay for the software on an annual basis. The deployment model is on-premises. We have a data plane that can sit anywhere the customer chooses, whether that's a managed Kubernetes service on a public cloud, such as AKS or GKE, or in their private cloud. We are fully flexible in this regard. We do have a small control plane that runs in the cloud, but it's only for basic administration capabilities, and no traffic actually traverses the cloud. It's a fully on-premises model, which is how customers most often prefer to deploy.

Bart: In terms of your competitors, who are they and what differentiates Traefik Labs compared to them?

Sudeep: The biggest competitor that comes up often in discussions is Kong. Beyond that, the public cloud providers come up: AWS, which has API gateway features but not API management, and Azure API Management. The common thread across all of them that really differentiates Traefik is the ease of use we provide. We are able to do this because we are cloud-native and Kubernetes-native from the ground up. Everything we do is fully declarative and code-based. As our slogan says, we promote doing less ClickOps and more GitOps. In contrast, other solutions are very UI-driven and ClickOps-driven. This differentiates us in an age where DevOps and platform engineers prefer to have a CI/CD pipeline and use GitOps, and our technology and solutions allow for this.

Bart: Last but not least, what can they expect next from Traefik?

Sudeep: We have many plans to continue making things easy for our users. The mission has always been to remove friction for our users so they have a seamless and flexible API infrastructure. We've been removing friction along the way, and we will add more capabilities around our API gateway, API management, API mocking, and AI gateway. The foundation is there, with many capabilities already in place. We will continue working closely with users to find additional friction points and solve them at Traefik Labs.

Bart: Very much look forward to hearing the next steps. Before we finish, is there a way for people to get in touch with you, perhaps through Traefik Labs?

Sudeep: So you can go to our website, traefik.io; that's T-R-A-E-F-I-K dot io. We have a bunch of information there: videos to watch and a bunch of blogs that we have written. And if somebody wants to get hold of me personally, it's my first name at traefik.io, so sudeep, S-U-D-E-E-P, at T-R-A-E-F-I-K dot io.

Bart: Thank you very much, Sudeep. We look forward to talking to you soon.

Sudeep: Take care. Thank you, Bart.