Tintri introduces advanced Kubernetes data management with new VMstore CSI driver
Tintri, a storage solutions provider, makes its strategic entry into the Kubernetes ecosystem with three major announcements at KubeCon North America.
The company unveils its VMstore CSI driver with built-in AI capabilities, an instant ransomware recovery solution, and the GLAS-DP security suite - all designed to address critical storage and security challenges in cloud-native environments.
What makes this announcement particularly significant is Tintri's unique data-aware approach, which combines autonomous AI operations with metadata analysis to provide intelligent storage management and instantaneous ransomware recovery capabilities.
Transcription
Bart: Who are you? What's your role? Where do you work?
Phil: I'm Phil Trickovic. I'm the SVP of Tintri and I run the company.
Bart: What do you want to share with us today?
Phil: Today we've got three, possibly four, really big announcements that we're very excited to be making here at KubeCon.
Bart: And what are they?
Phil: The first one is our native CSI drivers. This is our entry into the CNCF community. We've spent three years watching the evolution of this technology, identifying the issues that need to be addressed, and we feel that we've really delivered an outstanding intelligent CSI driver. For those familiar with Tintri, our secret sauce is that we are data-aware at a very low level. We've integrated those data-aware features with our AI engine, which works autonomously, into the CSI driver that we're releasing today at this show.
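For readers who haven't worked with CSI drivers directly, the sketch below shows the generic pattern by which storage from any CSI driver is consumed in Kubernetes: a StorageClass names the driver as its provisioner, and workloads request volumes through PersistentVolumeClaims. This is an editorial illustration, not Tintri documentation; the provisioner name, class name, and parameters are hypothetical placeholders.

```python
# Generic pattern for consuming storage from a CSI driver.
# The provisioner name, StorageClass name, and parameters below are
# hypothetical placeholders, not Tintri's published values.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# 1. A StorageClass ties Kubernetes to the CSI driver's provisioner.
storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="example-csi-fast"),  # hypothetical
    provisioner="csi.example.vendor.com",                   # hypothetical driver name
    parameters={"fsType": "ext4"},                          # driver-specific options
    reclaim_policy="Delete",
    volume_binding_mode="WaitForFirstConsumer",
)
client.StorageV1Api().create_storage_class(body=storage_class)

# 2. Workloads then request volumes through a PersistentVolumeClaim that
#    references the StorageClass; the driver provisions the backing volume.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="example-csi-fast",
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

In practice these objects are usually declared as YAML manifests; the Python client is used here only to keep the examples in this article in one language.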
The second announcement is a robust ransomware recovery offering that, in our opinion, has had to be used far too often this year to recover from attacks like those detected by CrowdStrike. It's an instantaneous recovery solution that uses a metadata image built on the back end, integrated with our GLAS-DP suite, which is the third announcement and part of the security enhancements in our security suite. This allows our customers to designate a CISO or someone responsible for incident recovery in the event of an attack. We've made this announcement because we've noticed an increase in ransomware attacks in our space, particularly among large customers with old 3-tier architectures that are unable to recover from these types of attacks. Our solution offers 100% recovery up to the millisecond before the attack happened, along with behavioral detection, all of it running autonomously on our CXOS AI engine. This is the big announcement for the show.
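Phil doesn't walk through the recovery mechanics here, so as a generic illustration of point-in-time recovery via the standard Kubernetes CSI snapshot API, the sketch below creates a new PersistentVolumeClaim whose contents come from an existing VolumeSnapshot. The snapshot name, claim name, and StorageClass are hypothetical, and Tintri's metadata-image recovery on the array side is a separate mechanism not shown here.

```python
# Generic CSI point-in-time restore: create a fresh PVC whose contents are
# cloned from an existing VolumeSnapshot. All names here are hypothetical.
from kubernetes import client, config

config.load_kube_config()

restored_pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data-restored"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="example-csi-fast",          # hypothetical class
        # dataSource points the new claim at a pre-attack snapshot.
        data_source=client.V1TypedLocalObjectReference(
            api_group="snapshot.storage.k8s.io",
            kind="VolumeSnapshot",
            name="app-data-pre-attack",                  # hypothetical snapshot
        ),
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=restored_pvc
)
```

Once the restored claim is bound, the workload is repointed at it; how far back you can roll, and how quickly, depends on how the driver and the backing array implement snapshots.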
Bart: In the Kubernetes world, what problems is Tintri solving?
Phil: Ease of use, ease of deployment, and scale. We still feel that Kubernetes is developing and has not yet hit its stride, but it's going to. To us, this is the future. We are fully invested in Kubernetes and this architecture, and we're taking it down a level, below the abstraction layer, to do some really cool things with the I/O paths on the back side of the arrays and the storage devices, possibly utilizing CSI drivers. A lot of cool stuff is happening there. We're going full blast on it.
Bart: I know you mentioned at the beginning that this is three years of development. Can you give me some context on the picture before this announcement, as well as after?
Phil: Before this, we were, and we still are, of the opinion that VMs are not going anywhere. They have a very large place in our app stack, and that's not going to change, or it's going to change very slowly over 10, 15, 20 years. We've been monitoring that and watching progress. With progressive web apps, we're looking at server-on-chip designs within our appliance to eliminate the middle tier of software and servers. We've primarily serviced virtual environments such as VMware, Hyper-V, Proxmox, and Nutanix. We're seeing migration from these applications due to economics and performance capabilities, as well as the intelligence that can be built into those stacks if you can shrink that middle layer. The after picture will be very distributable, with a big push to edge devices and intelligence built into them, including GPU and TPU acceleration at the edge for very specific function-on-chip tasks. That's the vision for the next two to three years.
Bart: Taking that a little bit further, you mentioned your AI engine. In previous KubeFM episodes, particularly the last one, we heard a lot of AI hype. This time, it seems like we're seeing AI getting more concrete use cases on the ground. It sounds like you've got some experience with that. Tell me more about your experience building bridges between AI and the Kubernetes ecosystem.
Phil: In terms of experience and the market, Tintri has been an AI solution. Talking about the hype, it's a hype cycle, but now we're seeing foundational and formational applications come up and deliver value. What problem is it solving? I've been to three of these events, and I think this is the first time that you can actually point to a huge set of problems that this solves for businesses, including monetary incentives to go in this direction. Going back to 2012, and then around 2015 to 2017, we started to see the need for intelligent applications, particularly in VDI workloads. If you look back, I think it was 2016 or 2017 when we did an AI VDI project with the Bank of New York. So it's not a new development to us. The implementation and the messaging around how to do it are what's still gelling. It's still a confusing market, especially for people working with standard legacy applications. We want to clear that up and bring some ease of use into the deployment, so that developers can focus on making the apps better.
Bart: Is Tintri's technology open source and part of the CNCF landscape?
Phil: It was not open source. We are starting to open source some of these components through the CNCF. We just joined formally this year. We're moving in that direction because most of these upfront applications should be open source.
Bart: What is Tintri's business model?
Phil: Very simple. We're a 100% channel company. We have a direct sales model, but it's executed via the channel. We have about 32 field reps and a large technical consulting staff, but all of our sales are done through channel partners, VARs and OEM integrators.
Bart: And who are your main competitors?
Phil: I don't have any competitors. If you give Tintri an honest look, there aren't competitors. What has been our biggest challenge is probably ignorance, or the attitude of "we're used to doing things this way." Closed minds.
Bart: Above all, what do you think helps Tintri stand out in this competitive ecosystem? There are a lot of different players. What's your special sauce?
Phil: The special sauce is that we've been data-aware since day one, and not just data-aware but able to act on that data autonomously. We have automatic QoS: we monitor I/O patterns and requests from hosts over time, gather metadata, and analyze it to take action in real time. That's policy-based or AI-based auto-tuning of the workload, a feature that's been there since day one. We're expanding this into Kubernetes, particularly in the progressive web app space, where we see a big performance problem for end users.
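Tintri's engine is proprietary and not described in detail in this conversation. Purely to make the idea of policy-based auto-tuning concrete, here is a toy sketch of a control loop that compares observed latency against a policy target and adjusts a noisy workload's IOPS ceiling. Every name, threshold, and number is invented for illustration.

```python
# Toy illustration of policy-based QoS auto-tuning: compare observed latency
# against a policy target and adjust a workload's IOPS ceiling accordingly.
# All names and values are invented for the example.
from dataclasses import dataclass


@dataclass
class QosPolicy:
    target_latency_ms: float   # latency the policy promises
    min_iops: int              # never throttle below this floor
    max_iops: int              # never grant more than this ceiling
    step: int = 500            # adjustment per interval


def next_iops_limit(policy: QosPolicy, observed_latency_ms: float, current_limit: int) -> int:
    """Return the IOPS limit to apply to the noisy workload for the next interval."""
    if observed_latency_ms > policy.target_latency_ms * 1.2:
        # Latency is well above target: throttle the noisy workload harder.
        proposed = current_limit - policy.step
    elif observed_latency_ms < policy.target_latency_ms * 0.8:
        # Comfortably under target: hand headroom back.
        proposed = current_limit + policy.step
    else:
        proposed = current_limit
    return max(policy.min_iops, min(policy.max_iops, proposed))


if __name__ == "__main__":
    policy = QosPolicy(target_latency_ms=2.0, min_iops=1000, max_iops=20000)
    limit = 8000
    for latency in (1.1, 1.3, 3.5, 2.0, 0.9):   # pretend per-interval measurements
        limit = next_iops_limit(policy, latency, limit)
        print(f"observed={latency}ms -> new IOPS limit {limit}")
```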
Bart: And what should we expect next from Tintri?
Phil: All right, the prediction is that we're looking at a disaggregated system specifically for Kubernetes. We have a software-only version called TCE that can run on any x86-type architecture. We're moving into that space to disaggregate these components as well as put function on chip, with a lot of backend functions that could reside on silicon to streamline the software stack from below, not from above.