Multi-tenancy, edge computing, and maximizing existing resources
In this interview, Phil Trickovic, SVP and GM of Tintri, discusses:
Kubernetes complexity and talent challenges - why the platform's complexity is outcome-driven rather than inherent, and how the people aspect of Kubernetes adoption is significantly more challenging than the technical implementation
Multi-tenancy fears versus operational overhead - practical insights into why teams often choose multiple clusters over multi-tenancy out of fear, despite the significant operational burden
Resource optimization and Kubernetes' invisible future - his philosophy of maximizing existing hardware utilization rather than constantly adding more resources, and why Kubernetes will become increasingly invisible over the next decade
Transcription
Bart: So, first things first: Who are you? What's your role? And where do you work?
Phil: Hey Bart, I'm Phil Trickovic, SVP and GM of Tintri, based out of the South.
Bart: What emerging tools and technologies in the Kubernetes space are you keeping an eye on?
Phil: There are three key areas we're tracking. First, we're investigating and putting significant effort into WebAssembly (WASM), which might seem off-brand from a container perspective, but is an interesting technology. We're leveraging this in our new offerings, focusing on providing WebAssembly to customers looking to move to that type of architecture.
Second, many translators like Cooper and other interpreters are developing nicely, enabling smooth data interchange between platforms. It's been a challenging year for data access, with some data being locked down. Kubernetes-type technologies will be very important going forward, so it's crucial to pay close attention to these developments.
Bart: Now, one of our podcast guests, Mac, believes the designers of Kubernetes didn't set out to build an overcomplicated piece of software. Instead, it grew organically with hard-won knowledge baked into the codebase. How do you view the complexity versus capability tradeoff in Kubernetes?
Phil: I think it doesn't have to be complex. The desired outcome will dictate how complex it's going to be, as well as the talent to understand how to deliver it as simply as possible. It can be far more complicated than basic VM structure or normal compiled applications, with many objects and different ways to do things. The goal is for it to become as transparent as possible while delivering these applications. It wasn't set out to be complex. It has become that as its power has expanded into other areas that weren't initially considered—just like any other technology.
Bart: With that in mind, just a bonus question: Talking to many people over the years about Kubernetes, it seems there's a relative consensus that the people part of Kubernetes is much more challenging than the technical part. Do you agree or disagree with that?
Phil: It's very difficult to find talent because the technology is new and has evolved so rapidly. There aren't the standards bodies now that existed when it started 10-12 years ago. It has morphed into covering pretty much any app you can name outside of the mainframe—and even there, it can run on a host LPAR. It has expanded to touch every piece of the tech stack that we as a community interact with.
Bart: Our podcast guest believes many people initially think they need their own cluster because they're scared of multi-tenancy. But they learned that the operational overhead of maintaining multiple clusters is significant. What's your experience with teams being scared of multi-tenancy versus dealing with operational overhead?
Phil: I understand why they're scared. It's not been an easy feature to deliver. However, on the right technology stacks—and blowing my horn here a little bit—it doesn't have to be complicated. Multi-tenancy is table stakes to these new methodologies; nobody should be scared of it.
Availability, RTO, RPO—all those ways of defining an outcome—are what dictate whether it has to be a separate cluster or not. Sometimes you don't need one. The application could be mobile itself. We're really big on eliminating backend complexity, so that it's all ones and zeros at the end of the day. How do you most efficiently deliver them where they need to be, on time and accurately compiled?
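[Editor's note: a common lightweight form of Kubernetes multi-tenancy is namespace isolation with resource quotas. The sketch below is illustrative only—the tenant name and limits are hypothetical, not from the interview.]

```yaml
# Hypothetical example: soft multi-tenancy via a dedicated namespace
# with a ResourceQuota capping what one tenant can consume.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a            # illustrative tenant name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"       # total CPU the tenant may request
    requests.memory: 8Gi    # total memory the tenant may request
    pods: "20"              # maximum number of pods
```

Combined with RBAC and network policies, this gives each team an isolated slice of one cluster instead of a whole cluster of its own.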
What happens in between, unfortunately, is that our market sometimes tries to run these advanced methodologies on well-established technologies. By the nature of their design, that's just not going to work. As a community, we keep trying to shotgun or jam these things into architectures we don't need and that are expensive—because that's the way we've always done things.
It's funny how Kubernetes and these microservices deployments are so advanced, so cool, and so efficient. What we can bring to the world in the market has to come via Kubernetes. We cannot do it the way we used to do things 10 years ago. We're at an inflection point where the old will go and the new will win. Will the new absorb the old, or vice versa? Or do we just go forward and try to figure it out as we go?
We can support all of the above, and I think that's going to be critical for any vendor. Any manufacturer or innovation will have to be able to ingest all these historic data points and use them to produce a valuable outcome. That's what we're not seeing today. There's a huge, massive skills gap that I personally don't see being filled anytime soon. People are worried about careers, and they should pay attention to this space and to Kubernetes.
Bart: Our guest appreciates that Talos Linux offers a more declarative configuration with atomic updates and other features. What are your thoughts on minimal Linux distributions for Kubernetes nodes?
Phil: I think that's critical. If you're trying to use a standard commercially available Linux distribution and just send out an ISO with everything on it, it's not going to work. That's where people are making mistakes—doing things the way we used to instead of making systems as skinny as possible.
For edge processing, you're absolutely going to have to have this. One thing we see with Kubernetes, AI, and these intelligent architectures is that many processes will have to happen at the edge. Regardless of personal preferences, actioning something on a smart device will require a very skinny base OS that's intelligent enough to do one or two things and alert the next step. A greatly reduced distribution will be required to deliver these applications at the scale we need, making them easier to update.
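[Editor's note: Talos Linux, mentioned above, takes exactly this declarative approach—the entire node is described by a machine configuration file rather than managed interactively. The trimmed excerpt below is a rough sketch; real configs are generated with `talosctl gen config` and include secrets and certificates, and the endpoint shown is a placeholder.]

```yaml
# Trimmed, illustrative sketch of a Talos machine configuration.
version: v1alpha1
machine:
  type: controlplane        # or "worker" for a worker node
  install:
    disk: /dev/sda          # disk to install the immutable image to
cluster:
  controlPlane:
    endpoint: https://cluster.example.com:6443  # placeholder endpoint
```

Because the node has no shell or package manager, updates are atomic image swaps driven by this config—the "greatly reduced distribution" Phil describes.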
Bart: On the subject of edge, I've interacted with folks in that community for a while, speaking to one of the chief architects at Chick-fil-A, who's based not far from Atlanta, where KubeCon is going to be. But thinking about that, you've got a really cool tape recorder next to you. If you were to install Kubernetes on that, what would be the process? How would you do it if you had to? What would your dream setup look like, running Kubernetes at the edge for a tape recorder? First, explain a little bit about the tape recorder, how old it is, what it does, and why you have it. Then, how would it look in a Kubernetes context?
Phil: I've had this Tascam 38 for a long time, well before the 2000s. Recently, I pulled it out of storage. It's great and sounds amazing—one of the best analog tape devices you'll get as far as the sound it brings to the mix. I still make music on the side; that's my therapy. I'll usually master my music onto this device.
If I were going to get a Kubernetes image on here, I would probably put my image onto a drive, convert that into a MIDI stream, and then convert that into a CV (control voltage) recording on track one. This would allow me to mount the image if I had to take the tape to another computer with a Tascam 38, making it mobile—going back to '70s methodologies.
Maybe I should see if we can convert that to MIDI commands and make Kubernetes MIDI.
Bart: I think it'd be a great case study. People really relate to these things when they're more tangible. With things that relate to a hobby or passion project, we can see it go from place to place, imagining the mobility of that music you'd be creating. But we got one more question from our podcast guest. Zane suggested that the mentality should shift from thinking we need thousand GPU clusters or more chips to using existing resources more efficiently. How are you optimizing resource usage in your environment?
Phil: That's a great question. It's at the core of our belief system: we should do the most with what we currently have. If you look at the processing power available—it doesn't matter which vendor or architecture—we're at around a billion, even 80 billion, IOPS on the latest hardware, which is unimaginable when you break down the possible clock cycles.
We have to make our code more efficient to take advantage of this power. Our philosophy is to use hardware to its maximum duty cycle and pass those savings on. Customers are buying massive amounts of compute that they will never fully utilize with today's methodologies, often due to specific functional limitations or latency issues.
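[Editor's note: one concrete way Kubernetes teams push utilization higher is by right-sizing pod resource requests so the scheduler can pack nodes densely. The spec below is a hypothetical illustration—image name and numbers are placeholders, not Tintri's configuration.]

```yaml
# Hypothetical pod spec: modest requests let the scheduler bin-pack
# many such pods per node; limits bound worst-case usage.
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
  - name: app
    image: example/app:1.0  # placeholder image
    resources:
      requests:
        cpu: 100m           # what the scheduler reserves on a node
        memory: 128Mi
      limits:
        cpu: 500m           # ceiling before CPU throttling
        memory: 256Mi
```

Overstated requests strand capacity exactly the way Phil describes: compute is bought, reserved, and never used.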
The more intelligent the equipment and platform, the more you'll be able to use existing—and even older—technology, which can save a lot of money. We can extract far more from current resources than we're currently doing. That is our core philosophy.
Bart: Kubernetes turned 10 years old last year. What should we expect in the next 10 years?
Phil: I think in the next 10 years, it's going to become more invisible. If you look at what's developing now, you're starting to see more management frameworks and different ways of managing containers, beyond Kubernetes. This will continue to grow and become far more transparent to end users and customers.
For example, deploying a smart camera for personal protective equipment or monitoring toxins in the air is part of new projects. We have sensors that can detect one-tenth of a thousandth of a percent of oil in the ocean, helping prevent oil spills. We don't need a 64-core system to do that; it can be managed efficiently with older technology and perform fine.
We're looking at ways to rehash this so we can benefit from AI technologies that we currently can't run due to power limitations. We're focused on expanding AI abilities, working closely with application developers to produce large behavioral models, video models, and large data set models for predictive analysis and scenario planning at a granular level for industrial applications.
It might sound like science fiction, but it's happening. For the next 10 years, we're 100% focused on this approach. It will all go back to the edge, core, and cloud—everything will be required and should be simple.
Bart: Phil, what's the best way for people to get in touch with you?
Phil: Tintri.com. Send me an email—you can address it to my title—or visit our website. Please call me. I'm always available and love to discuss this fascinating technology. We can do a lot of good in the world with it. Reach out, especially if you're interested in discussing Kubernetes on an analog synth system.
Bart: Phil, thanks so much for your time today. Talk to you soon. Take care.
Phil: Thanks, guys. I appreciate it. Bye.