How the Service Mesh Interface (SMI) fits into the Kubernetes landscape
Today, the Service Mesh Interface (SMI) was announced at Microsoft’s KubeCon keynote. The SMI aims to provide a consistent interface for multiple service meshes within Kubernetes. Kinvolk is proud to be one of the companies working on the effort. Specifically, we’ve worked to enable Istio integration with the SMI.

A look at Kubernetes Interfaces

Kubernetes has many interfaces, and for good reason. Interfaces allow for multiple underlying implementations of the technology they target. This lets vendors create competing solutions on a level playing field and helps guard users against being locked in to a particular solution. The result is increased competition and more rapid innovation, both of which benefit users.

To give context, let’s look at a couple of the important interfaces used in Kubernetes.

Container Network Interface

One of the first interfaces to find its way into Kubernetes was the Container Network Interface (CNI), a technology that originated in the rkt project. Before this interface existed, you had to use Kubernetes' limited, built-in networking. With the introduction of the CNI, which standardized the requirements for a networking plug-in around a very simple set of primitives, we witnessed an explosion in the number of networking solutions for Kubernetes: from container-centric open source projects such as flannel and Calico, to SDN infrastructure vendors like Cisco and VMware, to cloud-specific CNIs from AWS and Azure. CNI was also adopted by other orchestrators, such as Mesos and Cloud Foundry, making it the de facto unifying standard for container networking.
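To give a sense of how simple those primitives are, here is a minimal CNI network configuration for the reference bridge plug-in. The name, bridge device, and subnet are illustrative; any CNI-conformant runtime can consume a file like this:

```json
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The `type` field selects the plug-in binary; swapping in a different implementation is a matter of changing this configuration, not changing Kubernetes.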

Container Runtime Interface

The Container Runtime Interface (CRI) was introduced to enable the use of different container runtimes with Kubernetes. Before the CRI existed, adding another container runtime required code changes throughout the Kubernetes code base. This was the case when rkt was introduced as an additional runtime, and it was obvious that this approach would not be maintainable as more container runtimes appeared. With the CRI, we now have many additional container runtimes to choose from: containerd, CRI-O, Virtlet, and more.
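In practice, selecting a runtime comes down to pointing the kubelet at a CRI-compatible socket rather than patching Kubernetes itself. A sketch, with socket paths that vary by runtime and distribution:

```shell
# Use containerd as the runtime:
kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock

# Or CRI-O, with no change to Kubernetes code:
kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```

Everything behind that socket — image pulls, sandbox creation, container lifecycle — is the runtime's concern, spoken over the CRI's gRPC API.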

Container Storage Interface

While relatively new, the Container Storage Interface (CSI) has achieved similar success. It defines a standard approach for exposing block and file storage systems to container orchestrators like Kubernetes. Unlike volume plug-ins, which were "in-tree", meaning they had to be upstreamed into the main Kubernetes codebase, CSI drivers are external projects, enabling storage developers to develop and ship independently of Kubernetes releases. There are now more than 40 CSI drivers, including drivers for Ceph, Portworx, and all the major cloud providers.
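From a user's perspective, an out-of-tree CSI driver surfaces as an ordinary StorageClass. A minimal sketch using the AWS EBS CSI driver's provisioner name (the class name and parameters here are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com   # a CSI driver shipped outside the Kubernetes tree
parameters:
  type: gp2
```

The driver behind `provisioner` can be upgraded or replaced on its own release cadence, without touching Kubernetes itself.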

And Now: Service Mesh Interface (SMI)

Service meshes are becoming popular because they provide fine-grained control over microservice connectivity, enabling, for example, a smooth transition from an older release of a service to a newer one (as in a blue/green or canary deployment model). Linkerd was probably the first such solution, but it has been followed by Istio and many others.

With the growing proliferation of solutions, all deployed and managed in slightly different ways, it was clear that a similar standard interface for enabling service meshes – a Service Mesh Interface (SMI) – would bring value to the Kubernetes community, in much the same way that CRI, CNI and CSI did. Kinvolk was among the group of development teams that contributed to this effort, along with the other participating companies. Specifically, we developed the plugin driver that enables Istio to be deployed via SMI.

The Service Mesh Interface promises a common interface for various service meshes. This should make it easier for users to experiment with alternative service mesh solutions and see which works best for their use cases. As we have found in our own recent testing, each solution has its own unique performance and behavior characteristics, owing to their differing implementations. We hope this will lead to greater user choice and a flourishing of new projects in the ecosystem, just as has happened in other areas where Kubernetes enabled open extensibility.
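The canary-style rollout described earlier can be expressed mesh-independently through one of the SMI's resources, TrafficSplit. A sketch based on the initial v1alpha1 API group; the service names are hypothetical, and field details may change in later revisions of the spec:

```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: my-service-rollout
spec:
  service: my-service          # the root service clients address
  backends:
  - service: my-service-v1
    weight: 900m               # ~90% of traffic stays on the old release
  - service: my-service-v2
    weight: 100m               # ~10% canaries to the new release
```

Whichever mesh implements the interface — Istio via our plugin driver, Linkerd, or another — the same resource drives the traffic shift.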