Navitas Experts Share Container, Cloud, and Kubernetes Insights

According to a recent Gartner study, 50% of global enterprises will be running containers by 2020, and more than 20% of enterprise storage capacity will be allocated to container workloads – up from only 1% today. This rapid increase in container adoption raises client questions about what is required to deploy containers and which solution best fits their business model.

In the world of container orchestration, application use cases fall into three broad categories: legacy packaging, refactoring existing applications, and building new service-oriented applications. The ideal container persistence approach varies among the three scenarios, and not all container management platforms offer storage functionality: Platform as a Service (PaaS) offerings tend to include built-in storage options; cloud Containers as a Service (CaaS) offerings leverage standard cloud storage; and on-premises, hybrid, and multi-cloud CaaS solutions usually require third-party persistence integration.

Part of the confusion around containers is that they are ephemeral in nature, frequently moving between hosts. As a result, the relationship between containerized workloads and any persistent data must be carefully managed with supplementary software, ranging in complexity from a simple plug-in to a full-fledged storage solution. As container popularity and usage increase, the variety of persistence options for containerized applications will also grow.

Storage vendors are in the early stages of offering container support within existing products, while newer vendors enter the marketplace with solutions targeted specifically at cloud-native applications. Navitas research assists technical professionals in narrowing down requirements and selecting the right storage approach for their container deployments.

Select the appropriate container storage approach for your workload early in the container adoption journey to accommodate future demand.

Monolithic applications don’t need container-aware storage and can leverage existing tools, whereas cloud-native, “greenfield” projects benefit from more recently developed solutions. Storage options span traditional on-premises solutions – such as iSCSI, Fibre Channel (FC), and NAS – and cloud-native options – such as AWS EBS, AWS EFS, and Azure Files. Another option to consider is distributed storage built on a Software-Defined Storage (SDS) construct – from GlusterFS, ScaleIO, Ceph, Quobyte, Portworx, and others.

Typical container initiatives start out as stateless workloads and gradually transform into stateful applications as an organization matures in its container journey, and most organizations can attest that data is their key asset. Even stateless containers must persist data to an external store or database. The traditional storage solutions mentioned above may not scale for container workloads, whose churn far exceeds that of typical VM-based deployments.

According to a Datadog survey on Docker adoption, containers churn 9x faster than VMs, resulting in an average life span of 2.5 days versus the 23-day average of VMs. Containers deployed without orchestration live 5.5 days on average, while orchestrated containers typically expire within a day. Choosing the proper tooling is therefore a challenge, but done wisely it can yield both CapEx and OpEx savings. This is another area where working with a trusted partner, such as Navitas, proves advantageous.

Navitas can help you navigate the technical waters of SDS, APIs, and Kubernetes.

Infrastructure and Operations (I&O) teams previously held the role of storage administration and provisioning in the VM world. In the container world, however, storage is managed as Software-Defined Storage (SDS), distributed and provisioned dynamically by the compute orchestrator, such as Kubernetes. This dynamic provisioning is desirable for organizations with container workloads.

In the Kubernetes ecosystem, this is achieved with core Application Programming Interface (API) constructs such as PersistentVolume (PV) and PersistentVolumeClaim (PVC). A PV is a piece of storage in the cluster that has been provisioned by an administrator; it is a resource in the cluster, just like a node. PVs are volume plugins, like Volumes, but have a lifecycle independent of any individual pod that uses them. The PV API object captures the details of the storage implementation, whether NFS, iSCSI, or a cloud-provider-specific storage system.
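As an illustration, a minimal NFS-backed PV might look like the following sketch (the name, capacity, and server address are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example                 # hypothetical volume name
spec:
  capacity:
    storage: 10Gi                  # size offered by this volume
  accessModes:
    - ReadWriteOnce                # mountable read/write by a single node
  persistentVolumeReclaimPolicy: Retain
  nfs:                             # backing implementation; could equally be iSCSI or a cloud disk
    server: nfs.example.com        # hypothetical NFS server
    path: /exports/data
```

The `nfs` stanza is what the PV object abstracts away from consumers: a pod mounting this storage never needs to know the server address or protocol behind it.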

A PVC is a user's request for storage, analogous to a pod: pods consume node resources, while PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); PVCs can request a specific size and access modes, for example, mounted read/write by a single node (ReadWriteOnce) or read-only by many nodes (ReadOnlyMany).
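A matching claim, and a pod that consumes it, might be sketched as follows (the claim, pod, and image names are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                 # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce                # request read/write access from a single node
  resources:
    requests:
      storage: 5Gi                 # requested size; bound to a PV that can satisfy it
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/lib/data # where the claimed storage appears in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim      # the pod references the claim, never the PV directly
```

Note the indirection: the pod binds to the claim, and Kubernetes matches the claim to a suitable PV, which keeps application manifests portable across storage backends.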

Storage is abstracted and exposed through the API to applications running in a Kubernetes cluster, but I&O personnel must configure the necessary storage classes and persistent-storage plugins to enable this functionality seamlessly for applications inside the cluster. Storage options continue to evolve and can now provide snapshot and backup capabilities; the upcoming Kubernetes version 1.13 will offer some of these as alpha features.
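A StorageClass is the construct I&O teams configure to enable dynamic provisioning. As a sketch, a class backed by the in-tree AWS EBS provisioner could look like this (the class name and parameters are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                       # hypothetical class name
provisioner: kubernetes.io/aws-ebs     # in-tree AWS EBS provisioner
parameters:
  type: gp2                            # EBS general-purpose SSD volume type
reclaimPolicy: Delete                  # remove the backing EBS volume when the claim is released
```

With this class in place, a PVC that specifies `storageClassName: fast-ssd` triggers creation of an EBS-backed PV on demand, with no administrator pre-provisioning required.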

When selecting a storage solution, it is important to work with a partner that is heavily invested in open-source storage projects within the Kubernetes community. Navitas experts understand this complicated ecosystem and are experienced in identifying storage architectures adaptable to our clients’ future needs, as we monitor advancements such as those being undertaken by Kubernetes.

Contact our team to learn more about your options.
