
Kubernetes in the Wild

Oct 14, 2020

Extending Kubernetes Outside of the Data Center

Join ZEDEDA at “Computing on the Edge with Kubernetes,” a one-day conference hosted by Rancher on October 21, 2020 and centered on everything edge.

Gartner forecasts that by 2021 there will be approximately 25 billion IoT devices in use around the world. Each of those devices has the capacity to produce immense volumes of business-critical data. As the number of use cases for this data expands, the need for lightweight, agile applications to analyze data closer to the source has become apparent.

Edge computing is becoming possible as devices grow more powerful and widespread, and developers are now looking to replicate the benefits of cloud-native applications outside of centralized data centers for reasons such as bandwidth savings, latency, autonomy, security, and privacy. However, this comes with unique challenges, because hardware and software grow more diverse the closer you get to the physical world.

Docker had already begun to show the value of containers as a means of creating lightweight, nimble applications when Google launched Kubernetes into the open-source scene back in 2014. Since then, Kubernetes has created quite a buzz in the open-source community, with eighty-four percent of CNCF survey respondents reporting that they run containers in production.

But despite its rapid growth, Kubernetes has posed numerous challenges to developers and system architects, particularly at the edge. Many developers are finding out the hard way that the requirements for running Kubernetes at the edge are very different from those for running Kubernetes in a traditional data center or centralized cloud.

What are the challenges people face when running Kubernetes at the edge?

As with anything new, “edge” has taken on a variety of meanings depending on the use case. (I recommend checking out the Linux Foundation’s recent whitepaper on edge taxonomy; it’s a great read and helps clarify key edge categories based on inherent technical tradeoffs.) At the upper end of the edge continuum, a close evolution of the Kubernetes that runs in the cloud can be used to cluster workloads on racks of servers located in on-prem and regional edge data centers. At the other extreme, constrained devices such as a microcontroller controlling fail-safe systems in a car run a real-time embedded operating system, and Kubernetes isn’t a fit.

Meanwhile, between constrained devices and the data center sits the IoT compute edge, characterized by hardware that has enough memory to support application abstraction through containers or VMs but is deployed in the field, outside of a physically secure data center. This hardware could take the form of a gateway device with 512 MB to 1 GB of memory deployed on a wind turbine or oil wellhead, or a small cluster of servers running in an accessible closet within a retail store.

Compared to data center resources, IoT edge compute nodes have highly diverse form factors and I/O, are deployed at much greater scale, and have unique security requirements, all of which need to be taken into account in order to support Kubernetes. ZEDEDA is focused on enabling orchestration of compute at the IoT edge, and we’re working with the community to extend Kubernetes to address these requirements.
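As a minimal sketch of what this looks like in practice, the standard Kubernetes Deployment below pins a workload to a labeled edge gateway and caps its memory footprint. The image name, node label, and resource figures are illustrative assumptions, not a specific ZEDEDA or Rancher configuration.

```yaml
# Hypothetical example: schedule a telemetry workload onto a memory-constrained
# IoT edge gateway. The node label, image, and resource figures are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: turbine-telemetry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: turbine-telemetry
  template:
    metadata:
      labels:
        app: turbine-telemetry
    spec:
      nodeSelector:
        node-role.example.com/iot-gateway: "true"   # assumed custom node label
      containers:
      - name: collector
        image: registry.example.com/telemetry-collector:1.0   # placeholder image
        resources:
          requests:
            cpu: "100m"
            memory: "64Mi"
          limits:
            cpu: "250m"
            memory: "128Mi"   # keeps the pod within a 512 MB-class gateway budget
```

Explicit requests and limits like these are what let the scheduler make sensible placement decisions when the node itself has less memory than a single typical cloud VM.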

What trends will shape Kubernetes at the edge?

Resource optimization

Support for scalability and portability will become even more aligned with edge use cases as the number of nodes, devices, and sensors increases massively over the next few years. This will enable high levels of automation across the edge continuum while taking into account inherent technical tradeoffs such as available resources, security, and time- and safety-critical operation.

Multi-cloud

Tools will evolve to streamline workload and resource management across the edge-to-cloud continuum to support heterogeneous clouds with a single pane of glass for orchestration.

Innovation driven by low-cost edge hardware

The lightweight and scalable nature of cloud-native apps will align with advances in hardware capacity, including use cases enabled by low-cost compute such as the Raspberry Pi.
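To make that concrete, the sketch below uses the standard kubernetes.io/arch node label to steer a 64-bit Arm container onto Raspberry Pi-class nodes; the pod name and image are hypothetical placeholders.

```yaml
# Hypothetical example: run an arm64 image on low-cost hardware such as a
# Raspberry Pi by selecting on the well-known kubernetes.io/arch node label.
apiVersion: v1
kind: Pod
metadata:
  name: sensor-agent                                  # placeholder name
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
  containers:
  - name: agent
    image: registry.example.com/sensor-agent:arm64    # placeholder arm64 image
    resources:
      limits:
        memory: "128Mi"                               # modest cap suited to Pi-class boards
```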

Open ecosystem

Open-source software and communities will drive cloud-native development across the edge continuum and cross over with other trends including IoT, AI, and 5G. This will enable an open edge ecosystem of interoperable applications and services that further accelerate innovation.

To learn more about the future of Kubernetes at the edge, including how ZEDEDA is helping to extend the power of Kubernetes outside of the data center, join us at the “Computing on the Edge with Kubernetes” conference hosted by Rancher on October 21, 2020. Sign up here.
