Extending Kubernetes to the IoT Edge

April 10, 2020

When Kubernetes was first released onto the open source scene by Google in 2014, it marked an important milestone in application development for cloud environments. Docker had already begun to show the world the value of containers as a means to create modular, nimble applications that could be built using a microservices architecture. But containers were difficult to deploy at scale — particularly at the massive size of many cloud-native applications. Kubernetes filled that need by providing a comprehensive container orchestration solution.

Now, as the edge emerges as the next frontier of computing, many businesses and developers are looking to replicate the benefits of cloud-native development closer to the data’s source. Kubernetes will play an important role in that process, but, for reasons we’ll explain in this post, it isn’t as simple as copying and pasting the traditional version of Kubernetes across the board.

Which edge are we talking about?
Of course, there are many different “edges” in edge computing, and it’s been easier to extend cloud-native principles to some than to others. For instance, for a service provider that builds a micro data center composed of a rack of servers at the base of one of its cell towers, it’s fairly straightforward to use the same version of Kubernetes that runs in the cloud. These deployments closely resemble those found in a traditional data center, so developing and running applications is similar as well, including the way containers are used and orchestrated.

At the other extreme of the edge spectrum are devices like microcontrollers, which run deeply embedded code with tight coupling between hardware and software due to resource constraints in areas like memory and processing. These classes of devices also often leverage a real-time operating system (RTOS) to ensure deterministic responses. Given these constraints, it isn’t technically feasible for these devices to adopt container-based architectures, however beneficial the modularity and portability might be.

In between these two ends of the edge computing spectrum are a large number of devices that make up what we consider to be the “IoT edge”: small- and medium-sized compute nodes that can run Linux and have sufficient memory (>512MB) to be able to support the overhead of virtualization. Compared to embedded devices that tend to be fixed in function, these compute nodes are used to aggregate business-critical data for enterprises and need to be able to run an evolving set of apps that process and make decisions based on that data. As app development for edge use cases continues to evolve and grow, companies will see the most benefit from making these IoT devices as flexible as possible, meaning they’ll want the capability to update software continuously and reconfigure applications as needed. This idea of being able to evolve software on a daily basis is a central tenet of cloud-native development practices, which is why we often talk about extending cloud-native benefits to the IoT edge.
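To make that memory constraint concrete, here’s a minimal sketch in Go, using the standard Kubernetes client-go library, that lists the nodes a cluster can see and flags those with enough allocatable memory to host containerized workloads. It assumes a kubeconfig in the default location and a cluster that already spans the nodes in question; the 512Mi threshold simply mirrors the figure above and is illustrative, not a hard rule.

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig from its default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Illustrative threshold from the discussion above: nodes need roughly
	// >512MB of memory to absorb the overhead of virtualization.
	threshold := resource.MustParse("512Mi")

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		mem := n.Status.Allocatable.Memory()
		capable := mem.Cmp(threshold) > 0
		fmt.Printf("node %s: allocatable memory %s, edge-capable: %v\n", n.Name, mem.String(), capable)
	}
}
```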

Challenges of extending Kubernetes to the IoT edge
While Docker containers have been widely used in IoT edge deployments for several years, extending Kubernetes-based cloud-native development to the IoT edge isn’t a straightforward process. One reason is that the footprint of the traditional Kubernetes that runs on data center servers is too large for constrained devices. Another challenge is that many industrial operators run legacy applications that are business-critical but not compatible with containerization. Companies in this situation need a hybrid solution that can still support their legacy apps on virtual machines (VMs) even as they introduce new containerized apps in parallel. There are data center solutions that support both VMs and containers today; however, footprint remains an issue at the IoT edge. Further, IoT edge compute nodes are highly distributed, deployed at scale outside the confines of traditional data centers and even secure micro data centers, and hence require specific security considerations.

In addition to the technical challenges, there’s also a cultural shift that comes with the move toward a more cloud-native development mindset. Many industrial operators are used to control systems that are isolated from the internet and therefore updated only when absolutely necessary, because an update could bring down an entire process. Others are still accustomed to a waterfall model of development, where apps are updated only periodically and the entire stack is taken down to do so. In either case, switching to modular applications and continuous delivery is a big transition, and for some organizations a premature one. It’s not always an easy change to make, and it’s common for organizations to find themselves stuck in a transitory state, especially those that rely on a combination of partner offerings rather than controlling all aspects of app development and technology management in-house.

Net-net, there’s a lot of buzz today around solutions that promise to make cloud-native development and orchestration at the edge as easy as flipping a switch, but the reality is that most enterprises aren’t ready to “rip and replace,” and therefore need a solution that provides a transition path.

Enter: Project EVE and Kubernetes
Simplifying the transition to a cloud-native IoT edge is the philosophy that underpins Project EVE, an open source project originally donated by ZEDEDA to LF Edge. EVE serves as an edge computing engine that sits directly on top of resource-constrained hardware and makes it possible to run both VMs and containers on the same device, thereby supporting both legacy and cloud-native apps. Additionally, EVE creates a foundation that abstracts hardware complexities, providing a zero-trust security environment and enabling zero-touch provisioning. The goal of EVE is to serve as a comprehensive, flexible foundation for IoT edge orchestration, especially compared to other approaches that are too resource-intensive, support only containers, or lack the security measures necessary for distributed deployments.

With EVE as a foundation, the next step is to bridge the gap between Kubernetes from the data center and the needs of the IoT edge. Companies such as Rancher have been doing great work with K3s, taking a “top-down” approach to shrink Kubernetes and extend its benefits to more constrained compute clusters at the edge. At ZEDEDA, we’ve been taking a “bottom-up” approach, working in the open source community to meet operations-focused customers where they are today, with the goal of then bridging Project EVE to Kubernetes. Through open source collaboration, these two approaches will meet in the middle, resulting in a version of Kubernetes that’s lightweight enough to power compute clusters at the IoT edge while still accounting for the unique challenges around security and scale. Kubernetes will then successfully span from the cloud to the IoT edge.
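To illustrate the “meet in the middle” end state, here’s a minimal sketch, again in Go with client-go, that deploys a containerized app pinned to IoT edge nodes via a nodeSelector. The node label, app name, and container image are all hypothetical; the point is that once a lightweight Kubernetes distribution runs on edge hardware, standard scheduling primitives carry over unchanged from the cloud.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	replicas := int32(1)
	deploy := &appsv1.Deployment{
		// Hypothetical app: an aggregator for business-critical edge data.
		ObjectMeta: metav1.ObjectMeta{Name: "telemetry-aggregator"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "telemetry-aggregator"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"app": "telemetry-aggregator"},
				},
				Spec: corev1.PodSpec{
					// Hypothetical label: schedule only onto IoT edge nodes.
					NodeSelector: map[string]string{"node-role.example.com/iot-edge": "true"},
					Containers: []corev1.Container{{
						Name:  "aggregator",
						Image: "registry.example.com/telemetry-aggregator:1.0", // hypothetical image
					}},
				},
			},
		},
	}

	// Create the Deployment; the scheduler places it only on matching nodes.
	_, err = clientset.AppsV1().Deployments("default").Create(context.TODO(), deploy, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```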

An exciting road ahead, but it will be a journey
Kubernetes will play an important role in the future of edge computing, including for more constrained devices at the IoT edge. As companies look to continually update and reconfigure the applications running on their edge devices, they’ll want the flexibility and portability benefits of containerized app development, scalability, clustering for availability, and so forth. But for now, many companies also need to continue supporting their legacy applications that cannot be run in containers, which is why a hybrid approach between “traditional” and cloud-native development is necessary.

We at ZEDEDA recognize this need for transition, which is why we developed EVE to ensure that enterprises can support both technology approaches. This flexibility gives companies that are still working through the technical and cultural shift to cloud-native development the room to manage that transition.

As cloud-native applications begin to make up a greater percentage of workloads at the edge, though, we’ll need the tools to manage and orchestrate them effectively. This is why we feel it’s important to put a strong effort now behind developing a version of Kubernetes that works for the IoT edge. With EVE, we’ve created a secure, flexible foundation from which to bridge the gap, and we’re excited to continue working with the open source community to extend Kubernetes to this space. For more information on EVE or to get involved, visit the project page on the LF Edge site.
