Data Center Virtualization is Not for the Edge

August 26, 2019

Image credit: DSC_0030 by kattanapilot (CC BY-NC-ND 2.0)

The data center is no longer the center of data; in fact, data has “left the building!” By 2020, the amount of data created at the edge is expected to be on the order of hundreds of zettabytes (ZB), eclipsing the amount processed in data centers, which is estimated in the tens of ZB. There are several reasons for this shift: transport bandwidth and storage costs limit how much data can be moved from the edge to the cloud, and real-time, closed-loop applications require high responsiveness (low latency) and local authority at the edge. As more and more data gets generated at the edge, edge computing will have to become more intelligent to address the ever-changing needs of businesses. The cloud won’t completely disappear, however: it will continue to play a pivotal role in uncovering deeper insights by applying artificial intelligence (AI) to relevant data sets.

Edge computing standards and frameworks are still to be determined, which means organizations face the challenge of integrating a multitude of assets, gateways, protocols, applications, and legacy technologies into a solution that works. The edge is further complicated by scale (distributed sites with a global footprint) and security (no perimeter). As a result, organizations today end up with higher costs and increased complexity from siloed, use-case-specific solutions and the vendor lock-in that comes with them.

Sound familiar? This same problem once existed in the data center, where a fixed set of compute resources (CPU, memory, storage, networking, I/O) was dedicated to solving a particular use case. But with the advent of virtualization technologies in data centers, organizations can now run multiple applications and processes at the same time on any piece of hardware, without wasting resources and at a much lower cost. This model eventually paved the way for cloud computing, where an organization can use servers without actually owning them. Talk about burst efficiencies and economies of scale!

With so much data set to be created at the edge, can virtualization solve the issues at the edge the same way it solved them for the data center? Why not take modern-day virtualization technology, put it on an edge gateway, and break away from the shackles of non-optimal solutions riddled with limitations?

The concept is a good one, but data center virtualization software is not built for the edge. At a very high level, it has the following shortcomings:

  • It’s built for data centers and high-end CPUs, making it extremely resource-intensive and bloated. If you install a data center hypervisor on an edge gateway, its large footprint and high CPU/memory utilization leave very few resources for running edge applications. Furthermore, the management systems built for data center virtualization assume high-speed networks and tend to be highly interactive in operations and control (fine for the data center, where network connectivity is typically 10 Gbps). Compared to the beefy servers in the data center, edge gateways have limited resources and often have to operate with no network connectivity at all.

Image credit: Teeside Offshore Wind Farm by Paul (CC BY-NC-ND 2.0)
  • Data center virtualization software resides on highly protected servers in a very localized area, with strong defense systems in place. To update, upgrade, patch, or even control a server, the expectation is that you can reach it physically or over the network (command-and-control operations) and do what you need to do. This is not the case at the edge. At the edge, you often cannot “reach” the gateway due to network or physical limitations, and also by design, for security reasons. Think about it: if there is local access or remote console capability, it gives nefarious actors a way to hack into the device, either onsite or if the remote device is physically stolen. Security at the edge requires re-architecting the entire software stack to provide a layered security model around the data, applications, virtual machines/containers, network, and people (physical access). It is enhanced with built-in “phone home” capabilities for updates, permitting only outbound connections, driven by an eventual-consistency (end state) model that converges the device to a fail-proof desired state (a minimal sketch of this loop follows this list).
  • It doesn’t provide a unified orchestration layer for virtual machines (VMs) and containers. In the data center or in the cloud, VMs and containers are managed separately, each with its own management layer, which becomes expensive at the edge. The edge requires a single, unified orchestration layer for VMs and containers in order to stay flexible and agile across both cloud-native applications and the legacy applications running today (e.g., historian or SCADA systems).
  • It’s not designed for geographically distributed gateways at scale. By scale, I don’t mean hundreds…I mean tens of thousands of gateways in large-scale deployments. Data center virtualization is built for a high number of VMs on a low number of servers (hosts). The edge requires a very high number of gateways (nodes), each starting out with very few apps.
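
To make the “phone home,” eventual-consistency model described above concrete, here is a minimal sketch of the pull-based reconciliation loop it implies, written in Go. Everything in it is an assumption for illustration (the controller URL, the /v1/desired-state endpoint, the DesiredState fields); it is not ZEDEDA’s actual API:

```go
// A hypothetical pull-based reconciliation loop: the gateway only ever
// dials out ("phones home"), fetches its desired end state, and converges
// toward it. Names, endpoint, and fields are illustrative assumptions,
// not ZEDEDA's actual API.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// DesiredState is the end state published by the controller.
type DesiredState struct {
	AppVersion string `json:"appVersion"`
	Config     string `json:"config"`
}

// fetchDesiredState makes an outbound-only request; the gateway never
// listens for inbound commands, so there is no console to attack.
func fetchDesiredState(controllerURL string) (*DesiredState, error) {
	resp, err := http.Get(controllerURL + "/v1/desired-state")
	if err != nil {
		return nil, err // offline is fine: keep the last known-good state
	}
	defer resp.Body.Close()

	var ds DesiredState
	if err := json.NewDecoder(resp.Body).Decode(&ds); err != nil {
		return nil, err
	}
	return &ds, nil
}

func main() {
	var current DesiredState
	for {
		ds, err := fetchDesiredState("https://controller.example.com")
		switch {
		case err != nil:
			log.Printf("controller unreachable, keeping current state: %v", err)
		case *ds != current:
			log.Printf("converging to new desired state: %+v", *ds)
			// Apply changes here: download images, restart workloads, etc.
			current = *ds
		}
		time.Sleep(5 * time.Minute) // periodic pull; no inbound ports needed
	}
}
```

The key design choice is that the device never opens an inbound port: if the controller is unreachable, the gateway simply keeps running its last known-good state until connectivity returns.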

Data center virtualization software can’t just be copied and pasted to edge deployments; the solution needs to be designed for the edge. ZEDEDA is pioneering edge virtualization for just that reason.

Only with edge-specific virtualization can you:

  • Get devices up and running quickly. Dropship and instantly provision devices, with all OS and system software downloaded automatically from the cloud (see the onboarding sketch after this list).
  • Use any gateway, deploy any app or multiple apps, and connect to any cloud with a multi-cloud strategy, thereby doing away with vendor lock-in and providing the ability to run brownfield and greenfield applications on the same hardware.
  • Provide infinite scale and remote management with a single click, enabling mass deployment of applications from an app marketplace. Device installation requires no IT expertise or pre-configuration for simplified deployment, so there is no limit to the number of gateways that can be deployed.
  • Ensure device integrity, eliminating hardware spoofing and detecting anomalies in your software stack with a TPM hardware root of trust. If a device is compromised, manage it remotely (ports can be enabled or disabled) to protect against intruders or malware. Easily meet compliance and regulatory requirements, reducing data breaches and leakage with role-based access controls, cloud security, and centralized management.
  • Future-proof your deployments with a flexible and scalable solution, based on an open architecture, that makes it possible to easily add new applications on demand to meet the needs of real-time and connected operations.
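
As a rough illustration of what zero-touch provisioning on top of a hardware root of trust implies, here is a minimal Go sketch of a dropshipped device announcing itself to a controller using a hardware-backed identity. The endpoint, request shape, and key handling are assumptions made for illustration, not ZEDEDA’s actual onboarding protocol:

```go
// A hypothetical zero-touch onboarding flow: the device proves who it is
// with a hardware-backed key, and the controller matches it against an
// inventory claimed ahead of time. Endpoint, request shape, and key
// handling are assumptions for illustration, not ZEDEDA's actual protocol.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// OnboardRequest carries everything the device needs to identify itself;
// nothing is pre-configured on site.
type OnboardRequest struct {
	SerialNumber string `json:"serialNumber"`
	// In a real system this would be derived from a key pair sealed in
	// the TPM, so a spoofed device could not reproduce it.
	DevicePublicKey string `json:"devicePublicKey"`
}

// onboard dials out to the controller and asks to be adopted. As with
// everything at the edge, the connection is outbound-only.
func onboard(controllerURL, serial, pubKey string) error {
	body, err := json.Marshal(OnboardRequest{
		SerialNumber:    serial,
		DevicePublicKey: pubKey,
	})
	if err != nil {
		return err
	}
	resp, err := http.Post(controllerURL+"/v1/onboard",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("onboarding rejected: %s", resp.Status)
	}
	return nil
}

func main() {
	// The operator dropships the box; the only thing claimed in the
	// controller ahead of time is the device's serial number.
	err := onboard("https://controller.example.com", "SN-000123", "mock-public-key")
	if err != nil {
		fmt.Println("onboarding failed:", err)
		return
	}
	fmt.Println("device adopted; it will now pull its desired state from the cloud")
}
```

Because the identity is tied to hardware rather than configured on site, a spoofed or stolen device cannot impersonate a claimed serial number, and the installer’s job reduces to plugging the box in.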

To solve edge-specific challenges, ZEDEDA pioneered edge virtualization and delivers a SaaS solution that enables complete visibility, control, and protection for the enterprise and industrial IoT edge. There are five requirements for success at the edge: zero-touch provisioning; freedom of any hardware, any application, and any cloud; IoT scale; zero-trust security; and cloud-native agility. Data center virtualization just won’t cut it at the edge.
