The data center is no longer the center of data; in fact, data has “left the building!” By 2020, the amount of data created at the edge is expected to be on the order of hundreds of zettabytes (ZB), eclipsing the amount of data processed in data centers, which is estimated to be in the tens of ZB. There are several reasons for this shift: transport bandwidth and storage costs limit how much data can be moved from the edge to the cloud, and real-time, closed-loop applications require high responsiveness (low latency) and local authority at the edge. As more and more data is generated at the edge, edge computing will become increasingly intelligent to address the ever-changing needs of businesses. The cloud won’t completely disappear, however: it will continue to play a pivotal role in uncovering deeper insights by applying artificial intelligence (AI) to relevant data sets.
Edge computing standards and frameworks are still to be determined, which means organizations face the challenge of integrating a multitude of assets, gateways, protocols, applications, and legacy technologies into a solution that works. The edge is further complicated by scale (distributed sites with a global footprint) and security (no perimeter). As a result, organizations today end up with higher costs and increased complexity through use-case-based, siloed solutions that lock them into specific vendors.
Sound familiar? This same problem once existed in the data center, where you had a fixed set of compute resources (CPU, memory, storage, networking, I/O) dedicated to solving a particular use case. But with the advent of virtualization technologies in data centers, organizations are now able to run multiple applications and processes at the same time on any piece of hardware, without wasting resources, and at a much lower cost. This model eventually paved the way for the concept of cloud computing, where an organization could use servers without actually owning one. Talk about burst efficiencies and economies of scale!
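To make the consolidation idea concrete, here is a toy sketch in Python; the workload names, sizes, and host capacity are invented for illustration and are not drawn from any specific deployment:

```python
"""Toy sketch of the consolidation idea behind data center virtualization:
several isolated workloads can share one physical host as long as their
combined resource requests fit within its capacity. All names and numbers
here are illustrative assumptions, not real measurements."""

from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    cpus: int
    memory_gb: int


# One physical host that, pre-virtualization, would have been dedicated
# to a single use case.
HOST_CPUS = 16
HOST_MEMORY_GB = 64

workloads = [
    Workload("erp-frontend", cpus=4, memory_gb=16),
    Workload("batch-reporting", cpus=6, memory_gb=24),
    Workload("internal-wiki", cpus=2, memory_gb=8),
]

used_cpus = sum(w.cpus for w in workloads)
used_memory = sum(w.memory_gb for w in workloads)

if used_cpus <= HOST_CPUS and used_memory <= HOST_MEMORY_GB:
    print(f"All {len(workloads)} workloads fit on one host "
          f"({used_cpus}/{HOST_CPUS} CPUs, {used_memory}/{HOST_MEMORY_GB} GB RAM).")
else:
    print("Combined requests exceed the host; more hardware (or the cloud) is needed.")
```

The same packing logic is what lets a cloud provider sell slices of shared servers instead of whole machines.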
With so much data set to be created at the edge, can virtualization solve the issues at the edge the same way it solved them for the data center? Why not take modern-day virtualization technology, put it on an edge gateway, and break away from the shackles of today’s siloed, limited solutions?
The concept is a good one, but data center virtualization software is not built for the edge, and at a very high level it falls short in several ways.
Data center virtualization software can’t just be copied and pasted to edge deployments; the solution needs to be designed for the edge. Only edge-specific virtualization can meet those requirements, and ZEDEDA is pioneering edge virtualization for just that reason.
To solve these edge-specific challenges, ZEDEDA delivers a SaaS solution that provides complete visibility, control, and protection for the enterprise and industrial IoT edge. There are five requirements for success at the edge: zero-touch provisioning; the freedom to run any application on any hardware and connect to any cloud; IoT scale; zero-trust security; and cloud-native agility. Data center virtualization just won’t cut it at the edge.
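As a rough illustration of what those requirements could look like in practice, the sketch below shows a declarative deployment request sent to a cloud-based orchestrator; the URL, endpoint, payload fields, and token are hypothetical stand-ins invented for this example and do not represent ZEDEDA’s actual API:

```python
"""Hypothetical sketch: declaring the desired state of one containerized app
for a fleet of edge nodes through a cloud-based orchestrator. The endpoint,
payload shape, and credentials are invented for illustration only."""

import requests

ORCHESTRATOR_URL = "https://orchestrator.example.com/api/v1"  # hypothetical
API_TOKEN = "replace-with-real-token"                         # hypothetical

# Desired state: which image to run, how much of each gateway it may use,
# which nodes it targets, and where its output should land. The orchestrator,
# not an operator on site, converges every matching node to this state
# (zero-touch provisioning, any hardware, any cloud).
app_manifest = {
    "name": "vibration-analytics",
    "image": "registry.example.com/analytics:1.4.2",
    "resources": {"cpus": 2, "memory_mb": 2048, "storage_gb": 8},
    "target_nodes": {"label": "plant-floor"},
    "data_destination": "s3://example-bucket/telemetry",
}

response = requests.post(
    f"{ORCHESTRATOR_URL}/apps",
    json=app_manifest,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Deployment accepted:", response.json())
```

The point of the sketch is the model, not the code: desired state is declared once, centrally, and thousands of distributed nodes converge to it without anyone touching the hardware on site.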
About Author
ZEDEDA, Marketing
ZEDEDA, the leader in edge orchestration, delivers visibility, control and security for the distributed edge, with the freedom of deploying and managing any app on any hardware at scale and connecting to any cloud or on-premises systems. Distributed edge solutions require a diverse mix of technologies and domain expertise and ZEDEDA enables customers with an open, vendor-agnostic orchestration framework that breaks down silos and provides the needed agility and future-proofing as they evolve their connected operations. Customers can now seamlessly orchestrate intelligent applications at the distributed edge to gain access to critical insights, make real-time decisions and maximize operational efficiency.