If you’ve tried taking your reliable cloud-native Kubernetes (K8s) workflow and pushing it straight out to a remote oil rig, a container ship fleet, or a manufacturing floor, you already know the result: operational frustration and budget-killing truck rolls. K8s was designed for the predictable, resource-rich, secure reality of the data center. The edge is nothing like that.
If traditional cloud orchestration is like meticulously scheduling a high-speed rail system on perfect tracks, ZEDEDA is building a fleet of off-road-capable taxis that can run on pothole-filled roads, ensuring that every vehicle knows its mission, stays secure, and achieves its desired state, regardless of road conditions.
Our recent webinar, “The Unbreakable Stack: Resilient Edge AI with GitOps and Kubernetes,” led by VP of Product Management Sachin Vasudeva and Senior Manager of Product Evangelism Robert Bush, dove deep into the architectural requirements for bridging this gap. The solution, ZEDEDA Edge Kubernetes App Flows, offers a comprehensive, three-layered stack that delivers true, secure GitOps-driven continuous delivery (CD) across vast, distributed edge environments.
Here is the technical breakdown of how ZEDEDA makes Kubernetes practical and resilient at scale, and thus a future-proof platform for your edge AI projects.
The Edge Reality: Why Cloud Assumptions Crumble
For those moving from cloud to edge, we need to jettison three key architectural assumptions:
1. Connectivity can be Terrible (and Intermittent)
Cloud infrastructure assumes fast, reliable, full-duplex communication. At the far edge, especially in sectors like maritime shipping (600,000 containers across 600 ships) or remote energy production (drill rigs), bandwidth can vary drastically, from tens of megabytes per second down to 250 kilobytes per second. Connectivity can be unreliable, challenged by latency, and often drops off completely, leaving the device in a disconnected state.
A deployment solution must anticipate this: it must be designed for offline resilience and eventual consistency. Trying to maintain a constantly chatty K8s control plane under these conditions generates massive data backhaul costs and operational friction.
2. Physical Security can be Poor to Non-Existent
In a physically secure data center, security focuses primarily on the network perimeter. At the distributed edge, devices are frequently unmanned, physically accessible, and exposed. Your security might literally be “a padlock out in the middle of nowhere”. This raises critical concerns: What happens if someone walks away with the device? What if someone plugs into a physical port? The solution must incorporate security from the bare metal layer up.
3. Scaling Means Scaling Clusters, Not Nodes
In the cloud, scaling means adding more nodes to a few large clusters. At the edge, architects are dealing with a large number of small clusters, often single-node, three-node, or maybe up to 10 nodes. For deployments at the scale of 5,000 clusters, manually managing each is impossible; you need a way to organize and deploy applications across thousands of clusters simultaneously.
This graphic sums up the difference between cloud Kubernetes scaling requirements and those of edge Kubernetes:
The ZEDEDA Full-Stack Edge Blueprint
ZEDEDA addresses these contradictions with a full-stack edge Kubernetes solution that manages everything from the bare-metal operating system to application deployment via GitOps.
Layer 1: Edge Device Life Cycle Management (The Zero-Trust Core)
This foundational layer, built on the open-source EVE-OS (Edge Virtualization Engine Operating System), tackles physical security and initial provisioning:
- Zero-Touch Provisioning: Devices require only that EVE-OS be installed (at the factory or in a lab) and that they be wired for power and network connectivity. No further manual configuration is needed at the edge location. Once powered on, the device calls home to ZEDEDA Cloud over a secure outbound WebSocket connection to collect its configuration.
- Hardware-Backed Identity: Every edge node uses a Trusted Platform Module (TPM) for a hardware-backed identity. On first boot, EVE-OS executes a measured boot process, collecting cryptographic measurements in the TPM and uploading the PCR (Platform Configuration Register) values to ZEDEDA Cloud for remote attestation. This verifies the device’s integrity and prevents tampering.
- Secure Pull Model: The architecture enforces a zero-trust perimeter by requiring edge nodes to pull updates and manifests after authentication; the controller cannot push updates to nodes. This fundamentally protects the fleet if the central control plane were ever compromised. Additionally, direct user login to the underlying EVE-OS hardware is prohibited by default.
- Footprint: EVE-OS maintains a minimalist footprint, requiring less than 500 MB of storage. For a comfortable deployment that accounts for virtualization, storage replication (for failover), and third-party AI applications (like vision-based monitoring), the recommendation is 12 CPU cores, 16 GB of RAM, and enterprise-grade NVMe SSDs.
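The measured boot described above rests on a simple primitive: a PCR can only be *extended*, never overwritten, so the final value is a hash chain over every boot component. A minimal Python sketch of that mechanics (the component names are illustrative, not EVE-OS internals):

```python
import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """Extend a PCR: new value = SHA-256(old PCR value || measurement).

    Because each value folds in all prior measurements, omitting, swapping,
    or altering any boot component yields a different final digest.
    """
    return hashlib.sha256(pcr_value + measurement).digest()

# Hypothetical boot chain, measured in order (names chosen for illustration).
boot_chain = [b"firmware", b"bootloader", b"eve-os-kernel", b"eve-os-rootfs"]

pcr = bytes(32)  # PCRs start zeroed at power-on
for component in boot_chain:
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())

# During attestation, the controller compares the reported digest against a
# known-good value; any mismatch flags a tampered boot chain.
print(pcr.hex())
```

This is why remote attestation works over an untrusted link: the device cannot forge a clean PCR value after booting modified software, because it would need a SHA-256 preimage to do so.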
Layer 2: Kubernetes Infrastructure Life Cycle Management
This layer, provided by ZEDEDA Edge Kubernetes Service, delivers the K8s runtime environment and is built on standard open-source tools to avoid vendor lock-in. It includes:
- K3s Distribution: ZEDEDA uses K3s, a standard, lightweight Kubernetes distribution.
- Hybrid Workload Support: The platform supports running both modern containers (bare metal) and legacy VM applications (like Ignition apps for SCADA) side-by-side on the same hardware, utilizing a hypervisor layer. This is key for gradual modernization in brownfield industrial environments.
- Hardware Diversity and AI: The platform natively supports diverse hardware, including AMD64 and ARM architectures. Crucially for AI inference, it offers bare metal support and handles resource matching for specialized compute like GPUs, NPUs, and TPUs from vendors like NVIDIA and Qualcomm. This capability is crucial for computer vision, which is projected to be the largest edge AI use case by 2030, focused on real-time tasks such as quality control (e.g., automated product inspections) or worker safety (e.g., PPE detection at well sites).
- Scaling Architecture: ZEDEDA can orchestrate tens of thousands of small, distributed clusters. To handle this scale, the solution uses a dynamic sharding architecture that enables horizontal scaling without a theoretical ceiling, ensuring performance across the entire fleet. The company has already deployed over 20,000 nodes and targets support for 100,000.
Layer 3: GitOps App Flows and CI/CD for the Disconnected Edge
The top layer, ZEDEDA Edge Kubernetes App Flows, enables true continuous integration and continuous delivery (CI/CD) across thousands of clusters, replacing error-prone manual deployments.
With this product, there are two primary methods for application deployment:
- Marketplace Deployment (Ad Hoc/Manual): For single deployments or staging, the Kubernetes Marketplace allows users to upload standard manifest files or Helm charts. The platform supports versioning and tracking. Importantly, you can edit the values file right before deployment for last-minute, specific changes to the Helm chart.
- GitOps Continuous Delivery (Fleet Scale): For large-scale, automated rollouts, GitOps is the preferred method, since it provides:
- Single Source of Truth: Instead of uploading a Helm chart once, ZEDEDA Cloud monitors a specified Git repository. This repository can be public and accessed via HTTPS, or private and secured using SSH keys stored in ZEDEDA Store.
- Targeting Scale: Since you’re dealing with many small clusters, the orchestration leverages cluster groupings defined by key-value tags (e.g., location:site67 or function:cam-control). A single Git commit can then target and deploy simultaneously across all associated clusters in that group.
- Near Real-Time Deployment: The platform constantly polls the repository (roughly every 15 seconds, as demonstrated in the webinar). When a change is detected (such as updating the replica count from 1 to 3 or bumping the application version), the deployment is triggered immediately. The system automatically performs audits and diffs based on the Git changes to ensure the desired state is enforced.
- Disconnectivity Resilience: This GitOps workflow is essential for environments with unreliable connectivity. If a cluster is down or disconnected when a commit rolls out, ZEDEDA Cloud caches the deployment state until that cluster comes back online, at which point it is reconciled.
- CI/CD Integration: When a Helm chart is updated in a git repository, such as GitHub or GitLab, your fleet is automatically updated in near real time.
This diagram summarizes the above three layers:
Management and Observability
ZEDEDA has designed its Kubernetes management and observability features for architects and DevOps teams in the following ways:
- API-First Approach: ZEDEDA is fundamentally an API-first company. Most large-scale operations are handled programmatically via REST APIs or the ZEDEDA Terraform provider.
- Secure kubectl Access: Users can access traditional K8s controls and run standard kubectl commands securely via an in-browser shell (ZEDEDA EdgeView) or by downloading a kubeconfig file. This access is tunneled through ZEDEDA Cloud and does not require opening SSH ports on the edge device.
- Health Monitoring: Nodes publish health and status updates via a dedicated WebSocket connection to ZEDEDA Cloud. This data can then be connected to monitoring tools like Prometheus.
Conclusion: The Edge-Native Advantage
The fundamental difference here is domain expertise. ZEDEDA’s core strength is its eight years of experience focused solely on the edge, rather than force-fitting a cloud product to it.
This commitment to edge reality results in an integrated solution that brings the velocity and consistency of cloud-like DevOps workflows, using GitOps, Terraform, or APIs, to distributed edge infrastructure. By providing a single, managed full-stack solution, from EVE-OS on bare metal, through K3s cluster lifecycle management, to GitOps application deployment, ZEDEDA achieves resilience, security, and compliance while reducing the enormous operational burden of managing orchestrators across a massively fragmented and constrained landscape.
Next Steps
Check out this demo of ZEDEDA Edge Kubernetes App Flows:
You can also:
- Watch our webinar replay
- View the product page
- Dive into the documentation
Contact us at [email protected] to learn more about how we can help you use Kubernetes to better manage your edge device fleets.