Deploy and Scale AI Solutions at the Edge with Ease and Efficiency
Organizations of all sizes and types are eager to realize the benefits of implementing AI and Machine Learning (ML) technologies, but building and deploying these projects requires addressing a number of challenges. These include the complexity of AI pipeline development, compatibility with edge hardware and AI accelerators, and the unique nature of remote sites, which are often difficult to reach and have no onsite IT staff.
ZEDEDA and Scailable have partnered to provide a scalable AI/ML solution that enables organizations to implement their desired AI models across a fleet of devices securely and efficiently, reducing development and deployment time to hours instead of months.
Scailable enables effortless edge AI and ML deployment and management on the ZEDEDA edge platform. With Scailable, organizations can deploy AI-powered computer vision and ML solutions quickly, easily, and with high performance, without having to worry about the complexity of AI pipeline development or edge hardware compatibility.
ZEDEDA delivers an open, distributed, cloud-native edge orchestration and management solution, simplifying the security and remote management of edge infrastructure and applications at scale. ZEDEDA leverages an open architecture built on EVE-OS, from the Linux Foundation. EVE-OS delivers an industry-leading identity and software attestation workflow that ensures the device can be trusted and that the entire software stack is exactly as expected.

Benefits of the Joint Scailable and ZEDEDA Solution
- No Re-Engineering: The combination of the Scailable AI manager and ZEDEDA cuts time-to-market from months to hours by making it effortless to configure and deploy AI-model pipelines to new edge infrastructure. Today, engineering teams typically have to re-engineer the selected AI model for specific edge hardware and accelerators and develop a pipeline to pre-process inputs and post-process inference results. Scailable integrates edge AI deployment and configuration through a no-code UI, making AI/ML deployment as easy as “select and configure” and making models portable to any ZEDEDA-supported hardware.
- Fast and Efficient Inference: The Scailable AI manager ensures high efficiency and fast inference in a minimal footprint. This leaves additional CPU/GPU and RAM resources available on the edge device for data processing and other ZEDEDA-deployed applications, and it minimizes the hardware requirements and power consumption of the edge device.
- Heterogeneity: Scailable offers remote OTA deployment of AI/ML models to a fleet of edge devices, along with centralized model management, versioning, and model portability. Any model can be deployed to a heterogeneous fleet of edge devices, including models customers upload themselves. The Scailable platform optimizes and compiles the model for low-bandwidth OTA deployment and efficient execution on the edge device.
- API Integration: The inference output of the Scailable AI manager can be easily integrated through standard APIs with other applications, including legacy applications, running on the ZEDEDA edge device, or it can be sent back to the data center or cloud for further action (see the sketch following this list).
- Simplified Model Updates: The Scailable AI manager can optionally be configured to collect data on inference results, which can be fed back to the AI training platform to make the AI model more robust. After retraining, Scailable can update the entire fleet of devices instantaneously through the secure ZEDEDA edge platform, or as soon as devices come online.
- Secure Sandbox: The AI/ML model and pipeline are executed on the device in the Scailable AI sandbox, which runs as a virtual application in the secure ZEDEDA environment. Models are pre-compiled before deployment and are thus hardened against tampering on the edge device.
- Simplified Edge Management and Orchestration: ZEDEDA provides centralized management and orchestration of edge devices, enabling administrators to remotely configure, monitor, and update hardware and applications.
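
To make the API Integration benefit concrete, the sketch below shows a small service that another application on the edge node could run to receive inference results pushed over HTTP. The `/inference` path and the JSON payload fields are hypothetical assumptions for illustration; the actual output format and delivery mechanism of the Scailable AI manager may differ.

```python
# Hypothetical receiver for inference results pushed by the AI manager.
# The /inference path and payload fields are illustrative assumptions,
# not the documented Scailable output format.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/inference":
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        result = json.loads(self.rfile.read(length))
        # Example downstream action: log detections, forward to a legacy app,
        # or relay the result to a data center or cloud endpoint.
        print("model:", result.get("model"), "detections:", result.get("detections"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()
```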

Technology Overview
The Scailable and ZEDEDA solution enables customers to effortlessly deploy and implement their selected or trained AI/ML model to their fleet of devices.
The Scailable AI manager installs in a VM on edge nodes within minutes via the ZEDEDA Marketplace, which is used to define the desired state of the applications running on the node. This includes selecting application infrastructure (e.g., VMs, containers, Kubernetes, NFVs), application services (e.g., networking, security), and the applications themselves.
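
As a purely illustrative sketch of the "desired state" idea, the snippet below expresses a node's target application set as a simple data structure. The field names and layout are hypothetical and do not reflect ZEDEDA's actual APIs or schemas.

```python
# Hypothetical desired-state definition for an edge node; field names are
# illustrative only and do not correspond to ZEDEDA's actual API or schema.
desired_state = {
    "edge_node": "factory-gateway-01",          # hypothetical node name
    "applications": [
        {
            "name": "scailable-ai-manager",
            "type": "vm",                        # could also be a container or cluster workload
            "source": "marketplace",             # pulled from the ZEDEDA Marketplace
            "resources": {"cpus": 2, "memory_mb": 2048},
        },
        {
            "name": "node-red",
            "type": "container",
            "services": {"network": "lan0", "firewall": "default"},
        },
    ],
}

print(f"Node {desired_state['edge_node']} runs {len(desired_state['applications'])} applications.")
```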
ZEDEDA makes it easy to deploy the additional workloads that comprise the AI solution. With deep support for data and eventing platforms like Node-RED and Network Optix, AI solutions can be deployed at scale.
Once installed, the Scailable AI manager makes it possible to configure AI/ML solutions without any on-device engineering. The AI manager includes video stream decoding and data pre- and post-processing to build the AI pipeline, and it loads and configures the most efficient method of running a model on the selected device, taking into account the available CPU, GPU, NPU, or other xPU compute. This ensures efficient execution of the edge AI solution at low power and for a fraction of the cost.
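
The Scailable AI manager's internals are not shown here; as a stand-in illustration of the general idea of selecting the most capable compute backend at load time, the sketch below uses ONNX Runtime to pick from the execution providers actually available on a device and fall back to CPU when no accelerator is present.

```python
# Illustrative only: this uses ONNX Runtime as a stand-in to show backend
# selection at model load time, not the Scailable AI manager's own runtime.
import onnxruntime as ort

# Preference order: try accelerators first, fall back to CPU.
PREFERRED_PROVIDERS = [
    "TensorrtExecutionProvider",   # NVIDIA GPU via TensorRT
    "CUDAExecutionProvider",       # NVIDIA GPU
    "OpenVINOExecutionProvider",   # Intel CPU/iGPU/VPU
    "CPUExecutionProvider",        # always-available fallback
]

def load_model(model_path: str) -> ort.InferenceSession:
    """Load an ONNX model on the best execution provider available on this device."""
    available = set(ort.get_available_providers())
    providers = [p for p in PREFERRED_PROVIDERS if p in available]
    return ort.InferenceSession(model_path, providers=providers)

session = load_model("model.onnx")  # hypothetical model file
print("Running on:", session.get_providers()[0])
```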
The Scailable Platform is the cloud platform from which AI models are deployed to the AI manager installed on the edge nodes. Customers can easily add their own AI models, trained with their preferred AI training tool (e.g., TensorFlow, PyTorch) or platform (e.g., Edge Impulse), and deploy those efficiently and securely to a fleet of ZEDEDA edge devices.
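
As an example of preparing a customer-trained model for upload, the sketch below exports a PyTorch model to ONNX, a common interchange format for edge deployment. The model, file name, and input shape are placeholders, and the exact format the Scailable platform expects may differ.

```python
# Minimal sketch: export a PyTorch model to ONNX as a deployable artifact.
# The MobileNetV2 model and 224x224 input shape stand in for a customer's own model.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None)  # placeholder for a custom-trained model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # example input shape
torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",        # artifact to upload to the Scailable platform
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```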
ZEDEDA leverages an open architecture built on Project EVE, from the Linux Foundation. EVE is a lightweight, open-source, Linux-based edge operating system with open orchestration APIs. EVE runs on over 75 different hardware platforms, providing customers the flexibility to choose the ideal configuration for every workload.
Example Use Cases
The joint Scailable and ZEDEDA solution is suitable across diverse environments, use cases, and industries. Additionally, any AI or ML model can be deployed through the Scailable solution, creating an open-ended list of possible use cases. A few sample use cases and verticals include:
- Smart Security
- Transportation & Logistics
- Industry 4.0, IIOT
- Energy, Smart Grid
- Smart Buildings, City
- Agriculture
