Accelerating AI and IoT at the Edge

June 10, 2021

Summary

AI and IoT workloads are increasingly being deployed at the edge for reasons including reducing latency and network bandwidth consumption and ensuring autonomy, security and privacy. To address the associated challenges at scale, technology providers should focus on open infrastructure, orchestration and security tools, layering their necessarily unique hardware, software and services on top.

The Importance of Orchestration to Accelerate AI and IoT at the Edge

In a perfect world we’d simply run the bulk of AI and IoT workloads in the cloud, where compute is centralized and readily scalable. However, the benefits of centralization must be balanced against the factors that drive decentralization. The explosion of devices and data is driving a need for more processing at the edge, for reasons including reducing latency and network bandwidth consumption and ensuring autonomy, security and privacy. Edge computing for AI and IoT workloads simply means moving some aspects of data aggregation and analytics out of centralized data centers, closer to where the data originates and where decisions are made in the physical world.

The Killer Edge Apps

The “edge” is not one location but rather a continuum, spanning billions of constrained devices in the field to thousands of regional data centers located just downstream of centralized cloud resources. Use cases that are latency-critical or require a high degree of security and privacy will always be driven proximal to the user or process in the field: a vehicle’s airbag can’t be deployed from the cloud when milliseconds matter, and PII is best stripped from consumer interactions close to where they happen. Meanwhile, latency-sensitive (but not latency-critical) applications will often take advantage of upstream edge tiers (e.g. those offered by telcos and service providers) or the cloud because of the scale factor spanning many end users. Where workloads are best deployed across the edge-to-cloud continuum is ultimately driven by a balance of performance and cost.

From a bandwidth consumption perspective, computer vision is the killer app for edge AI. Here, AI inferencing models tapping into streaming video feeds trigger events for use cases such as identity (e.g. demographics, surveillance), object recognition (e.g. license plates, weapons), quality control, predictive maintenance and compliance monitoring. For these use cases the analysis is often done locally, with only critical events sent to backend systems, such as messages along the lines of “the person at the door looks suspicious” or “please send a tech, this machine is about to fail”.
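To make the pattern concrete, here is a minimal Python sketch of this local-analysis, event-only-backhaul loop. It assumes OpenCV for frame capture; the detect() model, the publish() transport and the 0.9 score threshold are illustrative placeholders, not anything prescribed here.

```python
# A minimal sketch of edge-side event filtering for a video feed.
import json

import cv2


def detect(frame):
    # Placeholder for the locally deployed inferencing model; a real
    # deployment would run object detection here and return scored hits.
    return []  # e.g. [{"label": "person", "score": 0.97}]


def publish(event):
    # Stand-in for the backhaul transport (MQTT, HTTPS, etc.); only this
    # small JSON message ever crosses the network, never the video.
    print(json.dumps(event))


cap = cv2.VideoCapture(0)  # local camera; frames are analyzed in place
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for event in detect(frame):
        if event["score"] > 0.9:  # backhaul only critical events
            publish(event)
cap.release()
```

The key property is that full frames never leave the device; only small event messages do.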

Another example where bandwidth drives the need to deploy AI at the edge is vibration analysis as part of a use case like predictive maintenance. Here, data sampling rates well over 1 kHz (1,000 samples per second) are common, often approaching 8–10 kHz, because these higher resolutions improve visibility into impending machine failures. That is a significant amount of continuously streaming data, and it is cost-prohibitive to send it all to a centralized data center for analysis. Instead, AI inferencing models are commonly deployed on compute hardware proximal to the machines to analyze the vibration data in real time, backhauling only events that indicate an impending failure.
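The arithmetic behind “cost-prohibitive” is easy to sanity-check. Below is a back-of-envelope sketch; the fleet size and sample width are assumptions for illustration.

```python
# Back-of-envelope arithmetic behind the bandwidth claim.
SAMPLE_RATE_HZ = 8_000   # toward the high end of the 8–10 kHz range
BYTES_PER_SAMPLE = 2     # 16-bit samples (assumed)
SENSORS = 100            # sensors on one plant floor (assumed)

bytes_per_day = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SENSORS * 86_400
print(f"{bytes_per_day / 1e9:.0f} GB/day of raw vibration data")
# -> 138 GB/day of continuous streaming, versus a handful of tiny
#    "impending failure" events when analysis runs next to the machines.
```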

Today the edge component of AI typically involves deploying inferencing models local to the data source, but even that will evolve over time. The decision on where to train and deploy AI models comes down to balancing considerations across six vectors: scalability, latency, autonomy, bandwidth, security and privacy.
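One way to picture that balancing act is to score a workload against each vector and weight the vectors by what the use case cares about. The sketch below is purely illustrative; the scores and the equal weighting are assumptions, not a formula from any particular methodology.

```python
# Illustrative scoring of one workload against the six placement vectors.
VECTORS = ["scalability", "latency", "autonomy", "bandwidth", "security", "privacy"]

# Per vector: 0 favors the cloud, 1 favors the edge (assumed scores).
vibration_monitoring = {
    "scalability": 0.2, "latency": 0.9, "autonomy": 0.8,
    "bandwidth": 1.0, "security": 0.6, "privacy": 0.4,
}

def edge_affinity(scores, weights=None):
    # Weighted average across the vectors; equal weights by default.
    weights = weights or {v: 1.0 for v in VECTORS}
    return sum(weights[v] * scores[v] for v in VECTORS) / sum(weights.values())

print(f"edge affinity: {edge_affinity(vibration_monitoring):.2f}")  # ~0.65
```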

It Takes a Village

Edge computing environments get more complex in terms of hardware, software and domain knowledge the closer you get to the physical world. For this reason, deploying AI and IoT solutions in the field introduces additional technical and logistical challenges, and it takes a village of industry players to address them: hardware OEMs and silicon providers for processing; cloud scalers to provide tools and datasets; telcos to manage the connectivity piece; software vendors to help productize IoT frameworks and AI models; domain-expert data scientists and system integrators to develop industry-specific solutions; and security providers to ensure deployments are secure.

Over the past several years the market has seen a dizzying array of IoT platforms, but this is beginning to shake out as industrial players move away from building their own platforms in favor of leveraging infrastructure from the cloud scalers. As has been the case in the IT world for years, the leaders in AI and IoT at the edge will offer necessarily unique hardware, software and services on top of open, standardized infrastructure. Given the complexity at the edge, it is especially important to focus on an open ecosystem that prevents lock-in to any given provider. Here, open source collaboration among communities such as the Linux Foundation’s LF Edge organization is a critical enabler for organizations to not only have choice but also focus on value creation instead of “undifferentiated heavy lifting”.

Deploying Solutions in the Real World

It’s important to consider that many of the general considerations for deploying AI in the cloud carry over to the edge. For instance, results must be validated in the real world: just because a particular model works in a pilot environment doesn’t guarantee the success will be replicated at scale, once external factors such as camera angle and lighting come into play. Just Google “AI chihuahua muffin” for an example of the real-world challenges in distinguishing similar objects.

The reality is that to date many AI and IoT solutions have been lab experiments or limited field trials, not yet deployed and tested at scale. As organizations start to scale their solutions in the field, they quickly run into these challenges. As such, in addition to having the right ecosystem, including domain experts who can pull solutions together, a key factor for success at the edge is a consistent delivery and orchestration mechanism. While many of the same orchestration principles apply across the edge continuum, the inherent technical tradeoffs dictate that tool sets can’t be exactly the same across server-class infrastructure upstream and the increasingly constrained and diverse hardware at the distributed edge.

Stakeholders from IT and OT administrators to developers and data scientists need robust remote orchestration tools, not only to initially deploy and manage IoT infrastructure and AI models at scale in the field, but also to continue monitoring and assessing the overall health of the system. Traditional data center orchestration tools are not well suited to the distributed edge: they are too resource-intensive, don’t comprehend the scale factor, and presuppose a near-constant network connection to the central console, which often isn’t available in remote edge environments. Deploying edge computing for AI and IoT use cases outside of physically secure data centers also introduces new requirements, such as a Zero Trust security model and Zero Touch deployment capability that accommodates non-IT skill sets.
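One pattern that addresses the intermittent-connectivity point is a pull-based reconciliation loop: the node checks in for desired state when it can, and keeps enforcing the last known-good state when it can’t. The sketch below is illustrative only; the controller URL, state shape and 60-second cadence are assumptions, not any particular product’s design.

```python
# A minimal sketch of pull-based reconciliation for an edge node that
# may be offline for long stretches.
import json
import time
import urllib.request

CONTROLLER = "https://controller.example.local/nodes/edge-42/desired"
desired = {"app_version": "unknown"}  # cached last known-good desired state


def fetch_desired():
    # Opportunistic check-in with the central console.
    with urllib.request.urlopen(CONTROLLER, timeout=5) as resp:
        return json.load(resp)


def reconcile(state):
    # Compare desired vs. actual and converge (pull images, restart
    # services, rotate credentials, ...). Elided in this sketch.
    pass


while True:
    try:
        desired = fetch_desired()
    except OSError:
        pass  # offline: keep enforcing the cached desired state
    reconcile(desired)
    time.sleep(60)
```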

In Closing

In order to scale AI and IoT solution deployments outside of traditional data center locations, enterprises need orchestration solutions that account for the unique needs of the distributed edge in terms of diversity, resource constraints and security, and that help a diverse mix of stakeholders spanning operations, IT, developers and data scientists keep tabs on both the hardware and software deployed in the field. This includes keeping software up to date to ensure security, and having visibility into any potential issues that could lead to inaccurate data or analytics. Especially important is preventing total failure, which often translates into immediate production and safety risk in the operations world. Finally, it’s important that the selected infrastructure foundation is based on an open model that prevents lock-in to any given cloud, application or hardware, to maximize potential in the long run.
