We are all experiencing the transformative impact that traditional and generative AI, along with related technologies, are having on society. The potential reach of this impact is vast, and it’s driving increased demand within organizations for the compute resources needed to execute and scale critical AI, machine learning, and High Performance Computing (HPC) workloads.
Today, we’re excited to announce a strategic partnership with Applied Digital to provide a new on-demand AI cloud service product, purpose-built to address the exploding demand for AI computing resources and infrastructure. Applied Digital is a designer, builder and operator of next-generation digital infrastructure designed for HPC applications, and this new service will extend their AI cloud service.
Applied Digital will integrate ZEDEDA’s award-winning management and orchestration solution into their service, enabling customers to purchase and use GPU resources on-demand and at scale across all of Applied Digital’s data centers. For customers, this will mean a seamless experience with dynamic resource allocation, both on-demand and long-term, all fully integrated with orchestration, storage, and networking across all instances and applications. GPU resources can be purchased by the hour, with Applied Digital resources allocated during the initial stages of a project and then effortlessly scaled as the project moves into production.
“As our customers leverage AI workloads and HPC applications in increasingly new and innovative ways, we must be able to provide the resources they need where and when they need them,” said Wes Cummins, CEO of Applied Digital. “ZEDEDA’s cloud-native orchestration platform, coupled with our next-generation proprietary data center assets, provides the ideal foundation for us to seamlessly deliver these resources.”
Part of ZEDEDA’s vision has always been to extend the cloud model across environments, and this partnership with Applied Digital is a powerful extension of that focus. Our CEO, Said Ouissal, puts it best: “We believe the cloud computing paradigm should be extended everywhere, and we have already witnessed this at the far edge of the network. Generative AI is now driving new learning and inference infrastructure requirements, specifically in power-efficient environments, validating that edge computing is truly ubiquitous and transformative.”