AUSTIN, Texas, March 18, 2025 (GLOBE NEWSWIRE) -- Quali, a leading provider of platform engineering solutions for infrastructure automation and management, announced today support for AI-centric cloud platform Nebius that will accelerate adoption and unlock velocity for AI builders leveraging Nebius-powered infrastructure.
Nebius provides an AI-centric cloud platform built for intensive AI workloads, offering large-scale GPU clusters along with tools and services for developers. By giving AI builders the compute, storage, managed services, and tools they need to build, tune, and run their models all in one place, Nebius is helping drive the explosive growth of AI adoption.
Quali Torque is a Software-as-a-Service platform that simplifies the orchestration, delivery, and management of multi-cloud and on-premises infrastructure. To support AI workloads, Torque creates and deploys the code behind each layer of the AI tech stack: infrastructure, data services, models, agents, and other applications. After deployment, Torque continually monitors the state of all live infrastructure, triggers notifications about errors or configuration drift, automates routine actions such as training and data quality assurance, and automatically scales GPUs to support these actions.
As a result of this integration, Torque automates the creation of Environment as Code blueprints that can be used to provision and optimize Nebius cloud-based infrastructure as part of a managed environment supporting the full agentic AI tech stack. This reduces the learning curve, accelerates productivity, and improves efficiency for Nebius customers by automating complex tasks and simplifying the provisioning and maintenance of GPU infrastructure.
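To make the Environment as Code idea more concrete, the short Python sketch below models an environment as a set of stack layers, each with parameters and dependencies, that can be rendered into a reusable, versionable definition. It is purely illustrative: the class names, resource-type identifiers, region, and parameters are hypothetical and do not represent Torque's actual blueprint schema or Nebius's resource naming.

```python
# Illustrative sketch only -- not Quali Torque's actual blueprint format.
# It models the "Environment as Code" concept: each layer of the AI stack
# is declared once, with its parameters and dependencies, so the same
# definition can be launched repeatedly.
from dataclasses import dataclass, field


@dataclass
class Layer:
    name: str                          # e.g. "gpu-cluster", "data-services"
    resource_type: str                 # hypothetical resource identifier
    parameters: dict = field(default_factory=dict)
    depends_on: list = field(default_factory=list)


@dataclass
class EnvironmentBlueprint:
    name: str
    layers: list

    def to_spec(self) -> dict:
        """Render the blueprint as a plain dict that could be serialized
        (e.g. to YAML) and versioned alongside other IaC assets."""
        return {
            "blueprint": self.name,
            "layers": [
                {
                    "name": layer.name,
                    "type": layer.resource_type,
                    "parameters": layer.parameters,
                    "depends_on": layer.depends_on,
                }
                for layer in self.layers
            ],
        }


# A hypothetical environment: a Nebius-hosted GPU cluster feeding a
# data-services layer and a model-serving layer. All identifiers below
# are placeholders for illustration.
blueprint = EnvironmentBlueprint(
    name="agentic-ai-stack",
    layers=[
        Layer("gpu-cluster", "nebius/gpu-cluster",
              {"gpus": 8, "region": "example-region-1"}),
        Layer("data-services", "managed/vector-db", depends_on=["gpu-cluster"]),
        Layer("model-serving", "managed/inference", depends_on=["data-services"]),
    ],
)

print(blueprint.to_spec())
```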
Key highlights of this release include:
- Reusable Environment as Code Blueprints Defining Nebius Cloud-Based GPU Clusters: Torque defines each layer of the AI tech stack as an Environment as Code blueprint that can be executed repeatedly and managed continuously. This eliminates the need to re-orchestrate the provisioning parameters and dependencies required to generate environments supporting any layer of the AI tech stack, including Nebius cloud-based GPU clusters.
- AI-Driven Blueprint Design and Creation: To accelerate and simplify the orchestration of AI workloads, Torque’s AI Copilot automatically orchestrates the user’s Infrastructure as Code (IaC) and other automation assets into a reusable blueprint defining all resources, parameters, dependencies, and other components needed to generate an AI workload, such as Nebius cloud-based infrastructure. The blueprint orchestration process also includes an interactive Design Canvas, which displays a graphical layout of the environment, lets users modify it through a drag-and-drop interface, and automatically updates the blueprint code to reflect those modifications.
- Simplified Provisioning & Integration Across the AI Tech Stack: Once an environment blueprint is created, Torque users can launch the environment from Torque’s UI, save the blueprint for future use, and make it available for integration alongside other layers of the AI tech stack. Torque’s native provisioning experience offers simple pick-lists and form fields for setting inputs and other parameters for infrastructure and environments, including live infrastructure services already deployed via Torque. For example, users can select an actively deployed Nebius cloud-based GPU cluster as an input when provisioning cloud-based data services in support of an AI model or ML application. This reduces the learning curve and accelerates operations for Nebius cloud customers by simplifying the user experience and eliminating redundant configuration and provisioning processes.
- Automating Day 2 Actions on Live Nebius Infrastructure and Other AI Resources: Because Torque treats the AI tech stack as a managed environment, the platform can perform actions on live resources. For example, users can create Torque Workflows that execute routine actions, such as monitoring infrastructure utilization, AI model drift, and inference accuracy, on a recurring basis, in response to custom events, or on demand via a single click in Torque’s native UI (a rough illustration of this pattern follows this list). This level of automation reduces manual work and ensures that critical actions are executed consistently.
- Dynamic GPU Scaling in Response to Application Needs: Torque automatically scales GPU resources to match utilization needs across the phases of the AI solution’s infrastructure lifecycle. This ensures adequate GPU capacity for resource-intensive activities such as training, while scaling down to avoid over-provisioning as the solution transitions to less resource-intensive work.
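The Python sketch below is a rough illustration of the Day 2 automation and scaling pattern referenced in the last two items above: a recurring workflow samples GPU utilization and scales a cluster up or down against thresholds. It is a sketch under stated assumptions only; the function names, thresholds, and cluster identifier are hypothetical placeholders, and it does not use Torque's or Nebius's actual APIs.

```python
# Illustrative sketch only -- not Quali Torque's Workflow API or Nebius's API.
# It shows the general Day 2 pattern: a routine check that runs on a schedule
# and triggers a scaling action when a utilization threshold is crossed.
import random
import time


def gpu_utilization(cluster: str) -> float:
    """Placeholder for a real metrics query (e.g. against a monitoring
    endpoint). Returns utilization as a fraction between 0 and 1."""
    return random.uniform(0.0, 1.0)


def scale_gpu_cluster(cluster: str, delta: int) -> None:
    """Placeholder for the scaling call a platform would expose."""
    print(f"scaling {cluster} by {delta} GPU node(s)")


def utilization_workflow(cluster: str, high: float = 0.85, low: float = 0.25) -> None:
    """One run of a recurring Day 2 workflow: sample utilization and
    scale up or down accordingly."""
    util = gpu_utilization(cluster)
    if util > high:
        scale_gpu_cluster(cluster, +1)   # add capacity for training bursts
    elif util < low:
        scale_gpu_cluster(cluster, -1)   # shrink to avoid over-provisioning
    else:
        print(f"{cluster} utilization {util:.0%}: no action")


if __name__ == "__main__":
    # In practice this would be triggered by a scheduler or an event,
    # rather than a simple loop.
    for _ in range(3):
        utilization_workflow("example-gpu-cluster")
        time.sleep(1)
```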
“Nebius is at the forefront of the AI revolution, and they understand what AI builders need to capitalize on this opportunity,” said Lior Koriat, CEO of Quali. “Our deep knowledge and experience simplifying infrastructure complexity is a perfect fit to empower Nebius customers to accelerate their AI journey.”
About Quali:
Headquartered in Austin, Texas, Quali provides platform engineering tools to help enterprise technology and engineering teams accelerate and optimize the use of multi-cloud infrastructure. Global 2000 enterprises rely on Quali’s solutions to democratize cloud access securely and efficiently by simplifying the experience of deploying application environments and enforcing cloud governance at scale. For more information, please visit quali.com and follow Quali on LinkedIn.
Contact Information:
Colin Neagle
VP of Marketing
Quali
Colin.n@quali.com