Activeport Group (ASX: ATV) has unveiled a new AI model routing platform designed to provide secure, sovereign and low-latency execution of large-scale AI workloads. The launch shifts the company beyond its established GPU orchestration role in cloud gaming and into the fast-growing market for private AI infrastructure.
The move aligns with a broader industry trend. Governments, telcos, cloud operators and enterprise users are increasingly prioritising local execution of AI models to reduce latency and improve compliance. Demand for alternatives to fully public AI APIs is rising as security rules tighten and infrastructure costs remain unpredictable.
Platform Capabilities
Activeport’s new AI Gateway offers unified routing intelligence, an enterprise-grade API and access to sovereign, locally hosted GPU clusters. The platform supports private execution of hundreds of models on GPU infrastructure already deployed by telcos using Activeport’s orchestration software.
The system routes workloads based on speed, cost, context window size and data-residency requirements. It also includes auto-provisioning of GPU nodes, real-time workload management and zero-touch scaling, enabling customers to run model inference across hybrid cloud environments while retaining full control of performance and data handling.
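To make the routing idea concrete, the sketch below shows how such a decision can work in principle: hard constraints (context window, data residency) filter the candidate endpoints, then a weighted score over latency and cost picks the winner. This is purely illustrative; all names, numbers and the scoring formula are hypothetical, and Activeport has not published its actual routing logic.

```python
# Toy model-routing sketch (hypothetical; not Activeport's implementation).
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    region: str           # where the GPU cluster is hosted
    latency_ms: float     # typical time-to-first-token
    cost_per_1k: float    # price per 1,000 tokens
    context_window: int   # maximum tokens the model accepts

def route(endpoints, tokens_needed, required_region=None,
          weight_latency=1.0, weight_cost=1.0):
    """Pick the endpoint that satisfies the hard constraints with the best score."""
    # Hard constraints: context window and data residency are non-negotiable.
    eligible = [
        e for e in endpoints
        if e.context_window >= tokens_needed
        and (required_region is None or e.region == required_region)
    ]
    if not eligible:
        raise ValueError("no endpoint satisfies the constraints")
    # Soft criteria: weighted blend of latency and estimated job cost (lower wins).
    return min(eligible, key=lambda e: weight_latency * e.latency_ms
               + weight_cost * e.cost_per_1k * tokens_needed / 1000)

endpoints = [
    Endpoint("local-sovereign", "AU", latency_ms=40, cost_per_1k=0.8, context_window=32_000),
    Endpoint("public-cloud", "US", latency_ms=120, cost_per_1k=0.3, context_window=128_000),
]

# A data-residency rule forces the sovereign cluster even though it costs more.
print(route(endpoints, tokens_needed=8_000, required_region="AU").name)
```

The point of the sketch is the separation of concerns the article describes: residency and context size act as filters, while speed and cost are traded off only among the endpoints that remain.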
CEO Peter Christie said: “Many of our telco customers are making significant investments in GPUs to operate sovereign AI models.” He added: “Extending our GPU orchestration from cloud gaming to AI inference is a natural progression.”
He also noted the company’s enthusiasm for diffusion model hosting, explaining that these workloads align well with Activeport’s existing streaming optimisation capability.
Strategic Context
The global AI inference market is expected to exceed US$25 billion as private execution becomes more common. Much of this demand stems from sectors that cannot rely solely on public clouds due to regulatory and commercial constraints. Activeport is positioning itself to become the middle layer connecting private infrastructure with public platforms such as AWS Bedrock, Google Vertex, Groq and Cerebras.
The company’s telco-grade orchestration technology already manages large GPU clusters for international cloud-gaming networks. Converting these same clusters into AI inference environments reduces deployment friction for customers and deepens Activeport’s presence in high-value network infrastructure.
Why It Matters for Investors
Activeport has been signalling a shift into sovereign compute for several months. Today’s launch places the company in a rapidly growing segment where demand is accelerating, especially across Australia, the Middle East and Asia. The rise of national AI strategies and stricter compliance frameworks is also strengthening the case for local hosting and private routing of sensitive workloads.
Although competition in AI infrastructure is intensifying, the company’s existing deployments in telecom networks give it a pathway into markets that require highly reliable, low-latency GPU operations.
