Nscale Is The World’s First AI GPU Cloud Provider To Utilize AMD’s Instinct MI300X Accelerators
AMD has announced Nscale as the world's first AMD technology-focused cloud provider powered by its flagship AI accelerator, the Instinct MI300X.
AMD Instinct MI300X GPUs Found Home In Nscale, The World's First AMD-Focused AI Cloud Service Provider
Press Release: Today marks the official launch of Nscale as one of the world’s first AMD technology-focused cloud service providers powered by the Instinct MI300X AI GPU Accelerator. Nscale is a vertically integrated GPU cloud spun out from Arkon Energy, a 300MW Data Centre and Hosting business in North America.
Strategically located in Northern Norway, Nscale’s N1 Glomfjord site benefits from some of the lowest-cost renewable energy in the world, making it one of the most cost-efficient hubs for LLM and AI training.
Key Features and Benefits of Nscale Cloud:
- Vertical AI cloud: Nscale owns and operates the full AI stack from its modular data centers to its high-performance compute clusters, allowing it to optimize each layer of the stack for performance and cost efficiency.
- Built for sustainability: Located in Northern Norway, Nscale is powered by 100% renewable energy and leverages natural cooling solutions to deliver sustainable GPU compute services.
- Best-in-class economics: Nscale's vertical integration and low-cost renewable power enable it to deliver one of the most affordable GPU computing solutions on the market.
- Unrivaled user experience: Purpose-built for AI, Nscale streamlines the setup, configuration, and management of cloud-based supercomputing clusters to accelerate AI R&D.
By combining the enhanced memory bandwidth and capacity of AMD Instinct MI300X accelerators with its extensive hardware experience and the proven AMD ROCm open ecosystem, Nscale can deliver customers impressive price, performance, and efficiency for the most demanding LLM and generative AI workloads.