du + NextGenAI Bring B300 GPUs and Liquid Cooling to Dubai

At GITEX 2025, du and NextGenAI unveiled a 13+MW, liquid-cooled AI supercluster using NVIDIA B300 GPUs at du’s DSO data centre. Here’s what it means for UAE enterprises, from performance and cooling to access and use cases.

By Abbas Jaffar Ali
TL;DR
  • du and NextGenAI launched a live, 13+MW liquid-cooled AI supercluster in Dubai using NVIDIA B300 GPUs.
  • It more than doubles du’s 2024 5+MW GPU deployment.
  • Direct-to-chip liquid cooling boosts efficiency and performance density.

du and NextGenAI just flipped the switch on what they call the Middle East’s most advanced AI supercluster: 13+ megawatts of liquid-cooled NVIDIA B300 GPUs at du’s Dubai Silicon Oasis (DSO) data centre, announced at GITEX Global 2025. It’s live, not a concept, and it more than doubles du’s 2024 GPU footprint. For UAE firms chasing faster model training or high-density inference, this is a serious bump in local compute — without the long wait times or data-sovereignty headaches.

What’s been announced

A quick summary of the core specs and the context.

  • 13+MW operational AI facility at du’s DSO data centre
  • NVIDIA B300 GPUs with direct-to-chip liquid cooling
  • Live access for customers across AI and HPC
  • Announced 17 October 2025 at GITEX Global

The press note says the new build “more than doubles” du’s 2024 5+MW GPU deployment, bringing total operational capacity past 13MW. It uses NVIDIA’s B300 architecture and direct-to-chip liquid cooling at DSO, and it’s available now for enterprise use. The reveal took place at GITEX Global 2025 in Dubai.
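As a quick sanity check, the arithmetic behind "more than doubles" holds up (a minimal sketch; the MW figures are the announced ones, treated as exact):

```python
# Back-of-envelope check of the capacity claim. Figures are the announced
# ones (5+MW in 2024, 13+MW now); this is not an official du calculation.
capacity_2024_mw = 5.0   # du's 2024 GPU deployment
capacity_2025_mw = 13.0  # new operational capacity at DSO

growth = capacity_2025_mw / capacity_2024_mw
print(f"Capacity growth: {growth:.1f}x")  # 2.6x, i.e. "more than doubles"
```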

Why the liquid cooling matters

Liquid cooling isn’t just for show. It’s about density, stability, and opex.

  • Direct-to-chip liquid cooling improves energy efficiency
  • Higher thermal headroom lets you pack more GPUs per rack
  • Better stability at sustained loads for training and inference

According to du, the direct-to-chip system “significantly enhances energy efficiency while enabling optimal performance” of the B300 GPUs. In plain terms: more performance per square metre and fewer throttling headaches as clusters run flat out. That feeds into lower power per unit of work and steadier time-to-train when you scale out.
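To see why cooling efficiency matters at this scale, it helps to think in terms of PUE (power usage effectiveness: total facility power divided by IT load). The sketch below uses hypothetical, industry-typical PUE values — du and NextGenAI have not published these numbers — to show how much overhead power a more efficient cooling system can save at a 13MW IT load:

```python
# Illustrative only: how cooling efficiency (PUE) changes facility overhead
# at a given IT load. The PUE values below are hypothetical industry-typical
# figures, not numbers published by du or NextGenAI.
def overhead_mw(it_load_mw: float, pue: float) -> float:
    """Non-IT power (cooling, distribution) = total - IT = IT * (PUE - 1)."""
    return it_load_mw * (pue - 1.0)

it_load = 13.0  # announced 13+MW of GPU capacity

air_cooled = overhead_mw(it_load, pue=1.5)     # assumed air-cooled PUE
liquid_cooled = overhead_mw(it_load, pue=1.2)  # assumed direct-to-chip PUE
print(f"Air-cooled overhead:    {air_cooled:.1f} MW")
print(f"Liquid-cooled overhead: {liquid_cooled:.1f} MW")
```

Under these assumed values, direct-to-chip cooling would cut several megawatts of non-compute power at full load — the "lower power per unit of work" the operators are pointing at.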

Who it’s for in the UAE

This isn’t only for labs. The target is any team hitting compute limits.

  • Enterprises building or fine-tuning large models
  • Banks, healthcare, and public sector needing data to stay in-country
  • Media and retail teams scaling recommendation or vision models
  • Universities and R&D pushing HPC and AI workloads

du frames the cluster as “unparalleled computing power” for AI and HPC customers in the region, with access to resources “previously unavailable in the Middle East.” That translates to faster iteration cycles, less queue time, and fewer compromises on model size — crucial for teams doing retrieval-augmented generation, multi-modal vision, or complex time-series forecasting.

Access and rollout

The key promise here is immediacy: it’s not a paper launch.

  • The GPU cluster is fully functional and accessible today
  • Part of du’s broader data centre and colocation strategy
  • Built to support enterprise-scale AI, research and commercial apps

du says customers can access a “fully functional, live GPU cluster” now. The move aligns with du’s push on colocation and high-density infrastructure, with the 13+MW capacity positioned for big AI and HPC jobs, research initiatives, and production deployments. Expect service-wrapper details (tenancy, quotas, networking) to shape actual user experience and cost, but the hardware is online.

What executives said (and what it signals)

The quotes spell out intent: speed, scale, and accessibility.

  • du CEO highlights meeting current demand and shaping future needs
  • NextGenAI EVP focuses on removing infrastructure barriers
  • Both point to rapid scaling for enterprise AI projects

du’s CEO says deploying “the most advanced GPU cluster technology available today” will help customers accelerate timelines and competitiveness. NextGenAI emphasises that the joint setup removes “traditional barriers” like limited capacity or slow deployment, opening the door to faster pilots and rollouts. In practice, that should mean shorter procurement cycles and easier onboarding for UAE-based teams.


FAQs

What exactly is the capacity?

A 13+MW operational facility using NVIDIA B300 GPUs with direct-to-chip liquid cooling, hosted at du’s DSO data centre.

Is it live or still “coming soon”?

It’s live. du and NextGenAI are presenting a fully functional cluster that customers can access today.

What changed since du’s 2024 setup?

The new build more than doubles the earlier 5+MW GPU deployment from 2024.

Abbas has been covering tech for more than two decades, before phones became smart or clouds stored data. He brought publications like CNET, TechRadar and IGN to the Middle East. From computers to mobile phones and watches, Abbas is always interested in tech that is smarter and smaller.