Musk’s xAI has completed the world’s first 100,000-GPU cluster computing center.
This ultra-large computing center comprises 100,000 Nvidia H100 GPUs and uses liquid cooling.
The entire installation took only 122 days, a remarkable feat: it would take other companies several years to build a computing center half that size.
xAI is already expanding the cluster and plans to add another 50,000 Nvidia H100 and 50,000 H200 GPUs.
This large-scale computing center will be used to train the Grok 3 large model, whose capabilities may surpass those of the yet-to-be-released GPT-5.