Lambda is advancing the infrastructure needed for superintelligence by deploying NVIDIA GB300 NVL72 systems in its high-density, liquid-cooled data centers, building toward gigawatt-scale AI factories. Each rack pairs 72 NVIDIA Blackwell Ultra GPUs with 36 NVIDIA Grace CPUs, a configuration designed for the demanding compute requirements of trillion-parameter models and reasoning workloads. Compared with its predecessor, the GB300 NVL72 offers increased HBM3e memory and enhanced FP4 performance, improving inference efficiency and shortening training cycles. Its advanced networking, including NVIDIA Quantum-X800 InfiniBand, further reduces communication overhead during distributed training.

Lambda's platform integrates compute, storage, and orchestration, with scheduling handled by tools such as Kubernetes and Slurm and observability provided through Prometheus and Grafana. This setup positions Lambda to support ambitious AI initiatives with dedicated GPU clusters built for high-performance, scalable operation in purpose-built data centers.
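As a rough illustration of how a distributed training job might be scheduled on such a cluster with Slurm, the sketch below shows a minimal batch script. The job name, node and GPU counts, and the `train.py` / `config.yaml` filenames are all hypothetical placeholders, and the GPUs-visible-per-node count will depend on how the rack is partitioned into nodes:

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a multi-node GPU training job.
# All names and counts below are illustrative, not tied to a specific
# Lambda cluster configuration.
#SBATCH --job-name=llm-train
#SBATCH --nodes=4                 # number of nodes requested (assumption)
#SBATCH --ntasks-per-node=8       # one task per GPU (assumes 8 GPUs per node)
#SBATCH --gres=gpu:8              # request all GPUs on each node
#SBATCH --time=24:00:00           # wall-clock limit

# srun launches one task per allocated GPU; torchrun-style launchers
# integrate with Slurm in a similar way.
srun python train.py --config config.yaml
```

A Kubernetes deployment would express the same resource request declaratively (e.g. a pod spec with an `nvidia.com/gpu` resource limit) rather than through batch directives.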