The Nvidia A100 GPU remains a popular choice for training and deploying large language models (LLMs) and diffusion models, even though it is no longer the most cutting-edge GPU on the market. The price of an A100 varies with its configuration, chiefly memory capacity (40GB or 80GB) and form factor (PCIe or SXM). The SXM version is more expensive than the PCIe version but delivers higher performance: socketing directly onto the board gives it a higher power budget and faster NVLink interconnect bandwidth than a PCIe card. Cloud platforms such as AWS, GCP, and OCI offer flexible pricing models for A100s, including spot and on-demand options. Serverless compute startups are also emerging as alternatives to the hyperscalers, spinning up GPUs only when needed and thereby offering a more flexible way to access and scale capacity.
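
As a rough illustration of how spot pricing can be inspected programmatically, here is a minimal sketch that queries recent spot prices for AWS's A100-backed p4d.24xlarge instance (8× A100 40GB) using boto3. The region, instance type, and output format here are example choices, not recommendations, and any prices returned will vary by zone and over time:

```python
import datetime

import boto3  # assumes AWS credentials are already configured locally

# p4d.24xlarge is AWS's 8x A100 (40GB) instance type; adjust region as needed.
ec2 = boto3.client("ec2", region_name="us-east-1")

# With StartTime set to "now", this returns the most recent spot price
# recorded for each availability zone in the region.
response = ec2.describe_spot_price_history(
    InstanceTypes=["p4d.24xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.datetime.now(datetime.timezone.utc),
)

for entry in response["SpotPriceHistory"]:
    print(f'{entry["AvailabilityZone"]}: ${entry["SpotPrice"]}/hr')
```

Comparing the figures this prints against the region's on-demand rate gives a quick sense of the spot discount, with the usual caveat that spot capacity can be reclaimed by the provider at short notice.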