
Best GPU for Deep Learning in 2022 (so far)

Blog post from Lambda

Post Details
Company: Lambda
Date Published: -
Author: Chuan Li
Word Count: 2,082
Language: English
Hacker News Points: -
Summary

Ampere GPUs deliver higher throughput and better throughput-per-dollar than pre-Ampere (Turing/Volta) generation GPUs, with especially large gains for language models. That said, the lower-end cards within the Ampere family can be the more cost-effective choice under budget constraints. Scalability tests showed that some Ampere GPUs scale well in multi-GPU training jobs, while others, notably GeForce cards, hit significant bottlenecks. The recommended GPU for deep learning depends on specific needs, such as multi-node distributed training and model size: the A100 80GB SXM4 is the top choice for large models, and the A6000 for mainstream research.
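The throughput-per-dollar comparison the post relies on reduces to simple arithmetic; a minimal sketch below shows one way to rank candidate GPUs by it. The GPU names and figures here are illustrative placeholders, not benchmark results or prices from the post:

```python
# Rank GPUs by training throughput per dollar.
# All entries are made-up placeholders, not Lambda's measurements.
gpus = {
    "gpu_a": {"throughput": 500.0, "price": 10000.0},  # samples/sec, USD
    "gpu_b": {"throughput": 200.0, "price": 1500.0},
}

def perf_per_dollar(spec):
    """Throughput (samples/sec) divided by purchase price (USD)."""
    return spec["throughput"] / spec["price"]

# Highest perf-per-dollar first.
ranked = sorted(gpus, key=lambda name: perf_per_dollar(gpus[name]), reverse=True)
```

Under these placeholder numbers the cheaper card wins on perf-per-dollar (200/1500 ≈ 0.13 vs. 500/10000 = 0.05), which mirrors the post's point that lower-end Ampere cards can be the more cost-effective pick despite lower absolute throughput.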