
Reproduce Fast.ai/DIUx imagenet18 with a Titan RTX server

Blog post from Lambda

Post Details
Company: Lambda
Date Published:
Author: Chuan Li
Word Count: 959
Language: English
Hacker News Points: -
Summary

The post describes reproducing state-of-the-art ImageNet training performance on a single Turing-GPU server, reaching 93% Top-5 accuracy in just 2.36 hours. This was made possible by using dynamically sized images and by replacing the fully connected layer with a global pooling layer, which removed unnecessary preprocessing and let the network handle inputs of varying resolution. The team also employed progressive training across multiple image resolutions, stepping up the resolution stage by stage while adjusting the batch size and learning rate to keep training efficient. The result is a substantial reduction in training time compared to previous approaches, demonstrating the effectiveness of these techniques for fast state-of-the-art ImageNet training.
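The progressive-training idea above can be sketched as a simple schedule computation. This is a minimal illustration, not the exact recipe from the imagenet18 run: the resolutions, base batch size, and base learning rate below are assumed values, and the scaling rules (batch size inversely proportional to image area to keep per-GPU memory roughly constant, learning rate scaled linearly with batch size) are common conventions rather than the blog post's published hyperparameters.

```python
def progressive_schedule(resolutions, base_res=128, base_batch=256, base_lr=0.1):
    """Build per-phase (resolution, batch size, learning rate) settings.

    Assumptions (illustrative, not from the original post):
      - batch size scales inversely with image area, so GPU memory use
        stays roughly constant as the resolution grows;
      - learning rate follows the linear-scaling rule: lr is proportional
        to the batch size.
    """
    schedule = []
    for res in resolutions:
        area_scale = (base_res / res) ** 2          # relative image area
        batch = max(1, int(base_batch * area_scale))
        lr = base_lr * batch / base_batch           # linear LR scaling
        schedule.append({"resolution": res, "batch_size": batch, "lr": lr})
    return schedule


if __name__ == "__main__":
    # Hypothetical three-phase schedule: train small first, finish large.
    for phase in progressive_schedule([128, 224, 288]):
        print(phase)
```

Each phase trains at a fixed resolution before moving to the next; the early low-resolution phases are cheap per image, which is where most of the wall-clock savings come from.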