
Three ways to speed up XGBoost model training

Blog post from Anyscale

Post Details

Company: Anyscale
Date Published: -
Authors: Antoni Baum, Chandler Gibbons
Word Count: 1,609
Language: English
Hacker News Points: -
Summary

XGBoost is a popular open-source implementation of the gradient boosting algorithm, known for its efficiency and predictive performance. The post covers three ways to speed up XGBoost model training: changing the `tree_method` hyperparameter (for example, to the GPU-accelerated `gpu_hist`), training in the cloud on services like AWS or Google Cloud, and distributed training with XGBoost-Ray on a Ray cluster, which spreads the work across multiple nodes and GPUs. The authors find the last approach the most effective at reducing training time, thanks to multi-node training, full CPU and GPU support, and a configurable `RayParams` object for controlling the distributed setup.
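To make the first and third approaches concrete, here is a minimal sketch (not code from the post) of the `tree_method` change, assuming XGBoost 1.x, where GPU training is selected with `tree_method="gpu_hist"` (newer XGBoost releases use `tree_method="hist"` together with `device="cuda"` instead); the toy data stands in for a real training set:

```python
import numpy as np
import xgboost as xgb

# Toy data standing in for a real training set.
X = np.random.rand(1000, 20)
y = np.random.randint(2, size=1000)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "binary:logistic",
    "tree_method": "gpu_hist",  # GPU-accelerated histogram method (XGBoost 1.x)
}
booster = xgb.train(params, dtrain, num_boost_round=100)
```

And a sketch of the XGBoost-Ray approach, using the library's drop-in `train()` API; the actor and CPU counts are illustrative, not figures from the post:

```python
import numpy as np
from xgboost_ray import RayDMatrix, RayParams, train

# Toy in-memory data; RayDMatrix also accepts file paths (e.g. CSV or
# Parquet), which it shards across the Ray actors.
X = np.random.rand(10_000, 20)
y = np.random.randint(2, size=10_000)
train_set = RayDMatrix(X, y)

# RayParams controls the distributed layout: here, 4 actors with 8 CPUs
# each. Setting gpus_per_actor=1 would add one GPU per actor.
booster = train(
    {"objective": "binary:logistic", "tree_method": "hist"},
    train_set,
    num_boost_round=100,
    ray_params=RayParams(num_actors=4, cpus_per_actor=8),
)
```

Because `xgboost_ray.train()` mirrors `xgboost.train()`, scaling out is mostly a matter of swapping the imports and passing a `RayParams` object.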