ActiveLab is an open-source active learning method that helps decide which new data to label, or which existing labels to re-check, in order to improve machine learning models within a limited annotation budget. It uses a weighted ensemble of model predictions and annotator reliability estimates to choose the most informative label to collect next. By accounting for multiple annotators, their agreement, and model confidence, ActiveLab outperforms other active learning methods across a range of settings, including multi-annotator and single-annotator scenarios, with or without an additional pool of unlabeled data. The method can be used to improve dataset labels, train better classifiers, and estimate annotator quality. It is available as part of the cleanlab library, and a tutorial notebook demonstrates how to use it; a rough sketch of the workflow appears below.
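The sketch below illustrates one round of this workflow on toy data: fit any classifier, obtain out-of-sample predicted probabilities for labeled and unlabeled examples, and ask ActiveLab which examples to (re)label next. It assumes the `get_active_learning_scores` and `get_majority_vote_label` functions from `cleanlab.multiannotator` in a recent cleanlab release; the data shapes, variable names, and batch size are placeholders, and the exact API may differ slightly from your installed version, so consult the tutorial notebook for the authoritative usage.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Assumed import path in recent cleanlab releases; verify against your version's docs.
from cleanlab.multiannotator import get_active_learning_scores, get_majority_vote_label

# --- Toy data standing in for your real labeled/unlabeled pools ---
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(60, 5))      # features for examples with >= 1 annotator label
X_unlabeled = rng.normal(size=(200, 5))   # features for still-unlabeled examples

# Wide format: rows = examples, columns = annotators, NaN = annotator skipped this example.
labels_multiannotator = pd.DataFrame(rng.integers(0, 2, size=(60, 3)).astype(float))
missing = rng.random((60, 3)) < 0.5
missing[:, 0] = False  # keep one annotator complete so every example has at least one label
labels_multiannotator = labels_multiannotator.mask(missing)

# Train any classifier on consensus labels and get out-of-sample predicted probabilities.
consensus_labels = get_majority_vote_label(labels_multiannotator)
model = LogisticRegression()
pred_probs = cross_val_predict(model, X_labeled, consensus_labels, method="predict_proba")
model.fit(X_labeled, consensus_labels)
pred_probs_unlabeled = model.predict_proba(X_unlabeled)

# ActiveLab scores: lower score = more informative to label (or re-label) next.
scores_labeled, scores_unlabeled = get_active_learning_scores(
    labels_multiannotator,
    pred_probs=pred_probs,
    pred_probs_unlabeled=pred_probs_unlabeled,
)

# Spend the annotation budget on the lowest-scoring examples across both pools.
batch_size = 20
all_scores = np.concatenate([scores_labeled, scores_unlabeled])
to_label_next = np.argsort(all_scores)[:batch_size]
print(to_label_next)
```

After collecting the new labels, you would append them to `labels_multiannotator` (moving newly labeled examples from the unlabeled pool into the labeled one), retrain the model, and repeat the loop until the annotation budget is exhausted.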