The article explores the use of Dropout, a technique typically employed to prevent overfitting in neural networks, as a way to estimate uncertainty at prediction time, a method known as Monte Carlo (MC) Dropout. By leaving Dropout active during prediction, the model effectively samples from an ensemble of subnetworks; computing the mean and variance over many stochastic predictions then yields an uncertainty estimate. The article demonstrates the technique on the Auto MPG dataset, noting that it requires only minimal code changes. Two metrics, Prediction Interval Coverage Probability (PICP) and Mean Prediction Interval Width (MPIW), are used to evaluate the quality of the resulting interval estimates, showing that increasing the dropout rate raises both, while the number of Monte Carlo samples appears to have a negligible effect on either. MC Dropout avoids the computational cost of training multiple networks, but it has limitations: it requires multiple forward passes per data point, and it only applies to models that use Dropout, which may not suit every use case. Future parts of the series promise to explore methods that provide uncertainty estimates from a single prediction.
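The following is a minimal sketch of the procedure described above, assuming a Keras regression model; the helper names `build_model`, `mc_dropout_predict`, and `picp_mpiw`, the layer sizes, the dropout rate, and the 95% Gaussian interval (z = 1.96) are illustrative assumptions, not the article's exact code. The key step is calling the model with `training=True`, which keeps Dropout active at prediction time:

```python
import numpy as np
from tensorflow import keras


def build_model(n_features, dropout_rate=0.2):
    # The Dropout layers are what make MC sampling possible at inference.
    return keras.Sequential([
        keras.Input(shape=(n_features,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dropout(dropout_rate),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dropout(dropout_rate),
        keras.layers.Dense(1),
    ])


def mc_dropout_predict(model, x, n_samples=100):
    # training=True keeps Dropout active, so each forward pass samples a
    # different subnetwork of the implicit ensemble.
    preds = np.stack(
        [model(x, training=True).numpy().squeeze(-1) for _ in range(n_samples)]
    )
    return preds.mean(axis=0), preds.std(axis=0)


def picp_mpiw(y_true, mean, std, z=1.96):
    # PICP: fraction of true targets that fall inside the interval.
    # MPIW: average width of the interval.
    # z=1.96 assumes an approximately Gaussian predictive distribution
    # and targets a ~95% interval.
    lower, upper = mean - z * std, mean + z * std
    picp = np.mean((y_true >= lower) & (y_true <= upper))
    mpiw = np.mean(upper - lower)
    return picp, mpiw
```

A usage sketch, assuming `x_train`/`y_train` and `x_test`/`y_test` have already been prepared (e.g. from the Auto MPG dataset):

```python
model = build_model(x_train.shape[1])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=100, verbose=0)

mean, std = mc_dropout_predict(model, x_test)
picp, mpiw = picp_mpiw(y_test, mean, std)
```

Note that this is where the two limitations above show up in practice: `mc_dropout_predict` runs `n_samples` forward passes per batch, and the interval quality depends on the model actually containing Dropout layers.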