A research project focused on improving automated chest X-ray interpretation used the Stanford CheXpert dataset to develop an open-source X-ray classification model that achieved a state-of-the-art accuracy of 0.93, surpassing previous models. The study explored the impact of view-specific model training and compared several model architectures, including DenseNet, VGG networks, and support vector machines (SVMs), finding that DenseNet121 outperformed the VGG models, an advantage attributed to its dense feature propagation and its capacity to learn non-linear decision boundaries. Attempts to improve performance through view-specific models proved underwhelming, likely because X-ray scans appear similar across different views and because certain views, particularly PA and lateral, had insufficient training data. The research also highlighted the potential of decision trees for improving both interpretability and accuracy, and suggested that the composition of the training data influenced model performance. Future efforts aim to expand the dataset's diversity and to perform uncertainty analysis to improve prediction reliability.
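The summary does not include the study's code, but a minimal sketch can illustrate how a DenseNet121-based multi-label classifier for CheXpert-style labels is commonly set up in PyTorch. All names, the label count, and the training details below are illustrative assumptions, not the study's actual implementation:

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

# CheXpert annotates 14 observations; adjust if a different label subset is used.
NUM_LABELS = 14


class ChestXrayClassifier(nn.Module):
    """DenseNet121 backbone with a multi-label head for chest X-ray findings."""

    def __init__(self, num_labels: int = NUM_LABELS):
        super().__init__()
        # Dense connectivity reuses features from all earlier layers, the
        # feature-propagation property credited for DenseNet121's advantage.
        self.backbone = densenet121(weights=None)
        in_features = self.backbone.classifier.in_features
        # Swap the ImageNet head for a multi-label output layer.
        self.backbone.classifier = nn.Linear(in_features, num_labels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns raw logits; apply a sigmoid per label at inference time.
        return self.backbone(x)


if __name__ == "__main__":
    model = ChestXrayClassifier()
    # Grayscale X-rays are typically replicated to 3 channels so that
    # ImageNet-style backbones accept them unchanged.
    dummy = torch.randn(2, 3, 224, 224)
    logits = model(dummy)
    # Multi-label training uses an independent binary loss per observation.
    loss = nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))
    print(logits.shape, loss.item())
```

A per-label sigmoid with `BCEWithLogitsLoss` (rather than a softmax) reflects that chest X-ray findings are not mutually exclusive: a single scan can exhibit several observations at once.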