Binarized Neural Networks (BNNs) constrain both weights and activations to +1 or -1, drastically reducing memory usage and improving power efficiency, which makes them particularly attractive for low-power devices. Introduced in a 2016 paper by Courbariaux et al., BNNs can exploit binary matrix multiplication kernels to speed up training and inference, while achieving near state-of-the-art results on datasets such as MNIST, CIFAR-10, and SVHN. Although the forward pass relies on binarized values, real-valued (latent) weights are retained during optimization so that small gradient updates can accumulate; and because the sign function has zero gradient almost everywhere, gradients are propagated with the saturated Straight-Through Estimator (STE), which passes the gradient through unchanged wherever the input lies in [-1, 1] and cancels it where the sign function saturates. Shift-based variants of Batch Normalization and the AdaMax optimizer replace most multiplications with bit shifts, further speeding up training without compromising accuracy. Libraries such as Larq for TensorFlow/Keras provide user-friendly tools for building and training BNNs, making it practical to deploy them on resource-constrained platforms such as Android devices.
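
To make the saturated STE concrete, here is a minimal TensorFlow sketch (not from the source; `ste_sign` is a hypothetical helper name): the forward pass binarizes to +1/-1, and the backward pass lets the gradient through where |x| <= 1 while zeroing it where the sign function has saturated.

```python
import tensorflow as tf

@tf.custom_gradient
def ste_sign(x):
    """Forward: binarize x to +1/-1. Backward: saturated STE."""
    def grad(dy):
        # Pass the upstream gradient through unchanged where |x| <= 1;
        # zero it where sign(x) has saturated.
        return dy * tf.cast(tf.abs(x) <= 1.0, dy.dtype)
    # Map x >= 0 to +1 and x < 0 to -1 (plain tf.sign would map 0 to 0,
    # whereas BNNs use only the two values +1 and -1).
    return tf.where(x >= 0, tf.ones_like(x), -tf.ones_like(x)), grad

# Quick check of both passes:
x = tf.constant([-2.0, -0.3, 0.0, 1.5])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = ste_sign(x)
print(y.numpy())                     # [-1. -1.  1.  1.]
print(tape.gradient(y, x).numpy())   # [ 0.  1.  1.  0.]
```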
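
Since Larq is mentioned above, the following sketch shows a small binarized MNIST-style classifier built with Larq's quantized Keras layers, using its `"ste_sign"` quantizer and `"weight_clip"` constraint; the layer widths and training settings here are illustrative choices, not taken from the source.

```python
import tensorflow as tf
import larq as lq

# Shared quantization settings: binarize inputs and kernels with the
# straight-through sign estimator, and clip latent weights to [-1, 1].
kwargs = dict(input_quantizer="ste_sign",
              kernel_quantizer="ste_sign",
              kernel_constraint="weight_clip")

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    # The first layer keeps real-valued inputs, as is common for BNNs.
    lq.layers.QuantDense(256, kernel_quantizer="ste_sign",
                         kernel_constraint="weight_clip"),
    tf.keras.layers.BatchNormalization(scale=False),
    lq.layers.QuantDense(256, **kwargs),
    tf.keras.layers.BatchNormalization(scale=False),
    lq.layers.QuantDense(10, **kwargs),
    tf.keras.layers.Activation("softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Calling `lq.models.summary(model)` prints a layer-by-layer breakdown, including the memory footprint once binarized weights are stored as single bits.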