The closed-world assumption is the foundational premise in deep learning that a network will only encounter data drawn from the same distribution it was trained on. However, this assumption rarely holds in real-world deployments, where the data a model sees often diverges from its training distribution. Out-of-distribution (OOD) detection refers to a model's ability to recognize, and handle appropriately, inputs that deviate significantly from its training distribution. OOD detection is crucial for ensuring the robustness and reliability of AI systems, especially in critical domains like medicine and home robotics. The brittleness of models on OOD data can be attributed to various factors, including model complexity, lack of regularization, dataset shift, assumptions made by traditional statistical models, high dimensionality, adversarial inputs, the absence of OOD training samples, and the objective function used during training. Researchers are exploring diverse approaches to improve OOD detection, including generative models, ensembles of multiple models, segmentation techniques, and Monte-Carlo dropout. The field is evolving rapidly, with a focus on improved generalization, integration with other areas of AI, real-time OOD detection, and ethical considerations.
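To make one of these approaches concrete, the sketch below shows how Monte-Carlo dropout can be used to score inputs by predictive uncertainty: dropout is left active at inference time, several stochastic forward passes are averaged, and inputs with high variance (or low confidence) are flagged as potentially OOD. This is a minimal illustration in PyTorch; the toy classifier, its dimensions, and the variance-based score are assumptions for the example, not a prescribed method.

```python
import torch
import torch.nn as nn

# Hypothetical small classifier; any architecture containing nn.Dropout
# layers can be used with the same procedure.
class SmallClassifier(nn.Module):
    def __init__(self, in_dim=32, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Run n_samples stochastic forward passes with dropout active and
    return the mean softmax prediction and its per-class variance."""
    model.train()  # keep dropout enabled at inference time
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    model.eval()
    return probs.mean(dim=0), probs.var(dim=0)

if __name__ == "__main__":
    model = SmallClassifier()
    x = torch.randn(4, 32)  # stand-in batch; real inputs would be image or feature tensors
    mean_probs, var_probs = mc_dropout_predict(model, x)
    # High predictive variance (or a low maximum mean probability) is
    # treated here as a signal that an input may be out-of-distribution.
    ood_score = var_probs.max(dim=-1).values
    print(ood_score)
```

In practice, the OOD score would be compared against a threshold calibrated on held-in validation data; the same pattern extends naturally to deep ensembles by averaging over independently trained models instead of dropout samples.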