Explainability in machine learning (ML) refers to the ability to understand and articulate how a model maps inputs to outputs, addressing the "black box" nature of many AI systems. The concept is central to Explainable AI (XAI), a family of methods that help human experts comprehend AI decisions, improving transparency, accountability, and trust, especially in high-risk domains such as healthcare and finance.

Explanations can be global, describing a model's overall behavior, or local, accounting for a single prediction on a single instance. Models such as linear regression and decision trees are inherently explainable because their structure is transparent, while complex models such as neural networks require post-hoc techniques like LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), and partial dependence plots (PDPs) to achieve interpretability.

Explainability matters for accountability, compliance with regulations such as the GDPR, debugging and improving model performance, and maintaining control over deployed systems. Toolkits such as AI Explainability 360, Skater, and InterpretML provide frameworks and libraries for integrating explainability into ML workflows, ultimately fostering better governance and shared understanding among stakeholders.
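To make the local/global distinction concrete, the following is a minimal sketch using the open-source shap package with a scikit-learn random forest. The dataset and model are illustrative choices, not prescriptive ones: any tabular model that shap supports would work similarly.

```python
# Sketch: local and global explanations with SHAP (assumes shap and
# scikit-learn are installed; dataset/model choices are illustrative).
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque model on a standard tabular regression dataset.
data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values: each value is one feature's additive
# contribution to one prediction, relative to the model's expected output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Local explanation: per-feature contributions to a single prediction.
print(dict(zip(data.feature_names, shap_values[0].round(3))))

# Global explanation: mean |SHAP| per feature ranks overall importance.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, importance),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

The same SHAP values serve both views: a single row explains one prediction (local), while averaging their magnitudes over the dataset summarizes the model as a whole (global).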
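Partial dependence plots, by contrast, are an inherently global technique. A brief sketch using scikit-learn's built-in inspection utilities follows; it assumes matplotlib is available and reuses the same illustrative dataset and model as above.

```python
# Sketch: a global explanation via partial dependence, using scikit-learn's
# inspection module (assumes matplotlib; feature choices are illustrative).
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(
    data.data, data.target
)

# Plot how the average prediction changes as "bmi" and "bp" are varied,
# marginalizing over the remaining features: a per-feature global view.
bmi = data.feature_names.index("bmi")
bp = data.feature_names.index("bp")
PartialDependenceDisplay.from_estimator(
    model, data.data, features=[bmi, bp], feature_names=data.feature_names
)
plt.show()
```

Each curve shows the model's average output as one feature sweeps its range, which is useful for spotting monotonic effects, thresholds, or saturation that a single importance score would hide.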