Contrastive learning is a powerful method for extracting meaningful representations from unlabeled data by leveraging similarities and dissimilarities: similar instances are mapped close together in latent space while dissimilar ones are pushed apart. The approach applies across diverse domains, including computer vision, natural language processing (NLP), and reinforcement learning. It encompasses both supervised and self-supervised variants: supervised contrastive learning (SCL) uses labeled data to decide which instances should be treated as similar or dissimilar, while self-supervised contrastive learning (SSCL) relies on pretext tasks to derive training signal from unlabeled data.

The essential components of the technique are data augmentation, an encoder network, and a projection network, which together capture the relevant features and similarities. Learning is guided by loss functions such as contrastive loss, triplet loss, and InfoNCE loss, which maximize agreement between positive pairs and minimize it between negative pairs (a minimal sketch of the InfoNCE loss is given below).

Prominent frameworks such as SimCLR, MoCo, BYOL, SwAV, and Barlow Twins have advanced the field with innovative methodologies that improve model performance and generalization across tasks, demonstrating effectiveness in semi-supervised and supervised learning scenarios as well as in NLP and data augmentation.
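To make the loss concrete, here is a minimal sketch of the InfoNCE (NT-Xent) objective in the two-view, in-batch-negatives setting used by SimCLR-style methods. It is an illustrative implementation under those assumptions, not the code of any specific framework; the names `info_nce_loss`, `z_i`, `z_j`, and `temperature` are hypothetical.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_i, z_j, temperature=0.5):
    """Illustrative InfoNCE / NT-Xent sketch (assumed two-view, in-batch negatives).

    z_i, z_j: (N, d) projections of two augmented views of the same N samples.
    Returns the mean cross-entropy over all 2N views, where each view's
    positive is the other view of the same sample and all remaining views
    in the batch serve as negatives.
    """
    n = z_i.size(0)
    # L2-normalize so the dot product is cosine similarity.
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)        # (2N, d)
    sim = torch.matmul(z, z.t()) / temperature                   # (2N, 2N) similarities
    # Exclude self-similarity from each row's candidates.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))
    # Row k's positive is the other augmented view of the same sample.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In practice, `z_i` and `z_j` would be the outputs of the projection network applied to two augmentations of the same batch; lowering `temperature` sharpens the similarity distribution and typically makes the loss penalize hard negatives more strongly.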