Triplet Loss - Advanced Intro
Blog post from Qdrant
Triplet Loss, introduced in the FaceNet paper in 2015, is a popular loss function for supervised similarity or metric learning. Given an anchor, a positive sample from the same class, and a negative sample from a different class, it requires the distance from the anchor to the negative to exceed the distance from the anchor to the positive by at least a margin. Because it only constrains relative distances, it preserves intra-class variance and avoids collapsing all samples of a class onto a single point in vector space, unlike Contrastive Loss.

To form useful triplets efficiently, Triplet Loss is typically combined with online triplet mining, which builds triplets dynamically from each training batch instead of precomputing them offline. The approach computes a pairwise distance matrix for the batch and applies broadcasting to obtain loss values for all possible triplets, then filters out invalid or easy ones so training focuses on the most informative triplets.

The article walks through an implementation of Triplet Loss in PyTorch and discusses the batch-all strategy for online triplet mining, setting the stage for more advanced strategies such as batch-hard and batch-semihard mining in future posts.
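To make the distance-matrix and broadcasting idea concrete, here is a minimal PyTorch sketch of batch-all online triplet mining. The per-triplet loss is max(d(a, p) - d(a, n) + margin, 0). The function name `batch_all_triplet_loss` and the use of Euclidean distances via `torch.cdist` are assumptions for illustration; the article's own implementation may differ in details.

```python
import torch

def batch_all_triplet_loss(embeddings: torch.Tensor,
                           labels: torch.Tensor,
                           margin: float = 1.0) -> torch.Tensor:
    """Batch-all mining sketch (assumed helper, not the article's exact code):
    compute the loss for every valid (anchor, positive, negative) triplet in
    the batch and average over the triplets with a non-zero loss."""
    # Pairwise Euclidean distance matrix, shape (B, B)
    dist = torch.cdist(embeddings, embeddings, p=2)

    # Broadcast to a (B, B, B) tensor of raw triplet losses:
    # entry [a, p, n] = d(a, p) - d(a, n) + margin
    anchor_positive = dist.unsqueeze(2)   # (B, B, 1)
    anchor_negative = dist.unsqueeze(1)   # (B, 1, B)
    triplet_loss = anchor_positive - anchor_negative + margin

    # Mask of valid triplets: p shares the anchor's label (and is not the
    # anchor itself), n has a different label
    same_label = labels.unsqueeze(0) == labels.unsqueeze(1)              # (B, B)
    not_self = ~torch.eye(labels.size(0), dtype=torch.bool,
                          device=labels.device)
    valid_ap = (same_label & not_self).unsqueeze(2)                      # (B, B, 1)
    valid_an = (~same_label).unsqueeze(1)                                # (B, 1, B)
    mask = (valid_ap & valid_an).float()                                 # (B, B, B)

    # Zero out invalid triplets, then clamp easy (already satisfied) ones to 0
    triplet_loss = torch.clamp(triplet_loss * mask, min=0.0)

    # Average only over the valid triplets that still contribute a loss
    num_positive = (triplet_loss > 1e-16).float().sum()
    return triplet_loss.sum() / (num_positive + 1e-16)


# Example usage on a random batch of 32 embeddings from 8 classes
embeddings = torch.randn(32, 128)
labels = torch.randint(0, 8, (32,))
loss = batch_all_triplet_loss(embeddings, labels, margin=0.5)
```

Averaging over only the non-zero triplets, rather than all valid ones, is what distinguishes the batch-all strategy: easy triplets that already satisfy the margin contribute nothing and would otherwise dilute the gradient signal.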