Improving retrievers by fine-tuning embedding models and rerankers is explored, using the sentence-transformers Python library for model training. The analysis asks whether fine-tuning should always be applied: the findings suggest it helps on domain-specific datasets but can lead to overfitting and unstable results on general datasets such as SQuAD. The experiments show that while fine-tuning can improve model performance, particularly with larger amounts of domain-specific data, it is not universally advantageous. Augmentation and synthetic data generation are discussed as ways to improve training datasets, though they cannot compensate for poor base data. Combining a fine-tuned embedding model with a reranker yields better retrieval results than either alone, underscoring the value of integrated approaches. Finally, LanceDB's embedding API is highlighted for its straightforward integration with popular embedding model providers, making it easy to use both pre-trained and custom fine-tuned embeddings in database queries.
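
As a rough illustration of the sentence-transformers training workflow mentioned above, the following is a minimal fine-tuning sketch. The base model name, the toy (query, passage) pairs, and the hyperparameters are illustrative assumptions, not the actual experimental setup:

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Load a pre-trained base model to fine-tune (assumed choice for this sketch).
model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy in-domain (query, relevant passage) pairs; real fine-tuning would use
# thousands of domain-specific examples.
train_examples = [
    InputExample(texts=["What is the refund window?",
                        "Refunds are accepted within 30 days of purchase."]),
    InputExample(texts=["How do I reset my password?",
                        "Use the 'Forgot password' link on the login page."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# MultipleNegativesRankingLoss treats the other passages in a batch as
# negatives, a common choice for retrieval fine-tuning.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
)
model.save("finetuned-minilm")
```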
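
The combination of a fine-tuned embedding model with a reranker can be sketched as a two-stage pipeline: a bi-encoder retrieves candidates cheaply, then a cross-encoder re-scores them. The model names and documents below are placeholders, not the ones used in the experiments:

```python
from sentence_transformers import CrossEncoder, SentenceTransformer, util

# Stage 1: bi-encoder retrieval (in practice, the fine-tuned model from above).
retriever = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "Refunds are accepted within 30 days of purchase.",
    "Use the 'Forgot password' link on the login page.",
    "Shipping takes 3-5 business days.",
]
query = "What is the refund window?"
doc_emb = retriever.encode(docs, convert_to_tensor=True)
query_emb = retriever.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, doc_emb, top_k=3)[0]
candidates = [docs[h["corpus_id"]] for h in hits]

# Stage 2: cross-encoder reranking of the retrieved candidates.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, c) for c in candidates])
reranked = [c for _, c in sorted(zip(scores, candidates),
                                 key=lambda p: p[0], reverse=True)]
print(reranked[0])
```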
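
LanceDB's embedding API registers an embedding function on the table schema so that both ingested text and query strings are embedded automatically. A minimal sketch follows, assuming the sentence-transformers provider and the hypothetical "finetuned-minilm" model saved earlier:

```python
import lancedb
from lancedb.embeddings import get_registry
from lancedb.pydantic import LanceModel, Vector

# Pull the sentence-transformers provider from LanceDB's embedding registry;
# "finetuned-minilm" is the hypothetical fine-tuned model from the sketch above.
embedder = get_registry().get("sentence-transformers").create(name="finetuned-minilm")

class Doc(LanceModel):
    # Text in this field is embedded automatically on ingest.
    text: str = embedder.SourceField()
    # The vector column is filled in by the registered model.
    vector: Vector(embedder.ndims()) = embedder.VectorField()

db = lancedb.connect("/tmp/lancedb")
table = db.create_table("docs", schema=Doc)
table.add([{"text": "Refunds are accepted within 30 days of purchase."}])

# Text queries are embedded with the same model before the vector search runs.
results = table.search("What is the refund window?").limit(3).to_list()
print(results[0]["text"])
```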