The evaluation workflow for comparing open-source embedding models uses Ollama and pgai Vectorizer to automate embedding generation and management. The process involves creating a vectorizer for each model, generating several types of test questions, and evaluating each model's ability to retrieve the correct parent text chunks via vector similarity search. The study found that `bge-m3` achieved the highest overall retrieval accuracy at 72%, significantly outperforming the other models. However, the best choice of embedding model depends on factors such as query type, model size, and available compute and storage. Higher-dimensional embeddings tend to improve retrieval accuracy, but at the cost of slower search and larger storage footprints. The study highlights the importance of balancing these trade-offs when selecting an open-source embedding model for RAG applications.
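The core of the evaluation can be sketched as a small scoring loop: for each generated question, rank all chunk embeddings by cosine similarity and check whether the question's known parent chunk lands in the top-k results. The helper below is a minimal illustration of that metric; the function name, the toy two-dimensional vectors, and the in-memory ranking are assumptions for demonstration — the actual study performs this ranking inside PostgreSQL with pgvector.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieval_accuracy(question_embeddings, chunks, expected_ids, k=1):
    """Fraction of questions whose expected parent chunk appears in the
    top-k chunks ranked by cosine similarity.

    question_embeddings: list of query vectors
    chunks: list of (chunk_id, embedding) pairs
    expected_ids: the correct parent chunk id for each question
    """
    hits = 0
    for q_emb, expected in zip(question_embeddings, expected_ids):
        ranked = sorted(
            chunks,
            key=lambda item: cosine_similarity(q_emb, item[1]),
            reverse=True,  # highest similarity first
        )
        top_ids = [chunk_id for chunk_id, _ in ranked[:k]]
        if expected in top_ids:
            hits += 1
    return hits / len(expected_ids)

# Toy example (hypothetical 2-D embeddings, not real model output):
chunks = [("c1", [1.0, 0.0]), ("c2", [0.0, 1.0])]
questions = [[0.9, 0.1], [0.2, 0.8]]
expected = ["c1", "c2"]
accuracy = retrieval_accuracy(questions, chunks, expected, k=1)  # 1.0 here
```

In the real workflow the per-model accuracy produced this way (e.g. the 72% figure for `bge-m3`) is what the models are compared on; only the ranking backend differs.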