An audio clip of a person speaking can be uploaded and embedded with an audio embedding model; the resulting embedding is then compared against a dataset of celebrity voice embeddings using Chroma, an open-source embedding database. The Celebrity Voice project illustrates how Chroma eases the transition from a basic prototype in a Jupyter notebook to a fully deployed application. The project uses the VoxCeleb dataset, which contains 145,265 short spoken utterances from 1,251 speakers, stored as WAV files. The initial prototype required only a few lines of code in a Jupyter notebook, demonstrating how simply a voice comparison application can be put together.
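The following is a minimal sketch of what such a notebook prototype could look like. It assumes a hypothetical `embed_audio()` helper standing in for whichever audio embedding model the project uses (here it just returns a dummy vector so the sketch runs end to end), and the file paths and speaker labels are illustrative; the Chroma calls follow its standard Python client API.

```python
import random
import chromadb


def embed_audio(wav_path: str) -> list[float]:
    """Placeholder for the audio embedding model (an assumption, not the
    project's actual model). Returns a dummy fixed-length vector derived
    from the path so the example runs without any model installed."""
    rng = random.Random(wav_path)
    return [rng.random() for _ in range(256)]


# Local, in-memory Chroma client and a collection for the celebrity voices.
client = chromadb.Client()
collection = client.get_or_create_collection(name="celebrity_voices")

# Index a few VoxCeleb-style utterances (paths and labels are illustrative).
voxceleb_utterances = [
    ("id10001/utt001.wav", "Speaker_10001"),
    ("id10002/utt001.wav", "Speaker_10002"),
    ("id10003/utt001.wav", "Speaker_10003"),
]
for wav_path, speaker in voxceleb_utterances:
    collection.add(
        ids=[wav_path],
        embeddings=[embed_audio(wav_path)],
        metadatas=[{"speaker": speaker}],
    )

# Embed the uploaded clip and retrieve the closest celebrity voices.
results = collection.query(
    query_embeddings=[embed_audio("uploaded_clip.wav")],
    n_results=3,
)
print(results["metadatas"][0])
```

In a real prototype, `embed_audio()` would be replaced by the chosen speaker-embedding model, and the full set of WAV files would be embedded and added to the collection in batches; the query step stays essentially the same when the notebook code is later moved into a deployed application.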