Computer vision is a challenging area of machine learning: it involves teaching computers to recognize and distinguish visual content, a task humans perform effortlessly. This article explores image embeddings, dense vector representations of images used in tasks such as classification and similarity search. Using a pre-trained convolutional neural network (CNN), specifically ResNet-18, embeddings are generated for the Kaggle Dog Breed Images dataset by extracting the network's features up to its penultimate layer. Because these embeddings represent each image in a lower-dimensional space, they are well suited to tasks such as building search engines based on image similarity. The embeddings are stored in Activeloop Deep Lake so they can be reused without recomputation, and comparing them with cosine similarity enables efficient classification and similarity search. The dataset, containing 918 images of different dog breeds, is processed with PyTorch, and the resulting embeddings are uploaded to Deep Lake for further use. This approach illustrates how embeddings can enhance search and classification tasks in computer vision applications across a range of industries.
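
To make the core idea concrete, here is a minimal sketch of the penultimate-layer approach described above, written with PyTorch and torchvision. It assumes those libraries and Pillow are installed; the image file paths are hypothetical placeholders, and the snippet illustrates the general technique (truncating ResNet-18 before its classification head and comparing embeddings with cosine similarity) rather than the article's exact code or its Deep Lake upload step.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing expected by ResNet-18.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Load a pre-trained ResNet-18 and drop the final fully connected layer,
# keeping everything up to the penultimate (global average pooling) layer.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.eval()
embedder = torch.nn.Sequential(*list(resnet.children())[:-1])

def embed(image_path: str) -> torch.Tensor:
    """Return a 512-dimensional embedding for a single image."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)   # shape (1, 3, 224, 224)
    with torch.no_grad():
        features = embedder(batch)           # shape (1, 512, 1, 1)
    return features.flatten()                # shape (512,)

# Hypothetical file names; replace with paths from the dog breed dataset.
emb_a = embed("dog_breeds/labrador_01.jpg")
emb_b = embed("dog_breeds/labrador_02.jpg")

# Cosine similarity between the two embeddings (closer to 1.0 = more similar).
similarity = F.cosine_similarity(emb_a.unsqueeze(0), emb_b.unsqueeze(0)).item()
print(f"cosine similarity: {similarity:.3f}")
```

Truncating the network before its classification head yields a 512-dimensional vector per image; these vectors can then be appended to a Deep Lake dataset for reuse, and ranking neighbors by cosine similarity gives a simple image search over the collection.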