The Food Discovery Demo is an open-source project that helps users explore food options through a semantic search interface, particularly when their preferences are not clearly defined. Built on a FastAPI backend, a React frontend, and a Qdrant instance, the demo uses a CLIP model to encode images and text into a shared vector space, enabling image-based search and recommendations.

The system addresses the cold start problem by initially presenting a random selection of dishes, and it also supports textual and location-based search. Users can like or dislike dishes, and this positive and negative feedback immediately updates the search results to refine subsequent recommendations.

The underlying dataset includes over 2 million images sourced from the Wolt dataset. The demo's architecture supports both local deployment via Docker and cloud deployment through Qdrant Cloud, and the project is available on GitHub for users to fork and adapt to their own use cases. The sketches below illustrate the main mechanisms.
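To make the shared-vector-space idea concrete, here is a minimal sketch of text-to-image search: a text query is encoded with CLIP and matched against stored image embeddings. The collection name `food`, the CLIP checkpoint, and the Qdrant instance at `localhost:6333` are illustrative assumptions, not details taken from the demo's code.

```python
from qdrant_client import QdrantClient
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint; the demo may use a different CLIP variant.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
client = QdrantClient(url="http://localhost:6333")

def search_dishes(query: str, limit: int = 12):
    # Encode the text query into the same vector space as the dish images.
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    text_embedding = model.get_text_features(**inputs)[0].detach().numpy()
    # Nearest-neighbour search over the stored image embeddings.
    hits = client.search(
        collection_name="food",  # hypothetical collection name
        query_vector=text_embedding.tolist(),
        limit=limit,
    )
    return [(hit.id, hit.score, hit.payload) for hit in hits]

print(search_dishes("spicy ramen"))
```

Because image and text embeddings live in the same space, the very same collection answers both text queries and "more like this dish" queries without re-indexing.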
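For the cold start, the demo initially shows random dishes. One simple way to achieve this, assuming points were upserted with sequential integer IDs (an assumption, not something the project guarantees), is to sample IDs client-side and retrieve them; newer Qdrant versions also offer server-side random sampling in the Query API.

```python
import random

from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

def random_dishes(k: int = 12):
    # Assumption: points carry sequential integer IDs 0..N-1, so sampling
    # IDs uniformly yields a uniform sample of dishes.
    total = client.count(collection_name="food").count
    ids = random.sample(range(total), k)
    return client.retrieve(collection_name="food", ids=ids, with_payload=True)
```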
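Liking and disliking dishes maps naturally onto Qdrant's recommendation API, which accepts positive and negative example point IDs and biases results toward the liked vectors and away from the disliked ones. A sketch, reusing the hypothetical `food` collection:

```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

def refine(liked: list[int], disliked: list[int], limit: int = 12):
    # Positive IDs pull results toward the liked dishes' embeddings;
    # negative IDs push results away from the disliked ones.
    return client.recommend(
        collection_name="food",
        positive=liked,
        negative=disliked,
        limit=limit,
    )

hits = refine(liked=[17, 42], disliked=[99])  # illustrative point IDs
```

Each like or dislike can simply be appended to the corresponding list and the query re-run, which is how feedback can update results immediately without retraining anything.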
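Location-based search can be expressed as a geo-radius payload filter applied on top of the vector query. The `location` payload field name and the radius below are illustrative assumptions:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

def search_nearby(query_vector: list[float], lat: float, lon: float,
                  radius_m: float = 2000.0, limit: int = 12):
    # Keep only dishes whose (assumed) "location" payload field lies
    # within radius_m metres of the user's coordinates.
    geo_filter = models.Filter(
        must=[
            models.FieldCondition(
                key="location",
                geo_radius=models.GeoRadius(
                    center=models.GeoPoint(lat=lat, lon=lon),
                    radius=radius_m,
                ),
            )
        ]
    )
    return client.search(
        collection_name="food",
        query_vector=query_vector,
        query_filter=geo_filter,
        limit=limit,
    )
```

Filtering happens server-side during the vector search, so nearby results remain ranked by semantic similarity rather than being post-filtered.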