FI Consulting, a D.C.-based team known for complex data solutions, has built a Retrieval-Augmented Generation (RAG) pipeline that combines Memgraph and DeepSeek to deliver precise, contextually grounded answers from large sets of internal documentation. The approach addresses the limitations of general-purpose large language models (LLMs) by using a graph to structure the data, inject domain-specific knowledge, and filter out noise before it ever reaches the model. Long documents are broken down with a chunking strategy, and Named Entity Recognition (NER) links each chunk into a rich contextual graph, so the LLM answers from curated, relevant context instead of hallucinating.

Deployed on Azure with Memgraph running in a Docker container, the system proves cost-effective and scalable without extensive GPU resources, while maintaining high performance and low latency. The broader lesson is that graphs, applied to structuring, retrieval, and domain-knowledge integration, sharpen an LLM's ability to generate accurate responses.
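To make the chunk-NER-graph-retrieve-generate flow concrete, here is a minimal sketch of such a pipeline. It assumes spaCy for NER, the neo4j Python driver speaking Bolt to a local Memgraph instance, and DeepSeek's OpenAI-compatible chat API; the node labels, relationship types, and chunking parameters are illustrative stand-ins, not FI Consulting's actual schema.

```python
import spacy
from neo4j import GraphDatabase
from openai import OpenAI

nlp = spacy.load("en_core_web_sm")                       # NER model (assumed choice)
graph = GraphDatabase.driver("bolt://localhost:7687")    # Memgraph speaks the Bolt protocol
llm = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

def ingest(doc_id: str, text: str, chunk_size: int = 1000) -> None:
    """Split a document into fixed-size chunks and connect each chunk
    to the named entities spaCy finds in it."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    with graph.session() as session:
        for idx, chunk in enumerate(chunks):
            session.run(
                "MERGE (c:Chunk {doc_id: $doc_id, idx: $idx}) SET c.text = $text",
                doc_id=doc_id, idx=idx, text=chunk,
            )
            for ent in nlp(chunk).ents:
                session.run(
                    "MERGE (e:Entity {name: $name, label: $label}) "
                    "WITH e MATCH (c:Chunk {doc_id: $doc_id, idx: $idx}) "
                    "MERGE (c)-[:MENTIONS]->(e)",
                    name=ent.text, label=ent.label_, doc_id=doc_id, idx=idx,
                )

def answer(question: str) -> str:
    """Retrieve chunks whose entities overlap the question's entities,
    then hand only that filtered context to DeepSeek."""
    names = [ent.text for ent in nlp(question).ents]
    with graph.session() as session:
        rows = session.run(
            "MATCH (c:Chunk)-[:MENTIONS]->(e:Entity) "
            "WHERE e.name IN $names "
            "RETURN DISTINCT c.text AS text LIMIT 5",
            names=names,
        )
        context = "\n---\n".join(row["text"] for row in rows)
    reply = llm.chat.completions.create(
        model="deepseek-chat",
        messages=[
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content
```

The key design point this sketch illustrates is the filtering step: the graph traversal narrows the corpus to a handful of entity-linked chunks, so the prompt carries domain-specific context rather than the whole document set, which is what keeps the answers grounded and the noise down.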