RAG (Retrieval-Augmented Generation) consistently outperformed fine-tuning on knowledge-intensive tasks, demonstrating a superior ability to integrate external information. Fine-tuning improved performance over the base model but remained less competitive than RAG; data augmentation nevertheless proved beneficial for fine-tuning, since exposing the model to multiple phrasings of the same fact during training enhanced knowledge retention. RAG's advantage was attributed to its contextual relevance and reduced hallucination rate, making it the more reliable choice for integrating external knowledge. Vector databases such as Milvus enabled efficient storage and retrieval of high-dimensional embeddings, further improving factual accuracy while keeping retrieval latency low. Future research directions include exploring hybrid knowledge integration methods that combine RAG with fine-tuning, and developing new evaluation frameworks to better assess knowledge retention in Large Language Models (LLMs).
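The retrieval step that underpins RAG can be sketched with a toy in-memory store. This is an illustrative stand-in, not the Milvus API (in practice one would use a client library such as pymilvus against a real Milvus instance); the class name, embeddings, and passages below are all hypothetical:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ToyVectorStore:
    """Minimal stand-in for a vector database such as Milvus:
    stores (embedding, passage) pairs and returns the passages
    whose embeddings are most similar to a query embedding."""

    def __init__(self):
        self.items = []  # list of (embedding, passage) tuples

    def insert(self, embedding, passage):
        self.items.append((embedding, passage))

    def search(self, query, k=2):
        # Rank stored passages by cosine similarity to the query.
        ranked = sorted(self.items, key=lambda it: cosine(query, it[0]),
                        reverse=True)
        return [passage for _, passage in ranked[:k]]

# Toy 3-dimensional embeddings; a real system would use a text
# embedding model producing hundreds of dimensions.
store = ToyVectorStore()
store.insert([1.0, 0.0, 0.0], "Milvus stores high-dimensional embeddings.")
store.insert([0.0, 1.0, 0.0], "Fine-tuning updates model weights.")
store.insert([0.9, 0.1, 0.0], "RAG retrieves passages at query time.")

# Retrieve the top-2 passages and assemble a grounded prompt for the LLM.
hits = store.search([1.0, 0.05, 0.0], k=2)
prompt = "Answer using the context below.\n" + "\n".join(hits)
```

The retrieved passages are injected into the prompt at query time, which is why RAG stays current without retraining: updating knowledge means updating the store, not the model weights.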