ChatGPT's knowledge cutoff is two years old, and updating it is costly because training a new language model and cleaning its data are expensive. To work around this limitation, developers can use Retrieval Augmented Generation (RAG): search for context relevant to the user's question, pass that context to the language model along with the question, and get a response grounded in it. Despite the imposing name, RAG is straightforward and widely adopted by open-source projects, including LlamaIndex, which integrates with numerous vector databases. Fine-tuning is another technique: it builds a custom model on top of an existing one, but it requires more effort and more data. Knowing when to use RAG, fine-tuning, or a combination of the two is crucial. Applications that do not need recent data can use ChatGPT as-is, while chatbots such as Google Bard and BingGPT use RAG to offer more up-to-date information.
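Since LlamaIndex is mentioned above, here is a minimal sketch of the RAG flow it implements. It assumes an OpenAI API key in the environment and a local `data/` folder of your own documents; the import paths also vary between LlamaIndex releases (newer versions expose the same classes from `llama_index.core`), so treat this as an illustration of the pattern rather than a drop-in snippet.

```python
# Minimal RAG sketch with LlamaIndex (imports assume a pre-0.10 release;
# newer versions move these classes to `llama_index.core`).
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# 1. Load up-to-date documents from a local folder (assumed: ./data).
documents = SimpleDirectoryReader("data").load_data()

# 2. Embed them and store the vectors in an index (in-memory by default;
#    an external vector database can be plugged in here instead).
index = VectorStoreIndex.from_documents(documents)

# 3. At query time, the engine retrieves the most relevant chunks and sends
#    them to the language model together with the question.
query_engine = index.as_query_engine()
response = query_engine.query("What changed in our product after the model's cutoff?")
print(response)
```

The key point is step 3: the underlying model is unchanged, and only the prompt is augmented with retrieved context, which is why RAG sidesteps the retraining cost that fine-tuning incurs.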