Large language models (LLMs) have powerful general capabilities but may not precisely fit specific use cases. To adapt an LLM to a particular use case, three techniques can be employed: prompt engineering, retrieval-augmented generation, and fine-tuning.

Prompt engineering adds detail to prompts to improve model results, often using few-shot strategies such as providing example inputs and outputs. This technique is useful for tasks like data formatting, and it is an inexpensive and effective way to get a wide range of behaviors from an off-the-shelf model.

Retrieval-augmented generation uses a vector database to look up relevant information and include it in each call to the LLM, allowing relevant context to be passed efficiently. This approach is particularly useful for document search, help center chatbots, and domain-specific writing.

Fine-tuning modifies the underlying model directly to incorporate a new corpus or change its behavior, and it is often used in combination with embeddings and prompt engineering.

The cost of each technique varies: prompt engineering is low-cost, retrieval-augmented generation is medium-cost, and fine-tuning has a high up-front cost but returns customized results at a lower ongoing cost. By starting with the most basic approach and adding more customization only as needed, users can effectively adapt LLMs to their specific use cases.
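The few-shot strategy described above can be sketched as a small prompt-builder. This is a minimal illustration, not a specific library's API; the function name, the `Input:`/`Output:` labels, and the date-formatting task are all hypothetical choices made for the example.

```python
# Few-shot prompting: pair each example input with its expected output
# so the model can infer the desired format for a new query.
def build_few_shot_prompt(instruction, examples, query):
    parts = [instruction, ""]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}")
        parts.append(f"Output: {example_output}")
        parts.append("")
    # The trailing "Output:" cues the model to complete the pattern.
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Convert each date to ISO 8601 format.",
    [("March 5, 2021", "2021-03-05"), ("July 22, 1999", "1999-07-22")],
    "December 1, 2024",
)
print(prompt)
```

The resulting string would be sent as the prompt to an off-the-shelf model; no model weights are changed, which is why this approach is comparatively inexpensive.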
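The retrieval-augmented pattern can be sketched without any external services. As a rough stand-in for learned embeddings and a vector database, the sketch below ranks documents by cosine similarity of bag-of-words vectors; a production system would instead use a real embedding model and vector store. All names here are hypothetical.

```python
import math

# Stand-in for an embedding model: a bag-of-words count vector.
def embed(text):
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(count * b.get(word, 0) for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Stand-in for a vector database lookup: return the k most similar documents.
def retrieve(query, documents, k=1):
    query_vec = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(query_vec, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday through Friday.",
]
question = "How long do refunds take?"
context = retrieve(question, docs)[0]
# Only the retrieved context is sent to the LLM, not the whole corpus.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The key point is that each call to the LLM carries only the retrieved passages, which keeps the prompt small while grounding the answer in domain-specific material.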