Prompt chaining is a powerful technique for building natural conversations with large language models (LLMs). Rather than relying on a single prompt, it combines multiple prompts, each handling one step of the conversation, such as classifying the user's intent, capturing entities, or generating the final response. Techniques like intent classification, entity capture, re-prompting, personas, and mixing traditional NLU with LLMs let developers build more contextually aware and dynamic assistants. Prompt chaining does present challenges, notably inconsistent outputs and formatting issues, which take experimentation and fine-tuning to resolve. Even so, it is especially effective for first versions of assistants: developers can quickly prototype and test their ideas before refining them into a more standardized model. By mastering prompt chaining techniques, developers can unlock more of the potential of LLMs and create more sophisticated and engaging conversational interfaces.
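To make the idea concrete, here is a minimal sketch of a three-step chain (intent classification, entity capture, response generation). The `call_llm` function is a hypothetical placeholder that returns canned answers so the example runs standalone; in practice you would swap in a real model client, and the intent labels and entity fields shown are illustrative assumptions, not part of any particular API.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call. Canned responses stand in for
    model output so this sketch is runnable without an API key."""
    if "Classify the intent" in prompt:
        return "book_flight"
    if "Extract the entities" in prompt:
        return '{"destination": "Paris", "date": "Friday"}'
    return "Sure! I can help book your flight to Paris on Friday."


def run_chain(user_message: str) -> dict:
    # Step 1: a prompt dedicated to intent classification.
    intent = call_llm(
        "Classify the intent of this message as one of "
        f"[book_flight, cancel_flight, other]:\n{user_message}"
    ).strip()

    # Step 2: entity capture, conditioned on the classified intent.
    entities = call_llm(
        f"Extract the entities (destination, date) from this {intent} "
        f"request as JSON:\n{user_message}"
    ).strip()

    # Step 3: response generation, fed the outputs of the earlier prompts.
    reply = call_llm(
        f"Intent: {intent}\nEntities: {entities}\n"
        f"Write a helpful assistant reply to:\n{user_message}"
    )
    return {"intent": intent, "entities": entities, "reply": reply}


result = run_chain("I need a flight to Paris on Friday")
print(result["intent"])  # book_flight
```

Each step's output becomes input to the next prompt, which is also where the formatting issues mentioned above tend to surface: if step 2 returns malformed JSON, step 3 inherits the problem, so real chains usually validate and re-prompt between steps.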