Retrieval-augmented generation (RAG) is crucial in the development of large language model (LLM) applications, enabling interactive chat and question answering over documents. There are three common strategies for integrating semi-structured data such as tables into LLM contexts: feeding whole documents into long-context LLMs, targeted extraction of tables, and document chunking. While long-context LLMs offer simplicity, they struggle with large document sets and are sensitive to where information is placed within the input. Targeted table extraction, despite its complexity, potentially offers the best performance on complex tables but requires specialized parsing tools. Document chunking, although straightforward, risks splitting tables apart unless chunk boundaries align with page boundaries.

An ensemble retriever can improve the retrieval of table-derived information by weighting table chunks above body-text chunks, thereby improving the performance of LLM-driven applications on structured data.
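The weighting idea can be sketched in plain Python. This is a toy ensemble retriever, assuming a simple keyword-overlap relevance score; the `Chunk` class, the `score` function, and the weight values are illustrative, not taken from any particular RAG library:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str  # "table" for table-derived chunks, "body" for prose chunks

def score(query: str, chunk: Chunk) -> float:
    # Toy relevance: fraction of query terms that appear in the chunk.
    # A real system would use BM25 or embedding similarity here.
    q_terms = set(query.lower().split())
    c_terms = set(chunk.text.lower().split())
    return len(q_terms & c_terms) / (len(q_terms) or 1)

def ensemble_retrieve(query: str, chunks: list[Chunk],
                      table_weight: float = 2.0,
                      body_weight: float = 1.0,
                      k: int = 2) -> list[Chunk]:
    # Boost table chunks so structured data outranks equally relevant prose.
    weights = {"table": table_weight, "body": body_weight}
    ranked = sorted(chunks,
                    key=lambda ch: weights[ch.source] * score(query, ch),
                    reverse=True)
    return ranked[:k]

chunks = [
    Chunk("revenue 2021 2022 growth margin", "table"),
    Chunk("the company discussed revenue growth in its annual report", "body"),
    Chunk("the weather at headquarters was pleasant", "body"),
]
top = ensemble_retrieve("revenue growth 2022", chunks, k=2)
```

With the boosted weight, the table chunk outranks the prose chunk even though both mention the query terms; production retrievers apply the same principle with learned or tuned weights over stronger per-retriever scores.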