Ellora is a collection of standardized recipes for enhancing large language models (LLMs) with Low-Rank Adaptation (LoRA), a more efficient alternative to full fine-tuning. Introduced by Microsoft Research in 2021, LoRA reduces the number of trainable parameters by injecting low-rank matrices into Transformer layers, achieving results comparable to full fine-tuning with significantly fewer resources. The Ellora project provides production-ready, infrastructure-agnostic methodologies that emphasize efficiency, quality, and progressive complexity across capabilities such as accuracy recovery, reasoning, tool calling, and secure code generation. The recipes draw on techniques such as self-supervised data generation, reinforcement learning, and curriculum learning to address challenges like quantization-induced performance loss and to build capabilities like reasoning and secure coding. Ellora's approach is flexible: practitioners can adapt the recipes to different models, domains, and infrastructures, making it a valuable resource in an evolving research landscape that includes innovations like Text-to-LoRA and Transformer².
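
To make the underlying mechanism concrete, the sketch below shows a minimal LoRA-style wrapper around a linear layer in PyTorch. This is an illustration of the general technique rather than Ellora's own code; the class name, rank, and scaling values are hypothetical choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: y = W0·x + (alpha / r) · B(A(x)).

    The pretrained base layer is frozen; only the low-rank matrices
    A (in_features -> r) and B (r -> out_features) are trained.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # freeze the pretrained weights
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # adapter starts as a no-op update
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Usage: wrap an attention projection and train only the LoRA parameters.
proj = nn.Linear(768, 768)
lora_proj = LoRALinear(proj, r=8, alpha=16)
trainable = sum(p.numel() for p in lora_proj.parameters() if p.requires_grad)
print(trainable)  # 12,288 trainable parameters vs. ~590k in the base layer
```

The low-rank update B·A adds only r·(in_features + out_features) parameters per adapted layer, which is why LoRA-based recipes can be trained and shipped at a fraction of the cost of full fine-tuning.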