The rise of large language models (LLMs) and their integration into production applications make robust evaluation essential for ensuring performance, reliability, and safety. Systematic evaluation lets developers assess and improve their models, which in turn builds user trust and product effectiveness. Unlike traditional software testing, LLM evaluation relies heavily on qualitative methods because LLM outputs are non-deterministic: the same prompt can produce different responses across runs.

The evaluation process involves defining the specific tasks to evaluate, choosing appropriate metrics, and integrating evaluation throughout the software development lifecycle. Common methods include human evaluation, automated metrics, and LLM-based approaches such as LLM-as-a-judge, applied to core dimensions like faithfulness, relevance, coherence, bias, and efficiency.

The right evaluation approach depends on the use case, the model type, and stakeholder needs, and continuous monitoring and iteration are required to keep product quality aligned with user expectations. Tools like Opik can facilitate this work, offering tracing, observability, and scalable evaluation pipelines that support both development and deployment.
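To make the LLM-as-a-judge idea concrete, here is a minimal sketch that scores an answer's faithfulness against a source document by prompting a judge model and parsing a numeric score from its reply. The prompt template, the 1–5 rubric, and the `judge` callable are illustrative assumptions for this sketch, not the API of Opik or any particular library.

```python
import json
import re
from typing import Callable, Optional

# Illustrative rubric and prompt template (an assumption, not from a specific library).
JUDGE_PROMPT = """You are an impartial evaluator. Given a source document and a
model-generated answer, rate how faithful the answer is to the source on a scale
from 1 (contradicts or invents facts) to 5 (fully supported by the source).
Respond with JSON only: {{"score": <int>, "reason": "<short explanation>"}}

Source document:
{context}

Answer to evaluate:
{answer}
"""


def judge_faithfulness(
    context: str,
    answer: str,
    judge: Callable[[str], str],
) -> Optional[dict]:
    """Score an answer's faithfulness with an LLM judge.

    `judge` is any callable that sends a prompt to your chosen judge model and
    returns its text reply. Returns {"score": int, "reason": str}, or None when
    the reply cannot be parsed -- a failure mode worth counting in practice.
    """
    raw = judge(JUDGE_PROMPT.format(context=context, answer=answer))
    match = re.search(r"\{.*\}", raw, re.DOTALL)  # tolerate prose around the JSON
    if match is None:
        return None
    try:
        result = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    score = result.get("score")
    if not isinstance(score, int) or not 1 <= score <= 5:
        return None
    return result


if __name__ == "__main__":
    # Stubbed judge so the sketch runs without an API key; swap in a real model call.
    fake_judge = lambda prompt: '{"score": 5, "reason": "Answer restates the source."}'
    print(judge_faithfulness(
        "Paris is the capital of France.",
        "The capital of France is Paris.",
        judge=fake_judge,
    ))
```

In a production pipeline, scores like this would typically be logged per example and aggregated over a dataset, which is where tracing and evaluation tooling such as Opik fits in.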