A recent paper by OpenAI, titled "Why Language Models Hallucinate," presents a mathematical framework explaining why language models confidently produce falsehoods even with perfect training data and unlimited compute. The paper argues that hallucinations arise because generating correct text is inherently harder than verifying it, a relationship it formalizes as the Generation-Classification Inequality: the error rate of generation is always at least as high as the error rate of the corresponding classification task, i.e., deciding whether a candidate output is valid. The math is solid, but its practical implications are more nuanced, since modern techniques such as retrieval-augmented generation and chain-of-thought prompting already blunt these worst-case limits in practice. The paper also calls for confidence-aware scoring in model evaluation, which would reduce hallucinations by removing the incentive for models to guess rather than abstain when uncertain; a sketch of such a scoring rule appears below. Although the paper frames hallucinations as theoretically inevitable, it also shows that engineering measures such as improved calibration and problem reformulation can substantially reduce their prevalence, underscoring that the remaining challenges are engineering problems rather than insurmountable mathematical constraints.
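To make the evaluation point concrete, here is a minimal sketch of a confidence-aware scoring rule, assuming an illustrative setup rather than the paper's exact benchmark design: the `ABSTAIN` sentinel, the helper functions `score_response` and `expected_score_of_guessing`, and the specific penalty value are all assumptions introduced for this example. The idea is simply that correct answers earn credit, abstentions earn nothing, and wrong answers are penalized, so blind guessing stops being the score-maximizing policy.

```python
# Illustrative confidence-aware scoring rule (assumed values, not taken from
# any specific benchmark): a correct answer earns +1, an abstention earns 0,
# and a wrong answer costs `penalty` points. With penalty = t / (1 - t),
# answering only pays off in expectation when the model's probability of
# being correct exceeds the confidence threshold t.

ABSTAIN = None  # sentinel meaning "the model declined to answer"


def score_response(predicted, reference, penalty=3.0):
    """Score one response: +1 if correct, 0 if abstained, -penalty if wrong."""
    if predicted is ABSTAIN:
        return 0.0  # abstaining is never punished
    return 1.0 if predicted == reference else -penalty


def expected_score_of_guessing(p_correct, penalty=3.0):
    """Expected score of answering when the model is right with probability p_correct."""
    return p_correct * 1.0 + (1.0 - p_correct) * (-penalty)


if __name__ == "__main__":
    penalty = 3.0
    threshold = penalty / (1.0 + penalty)  # break-even confidence, 0.75 here
    print(f"break-even confidence: {threshold:.2f}")

    print(score_response("Paris", "Paris", penalty))   # +1.0  correct answer
    print(score_response(ABSTAIN, "Paris", penalty))   #  0.0  abstention
    print(score_response("Lyon", "Paris", penalty))    # -3.0  confident error

    for p in (0.5, 0.75, 0.9):
        ev = expected_score_of_guessing(p, penalty)
        decision = "answer" if ev > 0 else "abstain"
        print(f"p_correct={p:.2f}  expected score={ev:+.2f}  -> {decision}")
```

Under the standard 0-1 grading most benchmarks use, the same model would maximize its score by always guessing; the penalty term is what shifts the optimal policy toward abstaining whenever confidence falls below the break-even threshold.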