By 2025, data poisoning has moved from an academic concept to a tangible threat, affecting every stage of the AI model lifecycle: pre-training, fine-tuning, retrieval, and tool integrations. The attack works by injecting corrupted or biased data into a model's learning process, producing compromised outputs, hidden backdoors, or skewed behavior. Real-world incidents, such as backdoors planted in GitHub code and poisoned synthetic-data pipelines, underscore the severity of the threat and show how even minimal contamination can have outsized impact. Effective defenses are layered, combining data provenance, adversarial testing, and runtime guardrails to protect against both external and internal threats. Researchers and practitioners need to evolve benchmarks and defenses quickly to keep pace with these challenges, so that AI systems remain reliable and trustworthy in critical applications.
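
As a concrete illustration of the provenance piece of that defense-in-depth, the sketch below checks a local training corpus against a manifest of trusted SHA-256 digests recorded when the data was originally vetted, and flags any file that is new or has changed since. It is a minimal sketch under assumptions of its own making: the directory name `training_data`, the `manifest.json` format (a JSON map of relative paths to digests), and the `verify_against_manifest` helper are illustrative, not part of any particular pipeline or library.

```python
import hashlib
import json
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming so large shards are not loaded into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_against_manifest(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return files that are missing from, or do not match, the trusted manifest.

    Assumes the manifest is a JSON object mapping relative file paths to
    SHA-256 digests captured when the dataset was vetted (an illustrative format).
    """
    manifest = json.loads(manifest_path.read_text())
    suspect = []
    for path in sorted(data_dir.rglob("*")):
        if not path.is_file():
            continue
        rel = str(path.relative_to(data_dir))
        expected = manifest.get(rel)
        if expected is None or sha256_of_file(path) != expected:
            # Unknown or altered file: quarantine it before it reaches training.
            suspect.append(rel)
    return suspect


if __name__ == "__main__":
    # Hypothetical paths for illustration only.
    flagged = verify_against_manifest(Path("training_data"), Path("manifest.json"))
    for name in flagged:
        print(f"quarantine: {name}")
```

A check like this only establishes that the data has not drifted since it was vetted; it says nothing about whether the vetted data was clean in the first place, which is why provenance is paired with adversarial testing and runtime guardrails rather than relied on alone.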