Prompt injection is a security vulnerability in large language models (LLMs) like ChatGPT, which allows attackers to bypass ethical safeguards and manipulate outputs to generate harmful or restricted content. This can occur through direct attacks, such as jailbreaks and adversarial suffixes, or indirect attacks, like hidden prompts in external data. To combat these threats, developers use prevention-based measures, such as paraphrasing, retokenization, and instructional safeguards, alongside detection-based strategies like perplexity checks and response analysis. Despite these efforts, LLMs remain susceptible to evolving threats, necessitating a balance between security and usability. Advanced defenses, including prompt hardening and multi-tiered moderation, are also employed, though no system is entirely immune. The ongoing challenge is to develop robust architectures that separate system instructions from user inputs, and future advancements may address these vulnerabilities through adversarial training and AI-driven detection models.