Prompt guardrails safeguard Large Language Model (LLM) applications against threats such as prompt injection, data exfiltration, and other forms of misuse by establishing boundaries around acceptable inputs and behaviors. They operate at several stages of the request lifecycle, including input validation, prompt construction, and output filtering, to enforce security, safety, and compliance in interactions with generative AI applications.

Implementing both security and safety guardrails is important: security guardrails detect and mitigate attacks such as prompt injection and data leakage, while safety guardrails prevent exposure to toxic content. Within the LLM application architecture, guardrails act as intermediaries between clients and agents, using methods such as regex-based filters and AI-powered classifiers to detect and neutralize threats. Maintaining least privilege and role isolation is also critical to prevent tool misuse and privilege escalation. Finally, guardrails must be continuously monitored and evaluated so they remain effective as exploitation techniques evolve.
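As a minimal sketch of how such a layered intermediary might sit between the client and the agent, the following Python example combines a regex-based input filter, a pluggable AI-classifier hook, and a simple output filter. The names used here (`INJECTION_PATTERNS`, `guarded_call`, the `classify` callable, and so on) are illustrative assumptions, not part of any particular library or product.

```python
import re
from dataclasses import dataclass
from typing import Callable, Optional

# Illustrative regex patterns for common prompt-injection phrasings.
# A real deployment would maintain a broader, regularly updated set.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
    re.compile(r"disregard .* guardrails", re.IGNORECASE),
]

@dataclass
class GuardrailVerdict:
    allowed: bool
    reason: Optional[str] = None

def regex_input_filter(user_input: str) -> GuardrailVerdict:
    """Security guardrail: block inputs matching known injection patterns."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            return GuardrailVerdict(False, f"matched injection pattern: {pattern.pattern}")
    return GuardrailVerdict(True)

def classifier_input_filter(
    user_input: str,
    classify: Callable[[str], float],
    threshold: float = 0.8,
) -> GuardrailVerdict:
    """Security guardrail: defer to an AI-powered classifier that scores
    the likelihood of an attack (0.0 = benign, 1.0 = malicious)."""
    score = classify(user_input)
    if score >= threshold:
        return GuardrailVerdict(False, f"classifier score {score:.2f} >= {threshold}")
    return GuardrailVerdict(True)

def output_filter(model_output: str, secrets: list[str]) -> str:
    """Output guardrail: redact known secrets before the response reaches
    the client, limiting data-exfiltration impact."""
    for secret in secrets:
        model_output = model_output.replace(secret, "[REDACTED]")
    return model_output

def guarded_call(
    user_input: str,
    call_agent: Callable[[str], str],
    classify: Callable[[str], float],
    secrets: list[str],
) -> str:
    """Intermediary between client and agent: validate the input, invoke
    the agent only if all guardrails pass, then filter the output."""
    for verdict in (regex_input_filter(user_input),
                    classifier_input_filter(user_input, classify)):
        if not verdict.allowed:
            return f"Request blocked by guardrail: {verdict.reason}"
    return output_filter(call_agent(user_input), secrets)
```

In practice, the `classify` callable would wrap a hosted moderation or prompt-injection classifier, and the agent invocation would run under a least-privilege role so that a prompt that slips past the filters still cannot reach sensitive tools or escalate its privileges.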