Widespread adoption of large language models (LLMs) necessitates robust protection against attacks like jailbreak, prompt injection, and indirect prompt injection. Jailbreak attacks target the LLM itself by overriding system prompts, while prompt injection attacks manipulate applications built on LLMs to perform unintended actions. Indirect prompt injection occurs when data sources, rather than direct user commands, are manipulated to influence the model's behavior. Meta's Prompt Guard is an open-source model designed to mitigate these vulnerabilities by classifying inputs as JAILBREAK, INJECTION, or BENIGN, but its strictness often results in false positives. Fine-tuning the model with application-specific data can improve its effectiveness, and it can be deployed using Ploomber Cloud for demonstration purposes. Despite its limitations, such as handling only 512 tokens at a time and requiring fine-tuning to reduce false positives, Prompt Guard represents a step towards securing LLM-powered systems against potential threats.