MCP (the Model Context Protocol) is a communication standard for connecting AI models to external tools, environments, and services. As AI systems become more deeply embedded in products, services, and infrastructure, the protocol surfaces new security challenges.

The main risks pair with concrete mitigations: tool injection is countered by strict input validation, remote code execution by sandboxing, session hijacking by secure session tokens, and data leakage by redaction tooling and monitoring.

Security best practices for MCP-enabled systems include tool whitelisting, strong authentication and authorization, logging and audit trails, defenses against prompt injection, and regular red teaming and security testing.

These concerns sit within a broader AI security landscape that includes model poisoning, prompt injection, and synthetic identity attacks. Securing AI development and deployment is a shared responsibility across product teams, engineering leads, and information security.
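To make a few of these controls concrete, here is a minimal Python sketch that combines tool whitelisting, input validation, and an audit trail in one dispatcher. The `TOOL_REGISTRY`, its `read_file` entry, and `dispatch_tool_call` are hypothetical names for illustration, not part of any MCP SDK.

```python
import json
import logging

import jsonschema  # third-party: pip install jsonschema

audit_log = logging.getLogger("mcp.audit")

# Hypothetical allowlist: tool name -> (handler, JSON Schema for its arguments).
TOOL_REGISTRY = {
    "read_file": (
        lambda args: open(args["path"], encoding="utf-8").read(),
        {
            "type": "object",
            "properties": {"path": {"type": "string", "pattern": r"^[\w./-]+$"}},
            "required": ["path"],
            "additionalProperties": False,
        },
    ),
}


def dispatch_tool_call(name: str, args: dict):
    """Reject unknown tools and malformed arguments, and audit every call."""
    if name not in TOOL_REGISTRY:
        audit_log.warning("denied call to non-whitelisted tool %r", name)
        raise PermissionError(f"tool {name!r} is not whitelisted")
    handler, schema = TOOL_REGISTRY[name]
    # Raises jsonschema.ValidationError on arguments that don't match the schema.
    # NOTE: a production handler would also canonicalize paths to block traversal.
    jsonschema.validate(instance=args, schema=schema)
    audit_log.info("tool=%s args=%s", name, json.dumps(args))
    return handler(args)
```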
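Session hijacking defenses can be sketched the same way: generate tokens with a cryptographically secure RNG, give them a short lifetime, and compare them in constant time. The in-memory store and the 15-minute TTL below are illustrative assumptions; a real server would persist hashed tokens.

```python
import hmac
import secrets
import time

SESSION_TTL_SECONDS = 900          # hypothetical 15-minute lifetime
_sessions: dict[str, float] = {}   # token -> expiry timestamp (in-memory demo)


def issue_session_token() -> str:
    token = secrets.token_urlsafe(32)  # 256 bits from a CSPRNG
    _sessions[token] = time.time() + SESSION_TTL_SECONDS
    return token


def validate_session_token(presented: str) -> bool:
    now = time.time()
    for token, expiry in list(_sessions.items()):
        if now > expiry:
            del _sessions[token]  # expire stale sessions eagerly
        elif hmac.compare_digest(token.encode(), presented.encode()):
            return True           # constant-time comparison resists timing attacks
    return False
```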
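Finally, a rough sketch of output redaction applied before tool results are logged or returned, to limit data leakage. The patterns below are illustrative and far from a complete catalogue of sensitive data.

```python
import re

# Illustrative patterns only: emails, card-like digit runs, and key/token pairs.
REDACTION_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED_CARD]"),
    (re.compile(r"(?i)\b(?:api[_-]?key|token)\s*[:=]\s*\S+"), "[REDACTED_SECRET]"),
]


def redact(text: str) -> str:
    """Replace each sensitive match with a placeholder before logging."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text


print(redact("contact alice@example.com, api_key=sk-12345"))
# -> "contact [REDACTED_EMAIL], [REDACTED_SECRET]"
```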