Over the past year, Anthropic has worked with the US Center for AI Standards and Innovation (CAISI) and the UK AI Security Institute (AISI) to strengthen the security of its AI systems through rigorous testing and feedback. The partnership, which grew from initial consultations into ongoing cooperation, gave these government bodies access to Anthropic's models at various stages of development so they could identify and help mitigate vulnerabilities. Their evaluations uncovered significant weaknesses, including prompt injection vulnerabilities and sophisticated jailbreak methods, prompting improvements to Anthropic's defensive measures. The work also yielded valuable insights into risk evaluation and methodology, reinforcing the importance of public-private partnerships for advancing AI security. Anthropic's experience demonstrates the effectiveness of such partnerships, and the company encourages other AI developers to pursue similar collaborations to improve the safety and security of AI technologies.