The rapid evolution of general-purpose AI (GPAI) systems has brought significant advances, but also serious risks, ranging from flawed outputs to systemic vulnerabilities. Addressing these risks requires the AI community to prioritize the third-party researchers and hackers who uncover flaws, through actionable measures such as standardized legal protections for researchers, robust reporting infrastructure, and coordinated disclosure mechanisms. Drawing on lessons from software security, a safe harbor framework is essential: it empowers researchers to probe systems in good faith while enabling organizations to remediate flaws responsibly, and it must account for challenges unique to AI, such as opaque systems with complex supply chains. The proposed framework combines clear rules of engagement, liability protections, and standardized reporting processes, so that flaws are addressed without exposing researchers to undue legal risk, and it builds on existing platforms such as Disclose.io and Bugcrowd as a foundation for AI flaw reporting infrastructure.
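To make the idea of a standardized reporting process concrete, the sketch below shows what a minimal machine-readable AI flaw report might look like in Python. It is an illustration under stated assumptions, not a schema defined by Disclose.io, Bugcrowd, or any existing platform: all field names (reporter_contact, affected_system, severity, and so on) and the severity scale are hypothetical.

# A minimal sketch of a machine-readable AI flaw report. All field names
# and values are hypothetical illustrations, not an existing platform's schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AIFlawReport:
    """One third-party flaw report, structured so receiving organizations
    can triage and route it consistently."""
    reporter_contact: str           # how to reach the researcher (may be anonymized)
    affected_system: str            # e.g. model name and version identifier
    flaw_summary: str               # one-line description of the flaw
    reproduction_steps: list[str]   # inputs needed to reproduce the behavior
    severity: str                   # hypothetical scale: "low", "medium", "high", "critical"
    disclosed_in_good_faith: bool = True  # attests compliance with the rules of engagement
    submitted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize for submission to a (hypothetical) reporting endpoint."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    report = AIFlawReport(
        reporter_contact="researcher@example.org",
        affected_system="example-gpai-model v2.1",
        flaw_summary="Model reveals memorized personal data under a specific prompt pattern.",
        reproduction_steps=[
            "Send the crafted prompt described in the appendix.",
            "Observe that the completion includes verbatim training data.",
        ],
        severity="high",
    )
    print(report.to_json())

A common serialized structure along these lines is what would let coordination infrastructure route the same flaw report to every affected provider in a complex AI supply chain, rather than leaving researchers to negotiate ad hoc disclosure with each organization.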