Frontier AI development requires a transparency framework that protects public safety and holds the companies building powerful AI systems accountable. Because AI is evolving rapidly, the proposed framework applies only to the largest AI model developers, establishing disclosure requirements for safety practices without stifling innovation. It would require each covered developer to adopt a Secure Development Framework describing how it assesses and mitigates catastrophic risks, such as chemical or biological harms, and to publish that framework with redactions permitted for sensitive information. Developers would also publish system cards summarizing their testing and evaluation procedures, giving the public visibility into how labs meet these transparency standards. In addition, the framework proposes legal protections for whistleblowers and would make it unlawful for a lab to falsely claim compliance with its own framework. By relying on flexible standards that can evolve with the technology, the framework aims to balance security and innovation, enabling policymakers and the public to assess whether frontier AI is being developed responsibly. This approach, which codifies practices leading AI labs already follow, seeks to establish a baseline for industry accountability while preserving AI's potential to drive scientific and economic progress.