Generative AI presents significant safety challenges, driven by biases, misinformation, and legal concerns that complicate its deployment. A comprehensive framework grounded in algorithmic fairness principles aims to clarify these issues, organizing them into seven foundational themes for building secure and trustworthy AI systems. The complexity of AI safety stems from the entanglement of concerns, from biased outputs to potential harm in varied contexts, including user exposure to stereotypes and societal biases. While speculative ideas around AI alignment continue to emerge, the practical focus remains on developing fair algorithms that mitigate representational and allocational harms.

Both kinds of harm are difficult to measure: allocational fairness concerns performance gaps across demographic groups, while representational fairness requires more nuanced standards for how groups are depicted. A further challenge is addressing biases that originate in training data and persist throughout the modeling cycle, since language models can amplify them. AI safety must also be context-specific, with careful attention to trade-offs between fairness and performance, which makes universal standards impractical.

Advancing AI safety therefore calls for a deliberate methodology built on clear value judgments and goals, and for rejecting the misconception that safety and performance are inherently opposed. Confronting current limitations is an opportunity to develop AI systems that are both safe and able to fulfill their potential responsibly.
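To make the allocational notion concrete, the sketch below computes a per-group accuracy gap, one common way to quantify a demographic performance gap. The function name and the toy labels, predictions, and group assignments are invented for illustration; real audits would use domain-appropriate metrics and data.

```python
def group_accuracy_gap(y_true, y_pred, groups):
    """Return (max accuracy difference across groups, per-group accuracies).

    A nonzero gap indicates the model performs unevenly across
    demographic groups -- one signal of allocational unfairness.
    """
    accs = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        accs[g] = correct / len(idx)
    return max(accs.values()) - min(accs.values()), accs

# Toy data: group "a" is classified more accurately than group "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap, per_group = group_accuracy_gap(y_true, y_pred, groups)
```

A single gap number is deliberately reductive: as the text notes, which metric matters, and what gap is tolerable, is a context-specific value judgment rather than a universal standard.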