Our approach to understanding and addressing AI harms is an evolving framework that considers several types of impact: physical, psychological, economic, societal, and impacts on individual autonomy. This comprehensive approach helps our teams communicate clearly, make well-reasoned decisions, and develop targeted solutions for both known and emergent harms. We examine potential AI impacts across a baseline set of dimensions, which we expect to expand over time, and weigh factors such as likelihood, scale, affected populations, duration, causality, the technology's contribution, and the feasibility of mitigation. By managing risks through policies and practices such as our Usage Policies, evaluations, detection techniques, and robust enforcement, we balance these considerations while preserving the helpfulness and functionality of our systems in everyday use. This perspective informs our thinking about responsible AI development and complements our Responsible Scaling Policy, which focuses specifically on catastrophic risks.