As artificial intelligence (AI) systems become integral to more industries, AI alignment, the practice of aligning these systems with human values and ethics, is crucial for preventing harmful outcomes and maintaining trust. In the context of large language models, alignment means encoding human values into the model so that it behaves safely and ethically, in accordance with societal expectations and human intent. Researchers pursue this goal through techniques such as imitation learning, value learning, and personalized alignment, while grappling with challenges like the ambiguity of human values, the attribution problem, and value drift. Ongoing research continues to improve AI's ability to track human goals, and tools such as Lakera Guard provide real-time protection against specific AI threats, helping keep AI systems secure and reliable.
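To make one of the named techniques concrete: value learning is often implemented by fitting a reward model to pairwise human preferences, as in RLHF-style pipelines. The sketch below is a minimal, hypothetical illustration, not any particular system's implementation: it fits a linear reward model to toy preference pairs using the Bradley-Terry loss, where the feature vectors and their meaning (helpfulness, toxicity) are invented for the example.

```python
import numpy as np

def train_reward_model(pref_pairs, n_features, lr=0.1, epochs=500):
    """Fit a linear reward model r(x) = w . x from pairwise preferences.

    pref_pairs: list of (preferred_features, rejected_features) tuples.
    Minimizes the Bradley-Terry loss -log sigmoid(r(preferred) - r(rejected))
    by plain gradient descent.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=n_features)
    for _ in range(epochs):
        for x_pos, x_neg in pref_pairs:
            x_pos, x_neg = np.asarray(x_pos), np.asarray(x_neg)
            diff = np.dot(w, x_pos) - np.dot(w, x_neg)
            # d/dw of -log(sigmoid(diff)) = (sigmoid(diff) - 1) * (x_pos - x_neg)
            grad = (1.0 / (1.0 + np.exp(-diff)) - 1.0) * (x_pos - x_neg)
            w -= lr * grad
    return w

# Hypothetical two-dimensional features: [helpfulness, toxicity].
# Each pair records that the first response was preferred by a human rater.
pairs = [
    ([0.9, 0.1], [0.2, 0.8]),
    ([0.7, 0.0], [0.6, 0.9]),
]
w = train_reward_model(pairs, n_features=2)
```

After training, `np.dot(w, features)` scores a candidate response, and a policy can then be optimized against this learned reward. Real systems use neural reward models over text rather than hand-built features, but the preference-comparison objective is the same idea.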