AI Alignment is a critical field focused on ensuring that conversational AI models like ChatGPT adhere to ethical, moral, and legal standards, declining to respond to inflammatory or unethical prompts. The AI Alignment Problem involves defining the scope and limits of AI responses, drawing on computer science, ethics, psychology, and law to guide ethical decision-making.

An exploration of several AI language models, including ChatGPT, Chatsonic, you.com, and Bing's chatbot, revealed varied responses to ethically challenging prompts. Most models, including ChatGPT and Bing, either disengaged or suggested ethical alternatives, while Chatsonic offered advice on avoiding scams, illustrating the diversity of alignment strategies.

Despite these advances, AI models often err on the side of hypervigilance, reacting to specific keywords rather than achieving nuanced comprehension of a prompt, which underscores the ongoing challenge of true alignment. AI Alignment research has grown alongside technological advances, aiming for a future in which AI can handle ethical dilemmas with greater sophistication.
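The keyword-driven hypervigilance described above can be sketched as a toy filter. This is a minimal illustration only; the blocklist, function name, and prompts are hypothetical and do not reflect any real model's moderation system:

```python
# Toy keyword-based prompt filter, illustrating why keyword matching
# over-triggers. The blocklist below is a hypothetical example.
BLOCKED_KEYWORDS = {"weapon", "scam", "hack"}

def keyword_filter(prompt: str) -> bool:
    """Return True if a naive keyword match would refuse this prompt."""
    words = prompt.lower().split()
    return any(word.strip(".,!?") in BLOCKED_KEYWORDS for word in words)

# A genuinely problematic request is caught...
print(keyword_filter("How do I scam someone online?"))        # True
# ...but so is a benign, protective question about avoiding scams --
# the hypervigilance problem: keywords, not nuanced comprehension.
print(keyword_filter("How can I avoid falling for a scam?"))  # True
```

Because both prompts contain the word "scam", the filter refuses both, even though the second asks how to avoid harm rather than cause it. This is the kind of false positive that nuanced comprehension would eliminate.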