Generative AI systems, including language models such as ChatGPT and image generators such as OpenAI's DALL-E, have been found to exhibit biases related to gender, race, and politics, which can lead to harmful outcomes when these systems are deployed in high-stakes domains such as healthcare, finance, and education. With the EU's AI Act highlighting the importance of mitigating such biases, companies face legal and reputational risks if their AI systems produce discriminatory outputs. Addressing bias requires a multifaceted approach: diversifying data collection, deploying bias detection tools, fine-tuning models, and incorporating logical reasoning. Despite ongoing efforts, challenges remain, including performance trade-offs, intersectionality, and variation across cultural contexts. Moving forward, promising research directions include causal modeling, federated learning, and adversarial debiasing, underscoring the need for continuous evaluation and collaboration among AI researchers, ethicists, and domain experts to build fairer AI systems.
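To make the bias-detection step mentioned above concrete, the sketch below shows one common style of check: a counterfactual prompt test that swaps a demographic term in an otherwise identical prompt and measures how much the model's score shifts. All names here (`counterfactual_disparity`, `score_fn`, the toy scorer) are hypothetical illustrations, not an API from any specific tool; in practice `score_fn` would wrap a model call plus a sentiment or toxicity classifier.

```python
from typing import Callable, Dict, Iterable


def counterfactual_disparity(
    score_fn: Callable[[str], float],
    template: str,
    groups: Iterable[str],
) -> Dict[str, float]:
    """Score the same prompt with only the demographic term swapped.

    score_fn : maps a prompt to a scalar (e.g., sentiment or toxicity
               of the model's completion) -- supplied by the caller.
    template : a prompt containing a '{group}' placeholder.
    groups   : demographic terms to substitute into the template.
    """
    scores = {g: score_fn(template.format(group=g)) for g in groups}
    baseline = sum(scores.values()) / len(scores)
    # Per-group deviation from the mean; large gaps flag potential bias.
    return {g: s - baseline for g, s in scores.items()}


if __name__ == "__main__":
    # Stand-in scorer for demonstration only: it pretends the model
    # rates male-coded prompts slightly higher than the alternatives.
    def toy_score(prompt: str) -> float:
        return 0.9 if " he " in prompt else 0.6

    gaps = counterfactual_disparity(
        toy_score,
        "When the doctor spoke, {group} sounded confident.",
        ["he", "she", "they"],
    )
    print(gaps)  # nonzero gaps indicate score shifts tied to the swap
```

A real audit would run many templates and demographic pairs and aggregate the gaps, but the core idea is the same: holding everything constant except the group term isolates the model's sensitivity to that term.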