Bias in machine learning (ML) systems is a critical issue that has recently garnered significant attention, as highlighted by the documentary "Coded Bias," which illustrates how algorithmic decision-making can produce biased outcomes that affect entire populations. Bias inherent in data, whether it originates in society itself or in how data are collected and annotated, poses a challenge for those developing ML systems, who must ensure these systems do not perpetuate unfairness. Although it is difficult to define what constitutes a bias-free system, the ML community is encouraged to adopt rigorous testing processes, similar to those used in safety-critical systems, to mitigate bias. The discussion around algorithmic fairness should involve legal and regulatory experts to produce concrete guidance, with notions such as "Demographic Parity" and "Equality of Opportunity" serving as starting points, even though such fairness criteria generally cannot all be satisfied simultaneously when base rates differ across groups. Recent regulatory proposals, like the EU's initiative to categorize "high-risk" AI systems, represent a step towards more structured and accountable development practices, and they underscore the need for thorough testing to ensure fairness across diverse demographics and scenarios.
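
To make the two fairness notions mentioned above concrete, the following is a minimal sketch of how the corresponding gaps could be measured on a model's predictions. It assumes binary labels, binary predictions, and a single sensitive attribute; the function names and the toy data are illustrative, not part of any specific library or standard.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups.

    Demographic parity asks that P(Y_hat = 1 | group) be equal for all
    groups; the gap is the spread of those rates.
    """
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true-positive rates across groups.

    Equality of opportunity asks that P(Y_hat = 1 | Y = 1, group) be
    equal for all groups, i.e. equal recall for qualified individuals.
    """
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Illustrative check on synthetic predictions (all values are made up).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print("demographic parity gap:", demographic_parity_gap(y_pred, group))
print("equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
```

Reporting such per-group gaps alongside aggregate accuracy is one simple way the kind of rigorous, demographically disaggregated testing discussed above could be folded into an existing evaluation pipeline.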