Trust calibration is a central concern in human-machine interaction design, particularly for AI software developers: the goal is to align the trust users place in a product with its actual capabilities, preventing both over-reliance and underutilization. A 2023 literature review on the topic highlights two design levers for achieving this alignment: accurate mental models and adaptive systems that respond to user behavior to counter over-trust or under-trust.

Trust calibration spans three phases: pre-interaction, during-interaction, and post-interaction. Each phase has its own strategies for setting realistic expectations, such as onboarding experiences before use, real-time feedback during use, and reflective moments after use. Effective calibration combines performance-oriented signals (how well the system performs) with process-oriented signals (how it reaches its outputs), adapting explanations to user expertise while avoiding information overload, so that transparency does not tip into unwarranted trust.

The review stresses that adaptive calibration outperforms static methods and notes that anthropomorphism in AI can unintentionally push user trust beyond the system's actual capabilities. It recommends investing in robust onboarding and in measurement frameworks that detect miscalibrated trust patterns, treating trust calibration as an integral part of product identity rather than an afterthought.
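As a rough illustration of what a measurement framework for miscalibrated trust might look like, the sketch below compares users' acceptance rate of AI suggestions against the system's observed accuracy and flags over-trust or under-trust when the gap exceeds a threshold. The metric, threshold, and names (`Interaction`, `trust_gap`, `classify_calibration`) are assumptions made for this example, not something prescribed by the study.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One user decision on an AI suggestion (illustrative schema)."""
    suggestion_correct: bool   # was the AI suggestion actually correct?
    user_accepted: bool        # did the user follow the suggestion?

def trust_gap(interactions: list[Interaction]) -> float:
    """Acceptance rate minus system accuracy.

    A positive gap means users accept suggestions more often than the system
    is right (over-trust); a negative gap means users reject suggestions the
    system tends to get right (under-trust).
    """
    if not interactions:
        return 0.0
    acceptance = sum(i.user_accepted for i in interactions) / len(interactions)
    accuracy = sum(i.suggestion_correct for i in interactions) / len(interactions)
    return acceptance - accuracy

def classify_calibration(interactions: list[Interaction], threshold: float = 0.15) -> str:
    """Flag miscalibration when the gap exceeds an (assumed) threshold."""
    gap = trust_gap(interactions)
    if gap > threshold:
        return "over-trust"
    if gap < -threshold:
        return "under-trust"
    return "calibrated"

if __name__ == "__main__":
    # Synthetic history: users accept 90% of suggestions,
    # but the system is right only 60% of the time.
    history = [
        Interaction(suggestion_correct=(k % 10 < 6), user_accepted=(k % 10 < 9))
        for k in range(100)
    ]
    print(classify_calibration(history))  # -> "over-trust"
```

In practice such a metric would likely be segmented per user or per feature and computed over a sliding window, so that calibration interventions (extra onboarding, richer explanations, confidence displays) can be targeted where the mismatch actually occurs.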