As AI-enabled services are increasingly integrated into production applications, securing the AI/ML components of the software supply chain becomes essential, and established DevSecOps practices remain effective even without specialized model-scanning tools. Organizations should ensure that data scientists and machine learning engineers use the same security tools and processes as core development teams, focusing on securing dependencies, source code, and container images. Tools such as MLflow, Qwak, and AWS SageMaker, when paired with a unified system like the JFrog Platform, can block unsafe components during model development. Once a model is built, additional steps such as artifact signing and promotion/release blocking help preserve the integrity of AI applications. Security must be integrated without hindering developer productivity; tools such as JFrog Xray and Curation allow security policies to be defined and enforced seamlessly. Traditional DevSecOps tooling still has limitations in AI/ML development, for example around data set handling and container deployment, but the steps outlined here provide a solid foundation for securing ML model development and governance.
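The dependency-blocking step described above can be sketched as a simple pre-install gate. This is only an illustration: the package names and the blocklist below are hypothetical, and in practice the blocking would be enforced by curation policies in the artifact repository rather than a local script.

```python
# Hypothetical pre-install gate: reject pinned requirements that match a
# blocklist of known-unsafe packages (names/versions invented for illustration).
BLOCKLIST = {
    ("insecure-ml-lib", "0.1.0"),  # example: a known-malicious release
}

def parse_requirement(line: str):
    """Parse a 'name==version' pin; skip comments, blanks, and unpinned lines."""
    line = line.strip()
    if not line or line.startswith("#") or "==" not in line:
        return None
    name, _, version = line.partition("==")
    return name.strip().lower(), version.strip()

def check_requirements(lines):
    """Return the pins that hit the blocklist; an empty list means safe to install."""
    pins = (parse_requirement(l) for l in lines)
    return [p for p in pins if p is not None and p in BLOCKLIST]

violations = check_requirements([
    "numpy==1.26.4",
    "insecure-ml-lib==0.1.0",
])
print(violations)  # [('insecure-ml-lib', '0.1.0')]
```

A real pipeline would run an equivalent check in CI before `pip install`, failing the build when any violation is found.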
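Artifact signing, one of the post-development steps mentioned above, can be illustrated with a minimal HMAC sketch over serialized model bytes. A production pipeline would instead use asymmetric signatures managed by the platform or a dedicated signing tool; the key and the model bytes here are stand-ins.

```python
import hashlib
import hmac

# Hypothetical signing key; in practice this comes from a KMS or the CI
# system's secret store, and asymmetric signing would be preferred.
SIGNING_KEY = b"example-key-do-not-use-in-production"

def sign_artifact(data: bytes) -> str:
    """Produce an HMAC-SHA256 signature over the model artifact bytes."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_artifact(data: bytes, signature: str) -> bool:
    """Constant-time check that the artifact matches its recorded signature."""
    return hmac.compare_digest(sign_artifact(data), signature)

model_bytes = b"serialized-model-weights"  # stand-in for a real model file
sig = sign_artifact(model_bytes)
print(verify_artifact(model_bytes, sig))                # True
print(verify_artifact(model_bytes + b"tampered", sig))  # False
```

Verification at promotion time gives the release gate a concrete signal: a model whose signature does not verify is blocked from moving to the next repository stage.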