Luxonis OAK-D - Deploy a Custom Object Detection Model with Depth
Blog post from Roboflow
Jacob Solawetz's tutorial provides a comprehensive guide to training and deploying a custom object detection model on the Luxonis OpenCV AI Kit (OAK-D) using Roboflow and DepthAI, with real-time American Sign Language recognition as the worked example. The process begins with gathering and labeling images, followed by setting up a MobileNetV2 training environment with the TensorFlow Object Detection API. Users download their custom training data from Roboflow, train the MobileNetV2 model, and run test inferences to verify that it works. Once performance is satisfactory, the model is converted, via OpenVINO's intermediate representation, into a blob that DepthAI can run on the OAK-D device. The tutorial emphasizes careful image labeling, model conversion, and deployment preparation, showing how these steps enable real-time inference with depth measurement on the device. It concludes by encouraging readers to apply the same workflow to custom detection tasks in other domains.
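For the deployment step, the DepthAI Python API wires the converted model into an on-device pipeline alongside the stereo depth stream. The following is a minimal sketch, not the tutorial's exact code: the blob path, confidence threshold, and depth limits are placeholder assumptions. It shows how a MobileNet-style detection blob can be paired with stereo depth via `MobileNetSpatialDetectionNetwork` so that each detection is reported with X/Y/Z coordinates.

```python
import depthai as dai

pipeline = dai.Pipeline()

# RGB camera feeding the neural network (300x300 preview matches a MobileNet-SSD input).
cam_rgb = pipeline.create(dai.node.ColorCamera)
cam_rgb.setPreviewSize(300, 300)
cam_rgb.setInterleaved(False)

# Left/right mono cameras feeding the stereo depth node.
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# Spatial detection network: runs the converted blob and fuses detections with depth.
nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath("custom_mobilenet.blob")  # placeholder path to the converted model
nn.setConfidenceThreshold(0.5)           # assumed threshold
nn.setBoundingBoxScaleFactor(0.5)
nn.setDepthLowerThreshold(100)           # ignore depth closer than 100 mm (assumption)
nn.setDepthUpperThreshold(5000)          # ignore depth farther than 5 m (assumption)
cam_rgb.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)

# Stream detection results back to the host.
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="detections", maxSize=4, blocking=False)
    while True:
        for det in q.get().detections:
            # spatialCoordinates are reported in millimeters relative to the camera.
            print(det.label, round(det.confidence, 2),
                  det.spatialCoordinates.x, det.spatialCoordinates.y, det.spatialCoordinates.z)
```

If depth is not needed, the same blob can instead be run with the plain `MobileNetDetectionNetwork` node, which returns 2D bounding boxes only; the spatial variant adds the per-detection depth measurements highlighted in the tutorial.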