
Using Computer Vision to Help Deaf and Hard of Hearing Communities

Blog post from Roboflow

Post Details

Company: Roboflow
Date Published:
Author: Joseph Nelson
Word Count: 1,941
Language: English
Hacker News Points: -
Summary

David Lee, a data scientist, explores how computer vision can aid deaf and hard-of-hearing communities by building a machine learning model that interprets the American Sign Language (ASL) alphabet. By creating an original image dataset and training a YOLOv5 model, Lee shows that ASL alphabet letters can be interpreted accurately even from a limited dataset. Despite challenges such as low-resolution images and the exclusion of letters that require movement (J and Z), the model achieved an mAP of 85.27%. The project highlights the value of data augmentation and transfer learning, with promising results even across varied environments and different hands. Lee emphasizes the need for more data and for partnerships with organizations like the National Association of the Deaf to improve the model's accuracy and usability. The project also reflects Lee's journey from a former career in sales to confidence in his technical abilities, with the aim of using technology to improve accessibility and education for the deaf community.
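For readers curious what the modeling step looks like in practice, the sketch below shows a typical YOLOv5 transfer learning and inference workflow using the standard ultralytics/yolov5 tooling. It is an illustration under assumptions, not code from the post itself: the dataset YAML name (asl.yaml), checkpoint path, and image filename are placeholders.

```python
import torch

# Transfer learning with YOLOv5 is usually launched from the repo's CLI,
# starting from COCO-pretrained weights (hyperparameters here are illustrative):
#
#   python train.py --img 416 --batch 16 --epochs 300 \
#       --data asl.yaml --weights yolov5s.pt
#
# asl.yaml (hypothetical) would list the dataset paths and the 24 static
# ASL letter classes; J and Z are excluded because they require movement.

# After training, the fine-tuned checkpoint can be loaded for inference
# through torch.hub; the checkpoint path below is a placeholder.
model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='runs/train/exp/weights/best.pt')

# Run detection on a single image of a hand sign (placeholder filename).
results = model('asl_letter_a.jpg')

results.print()                  # summary of detected letters and confidences
print(results.pandas().xyxy[0])  # bounding boxes + class names as a DataFrame
```

In the YOLOv5 pipeline, augmentation is controlled through the training hyperparameter file; combined with dataset-level augmentation of the kind the post credits, this is part of why a small original dataset can still generalize.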