
Making a wearable live caption display using Azure Cognitive Services and Ably

What's this blog post about?

The post describes the development of a wearable live-captioning demo built with Azure Cognitive Services and Ably Realtime. The aim is to help people with hearing difficulties, particularly those who rely on lip reading, which widespread mask-wearing during the pandemic made impossible.

The demo consists of two parts: a web app that captures microphone data and transcribes it into readable text using Azure Cognitive Services Speech, and a wearable display that shows the captions, a 32x8 matrix of NeoPixels driven by an Adafruit Feather Huzzah. The web app accesses the microphone with the getUserMedia() API, sends the audio to Azure Cognitive Services Speech for transcription, and relays the resulting text to the microcontroller over MQTT, using a custom binary message format to send it efficiently. On the wearable, the transcription is rendered with a pixel "font" converted into an array of binary values, and the display supports both static and scrolling text.

The project is open source, and the author encourages others to reuse the code in their own wearable tech projects, expressing hope that the idea could be developed further to include language translation.
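The first link in the chain is microphone capture in the browser. As a minimal TypeScript sketch (the post's own code isn't reproduced here), requesting an audio-only stream looks like this; getUserMedia() only works in a secure (HTTPS) context and after the user grants permission:

    // Minimal sketch: ask the browser for an audio-only MediaStream.
    // Requires a secure context (HTTPS or localhost) and user consent.
    async function captureMicrophone(): Promise<MediaStream> {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      return stream;
    }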
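The captured audio is then transcribed by Azure Cognitive Services Speech. A hedged sketch using the JavaScript Speech SDK (microsoft-cognitiveservices-speech-sdk) follows; the subscription key and region are placeholders for your own Azure credentials, and this shows the SDK's general shape rather than the post's exact code:

    import * as sdk from "microsoft-cognitiveservices-speech-sdk";

    // Placeholders: supply your own Azure Speech key and region.
    const speechConfig = sdk.SpeechConfig.fromSubscription("YOUR_KEY", "YOUR_REGION");
    const audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();
    const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);

    // Partial hypotheses arrive while the speaker is still talking.
    recognizer.recognizing = (_sender, event) => {
      console.log(`interim: ${event.result.text}`);
    };

    // Finalized phrases are what would be forwarded to the wearable.
    recognizer.recognized = (_sender, event) => {
      if (event.result.reason === sdk.ResultReason.RecognizedSpeech) {
        console.log(`final: ${event.result.text}`);
      }
    };

    recognizer.startContinuousRecognitionAsync();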
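Transcribed text travels from the web app to the microcontroller over MQTT. Ably provides an MQTT protocol adapter, so the publishing side could be sketched with the mqtt.js client as below; the host, port, and key-as-credentials scheme reflect Ably's documented adapter settings (check the current docs), and the topic name is illustrative rather than the post's actual channel:

    import mqtt from "mqtt";

    // Ably's MQTT adapter: the two halves of an Ably API key (split at the
    // colon) act as username and password. Both values are placeholders.
    const client = mqtt.connect("mqtts://mqtt.ably.io:8883", {
      username: "API_KEY_NAME",
      password: "API_KEY_SECRET",
    });

    client.on("connect", () => {
      // "captions" is an illustrative topic; the real demo uses its own
      // channel name and a compact binary payload rather than plain text.
      client.publish("captions", "HELLO WORLD");
    });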
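On the display side, the post converts a pixel "font" into an array of binary values so each character maps directly onto matrix columns. The actual font and message format are the author's own; the following is only a toy illustration of the idea, with invented glyph data:

    // Toy pixel font: each glyph is a list of column bytes; each bit in a
    // byte drives one of the 8 rows of NeoPixels (bit 0 = top row, assumed).
    const FONT: Record<string, number[]> = {
      H: [0b11111111, 0b00011000, 0b00011000, 0b00011000, 0b11111111],
      I: [0b10000001, 0b11111111, 0b10000001],
    };

    // Flatten a caption into the column bytes a 32x8 matrix would consume,
    // inserting one blank column between letters.
    function encodeCaption(text: string): Uint8Array {
      const columns: number[] = [];
      for (const ch of text.toUpperCase()) {
        columns.push(...(FONT[ch] ?? []), 0x00);
      }
      return Uint8Array.from(columns);
    }

    console.log(encodeCaption("HI")); // Uint8Array of 10 column bytes

Sending raw column bytes rather than character strings means the Feather Huzzah only has to shift bits out to the matrix, which is presumably the efficiency the post's custom binary format is after.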

Company
Ably

Date published
Dec. 21, 2020

Author(s)
Jo Franchetti

Word count
3319

Hacker News points
None found.

Language
English
