
Deep Shallow Fusion for RNN-T Personalization

What's this blog post about?

The research paper "Deep Shallow Fusion for RNN-T Personalization" discusses methods for improving the accuracy of proper nouns and rare words in end-to-end deep learning models, which are typically hard to personalize. Two key techniques are subword regularization and grapheme-to-grapheme (G2G) augmentation. Subword regularization samples a tokenization from the n-best candidate segmentations during training instead of always using the single most probable one, which reduces overfitting to high-frequency words. G2G generates alternative spellings with similar pronunciations, improving recognition of rare names when used during decoding. Together, these techniques enhance the model's ability to predict low-frequency words such as proper nouns.
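As a rough illustration of the subword-regularization idea, the toy sketch below enumerates the possible segmentations of a word over a small subword vocabulary and samples one at random, rather than always returning one fixed "best" split. This is not the paper's implementation (which builds on standard subword samplers); the vocabulary and function names here are invented for the example.

```python
import random

def segmentations(word, vocab):
    """Enumerate every way to split `word` into subword units from `vocab`."""
    if not word:
        return [[]]
    results = []
    for i in range(1, len(word) + 1):
        piece = word[:i]
        if piece in vocab:
            for rest in segmentations(word[i:], vocab):
                results.append([piece] + rest)
    return results

def sample_segmentation(word, vocab, rng=random):
    """Subword regularization, sketched: pick a random candidate
    segmentation so training sees varied tokenizations of the same word,
    instead of overfitting to a single deterministic split."""
    candidates = segmentations(word, vocab)
    return rng.choice(candidates)

# Toy vocabulary: multi-character units plus single-character fallbacks.
vocab = {"he", "llo", "hell", "h", "e", "l", "o"}
tokens = sample_segmentation("hello", vocab)
```

Across training epochs the same word would thus appear as, e.g., `["he", "llo"]` on one pass and `["hell", "o"]` on another, exposing the model to lower-frequency subword units.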

Company
AssemblyAI

Date published
Oct. 29, 2021

Author(s)
Michael Nguyen

Word count
286

Hacker News points
None found.

Language
English


By Matt Makai. 2021-2024.