>>> rebecca-burwei
[January 6, 2021, 6:08pm]
There is research suggesting that a simple tweak like layer or instance
normalization could improve model performance on unseen accents (accents
not present in the training data). Given the amount of concern that
DeepSpeech's American English model does not work well for other accents,
I think it would be worthwhile to run an experiment to see whether
something as simple as a few normalization layers could help. I wouldn't
expect this simple fix to bring other accents to parity with American
English, but it might help close the gap while we wait for more data
collection.
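To make the idea concrete, here is a minimal sketch of where such a
normalization step could sit in an acoustic model. This is not
DeepSpeech's actual code; it assumes a tf.keras-style model, and the
layer sizes, feature count, and output alphabet size are placeholders
chosen only for illustration.

```python
# Minimal sketch (NOT DeepSpeech's real architecture): a toy feed-forward
# acoustic model with layer normalization inserted after each dense layer.
import tensorflow as tf

def build_toy_acoustic_model(n_features=26, n_classes=29):
    """Toy acoustic model for illustration.

    n_features: input features per audio frame (e.g. MFCCs) -- placeholder.
    n_classes:  output alphabet size (characters + CTC blank) -- placeholder.
    """
    inputs = tf.keras.Input(shape=(None, n_features))  # (time, features)
    x = inputs
    for units in (2048, 2048, 2048):
        x = tf.keras.layers.Dense(units, activation="relu")(x)
        # The proposed tweak: normalize activations within each frame so that
        # accent-dependent shifts in activation statistics are reduced.
        x = tf.keras.layers.LayerNormalization()(x)
    outputs = tf.keras.layers.Dense(n_classes)(x)  # per-frame logits for CTC
    return tf.keras.Model(inputs, outputs)

model = build_toy_acoustic_model()
model.summary()
```

One reason layer normalization is a plausible candidate here: it normalizes
across the feature dimension of each frame independently, without relying on
batch statistics, so it may be less sensitive to speaker- and accent-dependent
shifts in the feature distribution. Whether that actually helps on unseen
accents is exactly what the experiment would need to measure.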
I am happy to help in any capacity with such an experiment. I do not
have access to GPUs / hardware at the moment. If someone could point me
in the direction of inexpensive hardware / cloud services, then I can
run the experiments and share the results here. I'm also happy to mentor
someone else in running experiments if they would like to give it a go.
[This is an archived discussion thread from discourse.mozilla.org/t/simple-tweak-to-improve-deepspeech-on-accents]