The single audio per sentence restriction is too strict for most languages #113
I'd like to add an additional note: this restriction implicitly creates bias in the training set unless extra steps are taken (which I believe they aren't). At least this is the case for most languages currently in Common Voice.
I hit this wall while trying to train with v7.0 of the Turkish dataset. Before getting our hands on the new dataset, I wanted to know where we stood with v7.0 to see the effect of our campaign. I used @ftyers's technical paper for replication (acoustic model only, for now). But v7.0 was giving bad results and v6.1 was better, so I did a roundup over all dataset versions. As the training on v7.0 converged at a rather early stage, I had to analyze the splits, so I did another roundup. Two additional notes before commenting on these:
The problem lies in several places with v7:
Here is the data for the last point:

(train/dev statistics table)

So I think this is the worst possible scenario. Because these splits are meant to be a benchmark, I think a better split algorithm is needed. @ftyers's PR is only one part of the solution. Your comments are greatly appreciated...
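For anyone who wants to reproduce this kind of roundup, a minimal sketch is below. It is not part of any official tooling; it assumes the standard train/dev/test TSV layout of a Common Voice release with the usual `client_id` and `sentence` columns, and the file paths are placeholders.

```python
import pandas as pd

# Summarise the released splits of one language and check for
# speaker/sentence leakage between them.
def load_split(path):
    return pd.read_csv(path, sep="\t", usecols=["client_id", "sentence"])

splits = {name: load_split(f"{name}.tsv") for name in ("train", "dev", "test")}

for name, df in splits.items():
    print(f"{name}: {len(df)} clips, {df.client_id.nunique()} speakers, "
          f"{df.sentence.nunique()} unique sentences")

# Leakage: speakers or sentences that appear in more than one split.
for a, b in [("train", "dev"), ("train", "test"), ("dev", "test")]:
    shared_speakers = set(splits[a].client_id) & set(splits[b].client_id)
    shared_sentences = set(splits[a].sentence) & set(splits[b].sentence)
    print(f"{a}/{b}: {len(shared_speakers)} shared speakers, "
          f"{len(shared_sentences)} shared sentences")
```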
I've been training quite a few models recently, and after getting through about 18 Common Voice languages I realised that most of the data wasn't being included. The issue surfaced when I was looking for an additional datapoint with more training data than Tatar to fill out the following graph:
It seemed odd to me that Portuguese only had 7 hours of data, but not odd enough. Then I looked at Basque.
The total amount of data available in the training split was only a fraction of what has been validated.
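A quick way to see this for any language is something like the following. This is just a sketch, assuming the standard `validated.tsv` / `train.tsv` / `dev.tsv` / `test.tsv` files with a `path` column; the paths are placeholders for one language directory of a release.

```python
import pandas as pd

# How much of validated.tsv actually made it into the released splits?
validated = pd.read_csv("validated.tsv", sep="\t", usecols=["path"])
released = pd.concat(
    pd.read_csv(f"{name}.tsv", sep="\t", usecols=["path"])
    for name in ("train", "dev", "test")
)

used, total = released.path.nunique(), validated.path.nunique()
print(f"{used} of {total} validated clips are in the splits "
      f"({100 * used / total:.1f}%)")
```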
The obvious solution is that everyone goes and makes their own splits. But this is a bit unsatisfactory because then people's results won't be comparable. I imagine one of the desiderata of the dataset releases and splits is that they be standard and comparable.
Another option would be to add flags:

- `--strict-speaker`: one speaker only lives in one file
- `--strict-sentence`: one sentence only lives in one file
- `--strict-audio`: only a single recording per sentence

`--strict-speaker` and `--strict-sentence` should be turned on by default; these mean that the model doesn't get to peek at either the speaker or the sentence. `--strict-audio` should be turned off by default; this is more about model optimisation, e.g. you could consider having more than one recording per sentence as a kind of augmentation.

It would also be worth looking into balancing the train/dev/test splits by gender, but that is certainly another issue.
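To make the proposal concrete, here is a rough sketch of how such flags could interact. The flag names come from this issue, not from an existing CLI, and the split fractions and column names are assumptions based on the usual Common Voice TSV layout.

```python
import numpy as np
import pandas as pd

def make_splits(validated, strict_speaker=True, strict_sentence=True,
                strict_audio=False, dev_frac=0.1, test_frac=0.1, seed=42):
    """Split a validated.tsv DataFrame into train/dev/test under the proposed flags."""
    df = validated.copy()
    rng = np.random.default_rng(seed)

    if strict_audio:
        # Current behaviour being criticised: keep a single recording per sentence.
        df = df.drop_duplicates(subset="sentence")

    if strict_speaker:
        # Assign whole speakers to splits so no speaker appears in two files.
        speakers = df["client_id"].unique()
        rng.shuffle(speakers)
        n_dev = int(len(speakers) * dev_frac)
        n_test = int(len(speakers) * test_frac)
        dev_ids = set(speakers[:n_dev])
        test_ids = set(speakers[n_dev:n_dev + n_test])
        df["split"] = df["client_id"].map(
            lambda s: "dev" if s in dev_ids else "test" if s in test_ids else "train")
    else:
        df["split"] = rng.choice(
            ["train", "dev", "test"], size=len(df),
            p=[1 - dev_frac - test_frac, dev_frac, test_frac])

    if strict_sentence:
        # Keep dev/test free of sentences seen in train. (For brevity this only
        # enforces the train boundary; a full version would also keep dev and
        # test sentence-disjoint.)
        train_sentences = set(df.loc[df["split"] == "train", "sentence"])
        df = df[(df["split"] == "train") | ~df["sentence"].isin(train_sentences)]

    return {name: part.drop(columns="split") for name, part in df.groupby("split")}
```

With `strict_audio=False`, every validated recording of a sentence stays available to whichever split that sentence lands in, which is exactly the "extra recordings as augmentation" idea above.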