Use non-persistent buffers #3059
Comments
This sounds reasonable to me. @pytorch/team-audio-core any objection?
The team thinks this is a good change, and we are looking for a contributor to make it. Feel free to open a PR.
Hello, is anyone working on this? I'm planning to give it a shot and possibly submit a PR.
Hi @ashikshafi08, thanks for the interest. No one is working on it, so feel free to do so.
@mthrok I don’t see recent updates on this issue. Can I take a look at it, if that doesn't conflict with anyone else's work?
@francislata Thanks for the suggestion. I don't think anyone is working on it, so feel free to send a PR. |
🚀 The feature
Suggest using register_buffer() with persistent=False, so the buffer (e.g. the window of a spectrogram) will not be included in the module's state dict.
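A minimal sketch of the suggested pattern, assuming a hypothetical MySpectrogram module (not actual torchaudio code); the buffer registered with persistent=False is still usable on the module but is excluded from state_dict():

```python
import torch
from torch import nn


class MySpectrogram(nn.Module):
    """Hypothetical transform that keeps its window as a non-persistent buffer."""

    def __init__(self, n_fft: int = 400):
        super().__init__()
        # persistent=False keeps the window out of state_dict();
        # it is simply recomputed each time the module is constructed.
        self.register_buffer("window", torch.hann_window(n_fft), persistent=False)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        spec = torch.stft(
            waveform,
            n_fft=self.window.numel(),
            window=self.window,
            return_complex=True,
        )
        return spec.abs()


module = MySpectrogram()
print(list(module.state_dict().keys()))  # [] -- the window is not serialized
print(module.window.shape)               # torch.Size([400]) -- but it still exists
```

With this change, checkpoints no longer need to carry recomputable buffers such as window functions or resampling kernels.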
Motivation, pitch
When I add new transforms to my model and load a pre-existing checkpoint, a missing key error is raised, e.g. a missing resample kernel in transforms.Resample. It can be worked around by specifying strict=False at load time; however, I don't see any reason to save these buffers. They can be recomputed at construction time, and that won't affect the model's behavior.
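For illustration, a hedged sketch of the failure mode and the strict=False workaround, using hypothetical OldModel/NewModel modules rather than actual torchaudio transforms:

```python
import torch
from torch import nn


class OldModel(nn.Module):
    """Model as it looked when the checkpoint was saved."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 8)


class NewModel(nn.Module):
    """Same model after adding a transform whose kernel is a persistent buffer."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 8)
        # Persistent buffer added after the checkpoint was created.
        self.register_buffer("resample_kernel", torch.randn(4))


checkpoint = OldModel().state_dict()
model = NewModel()

# model.load_state_dict(checkpoint)            # raises: Missing key(s) "resample_kernel"
model.load_state_dict(checkpoint, strict=False)  # workaround: ignore the missing buffer
```

Making such buffers non-persistent removes the need for strict=False entirely, since they never appear in the checkpoint in the first place.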
Alternatives
No response
Additional context
Same motivation as pytorch/pytorch#18056.