In `train_model.py` and `generate.py`, the 80% limit on GPU memory usage is currently hard-coded:
```python
# limit memory usage to 80%
# XXX make it faster by leaving this out
if torch.cuda.is_available():
    torch.cuda.set_per_process_memory_fraction(0.8, device)
```
Better:

1. First, check that the current approach actually works: run an experiment.
2. Update `training_config`/`test_config` to have an attribute for the memory fraction, probably under `gpu`.
3. Pass this attribute to `set_per_process_memory_fraction`.
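The steps above could be sketched roughly as follows. Note this is only an illustration: the `GpuConfig`/`TrainingConfig` class names, the `memory_fraction` attribute, and the `apply_memory_limit` helper are all assumptions, not the project's actual config schema.

```python
from dataclasses import dataclass, field


@dataclass
class GpuConfig:
    # Hypothetical attribute replacing the hard-coded 0.8
    memory_fraction: float = 0.8


@dataclass
class TrainingConfig:
    # Hypothetical container; the real training_config/test_config may differ
    gpu: GpuConfig = field(default_factory=GpuConfig)


def apply_memory_limit(config: TrainingConfig, device: int = 0) -> None:
    """Apply the configured per-process GPU memory cap, if CUDA is present."""
    import torch  # local import so the config classes load without torch installed

    if torch.cuda.is_available():
        torch.cuda.set_per_process_memory_fraction(
            config.gpu.memory_fraction, device
        )
```

Callers in `train_model.py` and `generate.py` would then read the fraction from their config object instead of the literal `0.8`, so the experiment in step 1 can vary the value without code changes.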