
Turn limitation of memory usage into a configurable property #88

Closed
ybracke opened this issue Nov 15, 2023 · 0 comments


ybracke commented Nov 15, 2023

In train_model.py and generate.py, the 80% cap on GPU memory usage is currently hard-coded:

# limit memory usage to 80% # XXX make it faster by leaving this out
if torch.cuda.is_available():
    torch.cuda.set_per_process_memory_fraction(0.8, device)

Better:

  • First, verify that the current approach actually works; run an experiment.
  • Update training_config/test_config with an attribute for the allowed memory fraction, probably under gpu.
  • Pass this attribute to set_per_process_memory_fraction (see the sketch after this list).
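A minimal sketch of what the configurable version could look like. The gpu.memory_fraction key and the inline dict are assumptions for illustration, not the project's actual config schema:

# Sketch: read the memory fraction from the config instead of hard-coding it.
# In the project this dict would come from training_config / test_config
# (e.g. parsed from YAML); the key name gpu.memory_fraction is hypothetical.
import torch

config = {"gpu": {"memory_fraction": 0.8}}

# Fall back to the current default of 0.8 if the key is absent.
memory_fraction = config.get("gpu", {}).get("memory_fraction", 0.8)

if torch.cuda.is_available():
    device = torch.cuda.current_device()
    torch.cuda.set_per_process_memory_fraction(memory_fraction, device)

Keeping 0.8 as the fallback preserves today's behavior for configs that don't set the new attribute, so the change stays backward compatible.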
ybracke added a commit that referenced this issue Oct 15, 2024
Memory usage limit via configs, fixes #88