
Threading and PyTorch #116

Closed
odow opened this issue Oct 7, 2024 · 3 comments

Comments

odow commented Oct 7, 2024

@mjgarc has a use case where he solves many JuMP models in a multi-threaded loop.

The `PytorchModel` needs to be lifted out of the threading loop, so that it is loaded once and shared across iterations.

We should also check that a unique JuMP model is being built in each iteration of the loop. (Perhaps I misread the slide.)

See https://jump.dev/JuMP.jl/dev/tutorials/algorithms/parallelism/#With-multi-threading
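The recommended pattern can be sketched as follows. This is a minimal sketch, not code from the issue: `inputs`, `optimizer`, `n`, and `"model.pt"` are placeholder names, and the exact return value of `MathOptAI.add_predictor` may differ by version.

```julia
using JuMP, MathOptAI

# Load the PyTorch model ONCE, outside the threaded loop. It is
# treated as read-only and shared across all threads.
predictor = MathOptAI.PytorchModel("model.pt")

results = Vector{Float64}(undef, length(inputs))
Threads.@threads for i in 1:length(inputs)
    # Build a fresh JuMP model in every iteration: JuMP models are
    # not thread-safe, so they must never be shared across threads.
    model = Model(optimizer)
    @variable(model, x[1:n])
    y, _ = MathOptAI.add_predictor(model, predictor, x)
    # ... add constraints and an objective using `y` and `inputs[i]` ...
    optimize!(model)
    # Writing to a distinct index of a preallocated vector is safe.
    results[i] = objective_value(model)
end
```

The key point is the asymmetry: the (expensive, read-only) predictor lives outside the loop, while the (mutable, not thread-safe) JuMP model lives inside it.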

mjgarc commented Oct 7, 2024

Thanks for the help, Oscar.

My pseudocode on the slide had a mistake: it should have been building the JuMP model inside each iteration of the loop. (I was doing this part correctly in my actual code.)

The image below is a better representation of what I’m doing in my code. Does this look correct?

Do I also need to make a copy of the MathOptAI.Pipeline object in each iteration of the for loop?

Or should I instead write the results to disk inside the function `_build_and_solve()`?

[image: updated threaded pseudocode]
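On the question of persisting results inside the loop, one option can be sketched like this. This is an assumption about the shape of `_build_and_solve` (its name comes from the comment above; its arguments and the `input.id` field are hypothetical): writing one file per iteration sidesteps concurrent writes entirely.

```julia
using JuMP

# Hypothetical signature; `predictor` is the shared PytorchModel,
# `input` is one work item, `output_dir` is a results directory.
function _build_and_solve(predictor, input, output_dir)
    model = Model(optimizer)
    # ... build the model from `predictor` and `input` ...
    optimize!(model)
    # One file per iteration: no two threads ever touch the same
    # file, so no lock is needed around the write.
    open(joinpath(output_dir, "result_$(input.id).txt"), "w") do io
        println(io, objective_value(model))
    end
    return
end
```

If all threads must append to a single shared file or collection instead, the writes would need to be guarded by a `ReentrantLock`.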

odow commented Oct 8, 2024

Oh, yeah, that looks better.

I think everything is correct now.

odow closed this as completed Oct 8, 2024
odow commented Oct 8, 2024

The threading issue when calling into Python is likely related to the GIL: CPython's global interpreter lock allows only one thread to execute Python bytecode at a time, so it makes sense that our connection has some issues when multiple Julia threads call into Python concurrently.
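If the problem is indeed concurrent calls into the Python runtime, one workaround is to serialize every Python-touching call behind a single lock. This is a sketch only; `py_predict` stands in for whatever function actually crosses the Julia/Python boundary:

```julia
# A single global lock guarding all calls into Python.
const PY_LOCK = ReentrantLock()

function predict_threadsafe(args...)
    # Only one Julia thread enters the Python runtime at a time;
    # the rest of the iteration (JuMP model build, solve) still
    # runs in parallel outside the lock.
    lock(PY_LOCK) do
        py_predict(args...)
    end
end
```

This trades parallelism in the Python portion for safety, which is usually acceptable when the solve time dominates the prediction time.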
