@mjgarc has a workflow in which he solves a large number of JuMP models with threading.
We need to lift the `PyTorchModel` out of the threaded loop so that it is constructed only once.
We should also check that a unique JuMP model is being built in each iteration of the loop. (Perhaps I misread the slide.)
See https://jump.dev/JuMP.jl/dev/tutorials/algorithms/parallelism/#With-multi-threading
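A minimal sketch of the recommended pattern might look like the following. The filename, solver choice, variable dimensions, and objective are all illustrative, and the exact return value of `MathOptAI.add_predictor` may differ between MathOptAI versions:

```julia
using JuMP, HiGHS, MathOptAI

# Load the PyTorch predictor ONCE, outside the threaded loop, so the file
# read and the Python round-trip happen a single time.
predictor = MathOptAI.PyTorchModel("model.pt")  # hypothetical filename

N = 100
results = Vector{Float64}(undef, N)
Threads.@threads for i in 1:N
    # Build a fresh JuMP model in EVERY iteration; a JuMP model must not
    # be shared between threads.
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, 0 <= x[1:2] <= 1)
    y = MathOptAI.add_predictor(model, predictor, x)
    @objective(model, Min, sum(y))
    optimize!(model)
    results[i] = objective_value(model)
end
```

Note that embedding a `PyTorchModel` with `add_predictor` still calls into Python, so those calls may contend on the GIL even with this structure.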
Thanks for the help, Oscar.
My pseudo code on the slide had a mistake. It should be building the JuMP model in each loop. (I was doing this part correctly in my actual code.)
The image below is a better representation of what I’m doing in my code. Does this look correct?
Do I also need to make a copy of the `MathOptAI.Pipeline` object in each iteration of the for loop?
Perhaps I should instead store the results to the hard drive within the function `_build_and_solve()`?
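One way to sketch the write-to-disk idea, where each thread persists its own result file and no state is shared across iterations. The `_build_and_solve` signature, the solver, and the toy model here are hypothetical stand-ins for the real code:

```julia
using JuMP, HiGHS

# Hypothetical helper: build, solve, and persist one scenario's result.
# Writing each result to its own file means nothing is shared between
# threads after the function returns.
function _build_and_solve(scenario_id::Int, data::Float64, filename::String)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, x >= 0)
    @objective(model, Min, (x - data)^2)
    optimize!(model)
    open(filename, "w") do io
        println(io, scenario_id, ",", value(x), ",", objective_value(model))
    end
    return nothing
end

Threads.@threads for i in 1:10
    _build_and_solve(i, float(i), "result_$i.csv")
end
```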
Oh, yeah, that looks better.
I think everything is correct now.
The threading issue when calling into Python is likely related to the GIL. CPython's global interpreter lock prevents Python code from running on multiple threads concurrently, so it makes sense that our connection has some issues.