auglag: pass ellipsis to gradient #37
Conversation
This fixes cases where the objective, the constraints, or their derivatives take parameters in addition to the controls.
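The situation described above can be sketched as follows. This is a hypothetical example (the objective, gradient, and parameter `a` are invented for illustration), assuming the fixed behaviour where arguments passed via `...` are forwarded to the gradient as well as the objective:

```r
library(nloptr)

# Objective and gradient that both depend on an extra parameter `a`,
# in addition to the controls x.
fn <- function(x, a) sum((x - a)^2)   # objective
gr <- function(x, a) 2 * (x - a)      # its analytic gradient

# `a` is passed through `...`; with this fix it reaches gr as well as fn.
res <- auglag(x0 = c(0, 0), fn = fn, gr = gr,
              localsolver = "LBFGS", a = c(1, 2))
```

Without the fix, only `fn` would receive `a`, so any derivative-based local solver would fail when calling the gradient.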
Is there any reason why this has not been merged? I just ran into the issue this PR is supposed to resolve today.
Thanks for contributing this PR. I can understand why you would propagate the ellipsis to both the function and the gradient, as they probably share the same optional parameters. However, why would you propagate these optional arguments to the constraints?
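For instance, a constraint can depend on the same extra parameter as the objective, in which case `...` must reach both. This is a hypothetical sketch (the names and the `hin(x) <= 0` sign convention are assumptions, since the default inequality direction was being switched around this time):

```r
# Both the objective and the inequality constraint depend on the same
# extra parameter `r`, so the `...` forwarded by auglag() must reach both.
fn  <- function(x, r) sum(x^2) / r      # objective scaled by r
hin <- function(x, r) sum(x^2) - r^2    # assumed convention: feasible iff <= 0

# Hypothetical call; r = 2 would be forwarded to fn and hin alike:
# auglag(x0 = c(1, 1), fn = fn, hin = hin, r = 2)
```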
@astamm I'm not the original author; it has been some time since I looked at the issue, and I tend to use other algorithms more frequently.
Great. I'll give it some more thought. I know it was some years ago, but I became maintainer only a few months ago.
These changes are no longer appropriate given the changes to the hin/heq directions. Please see my comments below.
Since we started the process of switching the default order of the inequality constraints, the wrapper code calls `nloptr()` along these lines:

```r
nloptr(x0,
       eval_f = fn(x, parameters),
       eval_grad_f = gr(x, parameters),
       lb = lower,
       ub = upper,
       eval_g_ineq = hin(x, parameters),
       eval_jac_g_ineq = hinjac(x, parameters),
       eval_g_eq = heq(x, parameters),
       eval_jac_g_eq = heqjac(x, parameters),
       opts = opts)
```
I am following @aadler's advice. Please reopen an issue if the current version is not satisfactory for your use case.
This fixes cases where both the objective and the gradient take more parameters
than just the controls and the optimization algorithm uses derivatives.