Bug fix release.
- If using `line_search = "backtracking"` with a specified `step_down`
  parameter, an incorrectly large number of gradient calculations was being
  reported.
- The documentation now specifies that if you don't provide a `step_down`
  argument with `line_search = "backtracking"`, interpolation using function
  and gradient evaluations is carried out. To get a typical Armijo-style
  backtracking line search, specify a value for `step_down` (e.g.
  `step_down = 0.5` to halve the step size), and only function evaluations
  are used.
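A minimal sketch of the two behaviours, assuming the usual `mize(par, fg, ...)`
calling convention with `fg` a list holding `fn` and `gr` (the Rosenbrock test
function below is only an illustration, not taken from the package docs):

```r
library(mize)

# Rosenbrock function and its gradient
rb_fg <- list(
  fn = function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2,
  gr = function(x) c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
                      200 * (x[2] - x[1]^2))
)
par0 <- c(-1.2, 1)

# No step_down: backtracking interpolates using function and gradient values
res_interp <- mize(par0, rb_fg, method = "L-BFGS",
                   line_search = "backtracking")

# step_down = 0.5: Armijo-style backtracking that halves the step size,
# using only function evaluations
res_armijo <- mize(par0, rb_fg, method = "L-BFGS",
                   line_search = "backtracking", step_down = 0.5)
```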
A patch release to fix an incompatibility with R-devel.
- Fixed a bug where `class` was being checked directly and a scalar value was
  assumed. The correct behavior is to use `methods::is`.
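For context, a generic R illustration (not code from this package) of why the
direct check is fragile: `class()` can return a vector of length greater than
one, which breaks inside `if()` on recent versions of R, whereas
`methods::is()` always returns a single logical:

```r
m <- matrix(1:4, nrow = 2)

class(m)                  # c("matrix", "array") since R 4.0.0: length 2
# if (class(m) == "matrix") { ... }   # length-2 condition: an error in newer R

methods::is(m, "matrix")  # TRUE -- a single logical, safe to use in if()
```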
A patch release for a bug fix.
- Fixed a bug where a `'bracket_step' not found` error could result if the
  maximum number of function evaluations for the Schmidt line search
  (controlled by the `ls_max_fn` parameter) was exceeded. Thank you to
  reporter Charles Driver.
- Fixed a couple of vignette links that were missing the "http://" at the
  front.
A patch release to fix an incompatibility with R-devel.
- Fixed an error with the bold driver and backtracking line search where the stage was being incorrectly checked.
- New method: Truncated Newton (`method = "TN"`). Can be controlled using the
  `tn_init` and `tn_exit` options.
- New method: SR1 (`method = "SR1"`), falling back to the BFGS direction if a
  descent direction is not found.
- New option `preconditioner`, which applies to the conjugate gradient and
  truncated Newton methods. The only value currently available is
  `preconditioner = "L-BFGS"`, which uses L-BFGS to estimate the inverse
  Hessian for preconditioning. The number of updates to store for this
  preconditioner is controlled by the `memory` parameter, just as if you were
  using `method = "L-BFGS"`.
- The BFGS, SR1, and L-BFGS methods will now make use of a user-supplied
  inverse Hessian function if provided. In the input `fg` list, supply a
  function `hi` that takes the `par` vector as input. The function can return
  a matrix (obviously not a great idea for memory use), or a vector, the
  latter of which is assumed to be the diagonal of the matrix. See the sketch
  after this list.
- `ls_max_alpha` (for `line_search = "More-Thuente"` only): sets the maximum
  value of alpha that can be attained during line search.
- `ls_max_alpha_mult` (for Wolfe-type line searches only): sets the maximum
  value that can be attained by the ratio of the initial guess for alpha for
  the current line search to the final value of alpha of the previous line
  search. Used to stop line searches diverging due to very large initial
  guesses.
- `ls_safe_cubic` (for `line_search = "More-Thuente"` only): if `TRUE`, use
  the safeguarded cubic modification suggested by Xie and Schlick.
- `cg_update = "prfr"`: the "PR-FR" (Polak-Ribiere/Fletcher-Reeves) conjugate
  gradient update suggested by Gilbert and Nocedal.
- Fixed an error that occurred when checking whether a step size was finite during line search.
- Fixed a bug where the DBD method didn't use momentum when asked to.
- Fixed incorrectly specified conjugate gradient methods: Hestenes-Stiefel
  (`cg_update = "hs"`), Conjugate Descent (`cg_update = "cd"`), Dai-Yuan
  (`cg_update = "dy"`), and Liu-Storey (`cg_update = "ls"`).
Initial release.