Expose hypergeometric_2F1 function #2792
Conversation
Couple small comments (and one optional big one)
@SteveBronder just flagging that I'm still adding some special-case handling, so don't worry about re-reviewing just yet. Thanks!
@spinkey I've added a bunch of closed-form specialisations/reductions of the 2F1, which gives a slightly wider domain of inputs. Do you know if there are any other transformations/reductions/etc. that would be good to have in?
Looks good! I have a few comments for things you can optionally resolve, but overall I think it's clean.
Couple of Qs around `grad_2F1_impl`. I think we should either return a tuple or use in/out parameters; doing both just makes things a bit hard to read. Also, I'm a little confused about why you need the user-defined template parameter bool to specify whether the return type should be based on `partial_type_t` or `return_type_t`.
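A minimal sketch (hypothetical names and placeholder values, not the actual `grad_2F1_impl`) contrasting the two styles under discussion:

```cpp
#include <tuple>
#include <iostream>

// Style 1: return all gradients as a tuple.
std::tuple<double, double> grads_as_tuple(double x) {
  return {2 * x, 3 * x};  // placeholder gradient values
}

// Style 2: write the gradients through in/out reference parameters.
void grads_as_out_params(double x, double& g1, double& g2) {
  g1 = 2 * x;
  g2 = 3 * x;
}

int main() {
  auto [a, b] = grads_as_tuple(1.5);  // structured bindings (C++17)
  double c, d;
  grads_as_out_params(1.5, c, d);
  std::cout << a << " " << b << " " << c << " " << d << "\n";
}
```

Mixing the two in one function means a reader has to track both the returned value and the mutated arguments, which is the readability concern raised here.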
@SteveBronder This is ready for another look. I've tidied up the tuple/multiple-assignment handling and split out the Euler transform flow, so hopefully the code's a little more readable now.
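For context, "Euler transform" here refers to one of the classical argument transformations of the Gauss hypergeometric function. The thread doesn't spell out which variant is applied, but the standard Euler transformation (in the PR's `a1, a2, b1` naming) is

$$
{}_2F_1(a_1, a_2;\, b_1;\, z) = (1 - z)^{\,b_1 - a_1 - a_2}\; {}_2F_1(b_1 - a_1,\; b_1 - a_2;\; b_1;\; z),
$$

which can trade a slowly converging parameter combination for a faster one at the same $z$.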
```
@@ -81,7 +81,7 @@ TEST(ProbDistributionsWishartCholesky, dof_0) {
  MatrixXd L_Y = Y.llt().matrixL();
  MatrixXd L_S = Sigma.llt().matrixL();

  unsigned int dof = std::numeric_limits<double>::quiet_NaN();
```
This change was needed to fix the `wishart_cholesky` prim failure from this earlier Jenkins run. The test was relying on the undefined behavior of a NaN overflowing to 0 when assigned to an int, which apparently didn't happen in that Jenkins run. Let me know if you'd prefer I open a separate PR and issue for this.
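A minimal standalone sketch of the problematic pattern (not the actual test code):

```cpp
#include <limits>
#include <iostream>

int main() {
  double nan = std::numeric_limits<double>::quiet_NaN();
  // Undefined behavior: converting NaN (or any out-of-range value) to an
  // integer type has no defined result in C++, so `dof` may come out as 0
  // on one platform/compiler and an arbitrary value on another.
  unsigned int dof = nan;
  std::cout << dof << "\n";  // not guaranteed to print 0
}
```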
Thanks!! That's fine and I'll look at this on Monday.
Two small optionals but looks good!
```cpp
 * @param[in] max_steps number of steps to take
 */
template <bool ReturnSameT, typename T1, typename T2, typename T3, typename T_z,
          require_not_t<std::integral_constant<bool, ReturnSameT>>* = nullptr>
```
[optional]

```diff
- require_not_t<std::integral_constant<bool, ReturnSameT>>* = nullptr>
+ require_not_t<bool_constant<ReturnSameT>>* = nullptr>
```
```cpp
template <bool ReturnSameT, typename T1, typename T2, typename T3, typename T_z,
          require_t<std::integral_constant<bool, ReturnSameT>>* = nullptr>
auto grad_2F1(const T1& a1, const T2& a2, const T3& b1, const T_z& z,
              double precision = 1e-14, int max_steps = 1e6) {
  return internal::grad_2F1_impl<true, true, true, true>(a1, a2, b1, z,
                                                         precision, max_steps);
}
```
If you set a default value for `ReturnSameT`, like `bool ReturnSameT = false`, do you need the other specialization below this one?
Possibly, but since the default values have to come last in the template list I'd have to make some changes to how the function would get called. I might leave that as a future TODO.
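Worth noting: the trailing-default restriction applies to class, variable, and alias templates; in a *function* template a defaulted parameter may precede deduced ones. A hypothetical sketch of what collapsing the overloads could look like (using `if constexpr` instead of the `require_*` SFINAE above, so not the actual Stan Math approach):

```cpp
#include <tuple>
#include <iostream>

// Hypothetical: one function template with a defaulted leading non-type
// parameter; T1/T2 are still deduced from the call arguments.
template <bool ReturnSameT = false, typename T1, typename T2>
auto grad_2F1_sketch(const T1& a, const T2& b) {
  if constexpr (ReturnSameT) {
    return std::make_tuple(a, b, a + b);  // placeholder "same return type" path
  } else {
    return std::make_tuple(a, b);         // placeholder partials-only path
  }
}

int main() {
  auto t1 = grad_2F1_sketch(1.0, 2.0);        // ReturnSameT defaults to false
  auto t2 = grad_2F1_sketch<true>(1.0, 2.0);  // explicit opt-in
  std::cout << std::get<0>(t1) << " " << std::get<2>(t2) << "\n";
}
```

Call sites wouldn't need to change unless they override the default.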
Summary

This PR adds the `hypergeometric_2F1` function, using the existing `grad_2F1` function to calculate the gradients.
Tests

Minimal `mix` tests are added, as the values are calculated using `boost::math::hypergeometric_pFq` (which is tested through the `hypergeometric_pFq` tests) and the gradients using `grad_2F1`, which already has extensive tests.

Side Effects

N/A
Release notes

Added the `hypergeometric_2F1` function.

Checklist
- Math issue: Hypergeometric function naming #2664
Copyright holder: Andrew Johnson
The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder agrees to license the submitted work under the following licenses:
- Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
- Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
- the basic tests are passing
  - unit tests (`./runTests.py test/unit`)
  - `make test-headers`
  - `make test-math-dependencies`
  - `make doxygen`
  - `make cpplint`
- the code is written in idiomatic C++ and changes are documented in the doxygen
- the new changes are tested