
Expose hypergeometric_2F1 function #2792

Merged
merged 31 commits into from
Jul 27, 2022

Conversation

andrjohns
Collaborator

Summary

This PR adds the hypergeometric_2F1 function, using the existing grad_2f1 function to calculate the gradients.

Tests

Minimal mix tests are added, since the values are calculated using boost::math::hypergeometric_pFq (already covered by the hypergeometric_pFq tests) and the gradients using grad_2F1, which has extensive tests of its own.

Side Effects

N/A

Release notes

Added hypergeometric_2F1 function

Checklist

  • Math issue Hypergeometric function naming #2664

  • Copyright holder: Andrew Johnson

    The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
    - Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
    - Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

  • the basic tests are passing

    • unit tests pass (to run, use: ./runTests.py test/unit)
    • header checks pass, (make test-headers)
    • dependencies checks pass, (make test-math-dependencies)
    • docs build, (make doxygen)
    • code passes the built in C++ standards checks (make cpplint)
  • the code is written in idiomatic C++ and changes are documented in the doxygen

  • the new changes are tested

Collaborator

@SteveBronder SteveBronder left a comment


Couple small comments (and one optional big one)

stan/math/fwd/fun/hypergeometric_2F1.hpp (outdated, resolved)
stan/math/prim/fun/hypergeometric_2F1.hpp (outdated, resolved)
stan/math/prim/fun/hypergeometric_2F1.hpp (resolved)
stan/math/rev/fun/hypergeometric_2F1.hpp (outdated, resolved)
@andrjohns
Collaborator Author

@SteveBronder just flagging that I'm still adding some special-case handling, so don't worry about re-reviewing just yet. Thanks!

@andrjohns
Collaborator Author

@spinkey I've added a bunch of closed-form specialisations/reductions of the hypergeometric_2F1 from the WolframAlpha reference docs, and also added the Euler transformation for when the arguments don't meet the convergence criteria:

$$ _2F_1(a,b;c;z) = \frac{_2F_1(c-a,b;c;\frac{z}{z-1})}{(1-z)^b} $$

which gives a somewhat wider domain of valid inputs. Do you know of any other transformations/reductions/etc. that would be good to have in?
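To illustrate how the Euler transformation widens the usable domain, here is a minimal sketch using a naive power series (the PR itself delegates to boost::math::hypergeometric_pFq; `hyper_2f1_series` and `hyper_2f1` are hypothetical helpers for illustration only):

```cpp
#include <cmath>

// Naive truncated power series for 2F1(a, b; c; z); converges for |z| < 1.
inline double hyper_2f1_series(double a, double b, double c, double z,
                               int max_steps = 100000, double tol = 1e-14) {
  double term = 1.0;
  double sum = 1.0;
  for (int k = 0; k < max_steps; ++k) {
    // Ratio of successive series terms: (a+k)(b+k) / ((c+k)(k+1)) * z
    term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z;
    sum += term;
    if (std::fabs(term) < tol) {
      break;
    }
  }
  return sum;
}

// For z < -1 the raw series diverges, but the Euler transformation maps
// the argument into (0, 1) via z -> z / (z - 1), where it converges.
inline double hyper_2f1(double a, double b, double c, double z) {
  if (z < -1.0) {
    return hyper_2f1_series(c - a, b, c, z / (z - 1.0)) / std::pow(1.0 - z, b);
  }
  return hyper_2f1_series(a, b, c, z);
}
```

For example, with the identity 2F1(1, 1; 2; z) = -log(1 - z) / z, an argument like z = -3 is only reachable through the transformed branch.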

SteveBronder
SteveBronder previously approved these changes Jul 15, 2022
Collaborator

@SteveBronder SteveBronder left a comment


Looks good! I have a few comments for things you can optionally resolve, but overall I think it's clean.

stan/math/prim/fun/grad_2F1.hpp (outdated, resolved)
stan/math/prim/fun/grad_2F1.hpp (outdated, resolved)
stan/math/prim/fun/grad_2F1.hpp (outdated, resolved)
stan/math/prim/fun/grad_inc_beta.hpp (outdated, resolved)
stan/math/prim/fun/hypergeometric_2F1.hpp (outdated, resolved)
stan/math/prim/fun/hypergeometric_2F1.hpp (outdated, resolved)
Collaborator

@SteveBronder SteveBronder left a comment


A couple of questions around grad_2F1_impl. I think we should either return a tuple or use in/out parameters; doing both makes things a bit hard to read. I'm also a little confused about why you need the user-defined template parameter bool to specify whether the return should be based on the partial_type_t or the return_type_t.
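A minimal sketch of the two styles being contrasted (hypothetical placeholder functions with dummy computations, not the actual grad_2F1_impl):

```cpp
#include <tuple>

// Tuple-return style: all gradients come back as one value.
inline std::tuple<double, double, double> grads_by_value(double a, double b,
                                                         double c) {
  return {a + 1.0, b + 1.0, c + 1.0};  // placeholder computations
}

// In/out-parameter style: the caller supplies storage for each gradient.
inline void grads_by_ref(double& g_a, double& g_b, double& g_c, double a,
                         double b, double c) {
  g_a = a + 1.0;
  g_b = b + 1.0;
  g_c = c + 1.0;
}
```

With structured bindings, `auto [g_a, g_b, g_c] = grads_by_value(a, b, c);` reads as clearly as the in/out version, which is why mixing the two styles in a single function is the part that hurts readability.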

stan/math/fwd/fun/grad_inc_beta.hpp (outdated, resolved)
stan/math/fwd/fun/grad_inc_beta.hpp (outdated, resolved)
stan/math/prim/fun/grad_2F1.hpp (outdated, resolved)
stan/math/prim/fun/grad_2F1.hpp (outdated, resolved)
stan/math/prim/fun/grad_2F1.hpp (outdated, resolved)
stan/math/rev/fun/grad_inc_beta.hpp (outdated, resolved)
@andrjohns
Collaborator Author

@SteveBronder This is ready for another look. I've tidied up the tuple/multiple-assignment handling and split out the Euler transform flow, so hopefully the code's a little more readable now.

@@ -81,7 +81,7 @@ TEST(ProbDistributionsWishartCholesky, dof_0) {
MatrixXd L_Y = Y.llt().matrixL();
MatrixXd L_S = Sigma.llt().matrixL();

unsigned int dof = std::numeric_limits<double>::quiet_NaN();
Collaborator Author


This change was needed to fix the wishart_cholesky prim failure from this earlier Jenkins run

This was relying on the undefined behavior of a NaN being converted to 0 when assigned to an unsigned int, which evidently didn't happen in that Jenkins run. Let me know if you'd prefer I open a separate PR and issue for this.
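For context, converting a NaN (or any out-of-range floating-point value) to an integer type is undefined behavior per [conv.fpint] in the C++ standard, so the 0 observed earlier was never guaranteed. A defensive sketch (`to_uint` is a hypothetical helper, not part of this PR):

```cpp
#include <cmath>
#include <limits>

// Validate before converting: NaN and out-of-range values must be handled
// explicitly, since casting them to an integer type is undefined behavior.
inline unsigned int to_uint(double x) {
  if (std::isnan(x) || x < 0.0
      || x >= static_cast<double>(std::numeric_limits<unsigned int>::max())) {
    return 0u;  // or throw, depending on the caller's contract
  }
  return static_cast<unsigned int>(x);  // in-range: well-defined truncation
}
```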

@SteveBronder
Collaborator

Thanks!! That's fine and I'll look at this on Monday

Collaborator

@SteveBronder SteveBronder left a comment


Two small optionals but looks good!

* @param[in] max_steps number of steps to take
*/
template <bool ReturnSameT, typename T1, typename T2, typename T3, typename T_z,
require_not_t<std::integral_constant<bool, ReturnSameT>>* = nullptr>
Collaborator


[optional]

Suggested change
require_not_t<std::integral_constant<bool, ReturnSameT>>* = nullptr>
require_not_t<bool_constant<ReturnSameT>>* = nullptr>

Comment on lines +303 to +309
template <bool ReturnSameT, typename T1, typename T2, typename T3, typename T_z,
require_t<std::integral_constant<bool, ReturnSameT>>* = nullptr>
auto grad_2F1(const T1& a1, const T2& a2, const T3& b1, const T_z& z,
double precision = 1e-14, int max_steps = 1e6) {
return internal::grad_2F1_impl<true, true, true, true>(a1, a2, b1, z,
precision, max_steps);
}
Collaborator


If you set a default value for ReturnSameT, like bool ReturnSameT = false, do you need the other specialization below this one?

Collaborator Author


Possibly, but since default values have to come last in the template list, I'd have to change how the function gets called. I might leave that as a future TODO.
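A sketch of the call-site change being referred to (`grad_sketch` is a hypothetical stand-in, not the real grad_2F1): a defaulted template parameter must follow all non-defaulted ones, so ReturnSameT would have to move behind the deduced type parameters.

```cpp
#include <tuple>

// With the flag last and defaulted, opting in requires spelling out the
// type arguments first, since they can no longer all be deduced.
template <typename T1, typename T2, bool ReturnSameT = false>
auto grad_sketch(const T1& a, const T2& b) {
  if constexpr (ReturnSameT) {
    return std::make_tuple(a, b);  // keep the input types
  } else {
    return std::make_tuple(static_cast<double>(a),
                           static_cast<double>(b));  // promote to double
  }
}
```

`grad_sketch(1, 2)` picks up the default, but the opt-in call becomes the verbose `grad_sketch<int, int, true>(1, 2)` rather than a simple `grad_sketch<true>(1, 2)`, which is why keeping the flag first currently requires the second specialization.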
