Use less memory in multi_normal_cholesky_lpdf
#2983
base: develop
Conversation
```cpp
half = mdivide_left_tri<Eigen::Lower>(L_val, y_val_minus_mu_val)
           .transpose();
scaled_diff = mdivide_right_tri<Eigen::Lower>(half, L_val).transpose();
```
This is the part that concerns me, since it's gone from a single solve each (with a matrix) to size_vec solves (with a vector) for half and scaled_diff each, especially since the single larger solve can be better vectorised with SIMD and other compiler optimisations. Is there enough of a memory saving to justify the extra operations?
Definitely agreed. We do this sort of thing already in math/stan/math/prim/prob/multi_normal_lpdf.hpp, lines 94 to 98 in 0c16b89:

```cpp
lp_type sum_lp_vec(0.0);
for (size_t i = 0; i < size_vec; i++) {
  const auto& y_col = as_column_vector_or_scalar(y_vec[i]);
  const auto& mu_col = as_column_vector_or_scalar(mu_vec[i]);
  sum_lp_vec += trace_inv_quad_form_ldlt(ldlt_Sigma, y_col - mu_col);
```

and in math/stan/math/prim/prob/multi_student_t_lpdf.hpp, lines 124 to 128 in 0c16b89:

```cpp
for (size_t i = 0; i < size_vec; i++) {
  const auto& y_col = as_column_vector_or_scalar(y_vec[i]);
  const auto& mu_col = as_column_vector_or_scalar(mu_vec[i]);
  sum_lp_vec += log1p(trace_inv_quad_form_ldlt(ldlt_Sigma, y_col - mu_col) / nu);
```
I'm really not sure what's best here.
If it's alright with you, I'd prefer not to implement this. The current implementation is likely to scale better to larger inputs, and the changes would also reduce any benefits from OpenCL-accelerated ops.
But also completely happy for you to call someone in for a tie-breaker if you feel strongly about it!
I'm fine with closing it, but I want someone to weigh in on whether we should change the other distributions. I can update the MVN derivatives PR to follow the same approach.
@SteveBronder - as the Chief of Memory Police, what do you think?
I have an M1 Max. Is there someone who could benchmark on Windows and Linux machines?
I got the library set up, but I don't have taskset. Also, how can I set up the script to run the two branches?
You don't need taskset to run the benchmarks; it's only needed if you want to pin them to a single core.
You can add another branch in your benchmarks CMake file like:
```cmake
FetchContent_Declare(
  stanmathalt
  GIT_REPOSITORY https://github.com/stan-dev/math
  GIT_TAG mybranch # replace with the version you want to use
)
FetchContent_GetProperties(stanmathalt)
if(NOT stanmathalt_POPULATED)
  FetchContent_Populate(stanmathalt)
endif()
```
Then you can include it in your executable build via ${stanmathalt_SOURCE_DIR}.
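For instance, wiring the alternate checkout into a benchmark target might look like the following sketch (the target and source file names here are hypothetical; adjust include paths to whatever the math checkout expects):

```cmake
# Sketch: point a benchmark executable at the alternate math branch.
add_executable(mvn_cholesky_bench_alt mvn_cholesky_bench.cpp)
target_include_directories(mvn_cholesky_bench_alt PRIVATE
  ${stanmathalt_SOURCE_DIR})
```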
Then how do I just run benchmarks for this distribution?
There's an example of how to add a benchmark in the README, and an example benchmark folder below. You need to write a little CMake file for compiling the benchmark; you should be able to use that folder as a template:
https://github.com/SteveBronder/stan-perf/tree/main/benchmarks/matmul_aos_soa
```cpp
half = mdivide_left_tri<Eigen::Lower>(L_val, y_val_minus_mu_val)
           .transpose();
scaled_diff = mdivide_right_tri<Eigen::Lower>(half, L_val).transpose();
```
IMO, if the benchmarks are not showing a huge difference, I think it's good.
The main thing I'm thinking of is memory pressure: while this code will execute more instructions per loop, the memory it works over is smaller and more likely to stay in cache.
@andrjohns I think the memory pressure is more of a focus because, either way this is implemented, a machine with wide SIMD instructions should still use them on the vector code.
@spinkney can you run the benchmark comparing two var inputs (and var<Matrix> inputs) 30 or so times and just plot that? IMO that's what we care about the most. I have a little repo below you can use to compare both branches. Also let us know what CPU you are using; I can test this on my M2 Mac as well if you send me a script to do it.
Co-authored-by: Steve Bronder <[email protected]>
@spinkney @SteveBronder Considering the points above, I think this is fine to go in without worrying too much about benchmarking.
I'll approve the changes and leave it up to @SteveBronder to merge if he agrees.
Summary
I've converted the partial matrices to vectors and looped over them to update the derivatives.
Tests
No new tests.
I did run a benchmark against the develop branch and it shows that the speed is roughly the same.
Side Effects
None.
Release notes
Increase the memory efficiency of the Cholesky-parameterized multivariate normal lpdf.
Checklist
Copyright holder: Sean Pinkney
The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
- Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
- Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
- the basic tests are passing
  - ./runTests.py test/unit
  - make test-headers
  - make test-math-dependencies
  - make doxygen
  - make cpplint
- the code is written in idiomatic C++ and changes are documented in the doxygen
- the new changes are tested