
Specialise LinearAlgebra.lmul! for LowerTriangular blockdiagonal matrices #119

Merged: 15 commits from mjp/extend-lmul-lowertriangular into master, Nov 3, 2022

Conversation

@mjp98 (Contributor) commented Nov 2, 2022

closes #116
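For context, the specialisation amounts to applying each triangular diagonal block to the matching segment of the vector, rather than falling back to a generic dense method. A minimal sketch, assuming BlockDiagonals.jl is available (`lmul_blockwise!` is a hypothetical name; the actual method added here specialises `LinearAlgebra.lmul!` in src/linalg.jl):

```julia
using LinearAlgebra
using BlockDiagonals: BlockDiagonal, blocks

# Hypothetical sketch of the blockwise approach: apply each (lower
# triangular) diagonal block to the matching segment of x in place.
function lmul_blockwise!(L::LowerTriangular{<:Any,<:BlockDiagonal}, x::AbstractVector)
    offset = 0
    for b in blocks(parent(L))  # parent(L) recovers the BlockDiagonal
        n = size(b, 1)
        lmul!(LowerTriangular(b), view(x, offset+1:offset+n))
        offset += n
    end
    return x
end

B = BlockDiagonal([rand(2, 2), rand(3, 3)])
x = rand(5)
lmul_blockwise!(LowerTriangular(B), x)  # mutates x segment by segment
```

Working blockwise means each segment only touches its own block, so no dense copy of the whole matrix is materialised.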

@mzgubic (Collaborator) left a comment:

Good work; needs a version bump. Feel free to merge once the comments are addressed.

(Review threads on src/linalg.jl, resolved.)
x = rand(rng, N1 + N2)
y = rand(rng, N2 + N4)

@testset "Lower triangular" begin
@mzgubic (Collaborator):
Do these tests fail on master, or are they just slow? Perhaps we could test for performance as well; see NamedDims.jl, which does a few allocation tests, and maybe add some speed tests too.

(I'm thinking higher level test, i.e. sampling)
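The kind of test suggested here might look like the following sketch (block sizes and bounds are illustrative assumptions, in the spirit of the NamedDims.jl allocation tests mentioned above):

```julia
using LinearAlgebra, Test
using BlockDiagonals: BlockDiagonal

# Illustrative performance-style tests; the bounds are assumptions.
B = BlockDiagonal([rand(3, 3), rand(4, 4)])
L = LowerTriangular(B)
x = rand(7)
lmul!(L, x)  # warm up so compilation is excluded from the measurements
@test @allocated(lmul!(L, x)) < 1024  # allocation stays small
@test @elapsed(lmul!(L, x)) < 1.0     # loose sanity bound on speed
```

The warm-up call matters: without it, `@allocated` and `@elapsed` would also measure compilation of the specialised method.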

@mjp98 (Contributor, author):
The tests pass on master, they just allocate more. So shall I add a test for the case in #116 (and add Distributions to the test deps)?

@mzgubic (Collaborator):
Yeah that sounds good!

@mjp98 (Contributor, author):
I have added an allocation test just on lmul!. I suggest we leave higher level testing to a separate PR and plan it together with a benchmark suite.

(Review thread on test/linalg.jl, resolved.)
@mzgubic (Collaborator) commented Nov 2, 2022

Oh, also, squashing the commits might be a good idea

mjp98 and others added 2 commits on November 2, 2022 (Co-authored-by: Miha Zgubic <[email protected]>)
@codecov bot commented Nov 2, 2022

Codecov Report

Merging #119 (fbe1c75) into master (10f2c9f) will increase coverage by 0.03%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master     #119      +/-   ##
==========================================
+ Coverage   98.85%   98.88%   +0.03%     
==========================================
  Files           5        5              
  Lines         348      358      +10     
==========================================
+ Hits          344      354      +10     
  Misses          4        4              
Impacted Files   Coverage Δ
src/linalg.jl    96.00% <100.00%> (+0.44%) ⬆️


@mjp98 (Contributor, author) commented Nov 3, 2022

Actually, we do need a larger allocation bound: Julia 1.0 on x64 allocates 320 bytes.
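In the test, that relaxation might look like the following sketch (the setup is an assumption; 320 is the figure quoted above, used as a bound rather than expecting zero allocations on older Julia versions):

```julia
using LinearAlgebra, Test
using BlockDiagonals: BlockDiagonal

B = BlockDiagonal([rand(3, 3), rand(4, 4)])
L = LowerTriangular(B)
x = rand(7)
lmul!(L, x)  # warm-up so compilation is excluded
# Julia 1.0 on x64 allocates 320 bytes for this call, so bound at 320
# rather than asserting zero allocations.
@test @allocated(lmul!(L, x)) <= 320
```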

@mjp98 mjp98 merged commit 86c57e3 into master Nov 3, 2022
@mjp98 mjp98 deleted the mjp/extend-lmul-lowertriangular branch November 3, 2022 14:55
Successfully merging this pull request may close these issues.

Sampling from MvNormal with BlockDiagonal covariance is slow
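The linked issue's use case can be sketched as follows (a minimal illustration, assuming Distributions.jl and a PDMats version that accept a BlockDiagonal covariance directly; sampling applies the Cholesky factor, a LowerTriangular wrapping a BlockDiagonal, through lmul!, which is the method this PR specialises):

```julia
using LinearAlgebra, Random
using BlockDiagonals: BlockDiagonal
using Distributions: MvNormal

rng = MersenneTwister(1)
# Block diagonal covariance, e.g. independent groups of variables.
Σ = BlockDiagonal([Matrix(1.0I, 2, 2), Matrix(2.0I, 3, 3)])
d = MvNormal(zeros(5), Σ)
x = rand(rng, d)  # sampling routes through lmul! with the Cholesky factor
```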