
Cholesky factorization of SparseMatrixCSC{BigFloat,Int} fails #374

Closed
Sacha0 opened this issue Oct 15, 2016 · 6 comments
Labels
Hacktoberfest Good for Hacktoberfest participants sparse Sparse arrays

Comments

@Sacha0
Member

Sacha0 commented Oct 15, 2016

cholfact does not fall back to the generic Cholesky implementation for SparseMatrixCSC{BigFloat,Int}:

julia> versioninfo()
Julia Version 0.6.0-dev.979
Commit a54312e (2016-10-15 16:38 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin15.5.0)
  CPU: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.7.1 (ORCJIT, ivybridge)

julia> A = big(sparse(SymTridiagonal(2*ones(4), ones(3))));

julia> cholfact(A)
ERROR: MethodError: Cannot `convert` an object of type SparseMatrixCSC{BigFloat,Int64} to an object of type Base.SparseArrays.CHOLMOD.Sparse{Tv<:Union{Complex{Float64},Float64}}
This may have arisen from a call to the constructor Base.SparseArrays.CHOLMOD.Sparse{Tv<:Union{Complex{Float64},Float64}}(...),
since type constructors fall back to convert methods.
 in #cholfact#9(::Array{Any,1}, ::Function, ::SparseMatrixCSC{BigFloat,Int64}) at ./sparse/cholmod.jl:1333
 in cholfact(::SparseMatrixCSC{BigFloat,Int64}) at ./sparse/cholmod.jl:1333

The signature here might be too permissive. Best!
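In the meantime, converting to dense first appears to sidestep the failure, since the generic dense path (the same one the benchmarks below exercise via `Base.LinAlg.chol!`) handles `BigFloat`. A workaround sketch:

```julia
julia> A = big(sparse(SymTridiagonal(2*ones(4), ones(3))));

julia> Base.LinAlg.chol!(Hermitian(Array(A), :U));  # densify, then use the generic fallback
```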

@Sacha0
Member Author

Sacha0 commented Oct 15, 2016

Slightly more information: The linked cholfact method tries to convert the input array to Base.SparseArrays.CHOLMOD.Sparse, but the CHOLMOD Sparse type requires Tv<:Union{Float64,Complex{Float64}}. (Though I thought CHOLMOD Sparse could also handle Float32/Complex{Float32}s?) Best!

@andreasnoack
Member

So far, I think the view has been that the generic fallback would be so slow for sparse matrices that it wouldn't be an attractive solution. I hope that https://github.com/weijianzhang/Multifrontal.jl will eventually be a candidate for a generic alternative to CHOLMOD. I haven't tried it lately, so I don't know its current status.

Regarding Float32 support in CHOLMOD, see JuliaLang/julia#14076 (comment).

@Sacha0
Member Author

Sacha0 commented Oct 16, 2016

So far, I think it has been the view that the generic fallback would be so slow for sparse matrices that it wouldn't be an attractive solution.

Though I appreciate that argument, legitimate use cases exist where suboptimal runtime is unimportant. For example, runtime often matters little when prototyping or exploring, but whether operations work at all matters a great deal. Another concrete example: when teaching with, demoing, or learning Julia, having operations generally work, regardless of runtime, is valuable. (Both are concrete instances of the abstract situation described in JuliaLang/julia#15568 (comment), second block, beginning 'This is really nicely stated!'; I make a more thorough argument for this position in that comment.) Best!

@Sacha0 Sacha0 added linear algebra sparse Sparse arrays labels Oct 16, 2016
@tkelman

tkelman commented Oct 17, 2016

How would applying the current generic implementation (which is not at all designed with sparsity in mind) to a sparse input compare to first converting the input to dense? This is the type of thing that SciPy has sparse performance warnings for.

@Sacha0
Member Author

Sacha0 commented Oct 19, 2016

How would applying the current generic implementation (which is not at all designed with sparsity in mind) to a sparse input compare to first converting the input to dense?

The performance difference is smaller than one might imagine. For example, below I compare applying the current generic Cholesky implementation to SparseMatrixCSC{BigFloat,Int} and Matrix{BigFloat} forms of (1) a symmetric tridiagonal (central difference) matrix and (2) an arrowhead matrix. Time and space differences are typically a factor of two in the former case and less than that in the latter (which is interesting, as I would have expected the opposite; perhaps I made a mistake?). Best!

julia> VERSION
v"0.6.0-dev.987"

julia> using BenchmarkTools

julia> function relcost(A)
           sparseA = Hermitian(SparseMatrixCSC(A), :U)
           denseA = Hermitian(Array(A), :U)
           sb = @benchmarkable Base.LinAlg.chol!(x) setup=(x = copy($sparseA))
           db = @benchmarkable Base.LinAlg.chol!(x) setup=(x = copy($denseA))
           # can't tune!(sb/db) due to https://github.com/JuliaCI/BenchmarkTools.jl/issues/24
           warmup(sb)
           warmup(db)
           sbr = run(sb)
           dbr = run(db)
           ratio(median(sbr), median(dbr))
       end
relcost (generic function with 1 method)

julia> centraldiff(N) = SymTridiagonal(fill(BigFloat(2), N), fill(BigFloat(-1), N-1))
centraldiff (generic function with 1 method)

julia> arrowhead(N) = (A = N*eye(BigFloat, N); A[2:end, 1] = 1; A[1, 2:end] = 1; A)
arrowhead (generic function with 1 method)

julia> relcost(centraldiff(10))
BenchmarkTools.TrialRatio:
  time:             2.086049883333711
  gctime:           1.0
  memory:           1.9538632573652028
  allocs:           1.9530685920577617
  time tolerance:   5.00%
  memory tolerance: 1.00%

julia> relcost(centraldiff(100))
BenchmarkTools.TrialRatio:
  time:             2.145691787873939
  gctime:           1.996690587140026
  memory:           2.4467762510528224
  allocs:           2.446774772220918
  time tolerance:   5.00%
  memory tolerance: 1.00%

julia> relcost(centraldiff(500))
BenchmarkTools.TrialRatio:
  time:             1.5364927471228857
  gctime:           1.5199039968667396
  memory:           2.4894690116430507
  allocs:           2.489468999306479
  time tolerance:   5.00%
  memory tolerance: 1.00%

julia> relcost(arrowhead(10))
BenchmarkTools.TrialRatio:
  time:             1.3626963460730785
  gctime:           1.0
  memory:           1.1586066333148046
  allocs:           1.0914560770156438
  time tolerance:   5.00%
  memory tolerance: 1.00%

julia> relcost(arrowhead(100))
BenchmarkTools.TrialRatio:
  time:             1.1750865474464056
  gctime:           0.9891396374134706
  memory:           1.0227380105450325
  allocs:           1.0143392671795848
  time tolerance:   5.00%
  memory tolerance: 1.00%

julia> relcost(arrowhead(500))
BenchmarkTools.TrialRatio:
  time:             1.2188040436060177
  gctime:           0.9818427271420406
  memory:           1.004373821851527
  allocs:           1.0029731595057436
  time tolerance:   5.00%
  memory tolerance: 1.00%

@fredrikekre fredrikekre added the Hacktoberfest Good for Hacktoberfest participants label Sep 28, 2017
@ViralBShah
Copy link
Member

This needs a full sparse solver - something that we are unlikely to have in Base. It is best done in an external package.
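For reference, the dense algorithm such a package would generalize is only a few lines. A minimal unpivoted sketch (not Base's implementation) returning upper-triangular `U` with `U'U ≈ A` for a Hermitian positive-definite `A` of any element type supporting `sqrt`:

```julia
# Minimal generic Cholesky sketch (Cholesky–Banachiewicz, column by column).
# Works for any element type with sqrt, +, *, /, e.g. BigFloat.
function generic_chol(A::AbstractMatrix)
    n = size(A, 1)
    U = zeros(eltype(A), n, n)
    for j in 1:n
        # Diagonal entry: subtract contributions of previous rows of column j.
        s = A[j, j]
        for k in 1:j-1
            s -= U[k, j]' * U[k, j]
        end
        U[j, j] = sqrt(s)
        # Off-diagonal entries of row j.
        for i in j+1:n
            t = A[j, i]
            for k in 1:j-1
                t -= U[k, j]' * U[k, i]
            end
            U[j, i] = t / U[j, j]
        end
    end
    return U
end
```

A sparse-aware version would additionally need a fill-reducing ordering and a symbolic factorization, which is exactly the machinery a dedicated external package would provide.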

@KristofferC KristofferC transferred this issue from JuliaLang/julia Nov 26, 2024