
Commit

format
ArnoStrouwen committed Jan 21, 2023
1 parent d38aeb6 commit 3072f82
Showing 12 changed files with 433 additions and 392 deletions.
41 changes: 21 additions & 20 deletions README.md
@@ -7,26 +7,26 @@
[![Build Status](https://github.com/SciML/LinearSolvers.jl/workflows/CI/badge.svg)](https://github.com/SciML/LinearSolvers.jl/actions?query=workflow%3ACI)
[![Build status](https://badge.buildkite.com/74699764ce224514c9632e2750e08f77c6d174c5ba7cd38297.svg?branch=main)](https://buildkite.com/julialang/linearsolve-dot-jl)

[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor%27s%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
[![SciML Code Style](https://img.shields.io/static/v1?label=code%20style&message=SciML&color=9558b2&labelColor=389826)](https://github.com/SciML/SciMLStyle)

Fast implementations of linear solving algorithms in Julia that satisfy the SciML
common interface. LinearSolve.jl makes it easy to define high-level algorithms
that allow swapping out the linear solver in use while maintaining
maximum efficiency. Specifically, LinearSolve.jl includes:

- Fast pure Julia LU factorizations which outperform standard BLAS
- KLU for faster sparse LU factorization on unstructured matrices
- UMFPACK for faster sparse LU factorization on matrices with some repeated structure
- MKLPardiso wrappers for handling many sparse matrices faster than SuiteSparse (KLU, UMFPACK) methods
- Sparspak.jl for sparse LU factorization in pure Julia for generic number types and for non-GPL distributions
- GPU-offloading for large dense matrices
- Wrappers to all of the Krylov implementations (Krylov.jl, IterativeSolvers.jl, KrylovKit.jl) for easy
testing of all of them. LinearSolve.jl handles the API differences, especially with the preconditioner
definitions
- A polyalgorithm that smartly chooses between these methods
- A caching interface which automates caching of symbolic factorizations and numerical factorizations
as optimally as possible
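
As a quick illustration of the sparse factorizations in the list above (a sketch added for this writeup, not part of the README being diffed; `KLUFactorization` is the wrapper name used in the LinearSolve.jl docs), a sparse system can be handed to KLU explicitly:

```julia
using LinearSolve, SparseArrays, LinearAlgebra

n = 100
A = sprand(n, n, 0.05) + I   # random sparse matrix; adding I keeps the diagonal nonzero
b = rand(n)

prob = LinearProblem(A, b)
sol = solve(prob, KLUFactorization())  # sparse LU via KLU instead of the default choice
sol.u
```

Swapping in another solver from the list, such as one of the Krylov wrappers, is the same one-argument change.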

For information on using the package,
[see the stable documentation](https://docs.sciml.ai/LinearSolve/stable/). Use the
@@ -37,8 +37,9 @@ the documentation which contains the unreleased features.

```julia
n = 4
A = rand(n, n)
b1 = rand(n);
b2 = rand(n);
prob = LinearProblem(A, b1)

linsolve = init(prob)
@@ -53,7 +54,7 @@ sol1.u
1.8385599677530706
=#

linsolve = LinearSolve.set_b(linsolve, b2)
sol2 = solve(linsolve)

sol2.u
@@ -65,8 +66,8 @@ sol2.u
-0.4998342686003478
=#

linsolve = LinearSolve.set_b(linsolve, b2)
sol2 = solve(linsolve, IterativeSolversJL_GMRES()) # Switch to GMRES
sol2.u
#=
4-element Vector{Float64}:
@@ -76,8 +77,8 @@ sol2.u
-0.4998342686003478
=#

A2 = rand(n, n)
linsolve = LinearSolve.set_A(linsolve, A2)
sol3 = solve(linsolve)

sol3.u
34 changes: 19 additions & 15 deletions docs/src/advanced/custom.md
@@ -1,4 +1,5 @@
# Passing in a Custom Linear Solver

Julia users are building a wide variety of applications in the SciML ecosystem,
often requiring problem-specific handling of their linear solves. As existing solvers in `LinearSolve.jl` may not
be optimally suited for novel applications, it is essential for the linear solve
@@ -10,7 +11,7 @@ user can pass in their custom linear solve function, say `my_linsolve`, to
```@example advanced1
using LinearSolve, LinearAlgebra
function my_linsolve(A, b, u, p, newA, Pl, Pr, solverdata; verbose = true, kwargs...)
if verbose == true
println("solving Ax=b")
end
@@ -19,34 +20,37 @@ function my_linsolve(A,b,u,p,newA,Pl,Pr,solverdata;verbose=true, kwargs...)
end
prob = LinearProblem(Diagonal(rand(4)), rand(4))
alg = LinearSolveFunction(my_linsolve)
sol = solve(prob, alg)
sol.u
```

The inputs to the function are as follows:

- `A`, the linear operator
- `b`, the right-hand-side
  - `u`, the solution initialized as `zero(b)`
- `p`, a set of parameters
- `newA`, a `Bool` which is `true` if `A` has been modified since last solve
- `Pl`, left-preconditioner
- `Pr`, right-preconditioner
- `solverdata`, solver cache set to `nothing` if solver hasn't been initialized
- `kwargs`, standard SciML keyword arguments such as `verbose`, `maxiters`, `abstol`, `reltol`

The function `my_linsolve` must accept the above specified arguments, and return
the solution, `u`. As memory for `u` is already allocated, the user may choose
to modify `u` in place as follows:

```@example advanced1
function my_linsolve!(A, b, u, p, newA, Pl, Pr, solverdata; verbose = true, kwargs...)
if verbose == true
println("solving Ax=b")
end
u .= A \ b # in place
return u
end
alg = LinearSolveFunction(my_linsolve!)
sol = solve(prob, alg)
sol.u
```
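
A brief usage sketch (not part of the diff): since the argument list above includes the standard SciML keyword arguments, keywords given to `solve` are expected to reach the custom function, so its `verbose` flag can be toggled from the call site. The forwarding behavior is assumed from that list rather than confirmed here.

```julia
# Reuses `prob` and `my_linsolve!` from the example above.
alg = LinearSolveFunction(my_linsolve!)
sol = solve(prob, alg; verbose = false)  # no "solving Ax=b" printout expected
sol.u
```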
123 changes: 63 additions & 60 deletions docs/src/advanced/developing.md
@@ -1,60 +1,63 @@
# Developing New Linear Solvers

Developing new or custom linear solvers for the SciML interface can be done in
one of two ways:

1. You can either create a completely new set of dispatches for `init` and `solve`.
2. You can extend LinearSolve.jl's internal mechanisms.

For developer ease, we highly recommend (2) as that will automatically make the
caching API work. Thus, this is the documentation for how to do that.

## Developing New Linear Solvers with LinearSolve.jl Primitives

Let's create a new wrapper for a simple LU-factorization which uses only the
basic machinery. A simplified version is:

```julia
struct MyLUFactorization{P} <: SciMLBase.AbstractLinearAlgorithm end

function init_cacheval(alg::MyLUFactorization, A, b, u, Pl, Pr, maxiters, abstol, reltol,
verbose)
lu!(convert(AbstractMatrix, A))
end

function SciMLBase.solve(cache::LinearCache, alg::MyLUFactorization; kwargs...)
if cache.isfresh
        A = convert(AbstractMatrix, cache.A)
fact = lu!(A)
cache = set_cacheval(cache, fact)
end
y = ldiv!(cache.u, cache.cacheval, cache.b)
SciMLBase.build_linear_solution(alg, y, nothing, cache)
end
```

The way this works is as follows. LinearSolve.jl has a `LinearCache` that everything
shares (this is what gives most of the ease of use). However, many algorithms
need to cache their own things, and so there's one value `cacheval` that is
for the algorithms to modify. The function:

```julia
init_cacheval(alg::MyLUFactorization, A, b, u, Pl, Pr, maxiters, abstol, reltol, verbose)
```

is what is called at `init` time to create the first `cacheval`. Note that this
should match the type of the cache later used in `solve` as many algorithms, like
those in OrdinaryDiffEq.jl, expect type-groundedness in the linear solver definitions.
While there are cheaper ways to obtain this type for LU factorizations (specifically,
`ArrayInterfaceCore.lu_instance(A)`), for a demonstration, this just performs an
LU-factorization to get an `LU{T, Matrix{T}}` which it puts into the `cacheval`
so it is typed for future use.

After the `init_cacheval`, the only thing left to do is to define
`SciMLBase.solve(cache::LinearCache, alg::MyLUFactorization)`. Many algorithms
may use a lazy matrix-free representation of the operator `A`. Thus, if the
algorithm requires a concrete matrix, like LU-factorization does, the algorithm
should `convert(AbstractMatrix,cache.A)`. The flag `cache.isfresh` states whether
`A` has changed since the last `solve`. Since we only need to factorize when
`A` is new, the factorization part of the algorithm is done inside an `if cache.isfresh` block.
`cache = set_cacheval(cache, fact)` puts the new factorization into the cache,
so it's updated for future solves. Then `y = ldiv!(cache.u, cache.cacheval, cache.b)`
performs the solve and a linear solution is returned via
`SciMLBase.build_linear_solution(alg,y,nothing,cache)`.
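
To round this out, a hedged usage sketch (not part of the diff) of the custom algorithm through the common interface. It assumes the two definitions above are picked up by LinearSolve.jl's `init`/`solve` machinery as the text describes; note that `MyLUFactorization` was declared with an unused type parameter, so one must be supplied when constructing it.

```julia
using LinearSolve, LinearAlgebra

A = rand(4, 4)
b = rand(4)
prob = LinearProblem(A, b)

alg = MyLUFactorization{Nothing}()  # the parameter P is unused, so any choice works
sol = solve(prob, alg)              # init builds the cached LU, solve applies ldiv!
sol.u
```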
18 changes: 9 additions & 9 deletions docs/src/basics/CachingAPI.md
@@ -1,9 +1,9 @@
# Caching Interface API Functions

```@docs
LinearSolve.set_A
LinearSolve.set_b
LinearSolve.set_u
LinearSolve.set_p
LinearSolve.set_prec
```
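
For orientation, a short sketch (not part of the diff) of these functions in use, mirroring the caching workflow shown in the README:

```julia
using LinearSolve

A = rand(4, 4)
b = rand(4)
linsolve = init(LinearProblem(A, b))
sol1 = solve(linsolve)

linsolve = LinearSolve.set_b(linsolve, rand(4))     # swap the right-hand side
sol2 = solve(linsolve)                              # reuses the cached factorization of A

linsolve = LinearSolve.set_A(linsolve, rand(4, 4))  # swap the operator
sol3 = solve(linsolve)                              # triggers a fresh factorization
```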
50 changes: 25 additions & 25 deletions docs/src/basics/FAQ.md
@@ -14,22 +14,22 @@ efficiency and ability to choose solvers.
This is addressed in the [JuliaCon 2022 video](https://youtu.be/JWI34_w-yYw?t=182). This happens in
a few ways:

 1. The Fortran/C code that NumPy/SciPy uses is actually slow. It's [OpenBLAS](https://github.com/xianyi/OpenBLAS),
a library developed in part by the Julia Lab back in 2012 as a fast open source BLAS implementation. Many
open source environments now use this build, including many R distributions. However, the Julia Lab has greatly
improved its ability to generate optimized SIMD in platform-specific ways. This, and improved multithreading support
(OpenBLAS's multithreading is rather slow), has led to pure Julia-based BLAS implementations which the lab now
    works on. This includes [RecursiveFactorization.jl](https://github.com/JuliaLinearAlgebra/RecursiveFactorization.jl)
which generally outperforms OpenBLAS by 2x-10x depending on the platform. It even outperforms MKL for small matrices
(<100). LinearSolve.jl uses RecursiveFactorization.jl by default sometimes, but switches to BLAS when it would be
faster (in a platform and matrix-specific way).
2. Standard approaches to handling linear solves re-allocate the pivoting vector each time. This leads to GC pauses that
can slow down calculations. LinearSolve.jl has proper caches for fully preallocated no-GC workflows.
3. LinearSolve.jl makes many other optimizations, like factorization reuse and symbolic factorization reuse, automatic.
Many of these optimizations are not even possible from the high-level APIs of things like Python's major libraries and MATLAB.
4. LinearSolve.jl has a much more extensive set of sparse matrix solvers, which is why you see a major difference (2x-10x) for sparse
matrices. Which sparse matrix solver between KLU, UMFPACK, Pardiso, etc. is optimal depends a lot on matrix sizes, sparsity patterns,
and threading overheads. LinearSolve.jl's heuristics handle these kinds of issues.
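
As a sketch tied to point 1 (added here, not part of the original FAQ; the algorithm names are taken from the LinearSolve.jl docs), the dense LU backend can also be chosen explicitly instead of relying on the heuristic:

```julia
using LinearSolve

A = rand(100, 100)
b = rand(100)
prob = LinearProblem(A, b)

sol_rf   = solve(prob, RFLUFactorization())  # pure-Julia RecursiveFactorization.jl LU
sol_blas = solve(prob, LUFactorization())    # LAPACK/BLAS-backed LU
sol_rf.u ≈ sol_blas.u                        # both solve the same system
```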

## How do I use IterativeSolvers solvers with a weighted tolerance vector?

@@ -41,16 +41,15 @@ hack the system via the following formulation:
using LinearSolve, LinearAlgebra
n = 2
A = rand(n, n)
b = rand(n)
weights = [1e-1, 1]
Pl = LinearSolve.InvPreconditioner(Diagonal(weights))
Pr = Diagonal(weights)
prob = LinearProblem(A, b)
sol = solve(prob, IterativeSolversJL_GMRES(), Pl = Pl, Pr = Pr)
sol.u
```
@@ -63,14 +62,15 @@ of the weights as follows:
using LinearSolve, LinearAlgebra
n = 4
A = rand(n, n)
b = rand(n)
weights = rand(n)
realprec = lu(rand(n, n)) # some random preconditioner
Pl = LinearSolve.ComposePreconditioner(LinearSolve.InvPreconditioner(Diagonal(weights)),
                                       realprec)
Pr = Diagonal(weights)
prob = LinearProblem(A, b)
sol = solve(prob, IterativeSolversJL_GMRES(), Pl = Pl, Pr = Pr)
```
2 changes: 1 addition & 1 deletion docs/src/basics/LinearProblem.md
@@ -2,4 +2,4 @@

```@docs
LinearProblem
```
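
A minimal usage sketch (not part of the diff): constructing a `LinearProblem` and solving it with the default polyalgorithm.

```julia
using LinearSolve

A = rand(4, 4)
b = rand(4)
prob = LinearProblem(A, b)
sol = solve(prob)  # the default polyalgorithm picks a method automatically
sol.u
```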