[doc] Update sparse matrix document #6719

Merged 2 commits on Nov 30, 2022
docs/lang/articles/math/sparse_matrix.md (27 changes: 14 additions & 13 deletions)
@@ -4,25 +4,25 @@ sidebar_position: 2

# Sparse Matrix

Sparse matrices are frequently used when solving linear systems in science and engineering. Taichi provides programmers with useful APIs for sparse matrices.
Sparse matrices are frequently involved in solving linear systems in science and engineering. Taichi provides useful APIs for sparse matrices on the CPU and CUDA backends.

To use the sparse matrix in taichi programs, you should follow these three steps:
To use sparse matrices in Taichi programs, follow these three steps:

1. Create a `builder` using `ti.linalg.SparseMatrixBuilder()`.
2. Fill the `builder` with your matrices' data.
3. Create sparse matrices from the `builder`.
2. Call `ti.kernel` to fill the `builder` with your matrices' data.
3. Build sparse matrices from the `builder`, as sketched below.
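
A minimal sketch of these three steps, assuming a small diagonal matrix; the kernel body and the names `K` and `fill` are illustrative:

```python
import taichi as ti

ti.init(arch=ti.cpu)  # or ti.cuda

n = 4

# Step 1: create a builder with room for up to 100 triplets.
K = ti.linalg.SparseMatrixBuilder(n, n, max_num_triplets=100)

# Step 2: fill the builder inside a kernel; `+=` accumulates a (row, col, value) triplet.
@ti.kernel
def fill(A: ti.types.sparse_matrix_builder()):
    for i in range(n):
        A[i, i] += 2.0

fill(K)

# Step 3: build the sparse matrix from the accumulated triplets.
A = K.build()
print(A)
```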

:::caution WARNING
The sparse matrix is still under implementation. There are some limitations:
The sparse matrix feature is still under development. There are some limitations:
- The sparse matrix data type on the CPU backend only supports `f32` and `f64`.
- The sparse matrix data type on the CUDA backend only supports `f32`.

- Only the CPU backend is supported.
- The data type of sparse matrix is float32.
- The storage format is column-major
:::
Here's an example:
```python
import taichi as ti
ti.init(arch=ti.x64) # only CPU backend is supported for now
arch = ti.cpu # or ti.cuda
ti.init(arch=arch)

n = 4
# step 1: create sparse matrix builder
@@ -135,22 +135,23 @@ print(f">>>> Element Access: A[0,0] = {A[0,0]}")
## Sparse linear solver
You may want to solve linear systems using sparse matrices.
The following steps could help:
1. Create a `solver` using `ti.linalg.SparseSolver(solver_type, ordering)`. Currently, the sparse solver supports `LLT`, `LDLT` and `LU` factorization types, and orderings including `AMD`, `COLAMD`.
1. Create a `solver` using `ti.linalg.SparseSolver(solver_type, ordering)`. Currently, the factorization types supported on CPU backends are `LLT`, `LDLT`, and `LU`, and supported orderings include `AMD` and `COLAMD`. The sparse solver on CUDA supports the `LLT` factorization type only.
2. Analyze and factorize the sparse matrix you want to solve using `solver.analyze_pattern(sparse_matrix)` and `solver.factorize(sparse_matrix)`.
3. Call `solver.solve(b)` to get your solutions, where `b` is a numpy array or taichi filed representing the right-hand side of the linear system.
3. Call `x = solver.solve(b)`, where `x` is the solution and `b` is the right-hand side of the linear system. On CPU backends, `x` and `b` can be NumPy arrays, Taichi Ndarrays, or Taichi fields. On the CUDA backend, `x` and `b` *must* be Taichi Ndarrays.
4. Call `solver.info()` to check if the solving process succeeds.

Here's a full example.

```python
import taichi as ti

ti.init(arch=ti.x64)
arch = ti.cpu # or ti.cuda
ti.init(arch=arch)

n = 4

K = ti.linalg.SparseMatrixBuilder(n, n, max_num_triplets=100)
b = ti.field(ti.f32, shape=n)
b = ti.ndarray(ti.f32, shape=n)

@ti.kernel
def fill(A: ti.types.sparse_matrix_builder(), b: ti.types.ndarray(), interval: ti.i32):
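    # Illustrative kernel body (a sketch): assemble a simple diagonal system A x = b.
    for i in range(n):
        A[i, i] += 2.0
        if i % interval == 0:
            b[i] += 1.0


# Fill the builder, then build the sparse matrix; the interval value 3 is illustrative.
fill(K, b, 3)
A = K.build()

# Create the solver, then analyze, factorize, solve, and check the status (steps 1-4 above).
solver = ti.linalg.SparseSolver(solver_type="LLT")
solver.analyze_pattern(A)
solver.factorize(A)
x = solver.solve(b)
print(f">>>> Solution: {x}")
print(f">>>> Solver succeeded: {solver.info()}")
```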