[Lang] [test] Copy-free interaction between Taichi and PaddlePaddle #4886

Merged · 22 commits · May 4, 2022
e183718
Implement has_paddle(), to_paddle_type() and update to_taichi_type in…
0xzhang Mar 30, 2022
c1ea8f9
Implement get_paddle_callbacks() and update get_function_body(), matc…
0xzhang Mar 30, 2022
6a57eab
Add test test_io_devices() in tests\python\test_torch_io.py
0xzhang Apr 28, 2022
6ba45ad
Implement callback for CPU-GPU/GPU-CPU copy between Taichi and Paddle
0xzhang Apr 28, 2022
b99b4cd
Partially implement to_torch()/from_torch() according to PyTorch in T…
0xzhang Apr 29, 2022
ff2c825
Fix paddle.Tensor's backend check
0xzhang Apr 29, 2022
d76c298
Update tests for from_paddle()/to_paddle()
0xzhang Apr 29, 2022
e9c4e05
[doc] Update Global settings with TI_ENABLE_PADDLE
0xzhang Apr 29, 2022
c405233
Fix to avoid fail when only import paddle
0xzhang Apr 29, 2022
3760c63
[test] Fix the expected list alphabetically
0xzhang Apr 29, 2022
95d7c23
[doc] Add info about paddle.Tensor
0xzhang Apr 29, 2022
25066ff
[ci] Try to test paddle's GPU version
0xzhang Apr 30, 2022
63d9b9f
Fix the usage of paddle.ones
0xzhang Apr 30, 2022
2af5f98
Fix f16 tests for paddle
0xzhang Apr 30, 2022
f2c4cb8
Fixed supported archs for tests of paddle
0xzhang Apr 30, 2022
5a06849
Use 1 thread run tests for torch and paddle
0xzhang Apr 30, 2022
ed56ee7
Fix linux test
0xzhang Apr 30, 2022
618e44f
Fix windows test
0xzhang May 1, 2022
537157c
Unify the name to Paddle
0xzhang May 1, 2022
527720a
Add tests for paddle
0xzhang May 1, 2022
db7251d
Replace usage of device to place for paddle
0xzhang May 1, 2022
34b1fef
Paddle's GPU develop package on Linux import error
0xzhang May 1, 2022
26 changes: 18 additions & 8 deletions .github/workflows/scripts/unix_test.sh
Original file line number Diff line number Diff line change
@@ -22,11 +22,16 @@ python3 -m pip install dist/*.whl
if [ -z "$GPU_TEST" ]; then
python3 -m pip install -r requirements_test.txt
python3 -m pip install "torch; python_version < '3.10'"
# Paddle's develop package doesn't support CI's macOS machine at present
if [[ $OSTYPE == "linux-"* ]]; then
python3 -m pip install "paddlepaddle==0.0.0; python_version < '3.10'" -f https://www.paddlepaddle.org.cn/whl/linux/cpu-mkl/develop.html
fi
else
## Only GPU machine uses system python.
export PATH=$PATH:$HOME/.local/bin
# pip will skip packages if already installed
python3 -m pip install -r requirements_test.txt
# Importing Paddle's develop GPU package raises an `Illegal Instruction` error.
fi
ti diagnose
ti changelog
@@ -38,27 +43,32 @@ TI_LIB_DIR="$TI_PATH/_lib/runtime" ./build/taichi_cpp_tests
if [ -z "$GPU_TEST" ]; then
if [[ $PLATFORM == *"m1"* ]]; then
# Split per arch to avoid flaky test
python3 tests/run_tests.py -vr2 -t4 -k "not torch" -a cpu
python3 tests/run_tests.py -vr2 -t4 -k "not torch and not paddle" -a cpu
# Run metal and vulkan separately so that they don't use M1 chip simultaneously.
python3 tests/run_tests.py -vr2 -t4 -k "not torch" -a vulkan
python3 tests/run_tests.py -vr2 -t2 -k "not torch" -a metal
python3 tests/run_tests.py -vr2 -t4 -k "not torch and not paddle" -a vulkan
python3 tests/run_tests.py -vr2 -t2 -k "not torch and not paddle" -a metal
python3 tests/run_tests.py -vr2 -t1 -k "torch" -a "$TI_WANTED_ARCHS"
else
python3 tests/run_tests.py -vr2 -t4 -a "$TI_WANTED_ARCHS"
# Fail fast, give priority to the error-prone tests
if [[ $OSTYPE == "linux-"* ]]; then
python3 tests/run_tests.py -vr2 -t1 -k "paddle" -a "$TI_WANTED_ARCHS"
fi
python3 tests/run_tests.py -vr2 -t4 -k "not paddle" -a "$TI_WANTED_ARCHS"
fi
else
# Split per arch to increase parallelism for linux GPU tests
if [[ $TI_WANTED_ARCHS == *"cuda"* ]]; then
python3 tests/run_tests.py -vr2 -t4 -k "not torch" -a cuda
python3 tests/run_tests.py -vr2 -t4 -k "not torch and not paddle" -a cuda
fi
if [[ $TI_WANTED_ARCHS == *"cpu"* ]]; then
python3 tests/run_tests.py -vr2 -t8 -k "not torch" -a cpu
python3 tests/run_tests.py -vr2 -t8 -k "not torch and not paddle" -a cpu
fi
if [[ $TI_WANTED_ARCHS == *"vulkan"* ]]; then
python3 tests/run_tests.py -vr2 -t8 -k "not torch" -a vulkan
python3 tests/run_tests.py -vr2 -t8 -k "not torch and not paddle" -a vulkan
fi
if [[ $TI_WANTED_ARCHS == *"opengl"* ]]; then
python3 tests/run_tests.py -vr2 -t4 -k "not torch" -a opengl
python3 tests/run_tests.py -vr2 -t4 -k "not torch and not paddle" -a opengl
fi
python3 tests/run_tests.py -vr2 -t1 -k "torch" -a "$TI_WANTED_ARCHS"
# Paddle's paddle.fluid.core.Tensor._ptr() is only available on the develop branch, and the CUDA version on Linux raises an `Illegal Instruction` error
fi
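The `-k` expressions in the test commands above use pytest-style keyword filtering to route torch/paddle tests to a single-threaded run while everything else runs in parallel. A minimal sketch of how such an expression partitions test names (a hypothetical helper for illustration, not Taichi's actual runner logic):

```python
# Hypothetical sketch of pytest-style "-k" keyword filtering, as used by
# run_tests.py above to split torch/paddle tests from the rest.
def matches(name: str, expr: str) -> bool:
    # Supports the simple forms used in this PR: "torch",
    # "not torch and not paddle", "ndarray or torch or paddle".
    env = {word: (word in name) for word in ("torch", "paddle", "ndarray")}
    return eval(expr, {"__builtins__": {}}, env)

tests = ["test_from_torch", "test_from_paddle", "test_field"]
selected = [t for t in tests if matches(t, "not torch and not paddle")]
# selected contains only "test_field"
```

Real pytest `-k` matching is richer (substring match against the full test ID, parentheses, etc.); this sketch only covers the boolean forms that appear in these scripts.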
12 changes: 8 additions & 4 deletions .github/workflows/scripts/win_test.ps1
@@ -9,20 +9,24 @@ pip install -r requirements_test.txt
# TODO relax this when torch supports 3.10
if ("$env:TI_WANTED_ARCHS".Contains("cuda")) {
pip install "torch==1.10.1+cu113; python_version < '3.10'" -f https://download.pytorch.org/whl/cu113/torch_stable.html
pip install "paddlepaddle-gpu==0.0.0.post112; python_version < '3.10'" -f https://www.paddlepaddle.org.cn/whl/windows/gpu/develop.html
} else {
pip install "torch; python_version < '3.10'"
pip install "paddlepaddle==0.0.0; python_version < '3.10'" -f https://www.paddlepaddle.org.cn/whl/windows/cpu-mkl-avx/develop.html
}
# Fail fast, give priority to the error-prone tests
python tests/run_tests.py -vr2 -t1 -k "paddle" -a "$env:TI_WANTED_ARCHS"
if ("$env:TI_WANTED_ARCHS".Contains("cuda")) {
python tests/run_tests.py -vr2 -t4 -k "not torch" -a cuda
python tests/run_tests.py -vr2 -t4 -k "not torch and not paddle" -a cuda
if (-not $?) { exit 1 }
}
if ("$env:TI_WANTED_ARCHS".Contains("cpu")) {
python tests/run_tests.py -vr2 -t6 -k "not torch" -a cpu
python tests/run_tests.py -vr2 -t6 -k "not torch and not paddle" -a cpu
if (-not $?) { exit 1 }
}
if ("$env:TI_WANTED_ARCHS".Contains("opengl")) {
python tests/run_tests.py -vr2 -t4 -k "not torch" -a opengl
python tests/run_tests.py -vr2 -t4 -k "not torch and not paddle" -a opengl
if (-not $?) { exit 1 }
}
python tests/run_tests.py -vr2 -t2 -k "torch" -a "$env:TI_WANTED_ARCHS"
python tests/run_tests.py -vr2 -t1 -k "torch" -a "$env:TI_WANTED_ARCHS"
if (-not $?) { exit 1 }
3 changes: 2 additions & 1 deletion ci/scripts/ubuntu_build_test.sh
@@ -31,5 +31,6 @@ export TI_IN_DOCKER=true

# Run tests
ti diagnose
python tests/run_tests.py -vr2 -t2 -k "not ndarray and not torch"
# Paddle's paddle.fluid.core.Tensor._ptr() is only available on the develop branch, and the CUDA version on Linux raises an `Illegal Instruction` error
python tests/run_tests.py -vr2 -t2 -k "not ndarray and not torch and not paddle"
python tests/run_tests.py -vr2 -t1 -k "ndarray or torch"
6 changes: 4 additions & 2 deletions ci/scripts/ubuntu_build_test_cpu.sh
@@ -22,12 +22,14 @@ git clone --recursive https://github.com/taichi-dev/taichi --branch=master
cd taichi
git checkout $SHA
python3 -m pip install -r requirements_dev.txt -i http://repo.taichigraphics.com/repository/pypi/simple --trusted-host repo.taichigraphics.com
# Paddle's paddle.fluid.core.Tensor._ptr() is only available on the develop branch
python3 -m pip install paddlepaddle==0.0.0 -f https://www.paddlepaddle.org.cn/whl/linux/cpu-mkl/develop.html
TAICHI_CMAKE_ARGS="-DTI_WITH_VULKAN:BOOL=OFF -DTI_WITH_CUDA:BOOL=OFF -DTI_WITH_OPENGL:BOOL=OFF" python3 setup.py install

# Add Docker specific ENV
export TI_IN_DOCKER=true

# Run tests
ti diagnose
python tests/run_tests.py -vr2 -t2 -k "not ndarray and not torch"
python tests/run_tests.py -vr2 -t1 -k "ndarray or torch"
python tests/run_tests.py -vr2 -t2 -k "not ndarray and not torch and not paddle"
python tests/run_tests.py -vr2 -t1 -k "ndarray or torch or paddle"
2 changes: 1 addition & 1 deletion ci/windows/win_build_test.ps1
@@ -59,5 +59,5 @@ python setup.py develop
WriteInfo("Build finished")

WriteInfo("Testing Taichi")
python tests/run_tests.py -vr2 -t2 -k "not torch" -a cpu
python tests/run_tests.py -vr2 -t2 -k "not torch and not paddle" -a cpu
WriteInfo("Test finished")
63 changes: 36 additions & 27 deletions docs/lang/articles/basic/external.md
@@ -4,17 +4,12 @@ sidebar_position: 5

# Interacting with external arrays

Although Taichi fields are mainly used in Taichi-scope, in some cases
efficiently manipulating Taichi field data in Python-scope could also be
Although Taichi fields are mainly used in Taichi-scope, in some cases efficiently manipulating Taichi field data in Python-scope could also be
helpful.

We provide various interfaces to copy the data between Taichi fields and
external arrays. External arrays refer to NumPy arrays or PyTorch tensors.
Let's take a look at the most common usage: interacting with NumPy arrays.
We provide various interfaces to copy the data between Taichi fields and external arrays. External arrays refer to NumPy arrays, PyTorch tensors, or Paddle tensors. Let's take a look at the most common usage: interacting with NumPy arrays.

**Export data in Taichi fields to NumPy arrays** via `to_numpy()`. This
allows us to export computation results to other Python packages that
support NumPy, e.g. `matplotlib`.
**Export data in Taichi fields to NumPy arrays** via `to_numpy()`. This allows us to export computation results to other Python packages that support NumPy, e.g. `matplotlib`.

```python {8}
@ti.kernel
@@ -28,8 +23,7 @@ x_np = x.to_numpy()
print(x_np) # np.array([0, 2, 4, 6])
```

**Import data from NumPy arrays to Taichi fields** via `from_numpy()`.
This allows us to initialize Taichi fields via NumPy arrays:
**Import data from NumPy arrays to Taichi fields** via `from_numpy()`. This allows us to initialize Taichi fields via NumPy arrays:

```python {3}
x = ti.field(ti.f32, 4)
@@ -59,21 +53,42 @@ print(x[1]) # 7
print(x[2]) # 3
print(x[3]) # 5
```
Taichi fields can also be **imported from and exported to Paddle tensors**:

```python
@ti.kernel
def my_kernel():
for i in x:
x[i] = i * 2

x = ti.field(ti.f32, 4)
my_kernel()
x_paddle = x.to_paddle()
print(x_paddle) # paddle.Tensor([0, 2, 4, 6])

x.from_paddle(paddle.to_tensor([1, 7, 3, 5]))
print(x[0]) # 1
print(x[1]) # 7
print(x[2]) # 3
print(x[3]) # 5
```

When calling `to_torch()`, specify the PyTorch device where the Taichi field is exported using the `device` argument:

```python
x = ti.field(ti.f32, 4)
x.fill(3.0)
x_torch = x.to_torch(device="cuda:0")
print(x_torch.device) # device(type='cuda', index=0)
```

For Paddle, specify the device with `paddle.CPUPlace()` or `paddle.CUDAPlace(n)`, where `n` is an optional device ID (default 0).
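The device-selection APIs differ in shape: PyTorch takes a string like `"cuda:0"`, while Paddle takes a place object. The correspondence can be sketched with a small hypothetical helper (for illustration only — not part of Taichi or Paddle; it just returns the name of the Paddle place constructor a caller would use):

```python
# Hypothetical helper illustrating how PyTorch-style device strings
# correspond to Paddle place constructors.
def torch_device_to_paddle_place(device: str) -> str:
    if device == "cpu":
        return "paddle.CPUPlace()"
    if device.startswith("cuda"):
        # "cuda" or "cuda:<n>"; Paddle's device ID defaults to 0.
        _, _, idx = device.partition(":")
        return f"paddle.CUDAPlace({idx or 0})"
    raise ValueError(f"unsupported device: {device}")
```

So where PyTorch code writes `x.to_torch(device="cuda:0")`, the Paddle equivalent would pass `place=paddle.CUDAPlace(0)` to `to_paddle()`.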

## External array shapes

Shapes of Taichi fields and those of corresponding NumPy arrays or PyTorch tensors are closely
connected via the following rules:
Shapes of Taichi fields and those of corresponding NumPy arrays, PyTorch tensors or Paddle tensors are closely connected via the following rules:

- For scalar fields, **the shape of NumPy array or PyTorch tensor equals the shape of
the Taichi field**:
- For scalar fields, **the shape of the NumPy array, PyTorch tensor or Paddle tensor equals the shape of the Taichi field**:

```python
field = ti.field(ti.i32, shape=(256, 512))
@@ -85,8 +100,7 @@ array.shape # (256, 512)
field.from_numpy(array) # the input array must be of shape (256, 512)
```

- For vector fields, if the vector is `n`-D, then **the shape of NumPy
array or Pytorch tensor should be** `(*field_shape, vector_n)`:
- For vector fields, if the vector is `n`-D, then **the shape of the NumPy array, PyTorch tensor or Paddle tensor should be** `(*field_shape, vector_n)`:

```python
field = ti.Vector.field(3, ti.i32, shape=(256, 512))
@@ -99,8 +113,7 @@ array.shape # (256, 512, 3)
field.from_numpy(array) # the input array must be of shape (256, 512, 3)
```

- For matrix fields, if the matrix is `n`-by-`m` (`n x m`), then **the shape of NumPy
array or Pytorch Tensor should be** `(*field_shape, matrix_n, matrix_m)`:
- For matrix fields, if the matrix is `n`-by-`m` (`n x m`), then **the shape of the NumPy array, PyTorch tensor or Paddle tensor should be** `(*field_shape, matrix_n, matrix_m)`:

```python
field = ti.Matrix.field(3, 4, ti.i32, shape=(256, 512))
@@ -114,8 +127,7 @@ array.shape # (256, 512, 3, 4)
field.from_numpy(array) # the input array must be of shape (256, 512, 3, 4)
```

- For struct fields, the external array will be exported as **a dictionary of NumPy arrays or PyTorch tensors** with keys
being struct member names and values being struct member arrays. Nested structs will be exported as nested dictionaries:
- For struct fields, the external array will be exported as **a dictionary of NumPy arrays, PyTorch tensors or Paddle tensors** with keys being struct member names and values being struct member arrays. Nested structs will be exported as nested dictionaries:

```python
field = ti.Struct.field({'a': ti.i32, 'b': ti.types.vector(float, 3)}, shape=(256, 512))
@@ -131,8 +143,7 @@ field.from_numpy(array_dict) # the input array must have the same keys as the fi

## Using external arrays as Taichi kernel arguments

Use type hint `ti.types.ndarray()` to pass external arrays as kernel
arguments. For example:
Use type hint `ti.types.ndarray()` to pass external arrays as kernel arguments. For example:

```python {10}
import taichi as ti
@@ -163,8 +174,7 @@ for i in range(n):
assert a[i, j] == i * j + i + j
```

Note that the elements in an external array must be indexed using a single square bracket.
This contrasts with a Taichi vector or matrix field where field and matrix indices are indexed separately:
Note that the elements in an external array must be indexed using a single square bracket. This contrasts with a Taichi vector or matrix field where field and matrix indices are indexed separately:
```python
@ti.kernel
def copy_vector(x: ti.template(), y: ti.types.ndarray()):
@@ -174,9 +184,8 @@ def copy_vector(x: ti.template(), y: ti.types.ndarray()):
# y[i][j][k] = x[i, j][k] incorrect
# y[i, j][k] = x[i, j][k] incorrect
```
Also, external arrays in a Taichi kernel are indexed using its **physical memory layout**. For PyTorch users,
this implies that the PyTorch tensor [needs to be made contiguous](https://pytorch.org/docs/stable/generated/torch.Tensor.contiguous.html)
before being passed into a Taichi kernel:
Also, external arrays in a Taichi kernel are indexed using its **physical memory layout**. For PyTorch users, this implies that the PyTorch tensor [needs to be made contiguous](https://pytorch.org/docs/stable/generated/torch.Tensor.contiguous.html) before being passed into a Taichi kernel:

```python
@ti.kernel
def copy_scalar(x: ti.template(), y: ti.types.ndarray()):
1 change: 1 addition & 0 deletions docs/lang/articles/misc/global_settings.md
Expand Up @@ -29,6 +29,7 @@ sidebar_position: 7
- To start program in debug mode: `ti.init(debug=True)` or
`ti debug your_script.py`.
- To disable importing torch on start up: `export TI_ENABLE_TORCH=0`.
- To disable importing paddle on start up: `export TI_ENABLE_PADDLE=0`.
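Taken together, the two switches above keep Taichi from importing either heavyweight framework at start-up — a config fragment, set before launching your script:

```shell
# Disable the optional torch and paddle imports when Taichi starts up:
export TI_ENABLE_TORCH=0
export TI_ENABLE_PADDLE=0
```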

## Logging

42 changes: 41 additions & 1 deletion python/taichi/lang/field.py
@@ -1,6 +1,7 @@
import taichi.lang
from taichi._lib import core as _ti_core
from taichi.lang.util import python_scope, to_numpy_type, to_pytorch_type
from taichi.lang.util import (python_scope, to_numpy_type, to_paddle_type,
to_pytorch_type)


class Field:
@@ -132,6 +133,18 @@ def to_torch(self, device=None):
"""
raise NotImplementedError()

@python_scope
def to_paddle(self, place=None):
"""Converts `self` to a paddle tensor.

Args:
place (paddle.CPUPlace()/CUDAPlace(n), optional): The desired place of the returned tensor.

Returns:
paddle.Tensor: The resulting paddle tensor.
"""
raise NotImplementedError()

@python_scope
def from_numpy(self, arr):
"""Loads all elements from a numpy array.
@@ -154,6 +167,17 @@ def from_torch(self, arr):
"""
self.from_numpy(arr.contiguous())

@python_scope
def from_paddle(self, arr):
"""Loads all elements from a paddle tensor.

The shape of the paddle tensor needs to be the same as `self`.

Args:
arr (paddle.Tensor): The source paddle tensor.
"""
self.from_numpy(arr)

@python_scope
def copy_from(self, other):
"""Copies all elements from another field.
@@ -267,6 +291,22 @@ def to_torch(self, device=None):
taichi.lang.runtime_ops.sync()
return arr

@python_scope
def to_paddle(self, place=None):
"""Converts this field to a `paddle.Tensor`.
"""
import paddle # pylint: disable=C0415

# pylint: disable=E1101
# paddle.empty() doesn't support the argument `place`
arr = paddle.to_tensor(paddle.zeros(self.shape,
to_paddle_type(self.dtype)),
place=place)
from taichi._kernels import tensor_to_ext_arr # pylint: disable=C0415
tensor_to_ext_arr(self, arr)
taichi.lang.runtime_ops.sync()
return arr

@python_scope
def from_numpy(self, arr):
"""Copies the data from a `numpy.ndarray` into this field.