fix: onnx package conflict during setup (#894)
* fix: onnx package conflict during setup

* fix: broken test image

* fix: broken test image

* fix: broken test image

* fix: broken test image

* fix: revert

* fix: re-revert

* fix: fix onnxruntime version

* fix: revert again

* fix: test

* fix: test

* fix: trt ci

* fix: trt ci

* fix: trt ci

* fix: trt ci

* fix: trt ci

* fix: trt ci

* fix: trt ci

* fix: trt ci

* fix: trt ci

* fix: trt ci

* fix: trt ci

* fix: onnx version

* fix: onnx version

* fix: onnx version

* docs: update docker usage
ZiniuYu authored Feb 22, 2023
1 parent dabbe8b commit d70f238
Showing 10 changed files with 127 additions and 42 deletions.
56 changes: 54 additions & 2 deletions .github/workflows/ci.yml
Original file line number Diff line number Diff line change
@@ -136,7 +136,7 @@ jobs:
fail_ci_if_error: false
token: ${{ secrets.CODECOV_TOKEN }} # not required for public repos

gpu-test:
trt-gpu-test:
needs: prep-testbed
runs-on: [self-hosted, x64, gpu, linux]
strategy:
@@ -173,6 +173,58 @@ jobs:
-v -s -m "gpu" ./tests/test_tensorrt.py
pytest --suppress-no-test-exit-code --cov=clip_client --cov=clip_server --cov-report=xml \
-v -s -m "gpu" ./tests/test_simple.py
echo "::set-output name=codecov_flag::cas"
timeout-minutes: 30
env:
# fix re-initialized torch runtime error on cuda device
JINA_MP_START_METHOD: spawn
- name: Check codecov file
id: check_files
uses: andstor/file-existence-action@v1
with:
files: "coverage.xml"
- name: Upload coverage from test to Codecov
uses: codecov/codecov-action@v3
if: steps.check_files.outputs.files_exists == 'true' && ${{ matrix.python-version }} == '3.7'
with:
file: coverage.xml
name: gpu-related-codecov
flags: ${{ steps.test.outputs.codecov_flag }}
fail_ci_if_error: false
token: ${{ secrets.CODECOV_TOKEN }} # not required for public repos

gpu-model-test:
needs: prep-testbed
runs-on: [ self-hosted, x64, gpu, linux ]
strategy:
fail-fast: false
matrix:
python-version: [ 3.7 ]
steps:
- uses: actions/checkout@v2
with:
# For coverage builds fetch the whole history
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Prepare environment
run: |
python -m pip install --upgrade pip
python -m pip install wheel pytest pytest-cov nvidia-pyindex
pip install -e "client/[test]"
pip install -e "server/[onnx]"
pip install -e "server/[transformers]"
{
pip install -e "server/[flash-attn]"
} || {
echo "flash attention was not installed."
}
pip install --no-cache-dir "server/[cn_clip]"
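The `{ ... } || { ... }` guard above makes the flash-attn install best-effort: a failed build logs a message instead of failing the whole step. The pattern can be exercised in isolation; a sketch where `install_optional` is an illustrative stand-in for the real `pip install` call, not part of the repo:

```shell
# Simulate an optional dependency whose install may fail (e.g. flash-attn
# on an unsupported CUDA/toolchain combination).
install_optional() {
  return 1  # pretend the pip install failed
}

{
  install_optional
} || {
  echo "flash attention was not installed."
}
echo "environment ready"  # the step still exits 0
```

Because the failure is caught by `||`, the CI step continues and the remaining (mandatory) installs still run.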
- name: Test
id: test
run: |
pytest --suppress-no-test-exit-code --cov=clip_client --cov=clip_server --cov-report=xml \
-v -s -m "gpu" ./tests/test_model.py
echo "::set-output name=codecov_flag::cas"
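The `::set-output` workflow command used in the `Test` steps above is deprecated on current GitHub runners; step outputs are now written to the file named by `$GITHUB_OUTPUT`. A minimal sketch of the replacement, with a temp-file fallback so it also runs outside Actions:

```shell
# On a real runner GITHUB_OUTPUT is pre-set by the Actions runtime;
# fall back to a temp file so the snippet is runnable locally.
GITHUB_OUTPUT="${GITHUB_OUTPUT:-$(mktemp)}"

# Equivalent of: echo "::set-output name=codecov_flag::cas"
echo "codecov_flag=cas" >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
```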
@@ -197,7 +249,7 @@ jobs:

# just for blocking the merge until all parallel core-test are successful
success-all-test:
needs: [commit-lint, core-test, gpu-test]
needs: [commit-lint, core-test, trt-gpu-test, gpu-model-test]
if: always()
runs-on: ubuntu-latest
steps:
26 changes: 25 additions & 1 deletion docs/user-guides/server.md
@@ -591,21 +591,45 @@ The build argument `--build-arg GROUP_ID=$(id -g ${USER}) --build-arg USER_ID=$(

### Run

````{tab} PyTorch
```bash
docker run -p 51009:51000 -v $HOME/.cache:/home/cas/.cache --gpus all jinaai/clip-server
```
````
````{tab} ONNX
```bash
docker run -p 51009:51000 -v $HOME/.cache:/home/cas/.cache --gpus all jinaai/clip-server:master-onnx onnx-flow.yml
```
````
````{tab} TensorRT
```bash
docker run -p 51009:51000 -v $HOME/.cache:/home/cas/.cache --gpus all jinaai/clip-server:master-tensorrt tensorrt-flow.yml
```
````

Here, `51009` is the public port on the host and `51000` is the {ref}`in-container port defined inside YAML<flow-config>`. The argument `-v $HOME/.cache:/home/cas/.cache` mounts the host's cache into the container, so the same model is not downloaded again on the next start.

Due to the limitations of the terminal inside the Docker container, you will **not** see the classic Jina progress bar on start. Instead, expect a few minutes of silence while the model downloads, followed by the "Flow is ready to serve" message.

To pass a YAML config from the host, one can do:

````{tab} PyTorch
```bash
cat my.yml | docker run -i -p 51009:51000 -v $HOME/.cache:/home/cas/.cache --gpus all jinaai/clip-server -i
```
````
````{tab} ONNX
```bash
cat my.yml | docker run -i -p 51009:51000 -v $HOME/.cache:/home/cas/.cache --gpus all jinaai/clip-server:master-onnx -i
```
````
````{tab} TensorRT
```bash
cat my.yml | docker run -i -p 51009:51000 -v $HOME/.cache:/home/cas/.cache --gpus all jinaai/clip-server:master-tensorrt -i
```
````

The CLI usage is the same {ref}`as described here <start-server>`.
The CLI usage is the same {ref}`as described here <server-address>`.

```{tip}
You can enable debug logging via: `docker run --env JINA_LOG_LEVEL=debug ...`
11 changes: 8 additions & 3 deletions server/setup.py
@@ -51,12 +51,17 @@
],
extras_require={
'onnx': [
'onnxruntime',
'onnx',
'onnxmltools',
]
+ (['onnxruntime-gpu>=1.8.0'] if sys.platform != 'darwin' else []),
'tensorrt': ['nvidia-tensorrt'],
+ (
['onnxruntime-gpu<=1.13.1']
if sys.platform != 'darwin'
else ['onnxruntime<=1.13.1']
),
'tensorrt': [
'nvidia-tensorrt==8.4.1.5',
],
'transformers': ['transformers>=4.16.2'],
'search': ['annlite>=0.3.10'],
'flash-attn': ['flash-attn'],
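The platform-conditional pin above can be reproduced in a few lines. A minimal sketch mirroring the `extras_require` logic from the diff (the surrounding `setup()` call is omitted): CUDA-capable platforms get `onnxruntime-gpu`, while macOS, which has no CUDA wheels, falls back to the CPU build, both capped at 1.13.1 to avoid the version conflict this commit fixes.

```python
import sys

# Platform-conditional extras, as in server/setup.py after this commit.
extras = {
    'onnx': [
        'onnx',
        'onnxmltools',
    ]
    + (
        ['onnxruntime-gpu<=1.13.1']      # Linux/Windows: GPU runtime
        if sys.platform != 'darwin'
        else ['onnxruntime<=1.13.1']     # macOS: CPU runtime
    ),
    'tensorrt': ['nvidia-tensorrt==8.4.1.5'],
}

print(extras['onnx'])
```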
2 changes: 1 addition & 1 deletion tests/test_asyncio.py
@@ -26,7 +26,7 @@ async def test_async_encode(make_flow):
DocumentArray(
[
Document(
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
text='hello, world',
),
]
14 changes: 7 additions & 7 deletions tests/test_helper.py
@@ -32,10 +32,10 @@ def test_numpy_softmax(shape, axis):
Document(text='goodbye, world'),
Document(
text='hello, world',
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
),
Document(
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
),
]
),
@@ -47,14 +47,14 @@ def test_numpy_softmax(shape, axis):
Document(text='hello, world'),
Document(tensor=np.array([0, 1, 2])),
Document(
uri='https://docarray.jina.ai/_static/favicon.png'
uri='https://clip-as-service.jina.ai/_static/favicon.png'
).load_uri_to_blob(),
Document(
tensor=np.array([0, 1, 2]),
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
),
Document(
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
),
]
),
@@ -64,7 +64,7 @@ def test_numpy_softmax(shape, axis):
DocumentArray(
[
Document(text='hello, world'),
Document(uri='https://docarray.jina.ai/_static/favicon.png'),
Document(uri='https://clip-as-service.jina.ai/_static/favicon.png'),
]
),
(1, 1),
@@ -86,7 +86,7 @@ def test_split_img_txt_da(inputs):
DocumentArray(
[
Document(
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
).load_uri_to_blob(),
]
)
24 changes: 14 additions & 10 deletions tests/test_ranker.py
@@ -68,14 +68,14 @@ async def test_torch_executor_rank_text2imgs(encoder_class):
[
[
Document(
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
matches=[
Document(text='hello, world'),
Document(text='goodbye, world'),
],
),
Document(
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
matches=[
Document(text='hello, world'),
Document(text='goodbye, world'),
@@ -85,14 +85,14 @@ async def test_torch_executor_rank_text2imgs(encoder_class):
DocumentArray(
[
Document(
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
matches=[
Document(text='hello, world'),
Document(text='goodbye, world'),
],
),
Document(
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
matches=[
Document(text='hello, world'),
Document(text='goodbye, world'),
@@ -102,7 +102,7 @@ async def test_torch_executor_rank_text2imgs(encoder_class):
),
lambda: (
Document(
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
matches=[
Document(text='hello, world'),
Document(text='goodbye, world'),
@@ -115,7 +115,9 @@ async def test_torch_executor_rank_text2imgs(encoder_class):
Document(
text='hello, world',
matches=[
Document(uri='https://docarray.jina.ai/_static/favicon.png'),
Document(
uri='https://clip-as-service.jina.ai/_static/favicon.png'
),
Document(
uri=f'{os.path.dirname(os.path.abspath(__file__))}/img/00000.jpg'
),
@@ -144,7 +146,7 @@ def test_docarray_inputs(make_flow, inputs):
[
[
Document(
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
matches=[
Document(text='hello, world'),
Document(text='goodbye, world'),
@@ -154,7 +156,7 @@ def test_docarray_inputs(make_flow, inputs):
DocumentArray(
[
Document(
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
matches=[
Document(text='hello, world'),
Document(text='goodbye, world'),
@@ -164,7 +166,7 @@ def test_docarray_inputs(make_flow, inputs):
),
lambda: (
Document(
uri='https://docarray.jina.ai/_static/favicon.png',
uri='https://clip-as-service.jina.ai/_static/favicon.png',
matches=[
Document(text='hello, world'),
Document(text='goodbye, world'),
@@ -177,7 +179,9 @@ def test_docarray_inputs(make_flow, inputs):
Document(
text='hello, world',
matches=[
Document(uri='https://docarray.jina.ai/_static/favicon.png'),
Document(
uri='https://clip-as-service.jina.ai/_static/favicon.png'
),
Document(
uri=f'{os.path.dirname(os.path.abspath(__file__))}/img/00000.jpg'
),
4 changes: 2 additions & 2 deletions tests/test_search.py
@@ -15,7 +15,7 @@
lambda: (Document(text='hello, world') for _ in range(10)),
DocumentArray(
[
Document(uri='https://docarray.jina.ai/_static/favicon.png'),
Document(uri='https://clip-as-service.jina.ai/_static/favicon.png'),
Document(
uri=f'{os.path.dirname(os.path.abspath(__file__))}/img/00000.jpg'
),
@@ -52,7 +52,7 @@ def test_index_search(make_search_flow, inputs, limit):
lambda: (Document(text='hello, world') for _ in range(10)),
DocumentArray(
[
Document(uri='https://docarray.jina.ai/_static/favicon.png'),
Document(uri='https://clip-as-service.jina.ai/_static/favicon.png'),
Document(
uri=f'{os.path.dirname(os.path.abspath(__file__))}/img/00000.jpg'
),
18 changes: 9 additions & 9 deletions tests/test_server.py
@@ -10,9 +10,9 @@

def test_server_download(tmpdir):
download_model(
url='https://docarray.jina.ai/_static/favicon.png',
url='https://clip-as-service.jina.ai/_static/favicon.png',
target_folder=tmpdir,
md5sum='66ea4817d73514888dcf6c7d2b00016d',
md5sum='43104e468ddd23c55bc662d84c87a7f8',
with_resume=False,
)
target_path = os.path.join(tmpdir, 'favicon.png')
@@ -27,28 +27,28 @@ def test_server_download(tmpdir):
os.remove(target_path)

download_model(
url='https://docarray.jina.ai/_static/favicon.png',
url='https://clip-as-service.jina.ai/_static/favicon.png',
target_folder=tmpdir,
md5sum='66ea4817d73514888dcf6c7d2b00016d',
md5sum='43104e468ddd23c55bc662d84c87a7f8',
with_resume=True,
)
assert os.path.getsize(target_path) == file_size
assert not os.path.exists(part_path)


@pytest.mark.parametrize('md5', ['ABC', None, '66ea4817d73514888dcf6c7d2b00016d'])
@pytest.mark.parametrize('md5', ['ABC', None, '43104e468ddd23c55bc662d84c87a7f8'])
def test_server_download_md5(tmpdir, md5):
if md5 != 'ABC':
download_model(
url='https://docarray.jina.ai/_static/favicon.png',
url='https://clip-as-service.jina.ai/_static/favicon.png',
target_folder=tmpdir,
md5sum=md5,
with_resume=False,
)
else:
with pytest.raises(Exception):
download_model(
url='https://docarray.jina.ai/_static/favicon.png',
url='https://clip-as-service.jina.ai/_static/favicon.png',
target_folder=tmpdir,
md5sum=md5,
with_resume=False,
@@ -58,7 +58,7 @@ def test_server_download_md5(tmpdir, md5):
def test_server_download_not_regular_file(tmpdir):
with pytest.raises(Exception):
download_model(
url='https://docarray.jina.ai/_static/favicon.png',
url='https://clip-as-service.jina.ai/_static/favicon.png',
target_folder=tmpdir,
md5sum='',
with_resume=False,
@@ -87,7 +87,7 @@ def test_make_onnx_flow_wrong_name_path():
'image_uri',
[
f'{os.path.dirname(os.path.abspath(__file__))}/img/00000.jpg',
'https://docarray.jina.ai/_static/favicon.png',
'https://clip-as-service.jina.ai/_static/favicon.png',
],
)
@pytest.mark.parametrize('size', [224, 288, 384, 448])
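The `md5sum` checks in `test_server_download` explain why the hard-coded digest had to change along with the favicon URL: a different file means a different MD5. The verification step can be approximated as follows; a hedged sketch, not `clip_server`'s actual implementation (the `verify_md5` helper is illustrative):

```python
import hashlib

def verify_md5(data: bytes, expected: str) -> bool:
    """Return True when the MD5 digest of `data` matches `expected`."""
    return hashlib.md5(data).hexdigest() == expected

payload = b'hello, world'
digest = hashlib.md5(payload).hexdigest()

assert verify_md5(payload, digest)     # matching checksum passes
assert not verify_md5(payload, 'ABC')  # mismatch, like the 'ABC' case in the test
```

A downloader built on this would delete the partial file and raise on a mismatch, which is the exception path `test_server_download_md5` exercises with `md5sum='ABC'`.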
