docs: improve model support (#768)
* docs: improve model support

* docs: improve model support

* docs: improve narratives
ZiniuYu authored Jul 18, 2022
1 parent ee7da10 commit 2b78b12
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions docs/user-guides/server.md
@@ -61,7 +61,7 @@ The procedure and UI of ONNX and TensorRT runtime would look the same as Pytorch

## Model support

- OpenAI has released 9 models so far. `ViT-B/32` is used as the default model in all runtimes. Due to the limitations of some runtimes, not every runtime supports all nine models. Please also note that different models give different output dimensions. This will affect your downstream applications. For example, switching from one model to another makes your embeddings incomparable, which breaks the downstream applications. Below is a list of the models supported by each runtime and their corresponding sizes. We include the disk usage (in delta) and the peak RAM and VRAM usage (in delta) when running on a single Nvidia TITAN RTX GPU (24GB VRAM) using the default `minibatch_size=32` on the server with the PyTorch runtime and the default `batch_size=8` in the client.
+ OpenAI has released 9 models so far. `ViT-B/32` is used as the default model in all runtimes. Due to the limitations of some runtimes, not every runtime supports all nine models. Please also note that different models give different output dimensions. This will affect your downstream applications. For example, switching from one model to another makes your embeddings incomparable, which breaks the downstream applications. Below is a list of the models supported by each runtime and their corresponding sizes. We include the disk usage (in delta) and the peak RAM and VRAM usage (in delta) when running on a single Nvidia TITAN RTX GPU (24GB VRAM) for a series of text and image encoding tasks with `batch_size=8` using the PyTorch runtime.

| Model | PyTorch | ONNX | TensorRT | Output Dimension | Disk Usage (MB) | Peak RAM Usage (GB) | Peak VRAM Usage (GB) |
|----------------|---------|------|----------|------------------|-----------------|---------------------|----------------------|
@@ -73,7 +73,7 @@ Open AI has released 9 models so far. `ViT-B/32` is used as default model in all
| ViT-B/32 |||| 512 | 351 | 3.20 | 1.40 |
| ViT-B/16 |||| 512 | 354 | 3.20 | 1.44 |
| ViT-L/14 |||| 768 | 933 | 3.66 | 2.04 |
- | ViT-L/14-336px |||| 768 | 934 | 3.74 | 2.23 |
+ | ViT-L/14@336px |||| 768 | 934 | 3.74 | 2.23 |
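
To make the dimension caveat concrete, here is a minimal client-side sketch (not part of this commit). It assumes a running `clip_server` instance at `grpc://0.0.0.0:51000` serving the default `ViT-B/32` model and uses the `clip_client` Python package; the address, port, and the `batch_size` argument are illustrative assumptions rather than values taken from this diff.

```python
from clip_client import Client

# Connect to a clip_server instance (address/port are illustrative assumptions).
c = Client('grpc://0.0.0.0:51000')

# Encode a small batch of sentences; batch_size mirrors the client-side
# default mentioned in the paragraph above.
embeddings = c.encode(['a photo of a cat', 'a photo of a dog'], batch_size=8)

# With ViT-B/32 each embedding is 512-dimensional (see the table above).
# Switching the server to, e.g., ViT-L/14 would yield 768-dimensional vectors,
# so embeddings produced by different models must not be mixed in one index.
print(embeddings.shape)  # expected: (2, 512) for ViT-B/32
```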


## YAML config
