diff --git a/.github/README-exec/onnx.readme.md b/.github/README-exec/onnx.readme.md
index b0f7809e4..f4a3d29ab 100644
--- a/.github/README-exec/onnx.readme.md
+++ b/.github/README-exec/onnx.readme.md
@@ -13,18 +13,18 @@ The introduction of the CLIP model [can be found here](https://openai.com/blog/c
 
 `ViT-B-32::openai` is used as the default model. To use specific pretrained models provided by `open_clip`, please use `::` to separate model name and pretrained weight name, e.g. `ViT-B-32::laion2b_e16`. Please also note that **different models give different sizes of output dimensions**.
 
-| Model                                 | ONNX | Output dimension |
-|---------------------------------------|------|------------------|
-| RN50                                  | ✅    | 1024             |
-| RN101                                 | ✅    | 512              |
-| RN50x4                                | ✅    | 640              |
-| RN50x16                               | ✅    | 768              |
-| RN50x64                               | ✅    | 1024             |
-| ViT-B-32                              | ✅    | 512              |
-| ViT-B-16                              | ✅    | 512              |
-| ViT-B-lus-240                         | ✅    | 640              |
-| ViT-L-14                              | ✅    | 768              |
-| ViT-L-14@336px                        | ✅    | 768              |
+| Model             | ONNX | Output dimension |
+|-------------------|------|------------------|
+| RN50              | ✅    | 1024             |
+| RN101             | ✅    | 512              |
+| RN50x4            | ✅    | 640              |
+| RN50x16           | ✅    | 768              |
+| RN50x64           | ✅    | 1024             |
+| ViT-B-32          | ✅    | 512              |
+| ViT-B-16          | ✅    | 512              |
+| ViT-B-16-plus-240 | ✅    | 640              |
+| ViT-L-14          | ✅    | 768              |
+| ViT-L-14-336      | ✅    | 768              |
 
 ✅ = First class support
diff --git a/.github/README-exec/torch.readme.md b/.github/README-exec/torch.readme.md
index c5f3130a5..500997eea 100644
--- a/.github/README-exec/torch.readme.md
+++ b/.github/README-exec/torch.readme.md
@@ -23,9 +23,9 @@ With advances of ONNX runtime, you can use `CLIPOnnxEncoder` (see [link](https:/
 | RN50x64                               | ✅    | 1024             |
 | ViT-B-32                              | ✅    | 512              |
 | ViT-B-16                              | ✅    | 512              |
-| ViT-B-lus-240                         | ✅    | 640              |
+| ViT-B-16-plus-240                     | ✅    | 640              |
 | ViT-L-14                              | ✅    | 768              |
-| ViT-L-14@336px                        | ✅    | 768              |
+| ViT-L-14-336                          | ✅    | 768              |
 | M-CLIP/XLM_Roberta-Large-Vit-B-32     | ✅    | 512              |
 | M-CLIP/XLM-Roberta-Large-Vit-L-14     | ✅    | 768              |
 | M-CLIP/XLM-Roberta-Large-Vit-B-16Plus | ✅    | 640              |