3.2.0 release breaks CLIPModel #3005
Comments
Same issue, did you find a fix?
Downgrading to 3.1.1 fixed the issue temporarily.
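For anyone who needs the temporary workaround, pinning the previous release with pip should do it, e.g. `pip install "sentence-transformers==3.1.1"` (the exact command assumes a standard pip-based install).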
Got it, but I'm trying to quantize the CLIP model to qint8 ONNX, and the functions for that are only available in 3.2.
I just created a PR (#3007). Before this PR is merged, you can install from my fork.
Hi @BoPeng, I tried installing from source and it works as expected, so this might already be fixed elsewhere.

```python
>>> import sentence_transformers as stf
>>> from sentence_transformers import SentenceTransformer
>>> st = SentenceTransformer('clip-ViT-L-14')
>>> stf.__version__
'3.3.0.dev0'
```
You are right. I am closing the PR and will use the master branch for now.
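For reference, the in-development master branch can be installed directly from the repository, e.g. `pip install git+https://github.com/UKPLab/sentence-transformers.git` (this command is an assumption; any equivalent source install works).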
```python
from sentence_transformers import SentenceTransformer, export_dynamic_quantized_onnx_model

# Load the CLIP model, save it locally, then reload it with the ONNX backend
st = SentenceTransformer('clip-ViT-B-16')
st.save("clip_vit_b_16_main", "onnx")
model = SentenceTransformer("/content/clip_vit_b_16_main", backend="onnx")

# Attempt dynamic quantization of the exported ONNX model
export_dynamic_quantized_onnx_model(model, "avx512_vnni", "clip-ViT-B-32", push_to_hub=False, create_pr=False)
```

Doing this still gives me the same cache_dir error, even after building from source.
Yeah, it seems that ONNX is not currently supported for the CLIP model. In the meantime, if you are okay with using other libraries, check out this issue mlfoundations/open_clip#186 and this repo: https://github.com/jina-ai/clip-as-service.
Thanks for reporting this! I can reproduce it now, but only if the CLIP model is saved in the root of the model repository/directory. Some more details in #3007 (comment). That PR should fix this issue, and I'll include it in a patch release very soon. I do want to mention that ONNX exporting indeed isn't supported for CLIP models, I'm afraid, as it's only supported in the Transformer module, not in the CLIPModel module.
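To make the scope of that limitation concrete, here is a minimal sketch, assuming the optional ONNX dependencies are installed and using model names purely as examples: the ONNX backend is available for checkpoints built on the Transformer module, while CLIP checkpoints load through the CLIPModel module and have no ONNX export path.

```python
from sentence_transformers import SentenceTransformer

# Works: this checkpoint uses the Transformer module, which supports the
# ONNX backend (exporting the model on first use if no ONNX file exists).
text_model = SentenceTransformer("all-MiniLM-L6-v2", backend="onnx")
print(text_model.encode(["hello world"]).shape)

# Not supported: CLIP checkpoints go through the CLIPModel module, which has
# no ONNX export path, so the following is expected to fail.
# clip_model = SentenceTransformer("clip-ViT-B-16", backend="onnx")
```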
Original issue description:
After upgrading from version 3.1.1 to 3.2, loading clip-ViT-L-14 gives an error (the cache_dir error referenced in the comments above). This appears to be caused by the recent removal of the try/except block around the model loading code in sentence-transformers/sentence_transformers/SentenceTransformer.py, lines 1721 to 1724 at commit 29535eb.
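For illustration only, here is a minimal sketch of the kind of guard being described; the helper name and arguments are hypothetical, not the actual SentenceTransformer.py code. With the try/except in place, a module whose load() does not accept the newer keyword arguments (such as cache_dir) still loads; without it, the extra arguments raise a TypeError.

```python
# Hypothetical sketch, not the actual SentenceTransformer.py code: the failure
# mode matches a loader that no longer falls back when a module's load()
# rejects newer keyword arguments such as cache_dir.
def load_module(module_class, model_path, **kwargs):
    try:
        # Newer modules (e.g. Transformer) accept extra kwargs like cache_dir.
        return module_class.load(model_path, **kwargs)
    except TypeError:
        # Modules with a narrower load() signature, such as CLIPModel, need this
        # fallback; without it, loading raises the TypeError instead.
        return module_class.load(model_path)
```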