
3.2.0 release breaks CLIPModel #3005

Closed
BoPeng opened this issue Oct 20, 2024 · 9 comments · Fixed by #3007

Comments

@BoPeng
Contributor

BoPeng commented Oct 20, 2024

After upgrading from version 3.1.1 to 3.2.0, loading clip-ViT-L-14 fails with the following error:

>>> SentenceTransformer('clip-ViT-L-14')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/path/to/python3.11/site-packages/sentence_transformers/SentenceTransformer.py", line 306, in __init__
    modules, self.module_kwargs = self._load_sbert_model(
                                  ^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/lib/python3.11/site-packages/sentence_transformers/SentenceTransformer.py", line 1722, in _load_sbert_model
    module = module_class(model_name_or_path, cache_dir=cache_folder, backend=self.backend, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: CLIPModel.__init__() got an unexpected keyword argument 'cache_dir'

This appears to be caused by the recent removal of the try/except block around the module loading code:

# try:
module = module_class(model_name_or_path, cache_dir=cache_folder, backend=self.backend, **kwargs)
# except TypeError:
#     module = module_class.load(model_name_or_path)
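For illustration, a minimal sketch of the fallback behavior the removed try/except used to provide. The names here (`load_module`, `LegacyClipLike`) are hypothetical stand-ins, not the library's actual internals: the point is that older module classes whose `__init__` does not accept `cache_dir` were previously caught via `TypeError` and loaded through their classmethod instead.

```python
class LegacyClipLike:
    """Hypothetical module class whose __init__ does not accept cache_dir,
    mimicking CLIPModel before keyword support was added."""

    def __init__(self, model_name_or_path):
        self.model_name_or_path = model_name_or_path

    @classmethod
    def load(cls, model_name_or_path):
        return cls(model_name_or_path)


def load_module(module_class, model_name_or_path, **kwargs):
    """Try the keyword-rich constructor first; fall back to .load() if the
    class does not accept the extra keyword arguments."""
    try:
        return module_class(model_name_or_path, **kwargs)
    except TypeError:
        # Older module classes only accept the model path.
        return module_class.load(model_name_or_path)


# The extra keywords are silently dropped for legacy-style classes:
module = load_module(LegacyClipLike, "clip-ViT-L-14", cache_dir="/tmp/cache")
```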

@BoPeng BoPeng changed the title sentence-transformer 3.2.0 breaks CLIPModel 3.2.0 release breaks CLIPModel Oct 20, 2024
@PraNavKumAr01

Same issue, did you find a fix?

@BoPeng
Contributor Author

BoPeng commented Oct 20, 2024

Downgrading to 3.1.1 fixed the issue temporarily.

@PraNavKumAr01

Got it, but I'm trying to quantize the CLIP model to qint8 ONNX, and the functions for that are only available in 3.2.

@BoPeng
Contributor Author

BoPeng commented Oct 20, 2024

I just created a PR (#3007). Before this PR is merged, you can use my fork (e.g. use git+https://github.com/BoPeng/sentence-transformers.git@issue3005#egg=sentence-transformers instead of sentence_transformers==3.2.0 in requirements.txt).
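Concretely, the swap in requirements.txt would look something like this (a sketch using the fork URL from the comment above):

```shell
# Replace the PyPI pin (sentence_transformers==3.2.0) with the fork
# until the fix is merged and released:
echo 'git+https://github.com/BoPeng/sentence-transformers.git@issue3005#egg=sentence-transformers' > requirements.txt

# Then install as usual:
# pip install -r requirements.txt
```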

@ir2718
Contributor

ir2718 commented Oct 20, 2024

Hi @BoPeng,

I tried installing from source and it works as expected, so this might already be fixed elsewhere.

>>> import sentence_transformers as stf
>>> from sentence_transformers import SentenceTransformer
>>> st = SentenceTransformer('clip-ViT-L-14')
>>> stf.__version__
'3.3.0.dev0'

@BoPeng
Contributor Author

BoPeng commented Oct 20, 2024

You are right. I am closing the PR and will use the master branch for now.

@PraNavKumAr01

from sentence_transformers import SentenceTransformer, export_dynamic_quantized_onnx_model

st = SentenceTransformer('clip-ViT-B-16')
st.save("clip_vit_b_16_main", "onnx")

model = SentenceTransformer("/content/clip_vit_b_16_main", backend="onnx")
export_dynamic_quantized_onnx_model(model, "avx512_vnni", "clip-ViT-B-32", push_to_hub=False, create_pr=False)

Doing this still gives me the same cache_dir error, even after building from source

@ir2718
Contributor

ir2718 commented Oct 20, 2024

@PraNavKumAr01

Yeah, it seems that ONNX is currently not supported for the CLIP model. In the meantime, if you are okay with using other libraries, check out this issue mlfoundations/open_clip#186, and this repo https://github.com/jina-ai/clip-as-service.

@tomaarsen
Collaborator

Thanks for reporting this! I can reproduce it now, but only if the CLIP model is saved in the root of the model repository/directory. Some more details in #3007 (comment).

This PR should fix this issue, and I'll include it in a patch release very soon.

I do want to mention that ONNX exporting indeed isn't supported for CLIP models, I'm afraid, as it's only implemented in the Transformer module, not in the CLIPModel module.

  • Tom Aarsen
