I am trying to use the CLIP executors with Dalle-Flow. `CLIPTorchEncoder` doesn't work.
CLIPTorchEncoder flow entry:

```yaml
- name: clip_encoder
  uses: jinahub+docker://CLIPTorchEncoder/latest-gpu
  uses_with:
    name: ViT-L-14-336::openai
```
The encode request fails with the following error:

```
status {
  code: ERROR
  description: "RuntimeError('expected scalar type Float but found Half')"
  exception {
    name: "RuntimeError"
    args: "expected scalar type Float but found Half"
    executor: "CLIPEncoder"
  }
}

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/jina/serve/runtimes/worker/__init__.py", line 164, in process_data
    return await self._data_request_handler.handle(requests=requests)
  File "/usr/local/lib/python3.8/dist-packages/jina/serve/runtimes/request_handlers/data_request_handler.py", line 155, in handle
    return_data = await self._executor.__acall__(
  File "/usr/local/lib/python3.8/dist-packages/jina/serve/executors/__init__.py", line 291, in __acall__
    return await self.__acall_endpoint__(__default_endpoint__, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/jina/serve/executors/__init__.py", line 310, in __acall_endpoint__
    return await func(self, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/jina/serve/executors/decorators.py", line 207, in arg_wrapper
    return await fn(executor_instance, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/clip_server/executors/clip_torch.py", line 141, in encode
    self._model.encode_text(**batch_data)
  File "/usr/local/lib/python3.8/dist-packages/clip_server/model/openclip_model.py", line 48, in encode_text
    return self._model.encode_text(input_ids)
  File "/usr/local/lib/python3.8/dist-packages/clip_server/model/model.py", line 573, in encode_text
    x = self.transformer(x, attn_mask=self.attn_mask)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/clip_server/model/model.py", line 328, in forward
    x = r(x, attn_mask=attn_mask)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/clip_server/model/model.py", line 297, in forward
    x = x + self.attention(self.ln_1(x), attn_mask=attn_mask)
  File "/usr/local/lib/python3.8/dist-packages/clip_server/model/model.py", line 294, in attention
    return self.attn(x, x, x, need_weights=False, attn_mask=attn_mask)[0]
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/activation.py", line 1153, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 5179, in multi_head_attention_forward
    attn_output, attn_output_weights = _scaled_dot_product_attention(q, k, v, attn_mask, dropout_p)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 4852, in _scaled_dot_product_attention
    attn = torch.baddbmm(attn_mask, q, k.transpose(-2, -1))
RuntimeError: expected scalar type Float but found Half
```
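If it helps, here is a minimal standalone sketch of what I believe is happening (my assumption, not verified against the executor code): the GPU model runs in fp16 while the attention mask stays fp32, so the `torch.baddbmm` call at the bottom of the trace sees mixed dtypes. Shapes and dtypes below are illustrative only.

```python
import torch

# Illustrative shapes only: batch of 1, 77 tokens, 64-dim attention heads.
q = torch.randn(1, 77, 64, dtype=torch.half, device="cuda")   # fp16, as in the GPU model
k = torch.randn(1, 77, 64, dtype=torch.half, device="cuda")
attn_mask = torch.zeros(1, 77, 77, dtype=torch.float, device="cuda")  # fp32 mask (assumed)

# Same call as _scaled_dot_product_attention in the traceback; raises
# "RuntimeError: expected scalar type Float but found Half".
attn = torch.baddbmm(attn_mask, q, k.transpose(-2, -1))
```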
`CLIPOnnxEncoder` works fine.

CLIPOnnxEncoder flow entry:

```yaml
- name: clip_encoder
  uses: jinahub+docker://CLIPOnnxEncoder/latest-gpu
  uses_with:
    name: ViT-L-14-336::openai
```
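For what it's worth, a possible torch-side workaround (just a sketch of an idea, not a tested fix) would be to cast the attention mask to the activations' dtype right before the attention call that fails in the traceback (`clip_server/model/model.py`, line 294):

```python
import torch

# Sketch of a possible patch to the residual block's attention method.
# Assumption: the mask only needs to match the dtype/device of the fp16
# activations for torch.baddbmm to accept it.
def attention(self, x: torch.Tensor, attn_mask: torch.Tensor = None):
    if attn_mask is not None:
        attn_mask = attn_mask.to(dtype=x.dtype, device=x.device)
    return self.attn(x, x, x, need_weights=False, attn_mask=attn_mask)[0]
```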