
CLIPTorchEncoder "expected scalar type Float but found Half" error #800

Closed
delgermurun opened this issue Aug 9, 2022 · 0 comments · Fixed by #801
I am trying to use the CLIP executors for Dalle-Flow. CLIPTorchEncoder doesn't work with the following Flow configuration:

```yaml
  - name: clip_encoder
    uses: jinahub+docker://CLIPTorchEncoder/latest-gpu
    uses_with:
      name: ViT-L-14-336::openai
```
The request fails with this error:

```
status {
  code: ERROR
  description: "RuntimeError('expected scalar type Float but found Half')"
  exception {
    name: "RuntimeError"
    args: "expected scalar type Float but found Half"
    executor: "CLIPEncoder"
  }
}

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/jina/serve/runtimes/worker/__init__.py", line 164, in process_data
    return await self._data_request_handler.handle(requests=requests)
  File "/usr/local/lib/python3.8/dist-packages/jina/serve/runtimes/request_handlers/data_request_handler.py", line 155, in handle
    return_data = await self._executor.__acall__(
  File "/usr/local/lib/python3.8/dist-packages/jina/serve/executors/__init__.py", line 291, in __acall__
    return await self.__acall_endpoint__(__default_endpoint__, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/jina/serve/executors/__init__.py", line 310, in __acall_endpoint__
    return await func(self, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/jina/serve/executors/decorators.py", line 207, in arg_wrapper
    return await fn(executor_instance, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/clip_server/executors/clip_torch.py", line 141, in encode
    self._model.encode_text(**batch_data)
  File "/usr/local/lib/python3.8/dist-packages/clip_server/model/openclip_model.py", line 48, in encode_text
    return self._model.encode_text(input_ids)
  File "/usr/local/lib/python3.8/dist-packages/clip_server/model/model.py", line 573, in encode_text
    x = self.transformer(x, attn_mask=self.attn_mask)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/clip_server/model/model.py", line 328, in forward
    x = r(x, attn_mask=attn_mask)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/clip_server/model/model.py", line 297, in forward
    x = x + self.attention(self.ln_1(x), attn_mask=attn_mask)
  File "/usr/local/lib/python3.8/dist-packages/clip_server/model/model.py", line 294, in attention
    return self.attn(x, x, x, need_weights=False, attn_mask=attn_mask)[0]
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/activation.py", line 1153, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 5179, in multi_head_attention_forward
    attn_output, attn_output_weights = _scaled_dot_product_attention(q, k, v, attn_mask, dropout_p)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 4852, in _scaled_dot_product_attention
    attn = torch.baddbmm(attn_mask, q, k.transpose(-2, -1))
RuntimeError: expected scalar type Float but found Half
```
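The traceback points at `torch.baddbmm` receiving a float32 `attn_mask` while `q` and `k` are float16: the model weights are in half precision on GPU, but the attention mask apparently stays in float32. A minimal PyTorch sketch (assumed shapes and sizes, not the actual `clip_server` code) reproduces the mismatch and shows that casting the mask to the input dtype avoids it:

```python
import torch
import torch.nn as nn

# Half-precision attention module, mirroring the fp16 GPU deployment.
attn = nn.MultiheadAttention(embed_dim=768, num_heads=12).cuda().half()

seq_len = 77  # CLIP's text context length
x = torch.randn(seq_len, 1, 768, device='cuda', dtype=torch.half)

# Causal mask built in float32, as a plain module buffer would be.
attn_mask = torch.full((seq_len, seq_len), float('-inf'), device='cuda').triu_(1)

# Raises "RuntimeError: expected scalar type Float but found Half"
# (observed with torch 1.12, matching the traceback above):
# attn(x, x, x, need_weights=False, attn_mask=attn_mask)

# Casting the mask to the input dtype sidesteps the mismatch.
out, _ = attn(x, x, x, need_weights=False, attn_mask=attn_mask.to(x.dtype))
```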

CLIPOnnxEncoder works fine with the same model:

```yaml
  - name: clip_encoder
    uses: jinahub+docker://CLIPOnnxEncoder/latest-gpu
    uses_with:
      name: ViT-L-14-336::openai
```
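For reference, a plain text request through the Flow gateway is enough to exercise the failing encode path; a minimal client sketch, assuming a hypothetical local gateway address:

```python
from jina import Client, Document

# Hypothetical gateway address; point it at your Dalle-Flow deployment.
client = Client(host='grpc://localhost:51005')

docs = client.post(on='/', inputs=[Document(text='a photo of a cat')])
print(docs[0].embedding.shape)  # e.g. (768,) for ViT-L-14-336
```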
@ZiniuYu linked a pull request (#801) on Aug 10, 2022 that will close this issue.