
fix: install pytorch cu116 for server docker image #882

Merged: 3 commits into main from bump-docker-cuda on Dec 29, 2022
Conversation

ZiniuYu (Member) commented Dec 26, 2022

This PR fixes the error in the linked issue by installing the cu116 build of PyTorch in the server Docker image.
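For context, a minimal sketch of how one might verify the fix inside the built image; the exact Dockerfile line is an assumption based on the PR title, not the actual diff:

```python
# Hypothetical check that the image ships the cu116 wheel build, e.g. after a
# Dockerfile line like:
#   RUN pip install torch --extra-index-url https://download.pytorch.org/whl/cu116
import torch

print(torch.__version__)          # expected to end in "+cu116"
print(torch.version.cuda)         # "11.6" for a cu116 wheel
print(torch.cuda.is_available())  # True inside a GPU-enabled container
```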

jemmyshin (Contributor) commented

Please rebuild the image and reply to the community.

codecov bot commented Dec 26, 2022

Codecov Report

Merging #882 (a3d8796) into main (0b293ec) will decrease coverage by 11.28%.
The diff coverage is 100.00%.

@@             Coverage Diff             @@
##             main     #882       +/-   ##
===========================================
- Coverage   83.06%   71.78%   -11.28%     
===========================================
  Files          22       22               
  Lines        1529     1531        +2     
===========================================
- Hits         1270     1099      -171     
- Misses        259      432      +173     
Flag   Coverage Δ
cas    71.78% <100.00%> (-11.28%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files                                   Coverage Δ
server/clip_server/model/model.py                75.37% <100.00%> (+0.18%) ⬆️
server/clip_server/executors/clip_tensorrt.py     0.00% <0.00%> (-94.60%) ⬇️
server/clip_server/model/clip_trt.py              0.00% <0.00%> (-85.72%) ⬇️
server/clip_server/model/trt_utils.py             0.00% <0.00%> (-83.52%) ⬇️
server/clip_server/executors/clip_onnx.py        87.91% <0.00%> (+1.09%) ⬆️
server/clip_server/model/clip_onnx.py            87.30% <0.00%> (+22.22%) ⬆️


numb3r3 (Member) commented Dec 26, 2022

@ZiniuYu Does the hub executor also suffer from this issue?

numb3r3 marked this pull request as draft on December 26, 2022 10:28.
ZiniuYu (Member, Author) commented Dec 27, 2022

> @ZiniuYu Does the hub executor also suffer from this issue?

I just tested both the Torch and ONNX executors; they are both fine.
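For reference, a smoke test along these lines could be run against a served executor with clip_client; the server address and input text below are placeholders:

```python
# Sketch of a smoke test against a running clip_server instance; assumes
# a server is already listening at the placeholder address below.
from clip_client import Client

client = Client('grpc://0.0.0.0:51000')        # placeholder server address
embeddings = client.encode(['hello, world!'])  # returns a numpy ndarray
print(embeddings.shape)                        # e.g. (1, 512) for ViT-B/32
```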

ZiniuYu marked this pull request as ready for review on December 27, 2022 03:01.
ZiniuYu requested a review from numb3r3 on December 27, 2022 03:03.
numb3r3 (Member) commented Dec 27, 2022

> I just tested both the Torch and ONNX executors; they are both fine.

Then I doubt this issue comes from CUDA itself. I would not recommend pinning the CUDA version; our work should run on all stable CUDA versions.

numb3r3 (Member) commented Dec 27, 2022

So please try to make it work with cuda-11.6.0 rather than freezing the CUDA version.

ZiniuYu changed the title from "chore: bump docker cuda version to 11.7.0" to "fix: install pytorch cu116 for server docker image" on Dec 28, 2022.
numb3r3 (Member) left a comment

LGTM

numb3r3 merged commit 8a576c5 into main on Dec 29, 2022.
numb3r3 deleted the bump-docker-cuda branch on December 29, 2022 01:27.
Successfully merging this pull request may close these issues.

Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same
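That error typically means fp16 model weights received an fp32 input. A standalone reproduction of the same dtype mismatch, unrelated to the actual server code and requiring a CUDA-capable GPU:

```python
# Standalone reproduction of the dtype mismatch behind the linked issue;
# requires a CUDA-capable GPU.
import torch

conv = torch.nn.Conv2d(3, 8, kernel_size=3).cuda().half()  # fp16 weights
x = torch.randn(1, 3, 32, 32, device='cuda')                # fp32 input by default
try:
    conv(x)
except RuntimeError as e:
    print(e)  # Input type (torch.cuda.FloatTensor) and weight type
              # (torch.cuda.HalfTensor) should be the same
conv(x.half())  # casting the input to the weights' dtype resolves it
```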