feat: update openclip loader #782

Merged: 7 commits into main from update-openclip-loader on Jul 26, 2022
Conversation

shan-mx (Contributor) commented Jul 25, 2022

Supports an independent download process and adapts precision to the device, to resolve a VRAM issue.

codecov bot commented Jul 25, 2022

Codecov Report

Merging #782 (0ba7fad) into main (5877207) will decrease coverage by 0.55%.
The diff coverage is 52.00%.

```
@@            Coverage Diff             @@
##             main     #782      +/-   ##
==========================================
- Coverage   86.28%   85.72%   -0.56%
==========================================
  Files          21       21
  Lines        1108     1121      +13
==========================================
+ Hits          956      961       +5
- Misses        152      160       +8
```

| Flag | Coverage Δ |
|------|------------|
| cas | 85.72% <52.00%> (-0.56%) ⬇️ |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
|----------------|------------|
| server/clip_server/model/pretrained_models.py | 84.12% <ø> (ø) |
| server/clip_server/model/openclip_model.py | 65.85% <50.00%> (-12.72%) ⬇️ |
| server/clip_server/model/mclip_model.py | 83.33% <100.00%> (ø) |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Continue to review the full report at Codecov. Last update 5877207...0ba7fad.

Comment on lines 41 to 137
```python
if pretrained.lower() == 'openai':
    try:
        # loading JIT archive
        model = torch.jit.load(
            model_path, map_location=device if jit else "cpu"
        ).eval()
        state_dict = None
    except RuntimeError:
        # loading saved state dict
        if jit:
            warnings.warn(
                f"File {model_path} is not a JIT archive. Loading as a state dict instead"
            )
            jit = False
        state_dict = torch.load(model_path, map_location="cpu")
    if not jit:
        try:
            model = build_model_from_openai_state_dict(
                state_dict or model.state_dict()
            ).to(device)
        except KeyError:
            sd = {k[7:]: v for k, v in state_dict["state_dict"].items()}
            model = build_model_from_openai_state_dict(sd).to(device)
        if str(device) == "cpu":
            model.float()
    else:
        # patch the device names
        device_holder = torch.jit.trace(
            lambda: torch.ones([]).to(torch.device(device)),
            example_inputs=[],
        )
        device_node = [
            n
            for n in device_holder.graph.findAllNodes("prim::Constant")
            if "Device" in repr(n)
        ][-1]

        def patch_device(module):
            try:
                graphs = [module.graph] if hasattr(module, "graph") else []
            except RuntimeError:
                graphs = []

            if hasattr(module, "forward1"):
                graphs.append(module.forward1.graph)

            for graph in graphs:
                for node in graph.findAllNodes("prim::Constant"):
                    if "value" in node.attributeNames() and str(
                        node["value"]
                    ).startswith("cuda"):
                        node.copyAttributes(device_node)

        model.apply(patch_device)
        patch_device(model.encode_image)
        patch_device(model.encode_text)

        # patch dtype to float32 on CPU
        if device == "cpu":
            float_holder = torch.jit.trace(
                lambda: torch.ones([]).float(), example_inputs=[]
            )
            float_input = list(
                float_holder.graph.findNode("aten::to").inputs()
            )[1]
            float_node = float_input.node()

            def patch_float(module):
                try:
                    graphs = [module.graph] if hasattr(module, "graph") else []
                except RuntimeError:
                    graphs = []

                if hasattr(module, "forward1"):
                    graphs.append(module.forward1.graph)

                for graph in graphs:
                    for node in graph.findAllNodes("aten::to"):
                        inputs = list(node.inputs())
                        for i in [1, 2]:  # dtype can be the second or third argument to aten::to()
                            if inputs[i].node()["value"] == 5:
                                inputs[i].node().copyAttributes(float_node)

            model.apply(patch_float)
            patch_float(model.encode_image)
            patch_float(model.encode_text)
            model.float()

    # ensure image_size attr available at consistent location for both jit and non-jit
    model.visual.image_size = model.input_resolution.item()
    if precision == "fp32":
        model = model.float()
```
Member: We can simply use load_openai_model() here.

Contributor (Author): We can't, because load_openai_model contains open_clip's download method, which prevents us from downloading the model from our own S3 bucket.

Member: No, load_openai_model also accepts model_path as a parameter.
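A minimal sketch of that suggestion, assuming the open_clip version in use lets load_openai_model take a local checkpoint path as its first argument (the path and pre-download step below are illustrative, not part of this PR):

```python
import torch
from open_clip import load_openai_model

# Hypothetical checkpoint pre-downloaded from our own S3 bucket, so
# open_clip's built-in download step is bypassed entirely.
model_path = '/tmp/clip/ViT-B-32.pt'

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = load_openai_model(model_path, device=device, jit=False)
model.eval()
```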


```python
model = CLIP(**model_cfg)

if pretrained:
```
Member: In our case, pretrained cannot be None/empty.

```python
if pretrained:
    if model_path:
        model.load_state_dict(load_state_dict(model_path))
    else:
```
Member: The else is not necessary?
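A hedged sketch of where the two comments above lead: with pretrained guaranteed non-empty and the trailing else dropped, the branch collapses (model_path is assumed to be resolved, e.g. downloaded from S3, before this point):

```python
# Sketch only: `pretrained` is always set on this code path, so the outer
# guard goes away, and no `else` branch is needed.
model = CLIP(**model_cfg)
if model_path:
    model.load_state_dict(load_state_dict(model_path))
```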

Comment on lines 155 to 156
```python
if precision == "fp16":
    convert_weights_to_fp16(model)
```
Member: Change the logic: if device is cuda, then we use fp16.
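A minimal sketch of that logic, assuming open_clip's convert_weights_to_fp16 helper (the same function the snippet above calls) and its import path:

```python
import torch
from open_clip.model import convert_weights_to_fp16  # import path assumed

def apply_device_precision(model: torch.nn.Module, device: str) -> torch.nn.Module:
    """Derive precision from the device rather than a separate flag (sketch)."""
    model.to(device=torch.device(device))
    if str(device) == 'cuda':
        convert_weights_to_fp16(model)  # fp16 on GPU to cut VRAM usage
    else:
        model.float()  # keep fp32 on CPU
    return model
```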

```python
    self._model_name = model_name
else:
```
Member: We should make sure all of the models are uploaded to our S3 bucket.

Member: I uploaded the .pt models; they should be available in a few minutes.

Member: Update: make that a few hours 😅 poor internet.

```diff
@@ -74,4 +74,4 @@ def encode_text(
         )

     def encode_image(self, pixel_values: torch.Tensor, **kwargs):
-        return self._model.encode_image(pixel_values)
+        return self._model.encode_image(pixel_values, **kwargs)
```
Member: No, we are sure encode_image only accepts pixel_values here, so kwargs should not be forwarded.

github-actions bot added size/s and removed size/m labels on Jul 25, 2022
server/clip_server/model/openclip_model.py (thread resolved)
```python
    model.load_state_dict(load_state_dict(model_path))
model.to(device=torch.device(device))

if device == 'cuda':
```
Member: Suggested change:

```diff
-if device == 'cuda':
+if str(device) == 'cuda':
```
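A hedged illustration of why the cast matters: device often arrives as a torch.device rather than a plain str, and on the PyTorch releases of this era the two do not compare equal (behavior assumed from PyTorch 1.x):

```python
import torch

device = torch.device('cuda')
print(device == 'cuda')       # assumed False: torch.device vs. str mismatch
print(str(device) == 'cuda')  # True once normalized to a string
```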

shan-mx and others added 6 commits July 26, 2022 10:24
numb3r3 force-pushed the update-openclip-loader branch from d361f72 to ab23235 on July 26, 2022 02:40
numb3r3 (Member) left a comment: LGTM 👍

numb3r3 merged commit f043b4d into main on Jul 26, 2022
numb3r3 deleted the update-openclip-loader branch on July 26, 2022 03:13