docs: update finetuner docs #843

Merged · 9 commits · Oct 21, 2022
docs/user-guides/finetuner.md: 16 changes (14 additions & 2 deletions)
@@ -7,6 +7,8 @@ This guide will show you how to use [Finetuner](https://finetuner.jina.ai) to fi…
For installation and basic usage of Finetuner, please refer to [Finetuner documentation](https://finetuner.jina.ai).
You can also [learn more details about fine-tuning CLIP](https://finetuner.jina.ai/tasks/text-to-image/).

This tutorial requires `finetuner>=0.6.3` and `clip_server>=0.6.0`.
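Assuming the usual PyPI package names (and the `onnx` extra, since the fine-tuned model is exported to ONNX later in this guide), installation would look something like:

```bash
pip install -U "finetuner>=0.6.3" "clip-server[onnx]>=0.6.0"
```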

## Prepare Training Data

Finetuner accepts training data and evaluation data in the form of {class}`~docarray.array.document.DocumentArray`.
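For example, here is a minimal sketch of such a training set, following the text-to-image pairing format described in the Finetuner documentation; the captions and image paths are placeholders:

```python
from docarray import Document, DocumentArray

# Each training Document pairs a text chunk with an image chunk;
# the `modality` tags tell Finetuner which encoder each chunk feeds.
train_da = DocumentArray([
    Document(chunks=[
        Document(content='a photo of a black dog', modality='text'),
        Document(uri='images/black-dog.jpg', modality='image'),
    ]),
    Document(chunks=[
        Document(content='a photo of a red apple', modality='text'),
        Document(uri='images/red-apple.jpg', modality='image'),
    ]),
])
train_da.save_binary('train.da')  # or push it to Jina AI Cloud
```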
@@ -91,7 +93,7 @@ run = finetuner.fit(
 epochs=5,
 learning_rate=1e-5,
 loss='CLIPLoss',
-cpu=False,
+to_onnx=True,
**Review comment (Member):** As Finetuner supports open_clip, can we fine-tune `model='ViT-B-32::openai'` in this tutorial?

**Reply (Member):** This model name does not match the one used in Finetuner.
)
```
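Putting the diff in context, a hedged sketch of the complete fine-tuning call might look like the following. The model name and data references are placeholders, not taken from this diff; as the review thread notes, the exact model string should be checked against `finetuner.describe_models()` (see the tip below):

```python
import finetuner

finetuner.login()  # Finetuner runs as a cloud service, so log in first

run = finetuner.fit(
    model='openai/clip-vit-base-patch32',  # placeholder; confirm via finetuner.describe_models()
    train_data=train_da,   # the DocumentArray prepared earlier
    eval_data=eval_da,     # held-out text-image pairs for evaluation
    epochs=5,
    learning_rate=1e-5,
    loss='CLIPLoss',
    to_onnx=True,          # export to ONNX so clip_server can serve it
)
print(run.name)  # keep this to fetch logs and the fine-tuned artifact later
```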

@@ -174,10 +176,20 @@ executors:
replicas: 1
```
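For context, the full Flow config around those lines might look roughly like the sketch below. It is pieced together from the clip_server documentation rather than copied from this diff: the port, executor name, and `model_path` value are placeholders, with `model_path` pointing at the fine-tuned ONNX artifact pulled from Finetuner.

```yaml
jtype: Flow
version: '1'
with:
  port: 51000
executors:
  - name: clip_o
    uses:
      jtype: CLIPEncoder
      with:
        name: ViT-B-32::openai
        model_path: finetuned-model    # artifact downloaded from Finetuner
      metas:
        py_modules:
          - clip_server.executors.clip_onnx
    replicas: 1
```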


```{warning}
-Note that Finetuner currently only supports the ViT-B/32 CLIP model. The model name should match the fine-tuned model, or you will get incorrect output.
+Note that `finetuner==0.6.3` doesn't support these new CLIP models trained on LAION-2B:
+- ViT-B-32::laion2b-s34b-b79k
+- ViT-L-14::laion2b-s32b-b82k
+- ViT-H-14::laion2b-s32b-b79k
+- ViT-g-14::laion2b-s12b-b42k
```

```{tip}
You can use `finetuner.describe_models()` to check the supported models.
```
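For instance:

```python
import finetuner

# Prints a table of the models Finetuner supports; copy the CLIP
# entry verbatim into the `model` argument of finetuner.fit().
finetuner.describe_models()
```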


You can now start `clip_server` with the fine-tuned model to get a performance boost:

```bash
# Assuming the custom Flow config above is saved as `finetuned-clip.yml`
python -m clip_server finetuned-clip.yml
```