About the inference #9
Thanks for your great work!
I want to know: for the style transfer task, do I need to input a reference picture, a style word corresponding to that picture, and a target prompt to the model, i.e., the triplet <reference img, reference style word, target prompt>?

Comments
No, you only need a pair <reference image, target prompt>.
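For context, here is a minimal sketch of what such a call might look like. `DEADiffPipeline` and its arguments are hypothetical stand-ins, not the repo's actual entry point; only the <reference image, target prompt> pairing comes from the answer above.

```python
# Hypothetical inference sketch: DEADiffPipeline and its arguments are
# illustrative stand-ins, not the repo's actual API. The point is that
# only a reference image and a target prompt are needed at inference.
from PIL import Image

pipe = DEADiffPipeline.from_pretrained("path/to/checkpoint").to("cuda")  # hypothetical class

reference = Image.open("reference_style.png").convert("RGB")  # style reference
prompt = "a cat sitting on a windowsill"                      # target prompt

result = pipe(prompt=prompt, reference_image=reference).images[0]
result.save("output.png")
```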
Thank you for your answer. So what are the inputs and targets during training?
There are three kinds of training pairs:
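Although the three kinds are not spelled out in this thread, a single training pair plausibly has the shape below. This is a sketch assembled from the surrounding answers, not the repo's actual data format; the three kinds would presumably differ in how the reference and target images are sampled.

```python
# Hypothetical shape of one training pair, inferred from this thread:
# a reference image, a target image, the task word fed to the Q-former,
# and the text prompt for the target. Not the repo's actual data format.
from dataclasses import dataclass
from PIL import Image

@dataclass
class TrainingPair:
    reference: Image.Image  # conditioning (reference) image
    target: Image.Image     # image the model learns to denoise toward
    task_word: str          # "content" or "style", fed to the Q-former as text
    prompt: str             # text prompt describing the target image
```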
Thank you, but in the paper the Q-former's input should include the text {content} or {style}. What is it exactly?
The text input to the Q-former is the literal word "content" or "style".
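To make the mechanism concrete, here is a toy, self-contained illustration of that idea: the task word ("content" or "style") biases learnable query tokens before they cross-attend to image features. The module, shapes, and the way the word is injected are all illustrative assumptions, not DEADiff's actual architecture.

```python
# Toy sketch of task-conditioned queries; not the repo's architecture.
import torch
import torch.nn as nn

d = 768
task_embed = nn.Embedding(2, d)              # 0 -> "content", 1 -> "style"
queries = nn.Parameter(torch.randn(16, d))   # learnable query tokens
cross_attn = nn.MultiheadAttention(d, num_heads=8, batch_first=True)

image_feats = torch.randn(1, 257, d)         # e.g. CLIP ViT patch features

task = "style"
task_id = torch.tensor([0 if task == "content" else 1])
# Condition the queries on the task word (one simple possibility),
# then let them cross-attend to the reference image's features.
q = queries.unsqueeze(0) + task_embed(task_id)[:, None, :]
out, _ = cross_attn(q, image_feats, image_feats)  # (1, 16, d) task-specific tokens
```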
Oh, I see. Thanks for your patience!
Hi @Tianhao-Qi, does the currently released code support the "Stylized Reference Object Generation" function? Basically, I want to convert a given image to a different style by providing only text; the given image is the source image rather than the style image.
You can refer to this script. Besides, if you want to keep the structure of the source image as well, you'll need to use ControlNet.
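The linked script is the repo's own; as a generic illustration of the ControlNet idea (constraining generation with the source image's structure via an edge map), here is a sketch using the stock diffusers ControlNet API as a stand-in. The model IDs and the Canny conditioning are examples, not DEADiff's actual setup.

```python
# Generic ControlNet sketch using stock diffusers as a stand-in for the
# repo's script: an edge map of the source image constrains the layout
# while the prompt drives the new style. Model IDs are just examples.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

source = load_image("source.png")               # structure comes from here
edges = cv2.Canny(np.array(source), 100, 200)   # extract the layout
edges = Image.fromarray(np.stack([edges] * 3, axis=-1))

result = pipe("a watercolor painting of the same scene", image=edges).images[0]
result.save("stylized.png")
```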
In the Dataset part, for "style", your paper says the same prompts are used to generate the reference and target images. So I think they should share the same subject?