How to use the 512 * 512 dataset #66
Comments
Hi, thanks for your interest. Which config file did you use?
Thanks for your reply, here is my config:
--train_yaml /data0/mfyan/Human_Attribute/composite/train_TiktokDance-coco-single_person-Lindsey_0411_youtube-SHHQ-1.0-deepfashion2-laion_human-masks-single_cap.yaml \
--val_yaml /data0/mfyan/Human_Attribute/composite/val_TiktokDance-coco-single_person-SHHQ-1.0-masks-single_cap.yaml \
Hello, same question here. If I change the image size to 512 * 512, the model cannot converge to a reasonable result, while the original 256 * 256 setting can. The only change I made was setting img_full_size = (256, 256) and img_size = (256, 256) in config/ref_attn_clip_combine_controlnet_attr_pretraining/coco_S256_xformers_tsv_strongrand.py to (512, 512); a sketch of that edit is shown below.
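For readers hitting the same issue, this is roughly the edit being described: a minimal sketch, assuming img_full_size and img_size are plain module-level tuples in that config file (only the two values mentioned in this thread are shown; the rest of the file is omitted).

```python
# config/ref_attn_clip_combine_controlnet_attr_pretraining/coco_S256_xformers_tsv_strongrand.py
# Sketch of the resolution change described above, not the full config.

# original 256 x 256 settings:
# img_full_size = (256, 256)
# img_size = (256, 256)

# changed for 512 x 512 training:
img_full_size = (512, 512)
img_size = (512, 512)

# Assumption: other resolution-dependent entries in the config (e.g. crop
# sizes or latent/feature-map sizes derived from the image size) may also
# need to be scaled consistently; changing only these two tuples is what
# the comment above reports, and it did not converge at 512 x 512.
```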
Hi, thanks for the great work. I want to train with the 512*512 dataset, and I set --image_size to 512, but it doesn't work.
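As a quick sanity check, one way to confirm which resolution a run will actually use is to load the config module and print the two tuples mentioned above. This is a generic sketch, not the repository's own tooling; it assumes the config file exposes img_full_size and img_size as module-level variables and can be executed without side effects.

```python
import importlib.util

# Path taken from the earlier comment; adjust it to your checkout.
CFG_PATH = (
    "config/ref_attn_clip_combine_controlnet_attr_pretraining/"
    "coco_S256_xformers_tsv_strongrand.py"
)

# Load the config file as a plain Python module.
spec = importlib.util.spec_from_file_location("train_cfg", CFG_PATH)
cfg = importlib.util.module_from_spec(spec)
spec.loader.exec_module(cfg)

# If these still print (256, 256), the --image_size flag alone did not
# change the training resolution and the config file itself needs editing.
print("img_full_size:", getattr(cfg, "img_full_size", None))
print("img_size:", getattr(cfg, "img_size", None))
```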