Fine-tuning llava1.5 raises an image-count mismatch error; the handling of img_token in llava_plugin may be the cause #5344
Comments
I ran into the same problem.
I am trying to make LlavaPlugin's process_message do nothing except check whether the number of images matches the number of img tokens. In theory that should work; earlier versions of llama factory seem to have used similar logic.
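The workaround floated above could look roughly like the sketch below. This is a hedged illustration only: `validate_only`, its signature, and `IMAGE_PLACEHOLDER`'s value are assumptions, not the actual LLaMA-Factory plugin API.

```python
# Sketch of a "do nothing, only validate" process_message replacement:
# leave message content untouched and just check that the number of
# images matches the number of image placeholders.
# (Names and placeholder value are assumptions, not LLaMA-Factory's API.)

IMAGE_PLACEHOLDER = "<image>"

def validate_only(messages: list[dict], num_images: int) -> list[dict]:
    num_tokens = sum(m["content"].count(IMAGE_PLACEHOLDER) for m in messages)
    if num_tokens != num_images:
        raise ValueError(f"{num_images} images but {num_tokens} image tokens")
    return messages  # content is returned unmodified

msgs = [{"role": "user", "content": "<image> What is shown here?"}]
print(validate_only(msgs, num_images=1)[0]["content"])
```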
You need to update transformers to the latest 4.35.0.dev0.
How do you solve this? I haven't been able to fix it.
Could you explain how you got it working?
In the llava plugin, replace message["content"] = content.replace("{{image}}", self.image_token * image_seqlen) with message["content"] = content.replace("{{image}}", IMAGE_PLACEHOLDER). The former inserts image_seqlen img tokens; the latter inserts just one. Alternatively, upgrade transformers to 4.35.0.dev0, but I haven't tested that yet.
I hit the same problem with transformers 4.45. In the llava plugin, replacing message["content"] = content.replace("{{image}}", self.image_token * image_seqlen) with message["content"] = content.replace("{{image}}", IMAGE_PLACEHOLDER) works. The former inserts image_seqlen img tokens; the latter inserts just one.
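The one-line change described above can be sketched in isolation as follows. This is a simplified stand-in, not the actual plugin source: the function names and the concrete `IMAGE_PLACEHOLDER` value are assumptions for illustration.

```python
# Before vs. after the reported fix in the llava plugin's {{image}} handling.
# (Standalone sketch; names and placeholder value are illustrative assumptions.)

IMAGE_PLACEHOLDER = "<image>"  # single placeholder the model expands later

def old_expand(content: str, image_token: str, image_seqlen: int) -> str:
    # Old behavior: every {{image}} becomes image_seqlen copies of the token.
    return content.replace("{{image}}", image_token * image_seqlen)

def patched_expand(content: str) -> str:
    # Reported fix: insert exactly one placeholder per {{image}} and let
    # LlavaForConditionalGeneration expand it into 576 tokens internally.
    return content.replace("{{image}}", IMAGE_PLACEHOLDER)

prompt = "USER: {{image}}\nDescribe the picture. ASSISTANT:"
print(old_expand(prompt, "<image>", 576).count("<image>"))  # 576
print(patched_expand(prompt).count("<image>"))              # 1
```

With the old expansion the model sees 576 placeholder tokens for one image, which is exactly the count mismatch reported in this issue.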
Could you describe the steps in detail?
Could you describe the steps in detail?
My understanding of the img_token handling in LlavaPlugin is: (1) find every img token in content and replace it with {{image}}; (2) replace each {{image}} with image_seqlen img tokens. In llava the number of img tokens per image is 576, i.e. image_seqlen is 576. So if content contains a single img token, after LlavaPlugin's processing content will carry 576 img tokens. But looking at the source of LlavaForConditionalGeneration, the model actually assumes only one img token is passed in: it locates that token's position and then inserts all 576 image tokens in one step. Doesn't that conflict with the logic in the llama-factory source?
The actual error I got was:
File "/root/miniconda3/lib/python3.10/site-packages/transformers/models/llava/modeling_llava.py", line 339, in _merge_input_ids_with_image_features
    raise ValueError(
ValueError: The input provided to the model are wrong. The number of image tokens is 576 while the number of image given to the model is 1. This prevents correct indexing and breaks batch generation.
Perhaps this can be read as: 576 img tokens were fed in, which should correspond to 576 images, but the dataset only supplied one image, hence the error. Before the qwen-vl update, the img-token handling seems to have differed from the current llama factory version. Was the change made for qwen-vl compatibility?
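The interpretation above can be illustrated with a minimal stand-in for the consistency check that raises the quoted error. This is not the real transformers source: the function name, check shape, and `IMAGE_TOKEN_ID` value are assumptions, but the counting logic mirrors the error message (placeholder-token count vs. number of images).

```python
# Minimal stand-in (assumption, not the actual transformers code) for the
# check in _merge_input_ids_with_image_features: count image-placeholder
# tokens in input_ids and compare against the number of images provided.

IMAGE_TOKEN_ID = 32000  # hypothetical vocab id for the <image> token

def merge_check(input_ids: list[int], num_images: int) -> None:
    num_image_tokens = input_ids.count(IMAGE_TOKEN_ID)
    if num_image_tokens != num_images:
        raise ValueError(
            f"The number of image tokens is {num_image_tokens} while the "
            f"number of images given to the model is {num_images}."
        )

merge_check([1, 42, IMAGE_TOKEN_ID, 7], num_images=1)  # one placeholder: OK
try:
    # 576 pre-expanded placeholders but only one image -> the reported error
    merge_check([1, 42] + [IMAGE_TOKEN_ID] * 576, num_images=1)
except ValueError as err:
    print(err)
```

This shows why expanding {{image}} into 576 tokens on the plugin side collides with a model that expects exactly one placeholder per image.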