
Do you have plans to add fine-tuning scripts for other multimodal large models? For example, Qwen-VL, LLaVA-1.6, MiniGPT-4, etc. #4174

Closed
xdaiycl opened this issue Jun 9, 2024 · 1 comment
Labels
solved This problem has been already solved

Comments


xdaiycl commented Jun 9, 2024

Reminder

  • I have read the README and searched the existing issues.

System Info

None

Reproduction

None

Expected behavior

None

Others

None

github-actions bot added the pending (This problem is yet to be addressed) label on Jun 9, 2024
BUAADreamer (Collaborator) commented

We are working in #4136 to support some SOTA MLLMs with an MLP connector, such as LLaVA-Next (1.6), Idefics2, Video-LLaVA, and LLaVA-Next-Video.
MLLMs with a Q-Former connector, such as BLIP-2, InstructBLIP, Qwen-VL, and MiniGPT-4, will not be supported for now.
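
For context, here is a minimal sketch of the two connector styles mentioned above. The class names, dimensions, and layer choices are illustrative assumptions, not LLaMA-Factory's actual implementation: an MLP connector simply projects each vision token into the LLM embedding space, while a Q-Former compresses the vision tokens through a fixed set of learned queries via cross-attention.

```python
import torch
import torch.nn as nn

class MLPConnector(nn.Module):
    """LLaVA-style connector: project each vision token into the LLM embedding space."""
    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_feats):           # (batch, num_patches, vision_dim)
        return self.proj(vision_feats)         # (batch, num_patches, llm_dim)

class QFormerConnector(nn.Module):
    """BLIP-2/Qwen-VL-style connector: a fixed number of learned queries
    cross-attend to the vision tokens, producing a compressed sequence."""
    def __init__(self, vision_dim=1024, llm_dim=4096, num_queries=32, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, vision_dim))
        self.cross_attn = nn.MultiheadAttention(vision_dim, num_heads, batch_first=True)
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, vision_feats):           # (batch, num_patches, vision_dim)
        q = self.queries.unsqueeze(0).expand(vision_feats.size(0), -1, -1)
        attended, _ = self.cross_attn(q, vision_feats, vision_feats)
        return self.proj(attended)             # (batch, num_queries, llm_dim)

# Quick shape check with dummy vision features
feats = torch.randn(2, 576, 1024)
print(MLPConnector()(feats).shape)      # torch.Size([2, 576, 4096])
print(QFormerConnector()(feats).shape)  # torch.Size([2, 32, 4096])
```

The MLP connector keeps a one-to-one mapping between vision patches and LLM input tokens, so it slots into the existing text pipeline with little extra code; the Q-Former introduces an extra trained module and a different visual token budget, which is part of why Q-Former models need separate support.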

hiyouga added the solved (This problem has been already solved) label and removed the pending (This problem is yet to be addressed) label on Jun 28, 2024
hiyouga closed this as completed on Jun 28, 2024