
Fix shape error that occurred when loading lora weight of gemma2 model. #2330

Merged (2 commits, Dec 8, 2024)

Conversation

@upskyy (Contributor) commented Dec 3, 2024

Motivation & Modifications

The get_hidden_dim functions of the llama model and the gemma2 model differ, which causes a shape error when loading gemma2 LoRA weights.
For example, in the llama model, head_dim * num_attention_heads equals hidden_size, so returning self.config.hidden_size, self.config.hidden_size works fine (3072 = 128 * 24).
In the gemma2 model, however, head_dim * num_attention_heads does not equal hidden_size, so get_hidden_dim needs to be implemented differently: hidden_size is 2304, while head_dim * num_attention_heads is 2048 (2304 != 2048).

As a result, q_proj in the gemma2 model should return self.config.hidden_size, self.config.head_dim * self.config.num_attention_heads rather than self.config.hidden_size, self.config.hidden_size.
Because the input_dim and output_dim now differ, o_proj also needs to be adjusted accordingly.
I modified the code to ensure that the gemma2 model works correctly with multi-LoRA inference, and I have verified its functionality.

llama3 config: https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct/blob/main/config.json
gemma2 config: https://huggingface.co/google/gemma-2-2b-it/blob/main/config.json
current get_hidden_dim functions: https://github.com/sgl-project/sglang/blob/v0.3.6.post2/python/sglang/srt/models/llama.py#L326-L339

The gemma2 model does not implement a get_hidden_dim function, so it currently bypasses the model-specific path in the following code:
(https://github.com/sgl-project/sglang/blob/v0.3.6.post2/python/sglang/srt/lora/lora_manager.py#L34-L46)
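For illustration, here is a minimal sketch of how get_hidden_dim could be implemented for the gemma2 model, mirroring the (input_dim, output_dim) convention of the linked llama implementation. The module names handled here (q_proj, kv_proj, o_proj, gate_up_proj, down_proj) are assumed from that implementation; see the PR diff for the exact change:

```python
def get_hidden_dim(self, module_name):
    # Return (input_dim, output_dim) for the given projection module.
    # For google/gemma-2-2b-it: hidden_size = 2304, head_dim = 256,
    # num_attention_heads = 8, so head_dim * num_attention_heads = 2048 != 2304.
    if module_name in ["q_proj", "qkv_proj"]:
        return (
            self.config.hidden_size,
            self.config.head_dim * self.config.num_attention_heads,
        )
    elif module_name == "kv_proj":
        return (
            self.config.hidden_size,
            self.config.head_dim * self.config.num_key_value_heads,
        )
    elif module_name == "o_proj":
        return (
            self.config.head_dim * self.config.num_attention_heads,
            self.config.hidden_size,
        )
    elif module_name == "gate_up_proj":
        return self.config.hidden_size, self.config.intermediate_size
    elif module_name == "down_proj":
        return self.config.intermediate_size, self.config.hidden_size
    else:
        raise NotImplementedError()
```

The key difference from the llama version is that the attention projections use head_dim * num_attention_heads (and head_dim * num_key_value_heads for the KV projection) instead of hidden_size, since the two are not equal in gemma2.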

Checklist

  • Format your code according to the Contributor Guide.
  • Add unit tests as outlined in the Contributor Guide.
  • Update documentation as needed, including docstrings or example tutorials.

@merrymercy merrymercy enabled auto-merge (squash) December 8, 2024 09:03
@merrymercy merrymercy disabled auto-merge December 8, 2024 09:03