
Stablelm-2-1_6b-chat config extracted from GGUF file differs from source model config #34426

Closed
1 of 4 tasks
Isotr0py opened this issue Oct 26, 2024 · 2 comments · Fixed by #34450
Comments

@Isotr0py (Contributor) commented Oct 26, 2024

System Info

  • transformers version: 4.46.0
  • Platform: Linux-6.1.85+-x86_64-with-glibc2.35
  • Python version: 3.10.12
  • Huggingface_hub version: 0.24.7
  • Safetensors version: 0.4.5
  • Accelerate version: 0.34.2
  • Accelerate config: not found
  • PyTorch version (GPU?): 2.5.0+cu121 (False)
  • Tensorflow version (GPU?): 2.17.0 (False)
  • Flax version (CPU?/GPU?/TPU?): 0.8.5 (cpu)
  • Jax version: 0.4.33
  • JaxLib version: 0.4.33
  • Using distributed or parallel set-up in script?:

Who can help?

@SunMarc
Also cc @VladOS95-cyber since you added GGUF support for StableLM :)

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

from transformers import AutoConfig

config_hf = AutoConfig.from_pretrained("stabilityai/stablelm-2-1_6b-chat")
config_gguf = AutoConfig.from_pretrained("Crataco/stablelm-2-1_6b-chat-imatrix-GGUF", gguf_file="stablelm-2-1_6b-chat.IQ4_XS.imx.gguf")
print(config_hf)
print(config_gguf)

Outputs

StableLmConfig {
  ...
  "use_qkv_bias": true,
  "vocab_size": 100352
}

StableLmConfig {
  ...
  "use_qkv_bias": false,
  "vocab_size": 100352
}

Expected behavior

The stabilityai/stablelm-2-1_6b-chat model has use_qkv_bias=True. However, the config extracted from the stablelm-2-1_6b-chat GGUF file has use_qkv_bias=False, which causes the model to be initialized without the qkv_proj bias.

@VladOS95-cyber (Contributor)

Hey @Isotr0py, @SunMarc! By default, use_qkv_bias is always false, because this parameter is not stored in the GGUF metadata and there is no logic that would recover it in https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py. The original model can have use_qkv_bias set to either true or false, depending on the config attached to the model. So, in this case, if you want the GGUF model to behave exactly like the original one, you should explicitly pass a config with use_qkv_bias = True, at least for now.
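
A minimal sketch of that workaround, reusing the repos from the reproduction above (whether from_pretrained accepts an explicit config together with gguf_file in exactly this way is an assumption on my side, not something I have verified):

from transformers import AutoConfig, AutoModelForCausalLM

# Start from the config extracted from the GGUF file, then restore the field
# that GGUF does not carry (assumption: use_qkv_bias=True matches the original
# stabilityai/stablelm-2-1_6b-chat config).
config = AutoConfig.from_pretrained(
    "Crataco/stablelm-2-1_6b-chat-imatrix-GGUF",
    gguf_file="stablelm-2-1_6b-chat.IQ4_XS.imx.gguf",
)
config.use_qkv_bias = True

model = AutoModelForCausalLM.from_pretrained(
    "Crataco/stablelm-2-1_6b-chat-imatrix-GGUF",
    gguf_file="stablelm-2-1_6b-chat.IQ4_XS.imx.gguf",
    config=config,
)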

@Isotr0py (Contributor, Author)

@VladOS95-cyber Thanks for the explanation! I think a potential solution is to check whether attn_q.bias etc. are present in the GGUF tensors, which I've implemented in #34450.

But I'm afraid that this will increase the CPU overhead for GGUF config extraction. WDYT?
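
Roughly what I have in mind (a sketch only, using the gguf Python package; the exact tensor-name suffixes and helper name here are assumptions and not necessarily what #34450 ends up doing):

from gguf import GGUFReader

def gguf_has_qkv_bias(gguf_path: str) -> bool:
    # Scan the tensor names stored in the GGUF file; the presence of any
    # attention bias tensor (e.g. "blk.0.attn_q.bias") implies the source
    # model was exported with use_qkv_bias=True.
    reader = GGUFReader(gguf_path)
    return any(
        t.name.endswith(("attn_q.bias", "attn_k.bias", "attn_v.bias"))
        for t in reader.tensors
    )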
