Attention implementation cannot work together with config in AutoModel #30298
Given the logic below, we cannot enforce the model to use eager attention when a config is also passed:

transformers/src/transformers/modeling_utils.py, lines 3138 to 3150 at e4ea19b

I think the relevant pieces are:

1. transformers/src/transformers/configuration_utils.py, lines 406 to 420 at e4ea19b
2. transformers/src/transformers/modeling_utils.py, lines 1461 to 1466 at e4ea19b
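For context, the interplay looks roughly like the following self-contained sketch (a paraphrase of the referenced logic, not the verbatim library code):

```python
# Paraphrased sketch of why an explicit attn_implementation kwarg can be
# silently dropped when it equals the config property's fallback value.

class Config:
    _attn_implementation_internal = None  # nothing set explicitly

    @property
    def _attn_implementation(self):
        # configuration_utils.py: fall back to "eager" when unset,
        # for backward compatibility.
        if self._attn_implementation_internal is None:
            return "eager"
        return self._attn_implementation_internal

    @_attn_implementation.setter
    def _attn_implementation(self, value):
        self._attn_implementation_internal = value


def from_pretrained_check(config, kwarg_attn_imp):
    # modeling_utils.py (from_pretrained): the kwarg is only written into
    # the config when it differs from the property's current value.
    if kwarg_attn_imp is not None and config._attn_implementation != kwarg_attn_imp:
        config._attn_implementation = kwarg_attn_imp
    # The later auto-selection checks the *internal* attribute; when it is
    # still None, it is free to pick e.g. SDPA instead of eager.
    return config._attn_implementation_internal


config = Config()
# "eager" compares equal to the property's default, so the assignment is
# skipped and the internal attribute stays None -> the request is lost.
print(from_pretrained_check(config, "eager"))  # None
```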
cc @fxmarty
System Info

transformers version: 4.40.0.dev0

Who can help?

@younesbelkada
Information

Tasks

An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
Similar to #28038.

We want to pass a model config to from_pretrained together with an attn_implementation parameter. The attention implementation that is actually used is not faithful to the attn_implementation argument.
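A minimal sketch of the scenario (the checkpoint name is only an illustration; any model whose architecture supports both SDPA and eager attention reproduces the behavior):

```python
from transformers import AutoConfig, AutoModel

# Hypothetical checkpoint for illustration.
checkpoint = "bert-base-uncased"

config = AutoConfig.from_pretrained(checkpoint)

# Pass the config together with an explicit attn_implementation.
model = AutoModel.from_pretrained(
    checkpoint,
    config=config,
    attn_implementation="eager",
)

# The request is not honored: this prints e.g. "sdpa" instead of "eager".
print(model.config._attn_implementation)
```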
Expected behavior

_attn_implementation should be eager.