
[megatron gpt checkpoint conversion] causal mask requires pos_embed dimension #13735

Merged — 1 commit merged into huggingface:master on Sep 26, 2021

Conversation

@stas00 (Contributor) commented on Sep 24, 2021

This is a follow-up to #13508, where I fixed the wrong side of the bug :( — this one is hopefully the correct fix.

The causal mask uses the positional-embedding dimension (the sequence length), not n_embd (hidden_size) as it was originally coded; the original code only happened to work because the original meg-gpt2 model had the same n_embd and seq-length values.
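
To illustrate the point, here is a minimal sketch of the mask construction; this is not the actual conversion-script code, and the names `n_embd`, `n_positions`, and `causal_mask` are illustrative assumptions:

```python
import torch

# Illustrative values; in the real conversion they come from the Megatron config.
n_embd = 1024       # hidden size -- the wrong dimension to size the mask with
n_positions = 1024  # max sequence length (positional-embedding size) -- the right one

# The causal mask is a lower-triangular matrix over *sequence positions*,
# so it must be built from n_positions, not n_embd. The two values only
# coincided in the original 345M checkpoint, which hid the bug.
causal_mask = (
    torch.tril(torch.ones(n_positions, n_positions))
    .view(1, 1, n_positions, n_positions)
    .to(torch.bool)
)
```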

I re-tested that the original megatron_lm_345m/release/mp_rank_00/model_optim_rng.pt still produces the same converted output.

@sgugger, @LysandreJik

@stas00 marked this pull request as ready for review on September 24, 2021 22:01
@sgugger (Collaborator) left a comment


Let's hope this one is the right fix!

@stas00 merged commit 400c5a1 into huggingface:master on Sep 26, 2021
@stas00 deleted the megatron_convert_take2 branch on September 26, 2021 16:51