
[fix] update pairwise dataloader. #395

Merged: 3 commits into CarperAI:main on Apr 11, 2023

Conversation

@Chen9154 (Contributor) commented on Mar 27, 2023

In forward() of reward_model.py (Line 62), if "chosen" and "rejected" are exactly the same, "inference" is set to True, which should not happen during training. However, in class PairwiseDataset, "chosen" and "rejected" can become identical after truncation (this easily happens when prompts/posts are longer than max_length and padding_side is set to 'right'). So we filter those cases out of the training data.
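A minimal sketch of how such a collapse can occur, assuming a GPT-2 tokenizer, max_length=550, and toy strings purely for illustration (none of these are specified above): when the shared post alone exceeds max_length, right-side truncation (the tokenizer default) cuts off both summaries and the two encodings end up identical.

import torch
from transformers import AutoTokenizer

# Illustrative setup only; the actual tokenizer/model is not named in this PR.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

long_post = "SUBREDDIT: r/example POST: " + "a very long post " * 300
chosen = long_post + " TL;DR: the better summary"
rejected = long_post + " TL;DR: the worse summary"

def encode(text, max_length=550):
    return tokenizer(
        "<|startoftext|>" + text + "<|endoftext|>",
        truncation=True,
        max_length=max_length,
        padding="max_length",
        return_tensors="pt",
    )

chosen_ids = encode(chosen)["input_ids"]
rejected_ids = encode(rejected)["input_ids"]

# Truncation from the right cuts both sequences back to the shared post
# prefix, so the pair no longer carries any preference signal.
print(torch.equal(chosen_ids, rejected_ids))  # True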

@jon-tow (Collaborator) left a comment

Hi, @Chen9154! Thanks for catching this edge case; looks good to me 👍

Would you be able to also make this same change to the test file for the sake of completeness, please?

class PairwiseDataset(Dataset):
    def __init__(self, pairs, tokenizer, max_length):
        self.chosen_input_ids = []
        self.chosen_attn_masks = []
        self.rejected_input_ids = []
        self.rejected_attn_masks = []
        for pair in pairs:
            chosen, rejected = pair["chosen"], pair["rejected"]
            chosen_encodings_dict = tokenizer(
                "<|startoftext|>" + chosen + "<|endoftext|>",
                truncation=True,
                max_length=max_length,
                padding="max_length",
                return_tensors="pt",
            )
            rejected_encodings_dict = tokenizer(
                "<|startoftext|>" + rejected + "<|endoftext|>",
                truncation=True,
                max_length=max_length,
                padding="max_length",
                return_tensors="pt",
            )
            self.chosen_input_ids.append(chosen_encodings_dict["input_ids"])
            self.chosen_attn_masks.append(chosen_encodings_dict["attention_mask"])
            self.rejected_input_ids.append(rejected_encodings_dict["input_ids"])
            self.rejected_attn_masks.append(rejected_encodings_dict["attention_mask"])
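The merged diff itself is not shown in this excerpt. A minimal sketch of the filter described above, replacing the four append calls at the end of the loop body (it requires import torch; the exact condition in the merged commit may differ):

            # Skip pairs whose token ids are identical after truncation;
            # such pairs would flip the reward model into its inference
            # path and contribute no training signal.
            if torch.all(
                torch.eq(
                    chosen_encodings_dict["input_ids"],
                    rejected_encodings_dict["input_ids"],
                )
            ).item():
                continue
            self.chosen_input_ids.append(chosen_encodings_dict["input_ids"])
            self.chosen_attn_masks.append(chosen_encodings_dict["attention_mask"])
            self.rejected_input_ids.append(rejected_encodings_dict["input_ids"])
            self.rejected_attn_masks.append(rejected_encodings_dict["attention_mask"])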

@Chen9154 (Contributor, Author) commented:

@jon-tow Thanks for the review! I have also made the same change to the test file.

@jon-tow (Collaborator) left a comment

Awesome! Thank you, @Chen9154!

@jon-tow merged commit adb3be2 into CarperAI:main on Apr 11, 2023