
1810 Move mistral evaluation #1829
Merged · 10 commits · Oct 14, 2024

Conversation

Yousof-kayal (Contributor)

Context

tracker: #1810
What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • clean up

Please link to any issues this PR addresses.

Changelog

What are the changes made in this PR?

  • Copied evaluation.yaml to the recipes/configs/mistral/ directory
  • Updated evaluation.yaml to point to the Mistral 7B model instantiations
  • Updated the recipe registry to pick up the new configuration
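With the config registered, the recipe would presumably be invoked through torchtune's `tune` CLI. A minimal sketch, assuming the relocated config is registered under the name `mistral/evaluation` (the exact registered name is an assumption based on this PR's file layout):

```shell
# Hypothetical invocation of the relocated eval config; the config name
# "mistral/evaluation" is assumed from the new recipes/configs/mistral/ path.
CONFIG="mistral/evaluation"
CMD="tune run eleuther_eval --config ${CONFIG}"
echo "${CMD}"
```

Individual config fields (tasks, checkpoint paths, etc.) can be overridden on the command line in the usual torchtune key=value style.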

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any of these, just ask and we will happily help. We also have a contributing page with some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

Mistral 7B v0.1 Eleuther evaluation recipe output:

batch_size: 2
checkpointer:
  _component_: torchtune.training.FullModelHFCheckpointer
  checkpoint_dir: /tmp/Mistral-7B-v0.1/
  checkpoint_files:
  - pytorch_model-00001-of-00002.bin
  - pytorch_model-00002-of-00002.bin
  model_type: MISTRAL
  output_dir: /tmp/Mistral-7B-v0.1/
  recipe_checkpoint: null
device: cuda
dtype: bf16
enable_kv_cache: true
limit: null
max_seq_length: 4096
model:
  _component_: torchtune.models.mistral.mistral_7b
quantizer: null
resume_from_checkpoint: false
seed: 1234
tasks:
- truthfulqa_mc2
tokenizer:
  _component_: torchtune.models.mistral.mistral_tokenizer
  max_seq_len: null
  path: /tmp/Mistral-7B-v0.1/tokenizer.model

2024-10-14:16:00:47,974 INFO     [eleuther_eval.py:524] Model is initialized with precision torch.bfloat16.
2024-10-14:16:00:48,413 INFO     [huggingface.py:129] Using device 'cuda:0'
2024-10-14:16:00:49,036 INFO     [huggingface.py:481] Using model type 'default'
2024-10-14:16:00:49,329 INFO     [huggingface.py:365] Model parallel was set to False, max memory was not set, and device map was set to {'': 'cuda:0'}
2024-10-14:16:00:55,159 INFO     [__init__.py:459] The tag 'arc_ca' is already registered as a group, this tag will not be registered. This may affect tasks you want to call.
2024-10-14:16:00:55,161 INFO     [__init__.py:459] The tag 'arc_ca' is already registered as a group, this tag will not be registered. This may affect tasks you want to call.
2024-10-14:16:01:05,870 INFO     [eleuther_eval.py:568] Running evaluation on the following tasks: ['truthfulqa_mc2']
2024-10-14:16:01:05,871 INFO     [task.py:415] Building contexts for truthfulqa_mc2 on rank 0...
100%|███████████████████████████████████████████████████████████████████████████████| 817/817 [00:00<00:00, 1301.01it/s]
2024-10-14:16:01:06,535 INFO     [evaluator.py:489] Running loglikelihood requests
Running loglikelihood requests: 100%|█████████████████████████████████████████████| 5882/5882 [5:03:14<00:00,  3.09s/it]
2024-10-14:21:04:23,386 INFO     [eleuther_eval.py:577] Eval completed in 18197.52 seconds.
2024-10-14:21:04:23,388 INFO     [eleuther_eval.py:578] Max memory allocated: 22.34 GB
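As a quick consistency check on the timing in the log above (not part of the recipe itself): the progress bar reports 5882 loglikelihood requests at roughly 3.09 s/it, which should land close to the reported 18197.52-second total.

```python
# Cross-check the log's wall-clock numbers: 5882 loglikelihood requests
# at ~3.09 s/it should roughly match the reported 18197.52 s total.
requests = 5882
secs_per_request = 3.09
estimated = requests * secs_per_request
print(f"{estimated:.0f} s ≈ {estimated / 3600:.2f} h")  # vs. 18197.52 s reported
```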


|    Tasks     |Version|Filter|n-shot|Metric|   |Value |   |Stderr|
|--------------|------:|------|-----:|------|---|-----:|---|-----:|
|truthfulqa_mc2|      2|none  |     0|acc   |↑  |0.4268|±  |0.0142|
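For reference, the reported stderr can be turned into a confidence interval with the usual normal approximation (the 0.4268 and 0.0142 values are taken directly from the table above):

```python
# Normal-approximation 95% interval around the reported TruthfulQA mc2 accuracy.
acc, stderr = 0.4268, 0.0142
z = 1.96  # z-score for a 95% two-sided interval
ci = (acc - z * stderr, acc + z * stderr)
print(f"95% CI for acc: [{ci[0]:.4f}, {ci[1]:.4f}]")
```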


pytorch-bot bot commented Oct 14, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1829

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 2 Cancelled Jobs

As of commit e357df3 with merge base 4107cc4 (image):

NEW FAILURE - The following job has failed:

  • Recipe Tests / recipe_test (3.10) (gh)
    tests/recipes/test_ppo_full_finetune_single_device.py::TestPPOFullFinetuneSingleDeviceRecipe::test_training_state_on_resume_with_optimizer_in_bwd

CANCELLED JOBS - The following jobs were cancelled. Please retry:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot

Hi @Yousof-kayal!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Oct 14, 2024
@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@joecummings (Contributor)

@Yousof-kayal Can you please rebase/merge the main branch?

@joecummings (Contributor) left a comment:

Looks pretty good! Just a couple of nits.

.gitignore (outdated, resolved)
recipes/configs/mistral/evaluation.yaml (outdated, resolved)
.gitignore (outdated, resolved)
@joecummings (Contributor) left a comment:

Thanks for your patience with this one @Yousof-kayal! Looks great, I'll merge when CI passes.

@Yousof-kayal (Contributor, Author)

Don't worry about it!

@joecummings joecummings merged commit f639b6d into pytorch:main Oct 14, 2024
14 of 17 checks passed
@Yousof-kayal Yousof-kayal deleted the move-mistral-eval branch October 16, 2024 05:14
Labels: CLA Signed
3 participants