
Choose completion function for evaluation of modelgraded evals #1418

Open · wants to merge 1 commit into main
Conversation

LoryPack
Contributor

The documentation for modelgraded evals says:

In general, the evaluation model and the model being evaluated don't have to be the same, though we will assume that they are here for ease of explanation.

However, the current code did not allow this. Indeed, `classify.py:ModelBasedClassify` contained the following line:

self.eval_completion_fn = self.completion_fns[-1]

I have now added an optional parameter to `ModelBasedClassify` that allows a different model to be used for evaluation than the one being evaluated. The evaluation model is specified in the `.yaml` file, for instance with:

<my_eval>:
  id: <my_eval>.test.v1
  metrics:
  - accuracy
<my_eval>.test.v1:
  args:
    eval_type: cot_classify
    modelgraded_spec: fact
    samples_jsonl: ../registry/data/<my_eval>/samples.jsonl
    eval_completion_fn: gpt-3.5-turbo
  class: evals.elsuite.modelgraded.classify:ModelBasedClassify
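The wiring described above can be sketched roughly as follows. This is a simplified stand-in, not the actual evals code: the `registry` dict and the lambda completion functions are placeholders for the real completion-function registry, and only the parameter name `eval_completion_fn` is taken from the PR description.

```python
# Simplified sketch of selecting a separate completion function for grading.
# The registry lookup and completion functions below are stand-ins for the
# real evals machinery; only the `eval_completion_fn` name follows the PR.

class ModelBasedClassify:
    def __init__(self, completion_fns, eval_completion_fn=None, registry=None):
        self.completion_fns = completion_fns
        if eval_completion_fn is not None:
            # New behavior: resolve the grading model named in the YAML args.
            self.eval_completion_fn = (registry or {})[eval_completion_fn]
        else:
            # Old behavior: reuse the last completion fn being evaluated.
            self.eval_completion_fn = completion_fns[-1]

# Usage: the evaluated model differs from the grading model.
registry = {"gpt-3.5-turbo": lambda prompt: "graded by gpt-3.5-turbo"}
clf = ModelBasedClassify(
    completion_fns=[lambda prompt: "answer from the evaluated model"],
    eval_completion_fn="gpt-3.5-turbo",
    registry=registry,
)
print(clf.eval_completion_fn("Is the answer correct?"))
# prints "graded by gpt-3.5-turbo"
```

Omitting `eval_completion_fn` falls back to the previous behavior, so existing eval configurations keep working unchanged.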

Final checklist 👀

Submission agreement

By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).

  • I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.

Email address validation

If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the commits on the merged pull request.

  • I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.

Limited availability acknowledgment

We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and the high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.

  • I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access be granted.

Submit eval

  • I have filled out all required fields of this form
  • I have used Git LFS for the Eval JSON data
  • (Ignore if not submitting code) I have run pip install pre-commit; pre-commit install and have verified that mypy, black, isort, autoflake and ruff are running when I commit and push

Failure to fill out all required fields will result in the PR being closed.
