
Looks like the updated code requires trl>=0.9.3, which isn't compatible with unsloth #4120

Closed
1 task done
YifanDengWHU opened this issue Jun 6, 2024 · 1 comment
Labels
solved This problem has been already solved

Comments

@YifanDengWHU

Reminder

  • I have read the README and searched the existing issues.

System Info

  • llamafactory version: 0.7.2.dev0
  • Platform: Linux-4.18.0-517.el8.x86_64-x86_64-with-glibc2.28
  • Python version: 3.9.0
  • PyTorch version: 2.0.0+cu117 (GPU)
  • Transformers version: 4.41.2
  • Datasets version: 2.19.2
  • Accelerate version: 0.30.1
  • PEFT version: 0.11.1
  • TRL version: 0.9.4
  • GPU type: NVIDIA L40
  • Bitsandbytes version: 0.43.1

Reproduction


git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e .[metrics]

pip install "unsloth[cu118-torch230] @ git+https://github.com/unslothai/unsloth.git"

Yesterday everything worked well, but today I got this error:

Traceback (most recent call last):
  File "/var/lib/condor/execute/slot1/dir_677837/mm_molecule/bin/llamafactory-cli", line 5, in <module>
    from llamafactory.cli import main
  File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/__init__.py", line 3, in <module>
    from .cli import VERSION
  File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/cli.py", line 7, in <module>
    from . import launcher
  File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/launcher.py", line 1, in <module>
    from llamafactory.train.tuner import run_exp
  File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/train/tuner.py", line 9, in <module>
    from ..hparams import get_infer_args, get_train_args
  File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/hparams/__init__.py", line 6, in <module>
    from .parser import get_eval_args, get_infer_args, get_train_args
  File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/hparams/parser.py", line 27, in <module>
    check_dependencies()
  File "/var/lib/condor/execute/slot1/dir_677837/LLaMA-Factory/src/llamafactory/extras/misc.py", line 68, in check_dependencies
    require_version("trl>=0.9.3", "To fix: pip install trl>=0.9.3")
  File "/var/lib/condor/execute/slot1/dir_677837/mm_molecule/lib/python3.9/site-packages/transformers/utils/versions.py", line 111, in require_version
    _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
  File "/var/lib/condor/execute/slot1/dir_677837/mm_molecule/lib/python3.9/site-packages/transformers/utils/versions.py", line 44, in _compare_versions
    raise ImportError(
ImportError: trl>=0.9.3 is required for a normal functioning of this module, but found trl==0.8.6.
To fix: pip install trl>=0.9.3
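For context, the failing `require_version` call boils down to a dotted-version comparison like the sketch below; `meets_requirement` and `parse_version` are hypothetical helpers for illustration, not LLaMA-Factory's actual implementation (which delegates to `transformers.utils.versions`):

```python
# Minimal sketch of the version check that raises the ImportError above.
# Uses only the stdlib; handles plain dotted versions like "0.9.4".
def parse_version(v: str) -> tuple:
    """Split a dotted version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in v.split("."))

def meets_requirement(installed: str, required: str = "0.9.3") -> bool:
    """Return True if `installed` satisfies `>= required`."""
    return parse_version(installed) >= parse_version(required)

print(meets_requirement("0.9.4"))  # True: check passes
print(meets_requirement("0.8.6"))  # False: this is what triggers the error
```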

Even if I install trl==0.9.4, installing unsloth will downgrade it again:

Collecting trl<0.9.0,>=0.7.9 (from unsloth[cu118-torch230]@ git+https://github.com/unslothai/unsloth.git)
Downloading trl-0.8.6-py3-none-any.whl.metadata (11 kB)

After removing the `pip install "unsloth[cu118-torch230] @ git+https://github.com/unslothai/unsloth.git"` step, it works fine. But that also means I can't use unsloth for training.
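For reference, one possible (untested) workaround is to install unsloth without letting pip resolve its pinned dependencies, then install the trl version LLaMA-Factory requires; whether unsloth actually works correctly with trl 0.9.x is an assumption here:

```shell
# Untested workaround: --no-deps skips unsloth's trl<0.9.0 pin,
# so pip won't downgrade trl when installing unsloth.
pip install --no-deps "unsloth[cu118-torch230] @ git+https://github.com/unslothai/unsloth.git"

# Then pin trl to the version LLaMA-Factory's dependency check expects.
pip install "trl>=0.9.3"
```

Note that `--no-deps` also skips unsloth's other dependencies, so they would need to be present already.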

Expected behavior

It seems the updated LLaMA-Factory code adds the constraint trl>=0.9.3. Is that necessary?

Others

No response

@hiyouga
Copy link
Owner

hiyouga commented Jun 6, 2024

fixed

@hiyouga hiyouga closed this as completed in f9e818d Jun 6, 2024
@hiyouga hiyouga added the solved This problem has been already solved label Jun 6, 2024