v0.7.1: Ascend NPU Support, Yi-VL Models

Released by @hiyouga on 15 May 18:16

🚨🚨 Core refactor 🚨🚨

  • Add CLI usage: we now recommend using llamafactory-cli to launch training and inference; the entry point is located in cli.py (see the sketch after this list)
  • Rename files: train_bash.py -> train.py, train_web.py -> webui.py, api_demo.py -> api.py
  • Remove files: cli_demo.py, evaluate.py, export_model.py, web_demo.py; use llamafactory-cli chat/eval/export/webchat instead
  • Use YAML configs in examples instead of shell scripts for better readability
  • Remove the SHA-1 hash check when loading datasets
  • Rename arguments: num_layer_trainable -> freeze_trainable_layers, name_module_trainable -> freeze_trainable_modules
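
As a rough sketch of the new workflow: the config keys below mirror the YAML files in examples, but the model, dataset, and output paths are placeholder assumptions, not values taken from this release.

```bash
# Write a minimal training config, then launch it with the new CLI.
cat > llama3_lora_sft.yaml <<'EOF'
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj
dataset: alpaca_gpt4_en
template: llama3
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
num_train_epochs: 1.0
EOF

llamafactory-cli train llama3_lora_sft.yaml   # replaces train_bash.py
llamafactory-cli webui                        # replaces train_web.py
```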

The above changes were made by @hiyouga in #3596

REMINDER: Installation is now mandatory to use LLaMA Factory
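
A minimal install sketch (a plain editable install; the README lists optional extras for specific backends):

```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e .
llamafactory-cli version   # confirms the entry point is on PATH
```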

New features

  • Support training and inference on Ascend NPU 910 devices by @zhou-wjjw and @statelesshz (Docker images are also provided); see the launch sketch below
  • Support the stop parameter in the vLLM engine by @zhaonx in #3527; see the request sketch below
  • Support fine-tuning token embeddings in freeze tuning via the freeze_extra_modules argument; see the example below
  • Add a Llama3 quickstart to the README
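
For the Ascend NPU support, a rough launch sketch: ASCEND_RT_VISIBLE_DEVICES is the standard device-selection variable for torch-npu, and the config path is a placeholder carried over from the sketch above.

```bash
# Select the first NPU and launch training; assumes torch-npu is installed
# and matches your CANN version. The YAML path is a placeholder.
ASCEND_RT_VISIBLE_DEVICES=0 llamafactory-cli train llama3_lora_sft.yaml
```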
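
For the new stop parameter, a hedged request sketch against the OpenAI-style server: the flag names and the model/template values are assumptions based on the project's argument conventions, and API_PORT is assumed to control the serving port.

```bash
# Terminal 1: start the OpenAI-style API server with the vLLM backend.
API_PORT=8000 llamafactory-cli api \
  --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
  --template llama3 \
  --infer_backend vllm

# Terminal 2: pass "stop" sequences in the request body; generation halts
# when any of them is produced.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Count from 1 to 10."}],
    "stop": ["7"]
  }'
```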
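
And a sketch of freeze tuning with the renamed arguments, passing arguments directly instead of a YAML file; embed_tokens is an assumed module name that holds for Llama-style models.

```bash
# Freeze tuning: train the last two layers plus the token embeddings.
llamafactory-cli train \
  --stage sft --do_train \
  --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
  --dataset alpaca_gpt4_en \
  --template llama3 \
  --finetuning_type freeze \
  --freeze_trainable_layers 2 \
  --freeze_extra_modules embed_tokens \
  --output_dir saves/llama3-8b/freeze/sft
```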

New models

  • Base models
    • Yi-1.5 (6B/9B/34B) 📄
    • DeepSeek-V2 (236B) 📄
  • Instruct/Chat models
    • Yi-1.5-Chat (6B/9B/34B) 📄🤖
    • Yi-VL-Chat (6B/34B) by @BUAADreamer in #3748 📄🖼️🤖
    • Llama3-Chinese-Chat (8B/70B) 📄🤖
    • DeepSeek-V2-Chat (236B) 📄🤖

Bug fix