
Commit 978ab37: update docs (#1084)

lvyufeng authored May 17, 2024 · 1 parent 2b36c7d
Showing 21 changed files with 81 additions and 45 deletions.
Binary file added (not shown): docs/assets/favicon.ico
2 changes: 0 additions & 2 deletions docs/en/api/peft/MAIN_CLASSES/PEFT_TYPE.md

This file was deleted.

2 changes: 0 additions & 2 deletions docs/en/api/peft/MAIN_CLASSES/Tuner.md

This file was deleted.

File renamed without changes.
21 changes: 21 additions & 0 deletions docs/en/api/peft/index.md
@@ -0,0 +1,21 @@
# PEFT (Parameter-Efficient Fine-Tuning)

MindNLP's PEFT (Parameter-Efficient Fine-Tuning) module adapts large pretrained language models to specific tasks or domains by training only a small set of additional parameters while the base model's weights stay frozen, achieving competitive performance on downstream tasks at a fraction of the cost of full fine-tuning.

## Introduction

PEFT methods attach lightweight trainable components to a frozen backbone, such as low-rank weight updates (LoRA and its variants) or learned soft prompts. Because only these components are updated, PEFT sharply reduces the memory and compute needed for fine-tuning and produces small, easily shareable task-specific checkpoints, while typically matching or approaching the quality of full fine-tuning.

## Supported PEFT Algorithms

| Algorithm | Description |
|------------------|--------------------------------------------------------------|
| [AdaLoRA](./tuners/adalora.md) | Adaptive Low-Rank Adaptation (AdaLoRA): LoRA with an adaptive rank budget allocated across weight matrices |
| [Adaption_Prompt](./tuners/adaption_prompt.md) | Adaption Prompt: learnable prompts with zero-initialized attention, as proposed in LLaMA-Adapter |
| [IA3](./tuners/ia3.md) | Infused Adapter by Inhibiting and Amplifying Inner Activations (IA3): learned vectors that rescale keys, values, and feed-forward activations |
| [LoKr](./tuners/lokr.md) | Low-Rank adaptation with Kronecker product (LoKr): factorizes the weight update as a Kronecker product |
| [LoRA](./tuners/lora.md) | Low-Rank Adaptation (LoRA): trainable low-rank update matrices injected alongside frozen weights |
| [Prompt Tuning](./tuners/prompt_tuning.md) | Prompt Tuning: soft prompt embeddings prepended to the input and optimized while the model stays frozen |

Each algorithm offers a different trade-off between parameter count, memory footprint, and downstream performance, allowing users to adapt models to diverse tasks and domains efficiently.

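To make the workflow concrete, here is a minimal sketch of applying one of these tuners (LoRA) with `mindnlp.peft`. It assumes the module mirrors the Hugging Face PEFT entry points (`LoraConfig`, `TaskType`, `get_peft_model`, `print_trainable_parameters`); consult the pages above for the exact API.

```python
# A minimal LoRA sketch, assuming mindnlp.peft mirrors the Hugging Face
# PEFT entry points (LoraConfig, TaskType, get_peft_model). Illustrative only.
from mindnlp.transformers import AutoModelForSequenceClassification
from mindnlp.peft import LoraConfig, TaskType, get_peft_model

# Load a pretrained backbone; its weights stay frozen under PEFT.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2
)

config = LoraConfig(
    task_type=TaskType.SEQ_CLS,   # sequence classification fine-tuning
    r=8,                          # rank of the low-rank update matrices
    lora_alpha=16,                # scaling applied to the LoRA update
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections to adapt
)

# Wrap the backbone; only the injected LoRA weights are trainable.
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```

Training then proceeds as usual; only the adapter weights need to be saved at the end.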
File renamed without changes.
File renamed without changes.
docs/en/api/peft/tuners/adalora.md
@@ -1,2 +1,2 @@
 :::mindnlp.peft.tuners.adalora.config.AdaLoraConfig
-::: mindnlp.peft.tuners.adalora.model.AdaLoraModel
+:::mindnlp.peft.tuners.adalora.model.AdaLoraModel
docs/en/api/peft/tuners/adaption_prompt.md
@@ -1,2 +1,2 @@
 :::mindnlp.peft.tuners.adaption_prompt.config.AdaptionPromptConfig
-::: mindnlp.peft.tuners.adaption_prompt.model.AdaptionPromptModel
+:::mindnlp.peft.tuners.adaption_prompt.model.AdaptionPromptModel
docs/en/api/peft/tuners/ia3.md
@@ -1,2 +1,2 @@
 :::mindnlp.peft.tuners.ia3.config
-::: mindnlp.peft.tuners.ia3.model
+:::mindnlp.peft.tuners.ia3.model
docs/en/api/peft/tuners/lokr.md
@@ -1,2 +1,2 @@
 :::mindnlp.peft.tuners.lokr.config
-::: mindnlp.peft.tuners.lokr.model
+:::mindnlp.peft.tuners.lokr.model
docs/en/api/peft/tuners/lora.md
@@ -1,2 +1,2 @@
 :::mindnlp.peft.tuners.lora.config
-::: mindnlp.peft.tuners.lora.model
+:::mindnlp.peft.tuners.lora.model
2 changes: 2 additions & 0 deletions docs/en/api/peft/tuners/prompt_tuning.md
@@ -0,0 +1,2 @@
:::mindnlp.peft.tuners.prompt_tuning.config
:::mindnlp.peft.tuners.prompt_tuning.model
File renamed without changes.
File renamed without changes.
Empty file.
Empty file.
Empty file.
2 changes: 2 additions & 0 deletions llm/inference/chatglm/requirements.txt
@@ -0,0 +1,2 @@
gradio
mdtex2html
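These are the extra dependencies for the web demo; assuming a working MindNLP environment, they can be installed with `pip install -r llm/inference/chatglm/requirements.txt`.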
7 changes: 3 additions & 4 deletions llm/inference/chatglm/web_demo.py
@@ -2,9 +2,9 @@
 import gradio as gr
 import mdtex2html
 
-model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/chatglm-6b").half()
+model = AutoModelForSeq2SeqLM.from_pretrained("ZhipuAI/ChatGLM-6B", mirror='modelscope').half()
 model.set_train(False)
-tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b")
+tokenizer = AutoTokenizer.from_pretrained("ZhipuAI/ChatGLM-6B", mirror='modelscope')
 
 """Override Chatbot.postprocess"""
 

@@ -81,8 +81,7 @@ def reset_state():
     with gr.Row():
         with gr.Column(scale=4):
             with gr.Column(scale=12):
-                user_input = gr.Textbox(show_label=False, placeholder="Input...", lines=10).style(
-                    container=False)
+                user_input = gr.Textbox(show_label=False, placeholder="Input...", lines=10, container=False)
             with gr.Column(min_width=32, scale=1):
                 submitBtn = gr.Button("Submit", variant="primary")
         with gr.Column(scale=1):
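For reference, a minimal sketch of the same updated loading path outside the Gradio UI follows. The `mirror='modelscope'` argument directs MindNLP to download weights from ModelScope rather than the Hugging Face Hub; the `chat` helper is assumed to come from the upstream ChatGLM model code, as in the demo.

```python
# Minimal sketch of the updated ChatGLM loading path (no Gradio UI).
# `mirror='modelscope'` fetches weights from ModelScope instead of the
# Hugging Face Hub; `model.chat` is assumed from the upstream ChatGLM code.
from mindnlp.transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained(
    "ZhipuAI/ChatGLM-6B", mirror='modelscope'
).half()
model.set_train(False)  # inference mode
tokenizer = AutoTokenizer.from_pretrained("ZhipuAI/ChatGLM-6B", mirror='modelscope')

response, history = model.chat(tokenizer, "Hello!", history=[])
print(response)
```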
80 changes: 48 additions & 32 deletions mkdocs.yml
@@ -10,27 +10,38 @@ nav:
   - Supported Models: supported_models.md
   - How-To Contribute: contribute.md
   - API Reference:
-    - accelerate: api/accelerate.md
-    - data: api/data.md
-    - dataset: api/dataset.md
-    - engine: api/engine.md
-    - modules: api/modules.md
-    - parallel: api/parallel.md
-    - peft:
-        MAIN CLASSES:
-          PEFT model: api/peft/MAIN_CLASSES/peft_model.md
-          PEFT mapping: api/peft/MAIN_CLASSES/mapping.md
-          Configuration: api/peft/MAIN_CLASSES/config.md
-        ADAPTERS:
-          AdaLoRA: api/peft/ADAPTERS/AdaLoRA.md
-          Adaption_Prompt: api/peft/ADAPTERS/Adaption_Prompt.md
-          IA3: api/peft/ADAPTERS/IA3.md
-          LoKr: api/peft/ADAPTERS/LoKr.md
-          LoRA: api/peft/ADAPTERS/LoRA.md
-    - sentence: api/sentence.md
-    - transformers: api/transformers.md
-    - trl: api/trl.md
-    - utils: api/utils.md
+    - Accelerate: api/accelerate.md
+    - Data: api/data.md
+    - Dataset: api/dataset.md
+    - Engine: api/engine.md
+    - Modules: api/modules.md
+    - Parallel: api/parallel.md
+    - PEFT:
+      - api/peft/index.md
+      - tuners:
+          AdaLoRA: api/peft/tuners/adalora.md
+          Adaption_Prompt: api/peft/tuners/adaption_prompt.md
+          IA3: api/peft/tuners/ia3.md
+          LoKr: api/peft/tuners/lokr.md
+          LoRA: api/peft/tuners/lora.md
+          Prompt tuning: api/peft/tuners/prompt_tuning.md
+      - utils:
+        - merge_utils: api/peft/utils/merge_utils.md
+      - config: api/peft/config.md
+      - mapping: api/peft/mapping.md
+      - peft_model: api/peft/peft_model.md
+
+    - Sentence: api/sentence.md
+    - Transformers:
+      - api/transformers/index.md
+      - generation:
+        - api/transforemrs/generation/index.md
+      - models:
+        - api/transforemrs/models/index.md
+      - pipeline:
+        - api/transforemrs/pipeline/index.md
+    - TRL: api/trl.md
+    - Utils: api/utils.md
   - Notes:
     - Change Log: notes/changelog.md
     - Code of Conduct: notes/code_of_conduct.md

@@ -41,24 +52,26 @@ theme:
   palette:
     - media: "(prefers-color-scheme: light)"
      scheme: default
-      primary: black
+      primary: indigo
+      accent: indigo
       toggle:
-        icon: material/weather-sunny
+        icon: material/brightness-7
         name: Switch to dark mode
     - media: "(prefers-color-scheme: dark)"
       scheme: slate
       primary: black
       accent: indigo
       toggle:
-        icon: material/weather-night
-        name: Switch to light mode
+        icon: material/brightness-4
+        name: Switch to system preference
   features:
     # - navigation.instant # see https://github.com/ultrabug/mkdocs-static-i18n/issues/62
     - navigation.tracking
     - navigation.tabs
     - navigation.sections
     - navigation.indexes
     - navigation.top
+    - navigation.footer
+    - navigation.path
+    - toc.follow
     - search.highlight
     - search.share

@@ -69,6 +82,9 @@
     - content.code.copy
     - content.code.select
     - content.code.annotations
+  favicon: assets/favicon.ico
+  icon:
+    logo: logo
 
 markdown_extensions:
   # Officially Supported Extensions

@@ -146,9 +162,9 @@ plugins:
 extra:
   generator: false
   social:
-    # - icon: fontawesome/solid/paper-plane
-    #   link: mailto:[email protected]
-    # - icon: fontawesome/brands/github
-    #   link: https://github.com/mindspore-lab/mindcv
-    # - icon: fontawesome/brands/zhihu
-    #   link: https://www.zhihu.com/people/mindsporelab
+    - icon: fontawesome/solid/paper-plane
+      link: mailto:[email protected]
+    - icon: fontawesome/brands/github
+      link: https://github.com/mindspore-lab/mindnlp
+    - icon: fontawesome/brands/zhihu
+      link: https://www.zhihu.com/people/lu-yu-feng-46-1
