Merge pull request #648 from NVIDIA/feature/prompt-improvements-llama-3
Prompt improvements Llama-3
Showing 10 changed files with 238 additions and 9 deletions.
A new model configuration file (4 lines added) points the main model at Llama 3.1 70B Instruct via NVIDIA AI Endpoints:

```yaml
models:
  - type: main
    engine: nvidia_ai_endpoints
    model: meta/llama-3.1-70b-instruct
```
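The shape of this configuration can be sanity-checked by parsing it; a minimal sketch using PyYAML (used here purely for illustration, not something this PR depends on):

```python
# Parse the model configuration and verify its shape.
# PyYAML (`pip install pyyaml`) is an assumption about the environment.
import yaml

CONFIG = """
models:
  - type: main
    engine: nvidia_ai_endpoints
    model: meta/llama-3.1-70b-instruct
"""

config = yaml.safe_load(CONFIG)

# Exactly one main model, served through the NVIDIA AI Endpoints engine.
main_models = [m for m in config["models"] if m["type"] == "main"]
assert len(main_models) == 1
assert main_models[0]["engine"] == "nvidia_ai_endpoints"
print(main_models[0]["model"])  # meta/llama-3.1-70b-instruct
```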
A new prompts file (144 lines added) defines Llama 3 specific variants of the core generation prompts:
````yaml
# Collection of all the prompts
prompts:
  - task: general
    models:
      - llama3
      - llama-3.1

    messages:
      - type: system
        content: |
          {{ general_instructions }}{% if relevant_chunks != None and relevant_chunks != '' %}
          This is some relevant context:
          ```markdown
          {{ relevant_chunks }}
          ```{% endif %}
      - "{{ history | to_chat_messages }}"
````
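The `{% if relevant_chunks %}` guard in the system prompt only injects retrieved context when it is non-empty. A minimal sketch of that behavior using Jinja2, which uses the same template syntax (the template below is simplified and drops the markdown fence from the original; NeMo Guardrails' own rendering also registers custom filters like `to_chat_messages`):

```python
# Render a simplified version of the system prompt's context conditional.
from jinja2 import Template

template = Template(
    "{{ general_instructions }}"
    "{% if relevant_chunks != None and relevant_chunks != '' %}\n"
    "This is some relevant context:\n"
    "{{ relevant_chunks }}"
    "{% endif %}"
)

with_context = template.render(
    general_instructions="You are a helpful assistant.",
    relevant_chunks="The store opens at 9am.",
)
without_context = template.render(
    general_instructions="You are a helpful assistant.",
    relevant_chunks="",
)

print(with_context)     # instructions followed by the context block
print(without_context)  # just the general instructions
```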
```yaml
  # Prompt for detecting the user message canonical form.
  - task: generate_user_intent
    models:
      - llama3
      - llama-3.1

    messages:
      - type: system
        content: |
          {{ general_instructions }}
          Your task is to generate the user intent in a conversation given the last user message similar to the examples below.
          Do not provide any explanations, just output the user intent.
          # Examples:
          {{ examples | verbose_v1 }}
      - "{{ sample_conversation | first_turns(2) | to_messages }}"
      - "{{ history | colang | to_messages }}"
      - type: assistant
        content: |
          Bot thinking: potential user intents are: {{ potential_user_intents }}
    output_parser: "verbose_v1"
```
```yaml
  # Prompt for generating the next steps.
  - task: generate_next_steps
    models:
      - llama3
      - llama-3.1

    messages:
      - type: system
        content: |
          {{ general_instructions }}
          Your task is to generate the next steps in a conversation given the last user message similar to the examples below.
          Do not provide any explanations, just output the user intent and the next steps.
          # Examples:
          {{ examples | remove_text_messages | verbose_v1 }}
      - "{{ sample_conversation | first_turns(2) | to_intent_messages }}"
      - "{{ history | colang | to_intent_messages }}"

    output_parser: "verbose_v1"
```
````yaml
  # Prompt for generating the bot message from a canonical form.
  - task: generate_bot_message
    models:
      - llama3
      - llama-3.1

    messages:
      - type: system
        content: |
          {{ general_instructions }}{% if relevant_chunks != None and relevant_chunks != '' %}
          This is some relevant context:
          ```markdown
          {{ relevant_chunks }}
          ```{% endif %}
          Your task is to generate the bot message in a conversation given the last user message, user intent and bot intent, similar to the examples below.
          Do not provide any explanations, just output the bot message.
          # Examples:
          {{ examples | verbose_v1 }}
      - "{{ sample_conversation | first_turns(2) | to_intent_messages_2 }}"
      - "{{ history | colang | to_intent_messages_2 }}"

    output_parser: "verbose_v1"
````
````yaml
  # Prompt for generating the user intent, next steps and bot message in a single call.
  - task: generate_intent_steps_message
    models:
      - llama3
      - llama-3.1

    messages:
      - type: system
        content: |
          {{ general_instructions }}{% if relevant_chunks != None and relevant_chunks != '' %}
          This is some relevant context:
          ```markdown
          {{ relevant_chunks }}
          ```{% endif %}
          Your task is to generate the user intent and the next steps in a conversation given the last user message similar to the examples below.
          Do not provide any explanations, just output the user intent and the next steps.
          # Examples:
          {{ examples | verbose_v1 }}
      - "{{ sample_conversation | first_turns(2) | to_messages }}"
      - "{{ history | colang | to_messages }}"
      - type: assistant
        content: |
          Bot thinking: potential user intents are: {{ potential_user_intents }}
    output_parser: "verbose_v1"
````
```yaml
  # Prompt for generating the value of a context variable.
  - task: generate_value
    models:
      - llama3
      - llama-3.1

    messages:
      - type: system
        content: |
          {{ general_instructions }}
          Your task is to generate the value for the ${{ var_name }} variable.
          Do not provide any explanations, just output the value.
          # Examples:
          {{ examples | verbose_v1 }}
      - "{{ sample_conversation | first_turns(2) | to_messages }}"
      - "{{ history | colang | to_messages }}"
      - type: assistant
        content: |
          Bot thinking: follow the following instructions: {{ instructions }}
          ${{ var_name }} =
    output_parser: "verbose_v1"
```
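Each task above lists the models its prompt is tuned for; at runtime, the library prefers the variant whose `models` entry matches the configured model name (here `meta/llama-3.1-70b-instruct`), falling back to a generic prompt otherwise. A simplified, stdlib-only sketch of that matching idea (the function and data shapes below are illustrative, not NeMA Guardrails' actual API):

```python
# Illustrative sketch of model-specific prompt selection.
# Data shapes and function names are hypothetical, not the library's API.
prompts = [
    {"task": "generate_user_intent", "models": ["llama3", "llama-3.1"]},
    {"task": "generate_user_intent", "models": None},  # generic fallback
]

def select_prompt(task, model_name, prompts):
    """Prefer a prompt whose `models` entry matches `model_name`; else fall back."""
    fallback = None
    for p in prompts:
        if p["task"] != task:
            continue
        if p["models"] is None:
            fallback = p
        elif any(m in model_name for m in p["models"]):
            # e.g. "llama-3.1" is a substring of "meta/llama-3.1-70b-instruct"
            return p
    return fallback

chosen = select_prompt("generate_user_intent", "meta/llama-3.1-70b-instruct", prompts)
print(chosen["models"])  # ['llama3', 'llama-3.1']
```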