feat: prompt generator in AI assistant #196
Open: Rifahaziz wants to merge 19 commits into main from feat-prompt-generator-inblock
Commits (19; the diff below shows changes from 17 of them):
- 01ac6b6 wip: added magic icon with modal (Rifahaziz)
- 7f082c3 wip: prompt api (Rifahaziz)
- 3eead26 wip: generate button (Rifahaziz)
- 773ae58 wip openai prompt generator (Rifahaziz)
- 8d330b8 front-end modal working (Rifahaziz)
- f1a927b front-end complete with complete with js (Rifahaziz)
- 85e279a wip ai generator (Rifahaziz)
- 06729b0 wip (Rifahaziz)
- 700ace3 generating prompt successful with azure_openai_key, output layout nee… (Rifahaziz)
- c37b6ad output format improved (Rifahaziz)
- e90d818 buttons restructured (Rifahaziz)
- 411e5ce Merge remote-tracking branch 'origin/main' into feat-prompt-generator… (Rifahaziz)
- 7d34695 in-block prompt generated with ai, prompt truncated to after the reas… (Rifahaziz)
- de56c07 flaoating button icon (Rifahaziz)
- 5abb331 added a requirement to input text; costs and spinner wip (Rifahaziz)
- 081f54d spinner and costs added (Rifahaziz)
- bc62528 Update localization files (github-actions[bot])
- 41d6fac removed comments (Rifahaziz)
- 6de0a7a Merge branch 'feat-prompt-generator-inblock' of https://github.com/ju… (Rifahaziz)
@@ -1,4 +1,6 @@
import json
import logging
import re

from django.conf import settings
from django.contrib.auth import get_user_model

@@ -15,6 +17,7 @@
from django.utils.translation import gettext as _
from django.views.decorators.http import require_GET, require_POST

from openai import OpenAI
from rules.contrib.views import objectgetter
from structlog import get_logger
from structlog.contextvars import bind_contextvars
@@ -907,3 +910,131 @@ def update_qa_options_from_librarian(request, chat_id, library_id):
            "trigger_library_change": "true" if library != original_library else None,
        },
    )


def generate_prompt(task_or_prompt: str):
    import os

    from django.conf import settings

    import openai
    from dotenv import load_dotenv

    load_dotenv()
    openai.api_key = os.getenv("AZURE_OPENAI_KEY")
    openai.api_version = os.getenv("AZURE_OPENAI_VERSION")

    llm = OttoLLM()

    if len(task_or_prompt) <= 1:
        return "Please describe your task first."
    META_PROMPT = """
Given a current prompt and a change description, produce a detailed system prompt to guide a language model in completing the task effectively.

Your final output will be the full corrected prompt verbatim. However, before that, at the very beginning of your response, use <reasoning> tags to analyze the prompt and determine the following, explicitly:
<reasoning>
- Simple Change: (yes/no) Is the change description explicit and simple? (If so, skip the rest of these questions.)
- Reasoning: (yes/no) Does the current prompt use reasoning, analysis, or chain of thought?
    - Identify: (max 10 words) if so, which section(s) utilize reasoning?
    - Conclusion: (yes/no) is the chain of thought used to determine a conclusion?
    - Ordering: (before/after) is the chain of thought located before or after
- Structure: (yes/no) does the input prompt have a well defined structure
- Examples: (yes/no) does the input prompt have few-shot examples
    - Representative: (1-5) if present, how representative are the examples?
- Complexity: (1-5) how complex is the input prompt?
    - Task: (1-5) how complex is the implied task?
    - Necessity: ()
- Specificity: (1-5) how detailed and specific is the prompt? (not to be confused with length)
- Prioritization: (list) what 1-3 categories are the MOST important to address.
- Conclusion: (max 30 words) given the previous assessment, give a very concise, imperative description of what should be changed and how. this does not have to adhere strictly to only the categories listed
</reasoning>

# Guidelines

- Understand the Task: Grasp the main objective, goals, requirements, constraints, and expected output.
- Minimal Changes: If an existing prompt is provided, improve it only if it's simple. For complex prompts, enhance clarity and add missing elements without altering the original structure.
- Reasoning Before Conclusions: Encourage reasoning steps before any conclusions are reached. ATTENTION! If the user provides examples where the reasoning happens afterward, REVERSE the order! NEVER START EXAMPLES WITH CONCLUSIONS!
    - Reasoning Order: Call out reasoning portions of the prompt and conclusion parts (specific fields by name). For each, determine the ORDER in which this is done, and whether it needs to be reversed.
    - Conclusion, classifications, or results should ALWAYS appear last.
- Examples: Include high-quality examples if helpful, using placeholders [in brackets] for complex elements.
    - What kinds of examples may need to be included, how many, and whether they are complex enough to benefit from placeholders.
- Clarity and Conciseness: Use clear, specific language. Avoid unnecessary instructions or bland statements.
- Formatting: Use markdown features for readability. DO NOT USE ``` CODE BLOCKS UNLESS SPECIFICALLY REQUESTED.
- Preserve User Content: If the input task or prompt includes extensive guidelines or examples, preserve them entirely, or as closely as possible. If they are vague, consider breaking down into sub-steps. Keep any details, guidelines, examples, variables, or placeholders provided by the user.
- Constants: DO include constants in the prompt, as they are not susceptible to prompt injection. Such as guides, rubrics, and examples.
- Output Format: Explicitly state the most appropriate output format, in detail. This should include length and syntax (e.g. short sentence, paragraph, JSON, etc.)
    - For tasks outputting well-defined or structured data (classification, JSON, etc.) bias toward outputting a JSON.
    - JSON should never be wrapped in code blocks (```) unless explicitly requested.

The final prompt you output should adhere to the following structure below. Do not include any additional commentary, only output the completed system prompt. SPECIFICALLY, do not include any additional messages at the start or end of the prompt. (e.g. no "---")

[Concise instruction describing the task - this should be the first line in the prompt, no section header]

[Additional details as needed.]

[Optional sections with headings or bullet points for detailed steps.]

# Steps

[a detailed breakdown of the steps necessary to accomplish the task]

# Output Format

[Specifically call out how the output should be formatted, be it response length, structure e.g. JSON, markdown, etc]

# Examples

[1-3 well-defined examples with placeholders if necessary. Clearly mark where examples start and end, and what the input and output are. Use placeholders as necessary.]
[If the examples are shorter than what a realistic example is expected to be, make a reference with () explaining how real examples should be longer / shorter / different. AND USE PLACEHOLDERS!]

# Notes [optional]

[optional: edge cases, details, and an area to call out or repeat specific important considerations]
[NOTE: you must start with a <reasoning> section. the immediate next token you produce should be <reasoning>]
""".strip()

    completion = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": META_PROMPT,
            },
            {
                "role": "user",
                "content": "Task, Goal, or Current Prompt:\n" + task_or_prompt,
            },
        ],
    )
    llm.create_costs()
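The chat payload assembled for the completion call above boils down to a two-message structure, which can be sketched as plain data with no network call (`build_messages` is an illustrative helper, not part of the PR):

```python
def build_messages(meta_prompt: str, task_or_prompt: str) -> list:
    # Mirrors the two-message payload passed to chat.completions.create:
    # the meta-prompt as the system message, and the user's task appended
    # after a fixed prefix as the user message.
    return [
        {"role": "system", "content": meta_prompt},
        {
            "role": "user",
            "content": "Task, Goal, or Current Prompt:\n" + task_or_prompt,
        },
    ]


msgs = build_messages("META_PROMPT text here", "Summarize my meeting notes")
print(msgs[1]["content"])
```

Keeping the meta-prompt in the system role and the user's raw text in the user role is what lets the `<reasoning>`-first instruction apply uniformly to whatever the user types.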
Review comment on the llm.create_costs() line: could not verify if this is actually adding to the cost. What's the best way to verify this?
Reply: Inspect the Cost objects in django shell.
    return completion.choices[0].message.content
    # return "This is a placeholder for the output prompt."


from django.views.decorators.csrf import csrf_exempt


def reset_form_view(request):
    return render(request, "chat/components/chat_input.html")


# with htmx------------------
@csrf_exempt
def generate_prompt_view(request):
    if request.method == "POST":
        try:
            # user_input = request.POST.get("user_input", "")
            user_input = request.POST.get("user-message", "")
            logging.info(f"Received user input: {user_input}")
            output_text = generate_prompt(user_input)
            logging.info(f"Generated prompt: {output_text}")
            output_text = re.sub(
                r"<reasoning>.*?</reasoning>", "", output_text, flags=re.DOTALL
            )
            return HttpResponse(
                f'<textarea class="form-control col" name="user-message" id="chat-prompt" autocomplete="off" aria-label="Message" placeholder="Type your message here..." required>{output_text}</textarea>'
            )
        except Exception as e:
            logging.error(f"Error in generate_prompt_view: {e}")
            return HttpResponse("An error occurred", status=500)
    return HttpResponse("Invalid request method", status=400)
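The `<reasoning>`-stripping step in generate_prompt_view can be exercised in isolation (`strip_reasoning` is a stand-in name for illustration; the view uses the same regex inline):

```python
import re


def strip_reasoning(text: str) -> str:
    # Same regex as the view: remove the <reasoning>...</reasoning> block that
    # the meta-prompt instructs the model to emit first. re.DOTALL lets '.'
    # match newlines inside the block; lstrip() additionally drops the leftover
    # leading newline (the view itself does not lstrip).
    return re.sub(r"<reasoning>.*?</reasoning>", "", text, flags=re.DOTALL).lstrip()


raw = "<reasoning>\n- Simple Change: yes\n</reasoning>\nSummarize the provided text."
print(strip_reasoning(raw))  # Summarize the provided text.
```

Note the non-greedy `.*?`: with a greedy `.*`, a response that happened to contain a second `</reasoning>` later on would lose everything up to the last closing tag.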
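A side note on the view's HttpResponse: the generated text is interpolated into the `<textarea>` markup unescaped. A minimal sketch of the safer pattern using only the stdlib (in the actual Django view, django.utils.html.escape or format_html would be the natural choice; `render_textarea` is a hypothetical helper):

```python
from html import escape


def render_textarea(output_text: str) -> str:
    # Escape the model output before embedding it, so characters such as
    # '<' and '&' cannot terminate the <textarea> early or inject markup.
    return (
        '<textarea class="form-control col" name="user-message" id="chat-prompt">'
        f"{escape(output_text)}</textarea>"
    )


print(render_textarea("</textarea><script>alert(1)</script>"))
```

Since the textarea content comes back from an LLM that is itself fed user input, treating it as untrusted text is the conservative default.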
Review comment: this style does not apply if I add it in style.css. I don't know why it gets overridden.
Reply: If you are targeting #magic-prompt in chat/static/chat/style.css, I don't see why it wouldn't. You may have to Ctrl+Shift+R to see changes in the browser.