Commit

add license
Hui Kang Tong committed Mar 26, 2024
1 parent b202424 commit 6e51ea4
Showing 3 changed files with 77 additions and 12 deletions.
15 changes: 15 additions & 0 deletions LICENSE
Original file line number Diff line number Diff line change
@@ -0,0 +1,15 @@
qiqc_truncated.csv is derived from Quora Insincere Questions Classification dataset on Kaggle

The Quora Insincere Questions Classification dataset is released under the license granted in Section 7A of the competition rules:
https://www.kaggle.com/competitions/quora-insincere-questions-classification/rules#7.-competition-data


For the remaining code:

Copyright 2024 Hui Kang Tong

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
53 changes: 51 additions & 2 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,2 +1,51 @@
# automatic-prompt-engineer
Using Claude 3 Opus to generate and update Claude 3 Haiku prompts
# Automatic Prompt Engineer

This repository contains a notebook that generates and optimizes system and user prompts for classification purposes.

This is how classification is intended to be done.
- (system prompt, user prompt prefix + text + user prompt suffix) -Haiku-> bot response -function-> label
- You define this function yourself (it could be as simple as a string match)
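The forward pass above can be sketched in Python. This is a minimal illustration rather than the notebook's exact code: the string-match label function is an assumed example, and the `classify` helper uses the Anthropic Messages API with the Claude 3 Haiku model identifier.

```python
def extract_label(bot_response: str) -> str:
    # the user-defined function: here, just a string match
    return "insincere" if "insincere" in bot_response.lower() else "sincere"


def classify(system_prompt: str, prefix: str, text: str, suffix: str) -> str:
    # requires the anthropic package and an ANTHROPIC_API_KEY in the environment
    import anthropic

    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=1024,
        system=system_prompt,
        messages=[{"role": "user", "content": prefix + text + suffix}],
    )
    return extract_label(response.content[0].text)
```

Keeping `extract_label` separate from the model call makes the label-extraction step easy to swap out.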

The notebook will produce
- the system prompt
- the user prompt prefix
- the user prompt suffix

To use this notebook, you will need
- an Anthropic API key
- a dataset (text -> label)
- a function that maps bot_response -> label
- a description for Opus of what instructions Haiku should follow
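Concretely, the last three requirements might look like this (the data and the task description are hypothetical examples; the actual notebook loads its dataset with pandas):

```python
# dataset: each example pairs a text with its gold label
dataset = [
    ("Why is the sky blue?", "sincere"),
    ("Why are flat-earthers so gullible?", "insincere"),
]


# the function bot_response -> label (here, just a string match)
def bot_response_to_label(bot_response: str) -> str:
    return "insincere" if "insincere" in bot_response.lower() else "sincere"


# description for Opus of what instructions Haiku should follow
TASK_DESCRIPTION = (
    "Haiku should reason about whether the question is insincere, "
    "then end with a verdict containing the word 'insincere' or 'sincere'."
)
```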

This is how prompt tuning is done
- Sample from the full dataset.
- Haiku takes in (system prompt, user prompt prefix + text + user prompt suffix) and produces bot_response.
- The function takes in bot_response and produces the label. The (text -> label) process is analogous to the forward pass.
- Sample from the mistakes.
- Opus takes in the mistakes and summarizes the mistakes (gradient).
- Opus takes in the mistake summary (gradient) and the current prompts (model parameters), and updates the prompts.
- Repeat.
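The loop above can be sketched as follows. The three callables are hypothetical stand-ins for the notebook's actual functions: `classify_with_haiku` is the forward pass, and the two Opus helpers compute the gradient and apply the update.

```python
import random


def tune(prompts, dataset, classify_with_haiku,
         summarize_mistakes_with_opus, update_prompts_with_opus,
         iterations=5, sample_size=40, mistake_sample=10):
    for _ in range(iterations):
        # forward pass on a sample of the full dataset
        sample = random.sample(dataset, min(sample_size, len(dataset)))
        mistakes = [(text, gold, pred)
                    for text, gold in sample
                    if (pred := classify_with_haiku(prompts, text)) != gold]
        if not mistakes:
            break
        # gradient: Opus summarizes a sample of the mistakes
        gradient = summarize_mistakes_with_opus(
            random.sample(mistakes, min(mistake_sample, len(mistakes))))
        # update: Opus revises the prompts (model parameters) using the gradient
        prompts = update_prompts_with_opus(prompts, gradient)
    return prompts
```

Because the model calls are injected as functions, the loop itself can be tested with deterministic stubs before spending API tokens.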

This notebook will also produce
- The [classification](https://tonghuikang.github.io/automatic-prompt-engineer/html_output/iteration-classification-002-diff.html) at each iteration of the prompt.
- The [history](https://tonghuikang.github.io/automatic-prompt-engineer/html_output/prompt-history-classification.html) of the prompt and relevant metrics.
- (These will be saved locally as HTML files.)


# References

I took inspiration from these resources.

- [DSPy](https://dspy-docs.vercel.app/docs/building-blocks/solving_your_task) for describing how tuning a prompt engineering pipeline mirrors tuning the parameters of a neural network.
- [Matt Shumer](https://twitter.com/mattshumer_/status/1770942240191373770) for showing that Opus is a very good prompt engineer, and that Haiku is good at following instructions.


# Design Decisions

- I require the LLM to produce its reasoning, and I use a separate function to extract the predicted label.
  Having the reasoning provides visibility into the thought process, which helps with improving the prompt.
- I minimized the packages you will need to install.
  As of this commit, you only need the `pandas` and `anthropic` Python libraries.
- I maximized visibility into the workflow in the abstraction-visibility tradeoff.
  There is only one Python notebook, with no separate helper Python files.
  You can easily edit the individual functions to change how prompt tuning is done.
21 changes: 11 additions & 10 deletions classification.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -9,28 +9,28 @@
"\n",
"Given (text -> label), this notebook generates and optimizes system and user prompts.\n",
"\n",
"This is how the text will be labelled\n",
"This is how classification is intended to be done.\n",
"- (system prompt, user prompt prefix + text + user prompt suffix) -Haiku-> bot response -function-> label\n",
"- The function will be defined by you (it could be just a string match)\n",
"- The function will be defined by you (which could be just a string match)\n",
"\n",
"The notebook will produce\n",
"- the system prompt\n",
"- the user prompt prefix\n",
"- the user prompt suffix\n",
"\n",
"To use this tool, you will need\n",
"To use this notebook, you will need\n",
"- an Anthropic API key\n",
"- a dataset (text -> label)\n",
"- define the function bot_response -> label\n",
"- describe the expected bot_response that Haiku should produce\n",
"- a description for Opus of the expected bot_response that Haiku should produce\n",
"\n",
"This is how prompt tuning is done\n",
"- Sample from the full dataset.\n",
"- Haiku takes in (system prompt, user prompt prefix + text + user prompt suffix) and produces bot_response.\n",
"- The function takes in bot_response and produces the label. The (text -> label) process is the forward pass.\n",
"- The function takes in bot_response and produces the label. The (text -> label) process is analogous to the forward pass.\n",
"- Sample from the mistakes.\n",
"- Opus takes in the mistakes and summarizes the mistakes (this is the gradient).\n",
"- Opus takes in the gradient (gradient) and the current prompts (model parameters) updates the prompts.\n",
"- Opus takes in the mistakes and summarizes the mistakes (gradient).\n",
"- Opus takes in the mistake summary (gradient) and the current prompts (model parameters), and updates the prompts.\n",
"- Repeat.\n",
"\n",
"You will need to have these Python modules installed\n",
Expand Down Expand Up @@ -82,7 +82,7 @@
{
"cell_type": "code",
"execution_count": 3,
"id": "62f546dd",
"id": "b71cce29",
"metadata": {},
"outputs": [
{
Expand Down Expand Up @@ -164,7 +164,7 @@
{
"cell_type": "code",
"execution_count": 7,
"id": "031f9dd8",
"id": "139792ef",
"metadata": {},
"outputs": [
{
Expand Down Expand Up @@ -235,6 +235,7 @@
"metadata": {},
"outputs": [],
"source": [
"# tell Opus what instructions Haiku should follow\n",
"PROMPT_UPDATE_SYSTEM_PROMPT = \"\"\"\n",
"You will write a set of prompts for an LLM to classify where a question is insincere.\n",
"\n",
Expand Down Expand Up @@ -813,7 +814,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "bbe67012",
"id": "e526210a",
"metadata": {},
"outputs": [],
"source": []
Expand Down
