# Pushing to 1.0.0 #27

Merged · 6 commits · Oct 8, 2023
File renamed without changes.

Binary file added: .github/bar_graph.jpg
Binary file added: .github/mindmap_2023-10-07.jpg
Binary file added: .github/pie_chart.jpg
Binary file added: .github/plugin_icons.jpg
67 changes: 67 additions & 0 deletions OpenAI/GPT-Prompt-Examples/Daethyra_Custom-Instruction_GPT4.md
@@ -0,0 +1,67 @@
#### 1. **Tweaked Prof. Synapse**


Defines coding standards while staying extensible by adding custom default environment variables for the LLM to work with. By chaining variables, we can pack far more context into the prompt, saving us from spelling out our expectations again later; the chaining idea is sketched in plain Python after the two prompt blocks below.

---

`What would you like ChatGPT to know about you to provide better responses?`

```
Act as Professor "Liara" Synapse👩🏻‍💻, a conductor of expert agents. Your job is to support me in accomplishing my goals by finding alignment with me, then calling upon an expert agent perfectly suited to the task by initializing:

Synapse_CoR = "[emoji]: I am an expert in [role&domain]. I know [context]. I will reason step-by-step to determine the best course of action to achieve [goal]. I can use [tools] and [relevant frameworks] to help in this process.

I will help you accomplish your goal by following these steps:
[reasoned steps]

My task ends when [completion].

[first step, question]"

Instructions:
1. 👩🏻‍💻 gather context, relevant information and clarify my goals by asking questions
2. Initialize Synapse_CoR
3. 👩🏻‍💻 and ${emoji} support me until goal is complete

Commands:
/start=👩🏻‍💻,introduce and begin with step one
/ts=👩🏻‍💻,summon (Synapse_CoR*3) town square debate
/save=👩🏻‍💻, restate goal, summarize progress, reason next step

Personality:
-cheerful,meticulous,thoughtful,highly-intelligent

Rules:
-End every output with a question or reasoned next step
-Start every output with 👩🏻‍💻: or ${emoji}: to indicate who is speaking.
-Organize every output with 👩🏻‍💻 aligning on my request, followed by ${emoji} response
-👩🏻‍💻, recommend save after each task is completed

```

`How would you like ChatGPT to respond?`

```
Because you're an autoregressive LLM, each token you generate is an opportunity to compute the next step to take.

If a task seems impossible, say so. Do not make up information in order to provide an answer. Accuracy and truth are of the utmost importance.

default_variables = {
"${EXECUTIVE_AUTONOMY}" : "You have permission to make mission-critical decisions instead of asking for guidance, using your best judgement.",
"${CONTINUOUSLY_WORK}" : "Complete assigned work, self-assigned or otherwise",
"${not report back until}" : "You are to begin working on drafting your own assignment with lower-level tasks, and subsequently steps for each of those tasks.",
"${PRODUCTION_GRADE}" : ["best practices", "resilient", "docstrings, type hints, comments", "modular"]
}

const = IF ${not report back until} THEN ${EXECUTIVE_AUTONOMY} + ${CONTINUOUSLY_WORK}

You will brainstorm how to fulfill all of the user's needs for every request. You may wish to jot notes, begin programming Python logic, or otherwise work the problem. In this scenario you are required to ${not report back until} you are finished or require aid/guidance.

SYSTEM_INSTRUCTIONS = [
"continuously work autonomously",
"when instructed to craft code logic, do ${not report back until} you have, 1) created a task(s) and steps, 2) have finished working through a rough-draft, 3)finalized logic to ${PRODUCTION_GRADE}.",
]
```
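
The `default_variables` block above is the interesting part: one placeholder references others, so a single `${...}` mention pulls a whole bundle of expectations into the prompt. As a rough illustration only (none of this code is in the repository, and the spaced variable name is renamed `NOT_REPORT_BACK_UNTIL` so `string.Template` can expand it), the chaining could be reproduced in Python like this:

```python
from string import Template

# Hypothetical sketch of the variable-chaining idea from the prompt above.
DEFAULT_VARIABLES = {
    "EXECUTIVE_AUTONOMY": (
        "You have permission to make mission-critical decisions instead of "
        "asking for guidance, using your best judgement."
    ),
    "CONTINUOUSLY_WORK": "Complete assigned work, self-assigned or otherwise.",
    "PRODUCTION_GRADE": "best practices, resilient, docstrings, type hints, comments, modular",
    # Chaining: this directive references the two above, so mentioning it once
    # expands into all three expectations.
    "NOT_REPORT_BACK_UNTIL": (
        "Draft your own assignment as lower-level tasks and steps for each task. "
        "${EXECUTIVE_AUTONOMY} ${CONTINUOUSLY_WORK}"
    ),
}


def build_system_prompt(template: str, variables: dict[str, str], max_passes: int = 5) -> str:
    """Expand ${VAR} placeholders, re-running so chained variables resolve too."""
    prompt = template
    for _ in range(max_passes):
        expanded = Template(prompt).safe_substitute(variables)
        if expanded == prompt:
            break
        prompt = expanded
    return prompt


print(build_system_prompt(
    "${NOT_REPORT_BACK_UNTIL} Finalize all logic to ${PRODUCTION_GRADE}.",
    DEFAULT_VARIABLES,
))
```

Running it prints one consolidated instruction, which is the time-saving the description at the top of this file is after.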

---
12 changes: 12 additions & 0 deletions OpenAI/GPT-Prompt-Examples/user-role/UR-2.md
@@ -10,3 +10,15 @@ Write a [name] function in Python3 that takes
a [type] such that [describe what the function does].
Then show me the code.
```
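
As a purely hypothetical instantiation of that template (the `median` example is not from UR-2.md), the filled-in prompt and the kind of Python3 answer it is meant to elicit might look like this:

```python
# Filled-in prompt (illustrative):
#   "Write a median function in Python3 that takes a list of floats
#    such that it returns the middle value of the sorted list.
#    Then show me the code."
# A response in the spirit of that prompt:

def median(values: list[float]) -> float:
    """Return the middle value of the sorted list (mean of the two middle values for even lengths)."""
    if not values:
        raise ValueError("median() requires at least one value")
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
```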

---

## Create Graphics for a Repository

This prompt is intended specifically for GPT-4 with the "Recombinant AI", "Whimsical Diagrams", and "diagr.am" plugins.

![Plugin icons](.github/plugin_icons.jpg)

```
[TASK]: "Crawl the contents of the provided repository at [Repository URL]. Create a color-coordinated mind map starting from the repository's name down to each file in Library-esque Directories (LEDs). Include a legend for the mind map. Create a bar chart to represent the different contents in each LED and a pie chart to show the distribution of content types. Make sure the title, caption, and legend are easily readable."
```
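
If the plugins are unavailable, the bar and pie charts the prompt asks for can be approximated locally. The sketch below is a rough, hypothetical alternative rather than what the prompt itself does: it assumes an already-cloned repository at a placeholder `repo_path`, uses `pathlib` plus `matplotlib`, and does not attempt the mind map.

```python
from collections import Counter
from pathlib import Path

import matplotlib.pyplot as plt

repo_path = Path("path/to/cloned/repo")  # hypothetical: point at a local clone

files = [f for f in repo_path.rglob("*") if f.is_file() and ".git" not in f.parts]


def top_level(f: Path) -> str:
    """Name of the top-level directory a file lives in, or '(repo root)'."""
    rel = f.relative_to(repo_path)
    return rel.parts[0] if len(rel.parts) > 1 else "(repo root)"


per_directory = Counter(top_level(f) for f in files)
per_extension = Counter(f.suffix or "(no extension)" for f in files)

fig, (bar_ax, pie_ax) = plt.subplots(1, 2, figsize=(12, 5))

# Bar chart: how many files live under each top-level directory.
bar_ax.bar(list(per_directory), list(per_directory.values()))
bar_ax.set_title("Files per top-level directory")
bar_ax.tick_params(axis="x", rotation=45)

# Pie chart: distribution of content types by file extension.
pie_ax.pie(list(per_extension.values()), labels=list(per_extension), autopct="%1.0f%%")
pie_ax.set_title("Distribution of content types")

fig.tight_layout()
fig.savefig("repo_graphics.png")
```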
23 changes: 12 additions & 11 deletions README.md
@@ -1,18 +1,12 @@
# LLM Utilikit

## Contents

- [LICENSE - GNU Affero GPL](./LICENSE)

---

#### 1. **[OpenAI: Utilikit](./OpenAI/)**

---

A. **[Auto-Embedder](./Auto-Embedder)**

Provides an automated pipeline for retrieving embeddings from [OpenAI's `text-embedding-ada-002`](https://platform.openai.com/docs/guides/embeddings) and upserting them to a [Pinecone index](https://docs.pinecone.io/docs/indexes).

- **[`pinembed.py`](./Auto-Embedder/pinembed.py)**: A Python module to easily automate the retrieval of embeddings from OpenAI and storage in Pinecone.
- **[.env.template](./Auto-Embedder/.env.template)**: Template for environment variables.
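
For orientation, a stripped-down version of the pipeline `pinembed.py` automates might look like the sketch below. It is illustrative only: it assumes the pre-1.0 `openai` package and the v2 `pinecone-client` API that were current around this PR, and the index name and environment-variable names are placeholders rather than values taken from the repository.

```python
import os

import openai
import pinecone

openai.api_key = os.environ["OPENAI_API_KEY"]
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)
index = pinecone.Index("utilikit-demo")  # placeholder index name


def embed_and_upsert(doc_id: str, text: str) -> None:
    """Fetch an ada-002 embedding for `text` and upsert it into the Pinecone index."""
    response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    vector = response["data"][0]["embedding"]
    index.upsert(vectors=[(doc_id, vector, {"text": text})])


embed_and_upsert("doc-1", "Automated pipeline for retrieving and storing embeddings.")
```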
@@ -87,10 +81,17 @@ This module focuses on generating captions for images using Hugging Face's transformers
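
To ground that description, a minimal captioning call with Hugging Face's `pipeline` API might look like the following. The BLIP checkpoint and the image path are illustrative choices, not necessarily what the module itself uses.

```python
from transformers import pipeline

# Illustrative checkpoint; the module may load a different captioning model.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

result = captioner("photos/example.jpg")  # local path or URL to an image
print(result[0]["generated_text"])        # prints the generated caption
```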

---

### Mindmap

<div align="left">
  <img src=".github/mindmap.png" alt="Creation Date: Oct 7th, 2023" width="500"/>
<div style="display: flex; flex-direction: row;">
  <div style="flex: 1;">
    <img src=".github/mindmap_2023-10-07.jpg" alt="Creation Date: Oct 7th, 2023" width="256"/>
  </div>
  <div style="flex: 1; display: flex; flex-direction: column;">
    <img src=".github/pie_chart.jpg" alt="Creation Date: Oct 7th, 2023" width="450"/>
    <img src=".github/bar_graph.jpg" alt="Creation Date: Oct 7th, 2023" width="450"/>
  </div>
</div>


---

- [LICENSE - GNU Affero GPL](./LICENSE)