From ee7b050cdf2f6bb26f77fdc6ea022b2ef353fb93 Mon Sep 17 00:00:00 2001 From: Daemon <109057945+Daethyra@users.noreply.github.com> Date: Sun, 8 Oct 2023 20:47:27 -0700 Subject: [PATCH] Moved my GPT4 Custom Instructions, updated docs renamed: OpenAI/GPT-Prompt-Examples/Daethyra_Custom-Instruction_GPT4.md -> OpenAI/GPT-Prompt-Examples/MS-6_Daethyra_Custom-Instruction_GPT4.md modified: Project TODO List.md modified: README.md --- ... MS-6_Daethyra_Custom-Instruction_GPT4.md} | 0 Project TODO List.md | 12 ++++++ README.md | 43 +++++++++---------- 3 files changed, 33 insertions(+), 22 deletions(-) rename OpenAI/GPT-Prompt-Examples/{Daethyra_Custom-Instruction_GPT4.md => MS-6_Daethyra_Custom-Instruction_GPT4.md} (100%) diff --git a/OpenAI/GPT-Prompt-Examples/Daethyra_Custom-Instruction_GPT4.md b/OpenAI/GPT-Prompt-Examples/MS-6_Daethyra_Custom-Instruction_GPT4.md similarity index 100% rename from OpenAI/GPT-Prompt-Examples/Daethyra_Custom-Instruction_GPT4.md rename to OpenAI/GPT-Prompt-Examples/MS-6_Daethyra_Custom-Instruction_GPT4.md diff --git a/Project TODO List.md b/Project TODO List.md index f130480..e270e04 100644 --- a/Project TODO List.md +++ b/Project TODO List.md @@ -1,5 +1,13 @@ ### Todo list +[README] + +- Add intro + - Clearly define: [Utilikit, Pluggable/Components, multi-shot, zero-shot, ] + - create summarization of prompt reusability, and component extendability + - Then, clearly state the intention of the repository + - Finally, provide one to two brief statements to close out and resummarize + --- [GitHub] @@ -12,6 +20,8 @@ we can use LangChain to query the top_k results for functions to serve contextual needs." +--- + [LangChain] - langchain_conv_agent.py @@ -28,6 +38,8 @@ (HF model is cached after first download. Therefore, all runs after the first, are entirely local since we're using ChromaDB) +--- + [OpenAI] - Auto-Embedder diff --git a/README.md b/README.md index eacbd42..d97888f 100644 --- a/README.md +++ b/README.md @@ -4,26 +4,26 @@ --- -A. 
**[Auto-Embedder](./Auto-Embedder)** +A. **[Auto-Embedder](./OpenAI/Auto-Embedder)** -Provides an automated pipeline for retrieving embeddings from[OpenAIs `text-embedding-ada-002`](https://platform.openai.com/docs/guides/embeddings) and upserting them to a [Pinecone index](https://docs.pinecone.io/docs/indexes). +Provides an automated pipeline for retrieving embeddings from [OpenAI's `text-embedding-ada-002`](https://platform.openai.com/docs/guides/embeddings) and upserting them to a [Pinecone index](https://docs.pinecone.io/docs/indexes). -- **[`pinembed.py`](./Auto-Embedder/pinembed.py)**: A Python module to easily automate the retrieval of embeddings from OpenAI and storage in Pinecone. - - **[.env.template](./Auto-Embedder/.env.template)**: Template for environment variables. +- **[`pinembed.py`](./OpenAI/Auto-Embedder/pinembed.py)**: A Python module to easily automate the retrieval of embeddings from OpenAI and storage in Pinecone. + - **[.env.template](./OpenAI/Auto-Embedder/.env.template)**: Template for environment variables. --- -B. **[GPT-Prompt-Examples](./GPT-Prompt-Examples)** +B. **[GPT-Prompt-Examples](./OpenAI/GPT-Prompt-Examples)** -There are three main prompt types,[multi-shot](GPT-Prompt-Examples/multi-shot), [system-role](GPT-Prompt-Examples/system-role), [user-role](GPT-Prompt-Examples/user-role). +There are three main prompt types, [multi-shot](./OpenAI/GPT-Prompt-Examples/multi-shot), [system-role](./OpenAI/GPT-Prompt-Examples/system-role), [user-role](./OpenAI/GPT-Prompt-Examples/user-role). -Please also see the[OUT-prompt-cheatsheet](GPT-Prompt-Examples/OUT-prompt-cheatsheet.md). +Please also see the [OUT-prompt-cheatsheet](./OpenAI/GPT-Prompt-Examples/OUT-prompt-cheatsheet.md). -- **[Cheatsheet for quick power-prompts](./GPT-Prompt-Examples/OUT-prompt-cheatsheet.md)**: A cheatsheet for GPT prompts. - - **[multi-shot](./GPT-Prompt-Examples/multi-shot)**: Various markdown and text files for multi-shot prompts.
- - **[system-role](./GPT-Prompt-Examples/system-role)**: Various markdown files for system-role prompts. - - **[user-role](./GPT-Prompt-Examples/user-role)**: Markdown files for user-role prompts. - - **[Reference Chatlogs with GPT4](./GPT-Prompt-Examples/ChatGPT_reference_chatlogs)**: Contains chat logs and shorthand prompts. +- **[Cheatsheet for quick power-prompts](./OpenAI/GPT-Prompt-Examples/OUT-prompt-cheatsheet.md)**: @Daethyra's go-to prompts. + - **[multi-shot](./OpenAI/GPT-Prompt-Examples/multi-shot)**: Prompts with prompts inside them! It's kind of like a bundle of Matryoshka prompts. + - **[system-role](./OpenAI/GPT-Prompt-Examples/system-role)**: Steer your LLM by shifting the ground it stands on. + - **[user-role](./OpenAI/GPT-Prompt-Examples/user-role)**: Markdown files for user-role prompts. + - **[Reference Chatlogs with GPT4](./OpenAI/GPT-Prompt-Examples/ChatGPT_reference_chatlogs)**: Contains chat logs and shorthand prompts. --- @@ -40,8 +40,8 @@ This module offers a set of functionalities for conversational agents in LangCha - Text splitting using `RecursiveCharacterTextSplitter` - Various embeddings options like `OpenAIEmbeddings`, `CacheBackedEmbeddings`, and `HuggingFaceEmbeddings` -**Usage:** -To use this module, simply import the functionalities you need and configure them accordingly. +**Potential Use Cases:** +Building conversational agents that retain chat history and answer questions over your own documents, using either OpenAI or locally cached Hugging Face embeddings. --- @@ -55,8 +55,8 @@ This module focuses on querying local documents and employs the following featur - Vector storage options like `Chroma` - Embedding options via `OpenAIEmbeddings` -**Usage:** -Similar to `langchain_conv_agent.py`, you can import the functionalities you require. +**Potential Use Cases:** +Searching and question-answering over a local directory of documents, such as private notes or project files, with embeddings stored in a local `Chroma` vector store. --- @@ -70,14 +70,14 @@ A. **[`integrable_captioner.py`](./HuggingFace\image_captioner\integrable_image_ This module focuses on generating captions for images using Hugging Face's transformer models.
Specifically, it offers: -- Model and processor initialization via the`ImageCaptioner` class +- Model and processor initialization via the `ImageCaptioner` class - Image loading through the `load_image` method - Asynchronous caption generation using the `generate_caption` method - Caption caching for improved efficiency - Device selection (CPU or GPU) based on availability -**Usage:** - To utilize this module, import the `ImageCaptioner` class and initialize it with a model of your choice. You can then use its methods to load images and generate captions. +**Potential Use Cases:** +Batch-captioning image collections, generating alt-text for accessibility, or producing captions to index images for search. --- @@ -91,7 +91,6 @@ This module focuses on generating captions for images using Hugging Face's trans +# - [LICENSE - GNU Affero GPL](./LICENSE) ---- - -- [LICENSE - GNU Affero GPL](./LICENSE) \ No newline at end of file +# - [Please see the contributing file](./CONTRIBUTING.md)