
# RagImplementation

Implementing Retrieval Augmented Generation (RAG) using GPT-3.5-Turbo as the LLM and LangChain to simplify the implementation. The data is fed in as a plain Python list, which keeps the example easy to follow.
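The overall flow can be sketched without any external services. This is a minimal, library-free illustration of the RAG idea, not the notebook's actual code: the real implementation uses LangChain, Pinecone, and GPT-3.5-Turbo, while here the retriever is a toy word-overlap scorer and the final LLM call is left out, so only the retrieve-then-augment pattern is shown.

```python
import re

def tokens(s: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", s.lower()))

def retrieve(query: str, texts: list[str], k: int = 1) -> list[str]:
    """Return the k texts sharing the most words with the query.
    (Stands in for a Pinecone similarity search over embeddings.)"""
    q = tokens(query)
    scored = sorted(texts, key=lambda t: len(q & tokens(t)), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Stuff the retrieved passages into the prompt, as RAG does."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

# Example data, standing in for the notebook's `texts` list.
texts = [
    "Pinecone is a managed vector database for similarity search.",
    "GPT-3.5-Turbo is an OpenAI chat model.",
]

context = retrieve("What is Pinecone?", texts)
prompt = build_prompt("What is Pinecone?", context)
# `prompt` would then be sent to the chat model, e.g. via LangChain.
```

In the notebook, the word-overlap scorer is replaced by embedding the texts and querying a Pinecone index, but the shape of the pipeline is the same.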

## How to Run

- Open the attached Google Colab file.
- Set the OpenAI API key with the name `OPENAI_API`, the Pinecone API key with the name `PINECONE_API_KEY`, and the Pinecone environment with the name `PINECONE_ENV`.
- Run the Colab file to see the results, with RAG drawing its benefit from the extra data provided in the list named `texts`.
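If the notebook reads the keys from environment variables with those exact names (an assumption here; it may instead use Colab's secrets panel), they can be set in a cell like this, with placeholder values substituted for your real keys:

```python
import os

# Placeholder values — replace with your actual keys before running.
# The variable names match the ones listed above.
os.environ["OPENAI_API"] = "your-openai-api-key"
os.environ["PINECONE_API_KEY"] = "your-pinecone-api-key"
os.environ["PINECONE_ENV"] = "your-pinecone-environment"
```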