Implementing Retrieval-Augmented Generation (RAG) using GPT-3.5-Turbo as the LLM and LangChain to simplify the implementation; the extra data is fed in as a plain Python list to keep the example easy to follow. A minimal code sketch of the pipeline is shown after the steps below.
- Open the attached Google Colab file.
- Set the OpenAI API key under the name OPENAI_API, the Pinecone API key under the name PINECONE_API_KEY, and the Pinecone environment under the name PINECONE_ENV.
- Run the Colab file to see the results, where RAG benefits from the extra data provided in the list named texts.
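The sketch below outlines the kind of pipeline the Colab file builds: embed the entries of the `texts` list, store them in a Pinecone index, and answer questions with GPT-3.5-Turbo grounded in the retrieved entries. It assumes the legacy `langchain` and `pinecone-client` (v2) packages and reads the keys from environment variables with the names listed above; the index name, the question, and the contents of `texts` are placeholders, and import paths differ in newer LangChain releases.

```python
import os

import pinecone
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

# Keys are read from the variable names used in the Colab file (assumption:
# they are exposed as environment variables).
OPENAI_API = os.environ["OPENAI_API"]
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENV"],
)

# The extra data is supplied as a plain Python list of strings.
texts = [
    "Example fact one that the base model does not know.",
    "Example fact two that should be retrievable at answer time.",
]

# Embed the list entries and upsert them into an existing Pinecone index
# (hypothetical index name; it must be created in Pinecone beforehand).
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API)
vectorstore = Pinecone.from_texts(texts, embeddings, index_name="rag-demo")

# Wire the retriever and GPT-3.5-Turbo into a retrieval-augmented QA chain.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", openai_api_key=OPENAI_API)
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())

# The answer is grounded in whichever entries of `texts` are retrieved.
print(qa.run("What does example fact two say?"))
```

Keeping the knowledge source as a simple list avoids document loaders and text splitters, so the retrieval step itself is easier to inspect; swapping the list for real documents only changes how `texts` is produced.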