Knowledge Gathering: Generate targeted questions that maximize information gathering by having a dialogue with people.
Surveys predominantly use closed-ended questions. Closed-ended questions lack engagement, leading to low participation and superficial insights, and they fail to adapt to respondents' perspectives. Generative AI can interact with users within their context and also enables nuanced interpretation of text-based responses.
We introduce an LLM-based survey tool that interacts dynamically, understanding and evaluating responses in real time and tailoring follow-up questions to individual answers and their nuances. This enhances engagement and participation rates and ensures meaningful, deeper insights.
- Set up the backend, frontend, and lm-server as described in the README.md in their respective folders.
- The UI can then be accessed at http://localhost:8501
- Recommended hardware: Nvidia T4 GPU (about 16GB of GPU RAM is needed)
When a user opens our application: what do they see, what can they do there, and with every user action, what changes and what are the next possible user actions?
- Create a new survey: add fixed questions, the maximum number of follow-ups allowed, and each fixed question's objective. Creator-specified configs (a config sketch follows this list):
  1. Maximum follow-up questions allowed per fixed question
  2. Objectives used to evaluate whether or not to ask a follow-up question:
     1. Was the answer specific?
     2. Was the answer ambiguous?
     3. Was an example given in the answer?
     4. Did the user understand the question? Did they answer the asked question or something else?
     5. Did the user find the question irrelevant?
     6. Is the question's objective reached?
- Basic UI where the user answers the configured questions one after the other
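For illustration, a creator-specified config could be stored as a document of roughly this shape (field names below are assumptions, not the actual schema):

```python
# Illustrative survey config; field names are assumptions, not the actual
# MongoDB schema used by the project.
survey_config = {
    "title": "Online shopping habits",
    "questions": [
        {
            "text": "How do you usually shop online?",
            "objective": "Understand the respondent's shopping workflow",
            "max_followups": 3,
            "followup_checks": [
                "Was the answer specific?",
                "Was the answer ambiguous?",
                "Was an example given in the answer?",
                "Did the user answer the asked question or something else?",
                "Did the user find the question irrelevant?",
                "Is the question's objective reached?",
            ],
        }
    ],
}
```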
A frontend app written using Streamlit can be used to create surveys and to fill them out. The frontend app interacts with a backend service written using FastAPI. The backend service contains the survey bot, which uses two agents (an objective-met agent and a question-generation agent) to generate follow-up questions wherever needed. The survey questions, the conversation with each survey participant, and the survey state are stored in MongoDB. For LLM capabilities we host the model using vLLM, which comes with many LLM inference optimisations out of the box. The LLM used is a quantised gemma-7b-it.
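A rough sketch of how the two agents could fit together in the backend. The endpoint shape and helper names are illustrative assumptions, not the project's actual code; the agent helpers are stubbed where the real system would prompt the LLM:

```python
# Sketch of the follow-up decision flow; endpoint and helper names are
# assumptions, not the actual project code.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AnswerIn(BaseModel):
    survey_id: str
    question_id: str
    max_followups: int
    conversation: list[dict]  # e.g. [{"role": "bot", "text": "..."}, ...]

def objectives_met(conversation: list[dict]) -> bool:
    # Objective-met agent: in the real system this prompts the LLM with the
    # conversation and the question's objectives; stubbed here.
    return False

def generate_followup(conversation: list[dict]) -> str:
    # Question-generation agent: in the real system this prompts the LLM;
    # stubbed here.
    return "Could you give a concrete example?"

@app.post("/answer")
def handle_answer(payload: AnswerIn):
    # Bot turns beyond the fixed question are follow-ups already asked.
    followups_asked = sum(1 for t in payload.conversation if t["role"] == "bot") - 1
    if objectives_met(payload.conversation) or followups_asked >= payload.max_followups:
        return {"action": "next_question"}
    return {"action": "followup", "question": generate_followup(payload.conversation)}
```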
Test bench creation
- 20 surveys
- User personas
- Survey simulation
- Manual annotation within conversations for the objective-met agent
All generations were done through prompt engineering with Mixtral 8x7B.
We generated 20 surveys with questions (about 3 questions per survey) and associated motivations (some motivations were also added manually). We generated associated survey-participant descriptions and question-answer conversations based on the survey questions. We then sliced each conversation into multiple pieces matching the input expected by the agent and manually annotated the data (i.e. manually marked which conversation slice had which objectives met). This gave us approximately 100 test cases, which we used to evaluate different prompts and thresholds for prompts.
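A simplified sketch of the slicing step, assuming the conversation and data shapes shown below (the real pipeline's format may differ):

```python
def slice_conversation(conversation):
    """Turn one full Q&A conversation into incremental slices, one per
    user turn, matching the input the objective-met agent sees."""
    slices = []
    for i, turn in enumerate(conversation):
        if turn["role"] == "user":
            # Everything up to and including this user answer is one test
            # case; annotators mark which objectives are met at this point.
            slices.append(conversation[: i + 1])
    return slices

# Each user turn yields one annotated test case; this conversation yields 2.
conv = [
    {"role": "bot", "text": "How do you shop online?"},
    {"role": "user", "text": "Mostly on my phone."},
    {"role": "bot", "text": "Which apps do you use most often?"},
    {"role": "user", "text": "Myntra and one grocery app."},
]
print(len(slice_conversation(conv)))  # 2
```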
- Hugging Face models integrated (tested on Gemma-7B)
- ChatGPT API integrated
- vLLM-optimised server: batched requests, quantisation, faster kernels (see the client sketch after this list)
- Google Auth for survey fillers
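Since vLLM exposes an OpenAI-compatible endpoint, the same client code can talk to either the ChatGPT API or the self-hosted server. A minimal sketch, assuming vLLM's default port and a served model named gemma-7b-it:

```python
# Minimal client sketch; base URL, port, and model name are assumptions
# based on vLLM's OpenAI-compatible server defaults.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="gemma-7b-it",  # must match the model name the server was launched with
    messages=[
        {"role": "user", "content": "Generate one follow-up question for: 'I mostly shop on my phone.'"}
    ],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```

Pointing base_url at the hosted API instead switches the same code over to ChatGPT.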
Priority: P0 (highest) to P4 (lowest)
- Multiple types of questions
- MCQ (Single select and multi select) P1
- Text paragraph P0
- Multilingual Support P1
- Survey Bot (Collection of agents) P0
- Authentication P1
- Conversation summarizer into stats and insights, segmented by question P0
- Voice integration
- STT P3
- TTS P4
Members of the Search Relevance Team at Myntra built this to present at a Hasgeek hackathon. With our collective expertise, we aim to innovate solutions. Our team has worked on Gen-AI-enabled features and deep learning tools, e.g. MyFashionGPT for Myntra.