Replies: 3 comments · 5 replies
-
Update: here's my entire project for a bit more context. https://github.com/superfly/llm-describer Not much content there since it's a demo I'm putting together for a blog post, but describer.go contains the code of interest.
-
I believe that if you use an agent with memory, you can solve your problem by keeping the image description in context. Alternatively, you can persist the description somewhere and inject it into the prompt using a PromptTemplate.
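For the persist-and-inject approach, here's a minimal sketch of what that could look like, assuming langchaingo's `prompts` and `ollama` packages; the stored description, template wording, and model name are placeholders, not code from the project above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
	"github.com/tmc/langchaingo/prompts"
)

func main() {
	ctx := context.Background()

	llm, err := ollama.New(ollama.WithModel("llava")) // placeholder model name
	if err != nil {
		log.Fatal(err)
	}

	// Assume the description was generated earlier and persisted somewhere
	// (e.g. PocketBase); it's hard-coded here purely for illustration.
	storedDescription := "A calico cat sleeping on a windowsill in the sun."

	// Inject the stored description into the follow-up prompt.
	tmpl := prompts.NewPromptTemplate(
		"You previously described an image as follows:\n{{.description}}\n\n"+
			"Answer this follow-up question about the image: {{.question}}",
		[]string{"description", "question"},
	)
	prompt, err := tmpl.Format(map[string]any{
		"description": storedDescription,
		"question":    "What color is the cat?", // placeholder question
	})
	if err != nil {
		log.Fatal(err)
	}

	answer, err := llms.GenerateFromSinglePrompt(ctx, llm, prompt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(answer)
}
```

The tradeoff is that the model only sees the text description, not the image itself, so follow-ups are limited to whatever detail the first description captured.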
-
Hey folks, I'm new to LLM development, so I hope I'm on the right path. I'm working on a small image-describing service based on Ollama. I have the base case working for simply describing the image, but now I want to add the ability to ask follow-up questions, and I feel like I've hit a wall.
First, I wasn't clear on whether chains or agents were the tool I needed here. It looks like agents can have memory, but they might be overkill for this case. Is that correct? I don't need any other tools for this, just image descriptions.
My service has a web frontend, and I'm caching user questions and follow-ups. When the user asks a follow-up question, I create a chain. I'm not sure how to add the image binary data either to the memory or to the request alongside the memory. Is this possible?
For reference, here's what I have so far. It uses PocketBase for persistence, but it should be pretty obvious what's going on:
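A minimal sketch of the shape of it (illustrative rather than the real code, assuming langchaingo's multimodal content API; `imageBytes`, the file path, and the hard-coded questions are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
)

func main() {
	ctx := context.Background()

	llm, err := ollama.New(ollama.WithModel("llava")) // placeholder model name
	if err != nil {
		log.Fatal(err)
	}

	imageBytes, err := os.ReadFile("photo.jpg") // placeholder image path
	if err != nil {
		log.Fatal(err)
	}

	// Base case: the first message carries the image plus the initial question.
	history := []llms.MessageContent{
		{
			Role: llms.ChatMessageTypeHuman,
			Parts: []llms.ContentPart{
				llms.BinaryPart("image/jpeg", imageBytes),
				llms.TextPart("Describe this image."),
			},
		},
	}

	resp, err := llm.GenerateContent(ctx, history)
	if err != nil {
		log.Fatal(err)
	}
	description := resp.Choices[0].Content
	fmt.Println(description)

	// Follow-up: replay the cached turns as chat history, so the model
	// still sees the image, then append the new question.
	history = append(history,
		llms.TextParts(llms.ChatMessageTypeAI, description),
		llms.TextParts(llms.ChatMessageTypeHuman, "What's in the background?"),
	)

	resp, err = llm.GenerateContent(ctx, history)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Choices[0].Content)
}
```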
Thanks.