Hey everyone!

I'm currently working on building a test chatbot using langchain-go, and I need to be able to flush messages from the chat history when the token size of the full prompt hits a certain limit.
To tackle this, I've been digging into the codebase and exploring options to contribute to the repo. The Python version has a handy reference to `BaseLanguageModel`, which handles the logic for measuring the token size of the stored memory buffer. The Go version doesn't yet have anything similar built in, so I ended up using the [tiktoken-go module](https://github.com/pkoukk/tiktoken-go) to get the token size of my history.
Since I'm still a bit new to Go, I was wondering whether this difference in design is something specific to Go or just a choice made by the author. As far as I understand, Go offers inheritance-like behavior through struct embedding, so I created my own memory wrapper by embedding the memory `Buffer` struct.
I might be missing something, but the main sticking point, and the reason I needed the wrapper in the first place, was that I couldn't access `ChatHistory.messages` since it's an unexported field, and I couldn't find a way to pop or slice out the messages that overflowed the token limit. I needed my own storage that I can manipulate and access.
So I was wondering if it would be a good idea, and the simplest solution for now, to just extend the existing `Buffer` type with something like:
I'm also wondering whether this should be built into every type of memory, since I don't see a case where I wouldn't worry about the token size of the prompt. Maybe a separate package that takes care of token counting?