memgpt load directory with hugging-face embeddings fails to parse embed response #723
Comments
My memgpt config file:
I started from scratch and reproduced the error. I did run the quickstart and it seemed to run without problems. I then did the
After fixing these problems it seemed like I could run memgpt and carry on a dialog. (Though I had a typo when I tried to give it my correct name and wrote
Finally, I ran the same
Seems like this is an issue with TEI embeddings, looking into it. Works fine with OpenAI:
Doesn't work with
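For anyone trying to narrow this down, here is a minimal sketch of inspecting what the embedding server actually returns. The endpoint URL, port, and request payload are assumptions based on the text-embeddings-inference docs, not taken from this report:

```python
import json
import urllib.request

# Hypothetical local TEI endpoint; adjust host/port to match your
# text-embeddings-router instance.
url = "http://127.0.0.1:8080/embed"
payload = json.dumps({"inputs": "hello world"}).encode("utf-8")

req = urllib.request.Request(
    url, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())

# Print the top-level shape so it can be compared against what the OpenAI
# endpoint returns (an object with a "data" list) versus the flat list of
# floats that memgpt reportedly expects here.
print(type(body))
print(json.dumps(body, indent=2)[:500])
```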
This looks like the issue I raised a few weeks ago (#587). Would be nice to see it fixed.
Co-authored-by: Mindy Long <[email protected]>
Describe the bug
This is the issue for the problem first reported in Discord https://discordapp.com/channels/1161736243340640419/1162177332350558339/1189677007915720754
I ran this command:
Where {component} is a relatively small component in the large {project}. The error is:
I am running text-embeddings-router locally and observed that it served about 50 requests with log messages looking like:

It seems to me that memgpt had loaded the documents, split them into about 50 nodes, ran the embeddings requests on the text for all 50, and then began iterating over the responses and failed on the first iteration. The failure is because memgpt was expecting a simple array of 1024 floats, but instead received a JSON object with multiple properties. The embeddings vector was contained in the object, but in an unexpected location.
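To illustrate the mismatch, here is a minimal sketch of the kind of defensive handling that would accept both shapes. The function name and the exact nesting of the object are hypothetical, not taken from memgpt's code:

```python
from typing import Any, List

def extract_embedding(response: Any) -> List[float]:
    """Return a flat embedding vector from either response shape.

    memgpt apparently expects a bare list of floats (e.g. 1024 values),
    but the TEI-backed endpoint returned a JSON object with the vector
    nested inside it. The key names below ("embedding", "data") are
    guesses for illustration only.
    """
    # Already the expected shape: a flat list of floats.
    if isinstance(response, list) and response and isinstance(response[0], float):
        return response

    # A list of vectors (one per input), as some embed routes return.
    if isinstance(response, list) and response and isinstance(response[0], list):
        return response[0]

    # OpenAI-style object: {"data": [{"embedding": [...]}], ...}
    if isinstance(response, dict):
        if "embedding" in response:
            return response["embedding"]
        if "data" in response and response["data"]:
            return response["data"][0].get("embedding", [])

    raise ValueError(f"Unrecognized embedding response shape: {type(response)}")
```

This is only meant to show the shape mismatch, not a proposed patch; the real fix may simply be pointing the embedding endpoint type at the right route.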
Please describe your setup
I have over several days been hacking towards using memgpt as a coding assistant with a large C++ codebase. I probably cannot give a concise & accurate list of commands for how I got here, but after filing this I will try to reproduce the problem from scratch. In the meantime, here is what I can provide for my current setup:
Powerbook M2, 64 GB RAM
Sonoma 14.2
MemGPT version: 0.2.10
which memgpt: /Users/jim.lloyd/.pyenv/shims/memgpt
jim.lloyd@jimsm2 ~ % pyenv version
3.11.7 (set by PYENV_VERSION environment variable)
I'm pretty sure I installed all dependencies in this 3.11.7 environment this way:
python -m pip install memgpt
and then repeated for any other requirements.
I run memgpt via iTerm or VS Code shells.
Screenshots
If applicable, add screenshots to help explain your problem.
Additional context
Add any other context about the problem here.
If you're not using OpenAI, please provide additional information on your local LLM setup:
Local LLM details
Currently using llama.cpp. I cloned the llama.cpp repo, built it with make -j 8, and then run it with:
, and then run it with:The text was updated successfully, but these errors were encountered: