TypeError: BertModel.__init__() got an unexpected keyword argument 'safe_serialization' #1195
Comments
I've got the same problem trying to contact Ollama on Windows.
I have the same problem on Google Colab (llama.cpp). I also tried downgrading memgpt 0.3.7 -> 0.3.6 and got the same error with both versions. I suspected that Google Colab's default Python and pip environment had an effect, so I tried Python 3.10 and then 3.11, but the same error occurred with every version. The error seems to be related to the transformers library, so the situation may change if you clone memgpt from git, change the transformers version, and then install it. (Sorry, I don't have time to try this right now.) I look forward to this issue being resolved soon.
I don't quite understand why you think building from git will make a difference - can you expand on that idea?
If you make use of llama_index, this may help: run-llama/llama_index#11939 (comment) (bug in llama-index-embeddings-huggingface 0.1.5, solved in 0.2.0).
This solved my issue: upgrading llama-index-embeddings-huggingface from 0.1.5 to 0.2.0.
Not mine: upgrading llama-index-embeddings-huggingface with pip didn't help. I am using an old Fedora (38) so I can use Python 3.11, which I needed to do to fix an earlier problem.
@YanSte, did you read my response? I tried to update llama-index-embeddings-huggingface to 0.2.0 but CAN'T.
Sorry for the late reply, and sorry my earlier ideas weren't helpful. The issue was resolved in my environment by upgrading llama-index-embeddings-huggingface, as already discussed in this issue. Specifically, I checked with pymemgpt version 0.3.6, which installs llama-index-embeddings-huggingface at version 0.1.5. Running "pip install --upgrade llama-index-embeddings-huggingface" upgraded it to version 0.2.0 and resolved the error.
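The upgrade check described above can be sketched as a small script. This is a minimal sketch, not part of memgpt: the `needs_upgrade` helper and the `(0, 2, 0)` fixed-version tuple are assumptions based on the versions mentioned in this thread (0.1.5 buggy, 0.2.0 fixed).

```python
# Sketch: check whether the installed llama-index-embeddings-huggingface
# is older than the version this thread reports as fixed (0.2.0).
from importlib.metadata import version, PackageNotFoundError


def needs_upgrade(pkg="llama-index-embeddings-huggingface", fixed=(0, 2, 0)):
    """Return True if pkg is installed and older than the fixed version.

    Assumes a simple numeric X.Y.Z version string; pre-release suffixes
    are not handled in this sketch.
    """
    try:
        installed = tuple(int(p) for p in version(pkg).split(".")[:3])
    except PackageNotFoundError:
        return False  # package not installed, nothing to upgrade
    return installed < fixed


if needs_upgrade():
    print("Run: pip install --upgrade llama-index-embeddings-huggingface")
```

If the script prints the upgrade hint, running that pip command (as the comment above did) should move you to 0.2.0 or later.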
I wasn't able to reproduce this error, but am upgrading the […]. In general, I'd also recommend using the endpoint provided by […].
Describe the bug:
Error for memgpt talking to local LLM.
Please describe your setup:
What is the output of memgpt version? (e.g. "0.2.4")
0.3.7
How did you install memgpt?
pip install pymemgpt
and:
pip install 'pymemgpt[local]'
Describe your setup:
What's your OS:
Fedora Linux 38 (for Python 3.11)
How are you running memgpt?
tmux Terminal
I have successfully installed oobabooga on Fedora Linux, downloaded and loaded "ehartford_dolphin-2.2.1-mistral-7b", and can chat happily from the oobabooga console. However, when I try to connect with memgpt using either the airoboros-l2-70b-2.1 or the dolphin-2.1-mistral-7b wrapper, I get the TypeError shown in the title.
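The mechanism behind the error reported here can be illustrated without transformers installed. `FakeBertModel` below is a hypothetical stand-in, not the real `transformers.BertModel`: the point is only that older llama-index-embeddings-huggingface (0.1.5, per this thread) forwarded a `safe_serialization` keyword to a constructor that does not accept it, which is exactly how Python raises this TypeError.

```python
class FakeBertModel:
    """Hypothetical stand-in for a model class whose __init__ does not
    accept a safe_serialization keyword argument."""

    def __init__(self, config=None):
        self.config = config


# Passing an unexpected keyword raises the same kind of TypeError as
# reported in this issue.
try:
    FakeBertModel(config={}, safe_serialization=True)
except TypeError as exc:
    print(exc)  # mentions the unexpected 'safe_serialization' keyword
```

Because the bad keyword comes from the embedding wrapper, not from user code, the fix is upgrading the wrapper package rather than changing memgpt's own calls.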