
0.2.12 error with local models #809

Closed
6 tasks
jimlloyd opened this issue Jan 13, 2024 · 1 comment

Comments

@jimlloyd
Contributor

Describe the bug
My first attempts at memgpt run using a local model failed while executing SQL: INSERT INTO memgpt_recall_memory_agent. The error was: null value in column "model" of relation "memgpt_recall_memory_agent" violates not-null constraint.
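
A minimal sketch of the failure mode, using SQLite in place of Postgres (the table and column names come from the error message above; the rest of the schema is assumed):

```python
import sqlite3

# Hypothetical minimal table with the same NOT NULL constraint on "model".
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE memgpt_recall_memory_agent (id INTEGER PRIMARY KEY, model TEXT NOT NULL)"
)

try:
    # Inserting a row that leaves "model" NULL trips the constraint,
    # analogous to the error Postgres reports here.
    conn.execute("INSERT INTO memgpt_recall_memory_agent (model) VALUES (NULL)")
except sqlite3.IntegrityError as exc:
    print(exc)  # NOT NULL constraint failed: memgpt_recall_memory_agent.model
```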

I am using a local model with llama.cpp.

Running memgpt configure did not ask for a model. memgpt run --help mentions a --model option, so I ran memgpt run --model neuralhermes-2.5-mistral-7b, but I still get the null value in column "model" error.

Community support (Thanks MaxPower!) gave me the workaround of hacking db.py so that insert_many adds a line to set the model to an arbitrary string:

    def insert_many(self, records: List[Record], show_progress=False):
        iterable = tqdm(records) if show_progress else records
        for record in iterable:
            db_record = self.db_model(**vars(record))
            # Workaround: hardcode a model name so the NOT NULL constraint passes.
            db_record.__dict__["model"] = "neuralhermes-2.5-mistral-7b"
            self.session.add(db_record)
        self.session.commit()
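
A slightly less brittle variant of the same workaround would only backfill the column when it is unset, instead of overwriting it unconditionally. This is a sketch, not MemGPT code: the default_model parameter and the stub record/session classes below are mine, standing in for the real SQLAlchemy model and session.

```python
from dataclasses import dataclass
from typing import Any, List, Optional

# Stand-ins for the real SQLAlchemy pieces (assumptions, not MemGPT code):
@dataclass
class FakeDBRecord:
    text: str
    model: Optional[str] = None

class FakeSession:
    def __init__(self) -> None:
        self.added: List[Any] = []
        self.committed = False

    def add(self, rec: Any) -> None:
        self.added.append(rec)

    def commit(self) -> None:
        self.committed = True

def insert_many(session, records, default_model="neuralhermes-2.5-mistral-7b"):
    # Backfill "model" only when the record left it unset, rather than
    # clobbering a value that was already populated.
    for record in records:
        if getattr(record, "model", None) is None:
            record.model = default_model
        session.add(record)
    session.commit()

session = FakeSession()
insert_many(session, [FakeDBRecord(text="hi"), FakeDBRecord(text="yo", model="gpt-4")])
print([r.model for r in session.added])
# → ['neuralhermes-2.5-mistral-7b', 'gpt-4']
```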

Please describe your setup
I'm using llama.cpp specifically. I have looked briefly at the code, and there seems to be special support for ollama and vllm; those backends may work fine.

  • MemGPT version
    0.2.12

  • How did you install memgpt?
    poetry shell; poetry install --all-extras

  • Describe your setup
    macOS Sonoma

  • How are you running memgpt?
    Terminal in VS Code

Screenshots
N/A

Additional context
N/A


If you're not using OpenAI, please provide additional information on your local LLM setup:

Local LLM details

If you are trying to run MemGPT with local LLMs, please provide the following information:

  • The exact model you're trying to use (e.g. dolphin-2.1-mistral-7b.Q6_K.gguf)
  • The local LLM backend you are using (web UI? LM Studio?)
  • Your hardware for the local LLM backend (local computer? operating system? remote RunPod?)
@sarahwooders
Collaborator

Should be fixed with #831

Development

No branches or pull requests

2 participants