
Using the /v1/chat/completions API, I am getting a Python error: `send_message_to_agent() missing 1 required positional argument: 'stream_legacy'` #1591

Closed
vysona-scott opened this issue Jul 29, 2024 · 0 comments


@vysona-scott (Contributor)

Describe the bug

When calling the /v1/chat/completions API, the server raises a Python error: `send_message_to_agent() missing 1 required positional argument: 'stream_legacy'`.

It looks like the route handler that implements this API (https://github.com/cpacker/MemGPT/blob/main/memgpt/server/rest_api/openai_chat_completions/chat_completions.py#L73C4-L82C14) calls `send_message_to_agent()`, whose `stream_legacy` boolean parameter has no default value (https://github.com/cpacker/MemGPT/blob/main/memgpt/server/rest_api/agents/message.py#L90C1-L91C1), so any caller that omits it fails with a `TypeError`.
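A minimal sketch of the failure mode, with hypothetical simplified signatures (the real function in `memgpt/server/rest_api/agents/message.py` takes more parameters): a parameter with no default is a required argument, so the chat-completions route crashes when it omits it. One possible fix, which is an assumption rather than the maintainers' chosen patch, is to give `stream_legacy` a default.

```python
# Hypothetical reduction of the bug: `stream_legacy` has no default,
# so it is a required argument.
def send_message_to_agent(message, stream_legacy):
    return {"message": message, "stream_legacy": stream_legacy}

# The /v1/chat/completions handler omits the argument:
try:
    send_message_to_agent("hello")
except TypeError as e:
    # prints: send_message_to_agent() missing 1 required positional argument: 'stream_legacy'
    print(e)

# Possible fix (an assumption): give the parameter a default so callers
# that predate `stream_legacy` keep working.
def send_message_to_agent_fixed(message, stream_legacy=False):
    return {"message": message, "stream_legacy": stream_legacy}

print(send_message_to_agent_fixed("hello"))
```

An alternative fix would be to update the chat-completions route to pass `stream_legacy` explicitly; either way the crash disappears.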

Please describe your setup

Additional context

```shell
curl --location 'http://localhost:8083/v1/chat/completions' \
--header 'Authorization: Bearer <USER TOKEN>' \
--header 'Content-Type: application/json' \
--data '{
    "messages": [
        {
            "id": "UnzuNq8",
            "createdAt": "2024-07-29T14:14:10.412Z",
            "role": "user",
            "content": "ssssss"
        }
    ],
    "user": "355e3a6a-7176-42d6-a4b0-09db2d202a0f",
    "model": "gpt-4o",
    "requestData": {}
}'
```
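For reference, the same reproduction expressed in Python. The URL and `<USER TOKEN>` placeholder are taken verbatim from the curl command; the actual POST (commented out) needs the `requests` package and a running MemGPT server, and should trigger the same `TypeError` server-side.

```python
import json

# Endpoint and headers from the curl reproduction above.
url = "http://localhost:8083/v1/chat/completions"
headers = {
    "Authorization": "Bearer <USER TOKEN>",  # placeholder, as in the report
    "Content-Type": "application/json",
}

# Same request body as the curl --data payload.
payload = {
    "messages": [
        {
            "id": "UnzuNq8",
            "createdAt": "2024-07-29T14:14:10.412Z",
            "role": "user",
            "content": "ssssss",
        }
    ],
    "user": "355e3a6a-7176-42d6-a4b0-09db2d202a0f",
    "model": "gpt-4o",
    "requestData": {},
}

body = json.dumps(payload)
# To actually send it (assumes `requests` is installed and the server is up):
# import requests
# resp = requests.post(url, headers=headers, data=body)
print(body)
```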

MemGPT Config

```ini
[defaults]
preset = memgpt_chat
persona = sam_pov
human = basic

[model]
model = gpt-4o
model_endpoint = https://api.openai.com/v1
model_endpoint_type = openai
context_window = 8192

[embedding]
embedding_endpoint_type = openai
embedding_endpoint = https://api.openai.com/v1
embedding_model = text-embedding-ada-002
embedding_dim = 1536
embedding_chunk_size = 300

[archival_storage]
type = postgres
path = /home/scott/.memgpt/chroma

[recall_storage]
type = postgres
path = /home/scott/.memgpt

[metadata_storage]
type = postgres
path = /home/scott/.memgpt

[version]
memgpt_version = 0.3.21

[client]
anon_clientid = 00000000-0000-0000-0000-000000000000
```

If you're not using OpenAI, please provide additional information on your local LLM setup:

Local LLM details

If you are trying to run MemGPT with local LLMs, please provide the following information:

  • The exact model you're trying to use (e.g. dolphin-2.1-mistral-7b.Q6_K.gguf)
  • The local LLM backend you are using (web UI? LM Studio?)
  • Your hardware for the local LLM backend (local computer? operating system? remote RunPod?)