Confirm this is a feature request for the Node library and not the underlying OpenAI API.
This is a feature request for the Node library
Describe the feature or improvement you're requesting
Hi, I believe that `max_prompt_tokens` and `max_completion_tokens` are essential for using the full Assistants API. Without a way to cap tokens, the thread just keeps growing and running the model gets very expensive. Please add the `max_prompt_tokens` and `max_completion_tokens` parameters to `openai.beta.threads.runs.create`. Thanks!
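For illustration, here is a minimal sketch of the call shape this request envisions. The parameter names come from the underlying OpenAI API; whether and how the Node SDK exposes them is exactly what this issue asks about, and the IDs below are placeholders:

```javascript
// Hypothetical params object for the requested feature. The token-cap
// fields mirror the underlying API's names; their presence in the Node
// SDK's types is the feature being requested, not current behavior.
const runParams = {
  assistant_id: "asst_example",   // placeholder assistant ID
  max_prompt_tokens: 2000,        // cap tokens drawn from the thread history
  max_completion_tokens: 500,     // cap tokens the model may generate
};

// With an initialized client, the envisioned usage would look like:
// const run = await openai.beta.threads.runs.create("thread_example", runParams);
```

Capping both sides keeps long-running threads from silently inflating per-run cost.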
Additional context
No response