Replies: 1 comment
-
Hi @YaphetKG, yes, it is possible to set the max token limit per task. For example, have a look at the content safety prompts.yml. If your issue persists, please let me know; it could be that the task you are using does not support overriding it.
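A minimal sketch of what such a prompts.yml override could look like. The task name, template variable, and prompt text below are illustrative assumptions, not copied from the shipped config; check your own configuration for the actual task identifier:

```yaml
# Sketch of a prompts.yml entry raising the per-task token cap.
# Task name and template variable are illustrative; adjust to match
# the task your rails config actually uses.
prompts:
  - task: content_safety_check_input $model=content_safety
    content: |
      Is the following user message safe? Answer "safe" or "unsafe".
      User message: "{{ user_input }}"
    # A larger cap leaves room for reasoning models that prepend a
    # <think>...</think> block before the actual answer.
    max_tokens: 200
```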
-
I was trying to use NeMo Guardrails with a DeepSeek-distilled Llama model, and these models always start their output with
<think> blah blah </think> actual response
After some debugging, I noticed that nemo-guardrails makes calls to the LLM with max tokens = 3, which captures only this opening think tag, and the rail then blocks further flow.
My question is: is there a way to customize this, maybe a shim to parse the LLM response, or something to extend the token limit?
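The shim idea above can be sketched as a small helper that strips a leading <think>...</think> block from the raw completion before the rail inspects it. This is a hypothetical helper, not part of the nemoguardrails API; how you wire it in front of the rail depends on your setup:

```python
import re

def strip_think_tags(text: str) -> str:
    """Remove a leading <think>...</think> block that reasoning-distilled
    models (e.g. DeepSeek-R1 Llama distills) prepend to their answers.

    Hypothetical helper for illustration; apply it to the raw LLM
    completion before handing the text to the guardrail check.
    """
    # DOTALL lets .*? span newlines inside the think block;
    # the non-greedy match stops at the first closing tag.
    return re.sub(r"^\s*<think>.*?</think>\s*", "", text, flags=re.DOTALL)
```

Note that this only helps if the model is given enough tokens to emit the closing </think> tag in the first place, which is why raising the per-task max tokens matters as well.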