Not getting a label for toxic text prompt. #61
Unanswered · VectorAnalytics asked this question in Q&A
I'm working through text_classification.ipynb. In the section on toxicity detection, I can change the text and generate new "non-toxic" labels. But when I enter text that depicts racism or contains a "bad" word, I get no response from the model at all. I was expecting to receive the "toxic" label. Do we not get any label for toxic text?
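For what it's worth, one way to get an explicit score for toxic input, rather than relying on the model to emit a label at all, is to query a dedicated toxicity classifier directly. Below is a minimal sketch assuming the Hugging Face transformers library and the public unitary/toxic-bert checkpoint; neither is confirmed to be what text_classification.ipynb actually uses.

```python
# A minimal sketch, not the notebook's actual code: it assumes the Hugging Face
# `transformers` library and the public `unitary/toxic-bert` checkpoint as
# stand-ins for whatever model text_classification.ipynb uses.
from transformers import pipeline

# top_k=None returns a score for every label instead of only the top one, and
# sigmoid suits this multi-label model (toxic, obscene, insult, ...), so even
# clearly toxic text yields explicit, inspectable scores rather than silence.
clf = pipeline(
    "text-classification",
    model="unitary/toxic-bert",
    top_k=None,
    function_to_apply="sigmoid",
)

for text in ["Have a wonderful day!", "I hate you, you worthless idiot."]:
    print(text, "->", clf(text))
```

Note also that many hosted model APIs run safety filters that suppress the response entirely when the input is flagged as harmful, which would match the "no response" behavior described above, so it is worth checking whether the response object exposes a blocked or safety-attributes field.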
Replies: 1 comment

Hi @VectorAnalytics ,