Prompt Shielding lab #53
vieiraae
announced in
Announcements
Replies: 0 comments
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
-
Prompt Shielding lab
Playground to try Prompt Shields from Azure AI Content Safety service that analyzes LLM inputs and detects User Prompt attacks and Document attacks, which are two common types of adversarial inputs.
Get started
Proceed by opening the Jupyter notebook, and follow the steps provided.
Add questions or share feedback bellow
Beta Was this translation helpful? Give feedback.
All reactions