
[feat] Should be possible to deploy Guardrails services on premise without needing any Guardrails token #1218

Open
vlnrd opened this issue Jan 22, 2025 · 1 comment
Labels
enhancement New feature or request

vlnrd commented Jan 22, 2025

Description
It should be possible to deploy Guardrails services on premise without needing any Guardrails token.

Why is this needed
I want to deploy Guardrails Hub services (the API, validators, etc.) on my own premises, and I don't want to use a Guardrails Hub token at service startup. The reason is that I don't want to send any information about my organization's clients to the Guardrails Hub. I also don't want to depend on a Guardrails Hub token, which can create maintenance issues around token expiry and reconfiguration.

Implementation details
Provide a way to run the services without needing a Guardrails Hub token.

End result
Guardrails Hub services available to deploy on premises, in my own infrastructure, without the Guardrails Hub token.

@vlnrd vlnrd added the enhancement New feature or request label Jan 22, 2025
@CalebCourier
Collaborator

@vlnrd A Guardrails API Key is required in order to install validators from the Guardrails Hub, as it is used for authentication within our private package index. However, once the installation step is complete, it is certainly possible to keep everything within your own environment/infrastructure, making the token a build-time-only dependency.

When configuring the Guardrails CLI, you can opt out of both anonymous metrics and remote inference by passing the flags below:

guardrails configure --disable-metrics --disable-remote-inferencing

Note that when you opt out of remote inferencing, you must run the ML models that back the validators yourself. You can do this by downloading them as part of the install process with the --install-local-models flag, as in the example below:

guardrails hub install hub://guardrails/detect_pii --install-local-models

Also note that running the models as part of the app/server, as shown above, will increase the size of the deployable. An alternative is to host the models that back these validators on their own dedicated hardware and use our remote inferencing functionality, pointing it at your own hosts instead of our servers. We document how to do that here: https://www.guardrailsai.com/docs/how_to_guides/hosting_validator_models
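Putting the steps above together, a fully self-hosted build could be sketched as a container image in which the Hub token appears only at build time. This is an illustrative sketch, not an official recipe: the GUARDRAILS_TOKEN build argument, the base image, and the app.py entrypoint are assumptions, and the exact `guardrails configure` flags should be checked against your installed CLI version.

```dockerfile
FROM python:3.11-slim

# Assumed build argument: pass the Hub token with
#   docker build --build-arg GUARDRAILS_TOKEN=... .
# It is only needed during the build; the running container never sees it.
ARG GUARDRAILS_TOKEN

RUN pip install guardrails-ai

# Configure the CLI, opting out of anonymous metrics and remote inference.
RUN guardrails configure --token "$GUARDRAILS_TOKEN" \
    --disable-metrics --disable-remote-inferencing

# Install the validator along with the local ML models that back it.
RUN guardrails hub install hub://guardrails/detect_pii --install-local-models

# From here on, nothing depends on the token; start your own app/server.
CMD ["python", "app.py"]
```

One caveat on this sketch: plain `ARG` values can be recovered from the image history, so for a production build Docker's `--secret` build mounts are a safer way to pass the token.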
