Quickstart
This guide walks you through the steps to use the Guardrails feature in RagaAI Catalyst to ensure accurate and contextually appropriate LLM outputs.
Step 1: Navigate to Guardrails
From the main menu, select the Guardrails tab.
Step 2: Create a New Deployment
In the Guardrails tab, click on New Deployment.
Provide a name for your deployment in the dialog box that appears and click Create.
Step 3: Select Guardrails in Your Deployment
After creating a deployment, click on it to view and configure it.
Click on Add Guardrails and select the guardrails that match your needs. Guardrails you can select include:
Context Adherence
NSFW Text
Bias Detection
Prompt Injection
(The full list of available guardrails can be found here.)
Step 4: Configure Actions (Fail Condition and Alternate Response)
In the Actions section, set the Fail Condition. You can choose from:
One Fail: The check fails if any single guardrail fails.
All Fail: The check fails only if all guardrails fail.
High Risk Fail: The check fails if any high-risk guardrail fails.
All High Risk Fail: The check fails only if all high-risk guardrails fail.
Optionally, provide an Alternate Response that the system should return if the fail condition is triggered.
Step 5: Save Changes
Click Save Changes to apply your configurations. Make sure all selected guardrails and actions are correctly set before saving.
Step 6: Deploy and Retrieve the Code
After saving your deployment, click the Deploy button.
The deployment will generate code that you can paste into your Python environment to execute guardrails programmatically.
Step 7: Example Code to Execute Guardrails
This code applies the guardrails configured in your deployment to evaluate LLM inputs or outputs based on the selected conditions.
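The exact snippet is generated for your specific deployment when you click Deploy, so use that code in practice. The sketch below only illustrates the general shape of such a script: the import path, class and method names (RagaAICatalyst, GuardExecutor, execute), parameter names, and the deployment ID are assumptions or placeholders, not a verbatim copy of the generated code.

```python
# Minimal sketch of executing a guardrail deployment from Python.
# NOTE: class/method names and parameters below are assumptions; prefer the
# code generated by the Deploy button for your own deployment.
import os

from ragaai_catalyst import RagaAICatalyst, GuardExecutor  # assumed imports

# Authenticate against your Catalyst workspace using keys from the Catalyst UI.
catalyst = RagaAICatalyst(
    access_key=os.environ["RAGAAI_ACCESS_KEY"],
    secret_key=os.environ["RAGAAI_SECRET_KEY"],
)

# Point the executor at the deployment created in Steps 2-6.
# The deployment ID is shown alongside the generated code after deploying.
executor = GuardExecutor(
    deployment_id="YOUR_DEPLOYMENT_ID",  # placeholder
    catalyst=catalyst,
)

# Evaluate an LLM interaction against the deployed guardrails.
result = executor.execute(
    prompt="What is the capital of France?",
    response="The capital of France is Paris.",
    context="France is a country in Western Europe. Its capital is Paris.",
)

# The result indicates whether the configured fail condition was triggered and,
# if so, includes the Alternate Response set in Step 4.
print(result)
```

If the fail condition is triggered, return the Alternate Response to your end user instead of the original LLM output; otherwise, pass the output through unchanged.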