Quickstart

This guide walks you through the steps to use the Guardrails feature in RagaAI Catalyst to ensure accurate and contextually appropriate LLM outputs.


Step 1: Navigate to Guardrails

  1. From the main menu, select the Guardrails tab.


Step 2: Create a New Deployment

  1. In the Guardrails tab, click on New Deployment.

  2. Provide a name for your deployment in the dialog box that appears and click Create.


Step 3: Select Guardrails in Your Deployment

  1. After creating a deployment, click on it to view and configure it.

  2. Click on Add Guardrails and select the guardrails that match your needs. Guardrails you can select include:

    • Context Adherence

    • NSFW Text

    • Bias Detection

    • Prompt Injection, etc.

    (The full list of available guardrails can be found here.)


Step 4: Configure Actions (Fail Condition and Alternate Response)

  1. In the Actions section, set the Fail Condition (a conceptual sketch of how these conditions combine appears after this list). You can choose from:

    • One Fail: Fails if at least one guardrail fails.

    • All Fail: Fails only if every guardrail fails.

    • High Risk Fail: Fails if at least one high-risk guardrail fails.

    • All High Risk Fail: Fails only if every high-risk guardrail fails.

  2. Optionally, provide an Alternate Response that the system should return if the fail condition is triggered.
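
The four fail conditions combine the pass/fail results of the individual guardrails in your deployment. The sketch below is plain Python and not part of the RagaAI Catalyst SDK; the function name, the shape of the results list, and the condition labels are illustrative assumptions only.

# Conceptual sketch only, not the SDK implementation. Each result is a dict
# with a pass/fail flag and whether that guardrail is marked high-risk.
def deployment_fails(results, fail_condition):
    failed = [r for r in results if not r["passed"]]
    high_risk = [r for r in results if r["high_risk"]]
    high_risk_failed = [r for r in failed if r["high_risk"]]

    if fail_condition == "one_fail":
        return len(failed) >= 1
    if fail_condition == "all_fail":
        return len(failed) == len(results)
    if fail_condition == "high_risk_fail":
        return len(high_risk_failed) >= 1
    if fail_condition == "all_high_risk_fail":
        return len(high_risk) > 0 and len(high_risk_failed) == len(high_risk)
    raise ValueError(f"Unknown fail condition: {fail_condition}")

# Example: one guardrail failed, so "one_fail" triggers but "all_fail" does not.
results = [
    {"name": "NSFW Text", "passed": True, "high_risk": True},
    {"name": "Bias Detection", "passed": False, "high_risk": False},
]
print(deployment_fails(results, "one_fail"))   # True
print(deployment_fails(results, "all_fail"))   # False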


Step 5: Save Changes

  1. Click Save Changes to apply your configurations. Make sure all selected guardrails and actions are correctly set before saving.


Step 6: Deploy and Retrieve the Code

  1. After saving your deployment, click the Deploy button.

  2. The deployment will generate code that you can paste into your Python environment to execute guardrails programmatically.


Step 7: Example Code to Execute Guardrails

from ragaai_catalyst import Guardrails

# Example: Applying input guardrails
prompt = "Write a short article on climate change."
context = "Climate change impact on weather patterns."
instruction = "Keep it factual and concise."

input_deployment_id = "your_input_deployment_id"  # replace with the ID of your deployed guardrail configuration

input_violations = Guardrails.use(
    Deployment_id=input_deployment_id,
    prompt=prompt,
    context=context,
    instructions=instruction
)

print(input_violations)

This code applies the guardrails configured in your deployment to evaluate LLM inputs or outputs based on the selected conditions.
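
If the fail condition is triggered, a common follow-up is to return the Alternate Response configured in Step 4 instead of the model output. The continuation below is only a sketch: the guide does not show the structure of input_violations, so treating it as truthy on failure, the call_llm helper, and the alternate response text are all hypothetical.

# Continuation of the snippet above; sketch only.
alternate_response = "I'm unable to help with that request."  # the Alternate Response from Step 4

def call_llm(p):
    # Placeholder for your own model call.
    return f"(model output for: {p})"

# Assumption: input_violations is truthy when the deployment's fail condition fires.
if input_violations:
    final_output = alternate_response
else:
    final_output = call_llm(prompt)

print(final_output)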
