Quickstart
This guide walks you through the steps to use the Guardrails feature in RagaAI Catalyst to ensure accurate and contextually appropriate LLM outputs.
Get started with Guardrails using this Colab Link
Step 1: Navigate to Guardrails
From the main menu, select the Guardrails tab.
Step 2: Create a New Deployment
In the Guardrails tab, click on New Deployment.
Provide a name for your deployment in the dialog box that appears and click Create.
Deployment: A deployment is a group of guardrails selected by the user. When you create a deployment and select its guardrails, Catalyst generates a deployment ID, which you can use in the Python SDK as shown below.
Step 3: Select Guardrails in Your Deployment
After creating a deployment, click on it to view and configure.
Click Add Guardrails and select the guardrails that match your needs. Available guardrails include:
Context Adherence
NSFW Text
Bias Detection
Prompt Injection, etc.
(The full list of available guardrails can be found here.)
Step 4: Configure Actions (Fail Condition and Alternate Response)
In the Actions section, set the Fail Condition. You can choose from:
One Fail: Fails if any one guardrail fails.
All Fail: Fails only if all guardrails fail.
High Risk Fail: Fails if any one high-risk guardrail fails.
All High Risk Fail: Fails only if all high-risk guardrails fail.
Optionally, provide an Alternate Response that the system should return if the fail condition is triggered.
Step 5: Save Changes
Click Save Changes to apply your configurations. Make sure all selected guardrails and actions are correctly set before saving.
Step 6: Deploy and Retrieve the Code
After saving your deployment, click the Deploy button.
The deployment will generate code that you can paste into your Python environment to execute guardrails programmatically.
Step 7: Example Code to Execute Guardrails
This code applies the guardrails configured in your deployment to evaluate LLM inputs or outputs based on the selected conditions.
1. Environment Setup
Configures the API base URL for RagaAI Catalyst and the OpenAI API key.
RAGAAI_CATALYST_BASE_URL: Specifies the endpoint for the RagaAI Catalyst API.
OPENAI_API_KEY: Authorizes LLM interactions with OpenAI's API.
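A minimal sketch of this setup, assuming the variables are exported from Python; the base URL and key shown are placeholders, so use the endpoint from your generated deployment code:

```python
import os

# Endpoint for the RagaAI Catalyst API (placeholder; use your instance's URL)
os.environ["RAGAAI_CATALYST_BASE_URL"] = "https://catalyst.raga.ai/api"

# Key used by the LLM caller for the actual completion (placeholder)
os.environ["OPENAI_API_KEY"] = "sk-..."
```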
2. Initializing RagaAICatalyst
Authenticates access to RagaAI Catalyst.
access_key and secret_key: Credentials used to authenticate the user.
Obtain these keys from the Catalyst dashboard under the "Settings" tab.
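A sketch of this step, assuming the ragaai_catalyst package exposes RagaAICatalyst at the top level as in the generated code; the keys are placeholders:

```python
from ragaai_catalyst import RagaAICatalyst

# Authenticate with the access/secret keys from the Settings tab (placeholders)
catalyst = RagaAICatalyst(
    access_key="<your access key>",
    secret_key="<your secret key>",
)
```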
3. Creating a GuardrailsManager
Sets up a guardrails manager for a specific project.
project_name: Links the guardrails to a specific project in RagaAI.
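For example (a sketch; the project name is a placeholder for an existing Catalyst project):

```python
from ragaai_catalyst import GuardrailsManager

# Scope guardrails operations to one Catalyst project
gdm = GuardrailsManager(project_name="my-project")
```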
4. Initializing the GuardExecutor
Executes guardrails against the LLM's responses.
<Deployment ID>: A unique identifier for your guardrails setup. Create this in the Catalyst Guardrails Configuration UI.
gdm: Passes the guardrails manager instance.
field_map: Maps input fields (e.g., context) in your prompt to expected keys in your dataset or document.
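A sketch of the initialization; the deployment ID is a placeholder, and the argument order below mirrors the parameter descriptions above, so confirm both against your generated code:

```python
from ragaai_catalyst import GuardExecutor

# <Deployment ID> from the Guardrails Configuration UI (17 is a placeholder)
deployment_id = 17

# field_map ties the prompt's 'context' field to the 'document' key supplied later
executor = GuardExecutor(deployment_id, gdm, field_map={"context": "document"})
```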
5. Defining the Message
Represents the user’s input query for the LLM.
role: Defines the role of the message's author (e.g., user).
content: The actual query.
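For example (the query text is illustrative):

```python
# The user's input query for the LLM
message = {
    "role": "user",                               # author of the message
    "content": "What is the capital of France?",  # the actual query
}
```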
6. Providing Prompt Parameters
Supplies additional context for the LLM to generate a response.
document: The text or data providing context for the query.
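For example (the document text is illustrative and should match the field_map key from step 4):

```python
# Context the model should ground its answer in
prompt_params = {
    "document": "France is a country in Western Europe. Its capital is Paris.",
}
```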
7. Setting Model Parameters
Configures LLM behavior.
temperature: Controls randomness in the model's output (lower is more deterministic, higher is more creative).
model: Specifies the model variant to use (e.g., gpt-4o-mini).
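For example:

```python
model_params = {
    "temperature": 0.7,     # moderate randomness; lower values are more deterministic
    "model": "gpt-4o-mini", # the model variant to use
}
```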
8. Specifying the LLM Caller
Indicates the LLM library or API to use for generating the response.
litellm: Refers to a lightweight LLM library compatible with RagaAI.
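For example:

```python
# Route the completion call through LiteLLM
llm_caller = "litellm"
```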
9. Executing the Guardrails
Runs the guardrails evaluation for the user query and context.
[message]: A list containing the user query.
prompt_params: Context passed to the model.
model_params: Configures how the model generates the response.
llm_caller: Specifies which LLM API or library to use.
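Putting the pieces together, a sketch of the final call (the exact call signature may differ slightly in your generated code):

```python
# Evaluate the query and context against the deployment's guardrails;
# the LLM is invoked via llm_caller, and the configured fail condition
# and alternate response are applied to the result
response = executor([message], prompt_params, model_params, llm_caller)
print(response)
```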