Python SDK
Use the Python SDK to enforce guardrails in your applications and automate safety checks at runtime.
1. Environment Setup
```python
import os

os.environ["RAGAAI_CATALYST_BASE_URL"] = "https://catalyst.raga.ai/api"
os.environ["OPENAI_API_KEY"] = "Your LLM API key"
```
Configures the environment.
- `RAGAAI_CATALYST_BASE_URL`: The base URL of the RagaAI Catalyst API.
- `OPENAI_API_KEY`: Key used to authenticate OpenAI-based LLM interactions.
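Before initializing the client, it can be useful to verify that the required variables are actually set. A minimal sketch (the `require_env` helper is ours, not part of the SDK):

```python
import os

def require_env(name):
    """Fail fast with a clear message if a required variable is unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set")
    return value

os.environ["RAGAAI_CATALYST_BASE_URL"] = "https://catalyst.raga.ai/api"
base_url = require_env("RAGAAI_CATALYST_BASE_URL")
print(base_url)  # https://catalyst.raga.ai/api
```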
2. Initializing the RagaAICatalyst Client
```python
from ragaai_catalyst import RagaAICatalyst

catalyst = RagaAICatalyst(
    access_key="Generate access key from settings",
    secret_key="Generate secret key from settings",
)
```
Creates an authenticated client to interact with RagaAI Catalyst.
- `access_key` and `secret_key`: Credentials to access RagaAI Catalyst, generated from the platform's settings page.
3. Initializing the Guardrails Manager
```python
from ragaai_catalyst import GuardrailsManager

gdm = GuardrailsManager(project_name="Project Name")
```
Purpose: Sets up a guardrails manager for guardrail-related configuration.
- `project_name`: Links the guardrails to a specific project.
4. Listing Available Guardrails
```python
guardrails_list = gdm.list_guardrails()
print('guardrails_list:', guardrails_list)
```
Retrieves a list of all guardrails configured in the project.
Output: A list of guardrails available for use.
5. Listing Fail Conditions
```python
fail_conditions = gdm.list_fail_condition()
print('fail_conditions:', fail_conditions)
```
Retrieves the conditions under which guardrails will flag a failure.
Output: A list of fail conditions (e.g., `ALL_FAIL`, `SOME_FAIL`).
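A deployment-level fail condition is an aggregation rule over the individual guardrail results. As a rough, hypothetical sketch of the semantics (the actual evaluation happens on the RagaAI Catalyst side):

```python
def deployment_failed(results, condition):
    """Aggregate per-guardrail results (True = that guardrail failed)
    under a deployment-level fail condition."""
    if condition == "ALL_FAIL":
        return all(results)   # deployment fails only if every guardrail failed
    if condition == "SOME_FAIL":
        return any(results)   # deployment fails if at least one guardrail failed
    raise ValueError(f"Unknown condition: {condition}")

print(deployment_failed([True, False], "ALL_FAIL"))   # False
print(deployment_failed([True, False], "SOME_FAIL"))  # True
```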
6. Retrieving Deployment IDs
```python
deployment_list = gdm.list_deployment_ids()
print('deployment_list:', deployment_list)
```
Lists all deployment IDs associated with guardrails.
Output: A list of deployment IDs, each representing a configuration of guardrails.
7. Fetching Deployment Details
```python
deployment_id_detail = gdm.get_deployment(17)
print('deployment_id_detail:', deployment_id_detail)
```
Retrieves details of a specific deployment by its ID (`17` in this example).
Output: Details of the deployment, including associated guardrails and configurations.
8. Adding Guardrails to a Deployment
```python
guardrails_config = {
    "guardrailFailConditions": ["FAIL"],
    "deploymentFailCondition": "ALL_FAIL",
    "alternateResponse": "Your alternate response"
}
```
Configures guardrail behavior when conditions fail.
- `guardrailFailConditions`: Conditions that mark an individual guardrail as failed.
- `deploymentFailCondition`: Aggregates multiple failures (`ALL_FAIL` requires all guardrails to fail).
- `alternateResponse`: Fallback response returned when guardrails trigger a failure.
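To illustrate what `alternateResponse` is for, here is a hypothetical sketch of the fallback behavior (not the SDK's actual code path, which runs server-side): when the deployment-level fail condition is met, the fallback text replaces the model's output.

```python
def final_response(llm_output, deployment_failed, config):
    """Return the LLM output, or the configured fallback if the
    deployment-level fail condition was triggered."""
    if deployment_failed:
        return config["alternateResponse"]
    return llm_output

config = {"alternateResponse": "Your alternate response"}
print(final_response("Paris is the capital of France.", True, config))
# Your alternate response
```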
```python
guardrails = [
    {
        "displayName": "Response_Evaluator",
        "name": "Response Evaluator",
        "config": {
            "mappings": [{"schemaName": "Text", "variableName": "Response"}],
            "params": {
                "isActive": {"value": False},
                "isHighRisk": {"value": True},
                "threshold": {"eq": 0},
                "competitors": {"value": ["Google", "Amazon"]}
            }
        }
    },
    {
        "displayName": "Regex_Check",
        "name": "Regex Check",
        "config": {
            "mappings": [{"schemaName": "Text", "variableName": "Response"}],
            "params": {
                "isActive": {"value": False},
                "isHighRisk": {"value": True},
                "threshold": {"lt": 1}
            }
        }
    }
]
```
Purpose: Defines the guardrail configurations.
- `displayName`: A user-friendly name for the guardrail.
- `name`: The internal name of the guardrail.
- `config`: Contains the mappings, parameters, and settings for the guardrail's logic.
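Each guardrail entry follows the same shape, so a quick structural check before submitting can catch typos early. This is an illustrative helper of our own, not an SDK function; the platform performs its own validation server-side:

```python
def looks_like_guardrail(entry):
    """Basic structural check for a guardrail entry: top-level keys
    present, mappings is a list, and params is a dict."""
    if not all(key in entry for key in ("displayName", "name", "config")):
        return False
    config = entry["config"]
    return (isinstance(config.get("mappings"), list)
            and isinstance(config.get("params"), dict))

entry = {
    "displayName": "Regex_Check",
    "name": "Regex Check",
    "config": {
        "mappings": [{"schemaName": "Text", "variableName": "Response"}],
        "params": {"isActive": {"value": False}},
    },
}
print(looks_like_guardrail(entry))  # True
print(looks_like_guardrail({"name": "Regex Check"}))  # False
```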
9. Initializing the GuardExecutor
```python
from ragaai_catalyst import GuardExecutor

executor = GuardExecutor(17, gdm, field_map={'context': 'document'})
```
Purpose: Initializes the executor that runs evaluations using the deployment ID.
- `17`: Deployment ID of the guardrails to apply.
- `field_map`: Maps input fields (e.g., `context`) to the variables the guardrails expect (e.g., `document`).
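Conceptually, `field_map` is a simple key rename from your application's field names to the variable names used in the guardrail mappings. A minimal sketch of that idea (the executor applies this internally; the helper below is ours):

```python
def apply_field_map(inputs, field_map):
    """Rename input keys according to field_map, leaving unmapped keys intact."""
    return {field_map.get(key, key): value for key, value in inputs.items()}

field_map = {'context': 'document'}
inputs = {'context': 'Paris is not the capital of france'}
print(apply_field_map(inputs, field_map))
# {'document': 'Paris is not the capital of france'}
```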
10. Preparing Input for Evaluation
```python
message = {'role': 'user', 'content': 'What is the capital of France?'}
prompt_params = {'document': 'Paris is not the capital of france'}
model_params = {'temperature': .7, 'model': 'gpt-4o-mini'}
llm_caller = 'litellm'
```
Supplies the inputs for evaluation.
- `message`: Represents the user query.
- `prompt_params`: Contextual data provided to the model.
- `model_params`: Configuration for LLM response generation.
- `llm_caller`: The API or library used to call the LLM.
11. Executing the Guardrails
```python
executor([message], prompt_params, model_params, llm_caller)
```
Runs the guardrails evaluation with the given inputs.