Python SDK

1. Environment Setup

os.environ["RAGAAI_CATALYST_BASE_URL"] = "https://catalyst.raga.ai/api""
os.environ["OPENAI_API_KEY"] = "Your LLM API key"
  • Configures the environment.

    • RAGAAI_CATALYST_BASE_URL: The base URL of the RagaAI Catalyst API.

    • OPENAI_API_KEY: Key to authenticate OpenAI-based LLM interactions.


2. Initializing the RagaAICatalyst Client

catalyst = RagaAICatalyst(
    access_key="Generate access key form settings",
    secret_key="Generate access key form settings",
)
  • Creates an authenticated client to interact with RagaAI Catalyst.

    • access_key and secret_key: Credentials to access RagaAI Catalyst.
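
In practice, you may prefer to read these credentials from environment variables rather than hardcoding them. A minimal sketch, assuming the hypothetical variable names CATALYST_ACCESS_KEY and CATALYST_SECRET_KEY (use whatever names match your environment):

catalyst = RagaAICatalyst(
    access_key=os.getenv("CATALYST_ACCESS_KEY"),  # hypothetical variable name
    secret_key=os.getenv("CATALYST_SECRET_KEY"),  # hypothetical variable name
)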


3. Initializing the Guardrails Manager

gdm = GuardrailsManager(project_name="Project Name")
  • Purpose: Sets up a guardrails manager for managing guardrail-related configurations.

    • project_name: Links the guardrails to a specific project.


4. Listing Available Guardrails

guardrails_list = gdm.list_guardrails()
print('guardrails_list:', guardrails_list)
  • Retrieves a list of all guardrails configured in the project.

  • Output: A list of guardrails available for use.


5. Listing Fail Conditions

fail_conditions = gdm.list_fail_condition()
print('fail_conditions:', fail_conditions)
  • Retrieves the conditions under which guardrails will flag a failure.

  • Output: A list of fail conditions (e.g., ALL_FAIL, SOME_FAIL).


6. Retrieving Deployment IDs

deployment_list = gdm.list_deployment_ids()
print('deployment_list:', deployment_list)
  • Lists all deployment IDs associated with guardrails.

  • Output: A list of deployment IDs, each representing a configuration of guardrails.


7. Fetching Deployment Details

deployment_id_detail = gdm.get_deployment(17)
print('deployment_id_detail:', deployment_id_detail)
  • Retrieves details of a specific deployment by its ID (17 in this example).

  • Output: Details of the deployment, including associated guardrails and configurations.


8. Adding Guardrails to a Deployment

guardrails_config = {"guardrailFailConditions": ["FAIL"],
                     "deploymentFailCondition": "ALL_FAIL",
                     "alternateResponse": "Your alternate response"}
  • Configures guardrail behavior when conditions fail.

    • guardrailFailConditions: The results that count as a failure for an individual guardrail (here, FAIL).

    • deploymentFailCondition: How individual guardrail failures aggregate into a deployment-level failure (ALL_FAIL requires every guardrail to fail).

    • alternateResponse: Fallback response in case of guardrail-triggered failures.

guardrails = [
    {
      "displayName": "Response_Evaluator",
      "name": "Response Evaluator",
      "config": {
          "mappings": [{"schemaName": "Text", "variableName": "Response"}],
          "params": {
              "isActive": {"value": False},
              "isHighRisk": {"value": True},
              "threshold": {"eq": 0},
              "competitors": {"value": ["Google", "Amazon"]}
          }
      }
    },
    {
      "displayName": "Regex_Check",
      "name": "Regex Check",
      "config": {
          "mappings": [{"schemaName": "Text", "variableName": "Response"}],
          "params": {
              "isActive": {"value": False},
              "isHighRisk": {"value": True},
              "threshold": {"lt1": 1}
          }
      }
    }
]
  • Purpose: Defines the guardrail configurations to attach to the deployment; the attachment call is sketched after this list.

    • displayName: A user-friendly name for the guardrail.

    • name: The internal name of the guardrail.

    • config: Contains mappings, parameters, and settings for the guardrail's logic.
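
With the failure configuration and guardrail definitions in place, attach them to the deployment. A minimal sketch, assuming GuardrailsManager exposes an add_guardrails method that takes the deployment ID, the guardrail list, and the failure configuration (check your SDK version for the exact name and signature):

gdm.add_guardrails(17, guardrails, guardrails_config)  # assumed method; 17 is the deployment ID from step 7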


9. Initializing the GuardExecutor

executor = GuardExecutor(17, gdm, field_map={'context': 'document'})
  • Purpose: Initializes the executor to run evaluations using the deployment id.

    • 17: Deployment ID of the guardrails to apply.

    • field_map: Maps input fields (e.g., context) to expected variables (document).


10. Preparing Input for Evaluation

message = {'role': 'user', 'content': 'What is the capital of France?'}
prompt_params = {'document': 'Paris is not the capital of France'}
model_params = {'temperature': .7, 'model': 'gpt-4o-mini'}
llm_caller = 'litellm'
  • Supplies input for evaluation.

    • message: Represents the user query.

    • prompt_params: Contextual data provided to the model.

    • model_params: Configuration for the LLM response generation.

    • llm_caller: Specifies the library used to call the LLM (here, litellm).


11. Executing the Guardrails

executor([message], prompt_params, model_params, llm_caller)
  • Runs the guardrails evaluation with the given inputs.
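
The executor calls the LLM through the configured llm_caller, evaluates the deployed guardrails against the inputs and the model output, and falls back to the configured alternateResponse when the deployment-level fail condition is met. A minimal sketch of capturing the result; treating the return value as the final response is an assumption, and its exact shape may vary by SDK version:

result = executor([message], prompt_params, model_params, llm_caller)  # return shape is an assumption
print('final response:', result)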
