Using Prompt Slugs with Python SDK

Prompt slugs created in the RagaAI Playground can be used with the Python SDK. This guide covers initialising the SDK, managing prompts, retrieving prompt versions, and compiling prompts.

1. Prerequisites

  • Ensure you have the RagaAI Python SDK installed:

    pip install ragaai-catalyst
  • Obtain your access_key and secret_key from the RagaAI admin, and have your base_url for the API endpoint.

2. Initialise RagaAICatalyst and PromptManager

First, you need to initialise the RagaAICatalyst instance and the PromptManager for your specific project.

from ragaai_catalyst import RagaAICatalyst
from ragaai_catalyst.prompt_manager import PromptManager

# Initialize RagaAICatalyst instance
catalyst = RagaAICatalyst(
    access_key="your_access_key",
    secret_key="your_secret_key",
    base_url="https://your-api-base-url.com/api"
)

# Create a PromptManager for your project
project_name = "your-project-name"
prompt_manager = PromptManager(project_name)

Replace the placeholders your_access_key, your_secret_key, your-api-base-url.com, and your-project-name with your actual credentials and project details.
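
If you prefer not to hard-code credentials, you can read them from environment variables before constructing the client. This is a minimal sketch; the environment variable names used here (RAGAAI_ACCESS_KEY, RAGAAI_SECRET_KEY, RAGAAI_BASE_URL, RAGAAI_PROJECT_NAME) are placeholders chosen for this example, not names required by the SDK.

import os

from ragaai_catalyst import RagaAICatalyst
from ragaai_catalyst.prompt_manager import PromptManager

# Read credentials from placeholder environment variables instead of hard-coding them
catalyst = RagaAICatalyst(
    access_key=os.getenv("RAGAAI_ACCESS_KEY"),
    secret_key=os.getenv("RAGAAI_SECRET_KEY"),
    base_url=os.getenv("RAGAAI_BASE_URL")
)

prompt_manager = PromptManager(os.getenv("RAGAAI_PROJECT_NAME", "your-project-name"))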

3. List Available Prompts

You can list all the prompts available in your project using the list_prompts method.

# List available prompts in the project
prompts = prompt_manager.list_prompts()
print("Available prompts:", prompts)

This will output a list of prompt slugs that have been created in the project.

4. List Prompt Versions

Each prompt can have multiple versions. You can list the versions of a specific prompt using its slug.

# List available versions for a specific prompt
prompt_name = "your_prompt_name"
versions = prompt_manager.list_prompt_versions(prompt_name)
print("Available versions for the prompt:", versions)

Replace your_prompt_name with the name of the prompt you want to query.
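
As an optional illustration, you can combine this with get_prompt (covered in the next step) to inspect every listed version. This sketch assumes list_prompt_versions returns an iterable of version identifiers; adapt it if your SDK release returns a different structure.

# Sketch: fetch each listed version of the prompt
for v in versions:
    prompt_for_version = prompt_manager.get_prompt(prompt_name, v)
    print(f"Version {v}:", prompt_for_version)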

5. Get a Prompt Object

You can retrieve a prompt object by its name, or by specifying both its name and version. This gives you access to the prompt's details so you can use it in the following steps.

# Retrieve a prompt object by name
prompt_name = "your_prompt_name"
prompt = prompt_manager.get_prompt(prompt_name)
print("Prompt details:", prompt)

# Retrieve a specific prompt object by name and version
version = "your_version"
prompt = prompt_manager.get_prompt(prompt_name, version)
print(f"Prompt details for version {version}:", prompt)

Replace your_prompt_name and your_version with the appropriate prompt name and version.

6. Get Prompt Variables

You can get a list of variables required by the prompt to generate a response. These variables need to be provided when compiling the prompt.

# Get the variables required for the prompt
prompt_variables = prompt.get_variables()
print("Prompt variables:", prompt_variables)

This will output the variables required by the prompt.
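
If you would rather fill the variables programmatically than hard-code keyword arguments, one possible pattern is to build a dictionary keyed by the variable names and unpack it into compile (covered in the next step). This sketch assumes get_variables() yields the variable names; adapt it if your SDK version returns a different structure.

# Sketch: map each required variable to a value, then unpack into compile()
variable_values = {name: "your value here" for name in prompt_variables}
compiled_prompt = prompt.compile(**variable_values)
print("Compiled prompt:", compiled_prompt)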

7. Compile Prompt

Once you have the prompt object and its required variables, you can compile the prompt by providing the necessary values for the variables.

# Compile the prompt with the required variables
compiled_prompt = prompt.compile(
    query="What's the weather?", 
    context="sunny", 
    llm_response="It's sunny today"
)
print("Compiled prompt:", compiled_prompt)

Replace the query, context, and llm_response with appropriate values based on the prompt variables.
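
The compiled prompt can then be sent to the LLM of your choice. The snippet below is a minimal sketch using the OpenAI client, which is not part of ragaai-catalyst, and it assumes compiled_prompt can be used as plain text; if your SDK version returns a structured message list instead, pass that to your client accordingly.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

# Assumption: compiled_prompt is usable as a single user message
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": str(compiled_prompt)}],
)
print(response.choices[0].message.content)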

8. Get Prompt Parameters

You can also retrieve additional parameters associated with the prompt, which provide further configuration options.

# Get the additional parameters for the prompt
parameters = prompt.get_parameters()
print("Prompt parameters:", parameters)

With these steps, you can use prompts created in the RagaAI Playground directly through the Python SDK.
