# Guardrails

- [Competitor Check](/ragaai-catalyst/ragaai-metric-library/guardrails/competitor-check.md): Block competitor mentions in outputs. Apply guardrails to align with business strategy and compliance.
- [Gibberish Check](/ragaai-catalyst/ragaai-metric-library/guardrails/gibberish-check.md): Detect nonsensical outputs from LLMs. Apply guardrails to ensure clarity and coherence.
- [PII](/ragaai-catalyst/ragaai-metric-library/guardrails/pii.md): Flag and block personal data exposure. Protect privacy in AI-generated outputs.
- [Regex Check](/ragaai-catalyst/ragaai-metric-library/guardrails/regex-check.md): Validate outputs using regex guardrails. Catch unsafe, malformed, or non-compliant responses automatically.
- [Response Evaluator](/ragaai-catalyst/ragaai-metric-library/guardrails/response-evaluator.md): Use automated guardrails to evaluate responses. Ensure outputs align with expected standards.
- [Toxicity](/ragaai-catalyst/ragaai-metric-library/guardrails/toxicity.md): Detect toxic language in LLM responses. Apply strict guardrails to keep interactions safe.
- [Unusual Prompt](/ragaai-catalyst/ragaai-metric-library/guardrails/unusual-prompt.md): Flag unusual or suspicious prompts. Guard against adversarial inputs that could compromise AI behavior.
- [Ban List](/ragaai-catalyst/ragaai-metric-library/guardrails/ban-list.md): Block specific banned terms or patterns. Use guardrails to control AI output safety and compliance.
- [Detect Drug](/ragaai-catalyst/ragaai-metric-library/guardrails/detect-drug.md): Identify and block drug-related outputs. Enforce safe and compliant AI usage.
- [Detect Redundancy](/ragaai-catalyst/ragaai-metric-library/guardrails/detect-redundancy.md): Spot redundant outputs in AI responses. Optimize for efficiency and clarity in generated content.
- [Detect Secrets](/ragaai-catalyst/ragaai-metric-library/guardrails/detect-secrets.md): Detect secrets like API keys or passwords in model responses. Prevent accidental exposure with automated guardrails.
- [Financial Tone Check](/ragaai-catalyst/ragaai-metric-library/guardrails/financial-tone-check.md): Keep AI-generated financial content professional and compliant. Detect tone mismatches and maintain credibility.
- [Has Url](/ragaai-catalyst/ragaai-metric-library/guardrails/has-url.md): Automatically detect URLs in AI outputs. Ensure link presence is flagged and handled correctly.
- [HTML Sanitisation](/ragaai-catalyst/ragaai-metric-library/guardrails/html-sanitisation.md): Strip unsafe HTML from model responses. Prevent injection risks and ensure content safety.
- [Live URL](/ragaai-catalyst/ragaai-metric-library/guardrails/live-url.md): Check if AI-generated URLs are live and valid. Ensure outputs contain only working links.
- [Logic Check](/ragaai-catalyst/ragaai-metric-library/guardrails/logic-check.md): Identify logical flaws or contradictions in model responses. Improve reliability with automated checks.
- [Politeness Check](/ragaai-catalyst/ragaai-metric-library/guardrails/politeness-check.md): Detect impolite or harsh tones in AI responses. Promote positive, user-friendly interactions.
- [Profanity Check](/ragaai-catalyst/ragaai-metric-library/guardrails/profanity-check.md): Automatically detect and block profanity in LLM responses. Maintain safe and professional content.
- [Quote Price](/ragaai-catalyst/ragaai-metric-library/guardrails/quote-price.md): Ensure AI-generated content includes correct, valid price quotes. Improve trust in financial AI outputs.
- [Restrict Topics](/ragaai-catalyst/ragaai-metric-library/guardrails/restrict-topics.md): Enforce topic restrictions in LLM outputs. Keep AI conversations safe, relevant, and policy-aligned.
- [SQL Predicates Guard](/ragaai-catalyst/ragaai-metric-library/guardrails/sql-predicates-guard.md): Prevent dangerous SQL predicates in AI outputs. Protect databases from risky or malicious statements.
- [Valid CSV](/ragaai-catalyst/ragaai-metric-library/guardrails/valid-csv.md): Ensure AI outputs produce correct CSV structure. Detect errors before data is processed or used.
- [Valid JSON](/ragaai-catalyst/ragaai-metric-library/guardrails/valid-json.md): Ensure AI outputs produce correct JSON structure. Detect errors before data is processed or used.
- [Valid Python](/ragaai-catalyst/ragaai-metric-library/guardrails/valid-python.md): Detect syntax errors in AI-generated Python code. Ensure outputs are syntactically valid and runnable.
- [Valid Range](/ragaai-catalyst/ragaai-metric-library/guardrails/valid-range.md): Ensure numeric values generated by AI fall within allowed ranges. Keep calculations safe and accurate.
- [Valid SQL](/ragaai-catalyst/ragaai-metric-library/guardrails/valid-sql.md): Check correctness of SQL outputs from AI models. Prevent invalid or broken database statements.
- [Valid URL](/ragaai-catalyst/ragaai-metric-library/guardrails/valid-url.md): Automatically check if URLs generated by AI are valid. Reduce errors with automated guardrails.
- [Cosine Similarity](/ragaai-catalyst/ragaai-metric-library/guardrails/cosine-similarity.md): Compare AI outputs with reference text using cosine similarity. Improve semantic consistency and accuracy.
- [Honesty Detection](/ragaai-catalyst/ragaai-metric-library/guardrails/honesty-detection.md): Detect dishonest or misleading AI outputs. Improve transparency and build trust in model responses.
- [Toxicity Hate Speech](/ragaai-catalyst/ragaai-metric-library/guardrails/toxicity-hate-speech.md): Identify toxic or hateful language in AI outputs. Enforce safe, respectful, and policy-compliant content.
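
Most of the checks above share the same shape: a predicate over a model's output that either passes the response through or blocks it. As a minimal illustrative sketch (standard-library Python only, not the Catalyst API itself), a Valid JSON guardrail might look like:

```python
import json

def valid_json_guardrail(output: str) -> bool:
    """Pass if the model output parses as JSON, block otherwise.

    Illustrative sketch of the guardrail pattern; the actual Catalyst
    checks are configured through the library, not written by hand.
    """
    try:
        json.loads(output)
        return True
    except json.JSONDecodeError:
        return False

print(valid_json_guardrail('{"price": 42}'))  # True
print(valid_json_guardrail('not json'))       # False
```

The same pattern generalizes to the other checks: Regex Check swaps the parser for a pattern match, Valid CSV for a CSV reader, and Ban List for a term lookup.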
