# Toxicity RAG Metric

**Objective**: This guardrail uses pre-trained multi-label models to check whether the generated text is toxic. If the model predicts any of the labels `toxicity`, `severe_toxicity`, `obscene`, `threat`, `insult`, `identity_attack`, or `sexual_explicit` with confidence higher than the specified threshold, the guardrail fails.

**Required Parameters**: `Response`

**Interpretation**: A higher score indicates a more toxic model response.
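
This page does not name the underlying model, but the label set matches Detoxify's `unbiased` checkpoint, so the pass/fail logic can be sketched as follows. The model choice and the threshold value are assumptions for illustration only:

```python
# A minimal sketch of the thresholding logic, assuming Detoxify's
# "unbiased" model, which emits exactly the seven labels listed above.
from detoxify import Detoxify

def passes_toxicity_guardrail(response: str, threshold: float = 0.5) -> bool:
    """Return True if no toxicity label exceeds the (illustrative) threshold."""
    scores = Detoxify("unbiased").predict(response)  # dict: label -> confidence
    return all(score <= threshold for score in scores.values())

# Example: a benign response should pass.
print(passes_toxicity_guardrail("Thanks for your question! Here is the answer."))
```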

**Metric Execution via UI:**

![Toxicity metric execution via the UI](https://1811327582-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYbIiNdp1QbG4avl7VShw%2Fuploads%2FJf5TNDz7BYdo7Yjt4nzH%2Fimage.png?alt=media&token=7ac90850-c5b4-4d36-8f47-63ba49e90d13)

**Code Execution:**

```python
metrics = [
    {
        "name": "Toxicity",
        "config": {"model": "gpt-4o-mini", "provider": "openai"},
        "column_name": "your-column-identifier",
        "schema_mapping": schema_mapping,
    }
]
```
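
For context, a full run might look like the sketch below, which assumes the `Evaluation` interface from the `ragaai_catalyst` SDK; the project and dataset names are placeholders:

```python
# A minimal sketch, assuming the ragaai_catalyst SDK's Evaluation interface.
# The project and dataset names are placeholders.
from ragaai_catalyst import Evaluation

evaluation = Evaluation(project_name="your-project", dataset_name="your-dataset")
evaluation.add_metrics(metrics=metrics)  # the metrics list defined above
evaluation.get_status()                  # check progress of the evaluation job
results = evaluation.get_results()
```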

The `model` and `provider` parameters are not mandatory for running Guardrails.

The "schema\_mapping" variable needs to be defined first and is a pre-requisite for evaluation runs. Learn how to set this variable [here](https://docs.raga.ai/ragaai-catalyst/concepts/running-ragaai-evals/executing-evaluations).


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.raga.ai/ragaai-catalyst/ragaai-metric-library/rag-metrics/toxicity.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question, along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
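
For example, a minimal Python sketch using the `requests` library (the question text is illustrative):

```python
# Query this page's documentation endpoint with an "ask" parameter;
# requests URL-encodes the question automatically.
import requests

url = "https://docs.raga.ai/ragaai-catalyst/ragaai-metric-library/rag-metrics/toxicity.md"
params = {"ask": "Which labels does the Toxicity guardrail check?"}

response = requests.get(url, params=params)
print(response.text)  # direct answer plus relevant excerpts and sources
```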
