Toxicity
Detect toxic language in LLM responses. Apply strict guardrails to keep interactions safe.
metrics = [
    {
        "name": "Toxicity",
        "config": {
            "model": "gpt-4o-mini",
            "provider": "openai"
        },
        "column_name": "your-column-identifier",
        "schema_mapping": schema_mapping
    }
]
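
The config above references a schema_mapping object defined elsewhere, which maps the metric's expected inputs to columns in your dataset. A minimal sketch of that mapping and of running the metric is shown below; the key names ("input", "output") and the evaluate() call are illustrative assumptions, not the exact SDK API, so substitute the field names and entry point from the SDK you are using.

# Hypothetical schema_mapping: adjust the keys and column names to match
# the fields your evaluation SDK expects and the columns in your dataset.
schema_mapping = {
    "input": "prompt",      # column containing the user prompt (assumed key)
    "output": "response"    # column containing the LLM response to score (assumed key)
}

# Hypothetical evaluation call; replace `evaluate` with the entry point
# provided by your evaluation SDK.
# results = evaluate(dataset=dataset, metrics=metrics)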

