Profanity Check
Automatically detect and block profanity in LLM responses to keep generated content safe and professional.
metrics = [
    {
        "name": "Profanity Check",
        "config": {
            "model": "gpt-4o-mini",
            "provider": "openai"
        },
        "column_name": "your-column-identifier",
        "schema_mapping": schema_mapping
    }
]
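The snippet above references a `schema_mapping` variable that must be defined before the metric is configured. A minimal sketch of what such a mapping might look like, assuming keys on the left are your dataset's column names and the role names on the right are the ones the metric expects (both sides here are illustrative placeholders, not confirmed API values):

```python
# Hypothetical schema mapping (illustrative names only):
# keys are columns in your dataset, values are the roles they play.
schema_mapping = {
    "prompt": "prompt",        # column holding the user input
    "response": "response",    # column holding the LLM output to check
    "context": "context",      # optional retrieval context, if your data has it
}
```

Adjust the keys to match the actual column names in your dataset before passing `schema_mapping` into the metric configuration.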