Toxicity RAG Metric
Evaluate LLM responses for toxic or offensive language. Flag unsafe outputs and apply filters to ensure safe, respectful interactions.

Last updated
Was this helpful?
Evaluate LLM responses for toxic or offensive language. Flag unsafe outputs and apply filters to ensure safe, respectful interactions.

Last updated
Was this helpful?
Was this helpful?
metrics=[
{"name": "Toxicity", "config": {"model": "gpt-4o-mini", "provider": "openai"}, "column_name": "your-column-identifier", "schema_mapping": schema_mapping}
]