Response Correctness RAG Metric
Assess whether the LLM's response directly answers the user's question using the retrieved context, and evaluate accuracy to check that the model found the right information.

Configure the metric by adding an entry to the metrics list:

metrics = [
    {
        "name": "Response Correctness",
        # LLM used as the judge, and the provider serving it
        "config": {"model": "gpt-4o-mini", "provider": "openai"},
        # Placeholder: replace with the identifier of the column to evaluate
        "column_name": "your-column-identifier",
        # schema_mapping must be defined beforehand (see the sketch below)
        "schema_mapping": schema_mapping,
    }
]
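
The snippet above references a schema_mapping that is not defined in this section. As a minimal sketch, assuming the mapping is a plain dictionary that pairs the fields the metric needs (the user's question, the LLM's response, and the retrieved context) with your dataset's column names; the key and column names below are illustrative placeholders, not confirmed identifiers, so check the schema fields your SDK actually expects.

# Hypothetical schema mapping; all keys and column names are placeholders.
schema_mapping = {
    "input": "user_question",        # column holding the user's question
    "output": "llm_response",        # column holding the LLM's answer
    "context": "retrieved_context",  # column holding the retrieved context
}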