# Response Correctness RAG Metric

**Objective**: This metric measures how accurate and factually grounded the entire response is when compared against the expected response (ground truth).

**Parameters:** `Prompt`, `Response`, `Expected Response`

**Interpretation**: A higher score indicates that the model's response was correct for the prompt. A failed result indicates the response is not factually consistent with the expected response.

<figure><img src="https://1811327582-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYbIiNdp1QbG4avl7VShw%2Fuploads%2Fz0Ugf3xaUxOwO3DTfYmk%2Fimage.png?alt=media&#x26;token=402c1e7f-22ee-4c90-be76-403091793070" alt=""><figcaption></figcaption></figure>

**Code Execution:**

```python
metrics = [
    {
        "name": "Response Correctness",
        "config": {"model": "gpt-4o-mini", "provider": "openai"},
        "column_name": "your-column-identifier",
        "schema_mapping": schema_mapping,
    }
]
```

The `schema_mapping` variable is a prerequisite for evaluation runs and must be defined first. Learn how to set this variable [here](https://docs.raga.ai/ragaai-catalyst/concepts/running-ragaai-evals/executing-evaluations).
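
For illustration only, a `schema_mapping` might look like the sketch below. The keys here are hypothetical dataset column names; refer to the linked guide for the authoritative format expected by your version of the SDK.

```python
# Hypothetical example: map your dataset's column names to the fields
# the Response Correctness metric needs (prompt, response, ground truth).
# Replace the keys with the actual column names in your dataset.
schema_mapping = {
    "prompt_column": "prompt",
    "response_column": "response",
    "ground_truth_column": "expected_response",
}
```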

**Example**:

* Prompt: Who was the first person to walk on the moon and when did it happen?
* Expected Response (Ground Truth): The first person to walk on the moon was Neil Armstrong, and it happened on July 20, 1969.
* Response: The first person to walk on the moon was Buzz Aldrin, and it happened on July 20, 1970.
* *Metric Output*: {'score': 0, 'reason': 'Neil Armstrong was the first person to walk on the moon, on July 20, 1969.'}
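
Because the metric returns a score and a reason, a downstream check can gate on the score. The sketch below is an assumption about how you might consume the output; the `result` dict mirrors the example above, and the threshold value is illustrative, not part of the metric's API.

```python
# Example output in the shape shown above (score plus an explanation).
result = {
    "score": 0,
    "reason": "Neil Armstrong was the first person to walk on the moon, on July 20, 1969.",
}

def is_correct(result: dict, threshold: float = 0.5) -> bool:
    """Treat the response as correct when its score meets the threshold."""
    return result["score"] >= threshold

if not is_correct(result):
    # Surface the judge's explanation for a failed check.
    print(f"Failed: {result['reason']}")
```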
