# Context Recall RAG Metric

**Objective**: This metric measures how well the retrieved context covers the facts needed to produce the ground-truth (expected) response. Concretely, it returns the proportion of statements in the expected response that can be attributed to the supplied context documents.

**Required Parameters**: `Prompt`, `Expected Response`, `Context`

**Interpretation**: A higher score signifies that the contexts supplied to the LLM covered a larger share of the information needed to answer the prompt.
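Conceptually, the score aggregates a per-statement verdict: each statement in the expected response is judged as supported or unsupported by the retrieved context (in Catalyst this judgment is made by the configured LLM), and the score is the supported fraction. A minimal sketch of that aggregation step, not the library's actual implementation:

```python
def context_recall(statement_supported: list[bool]) -> float:
    """Proportion of expected-response statements that the retrieved
    context supports. Each flag is the (LLM-judged) verdict for one
    statement; this sketch only aggregates the verdicts."""
    if not statement_supported:
        return 0.0
    return sum(statement_supported) / len(statement_supported)

# Two statements in the expected response, one backed by the context:
print(context_recall([True, False]))  # 0.5
```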

<figure><img src="https://1811327582-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYbIiNdp1QbG4avl7VShw%2Fuploads%2FPSbVtjsACUqU1Lyitwnc%2Fimage.png?alt=media&#x26;token=20b282ab-3bbe-4d36-aa11-39282e5da6e6" alt=""><figcaption></figcaption></figure>

**Code Execution:**

```python
metrics = [
    {
        "name": "Context Recall",
        "config": {"model": "gpt-4o-mini", "provider": "openai"},
        "column_name": "your-column-identifier",
        "schema_mapping": schema_mapping,
    }
]
```

The `schema_mapping` variable must be defined first and is a prerequisite for evaluation runs. Learn how to set this variable [here](https://docs.raga.ai/ragaai-catalyst/concepts/running-ragaai-evals/executing-evaluations).
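For orientation only, a mapping of this shape ties your dataset's column names to the inputs the metric requires (`Prompt`, `Expected Response`, `Context`). The keys and values below are purely illustrative; consult the linked documentation for the exact schema Catalyst expects:

```python
# Hypothetical mapping: dataset column name -> metric input role.
# Replace both sides with the names your dataset and the Catalyst
# docs actually use.
schema_mapping = {
    "prompt": "Prompt",
    "response": "Expected Response",
    "context": "Context",
}
```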

**Example**:

* Prompt: What is the chemical formula for water and what are different elements in it?
* Expected Response: The chemical formula for water is H2O and it is composed of two elements: hydrogen and oxygen.
* Context: \["Water is essential for all known forms of life and is a major component of the Earth's hydrosphere.", "Water chemical formula is H2O.", "The chemical formula for carbon dioxide is CO2, which is a greenhouse gas."]
* *Metric Output*: {'score': 0.5, 'reason': 'context does not contain any information about the elements of water'}
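Tracing the example: the expected response makes two statements; the context backs the formula but says nothing about the elements, hence 0.5. A hand-labelled recomputation (in practice the LLM judge produces these verdicts):

```python
# Verdicts for each statement in the expected response, labelled by
# hand against the example's context documents.
verdicts = {
    "The chemical formula for water is H2O.": True,   # backed by "Water chemical formula is H2O."
    "Water is composed of two elements: hydrogen and oxygen.": False,  # not stated in any context doc
}
score = sum(verdicts.values()) / len(verdicts)
print(score)  # 0.5
```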
