# Precision\@K

**Objective:**

Precision\@K measures how many of the top K retrieved or generated items are relevant to a query. Formally, Precision\@K = (number of relevant items in the top K results) / K. It is a rank-based evaluation metric commonly used in information retrieval and recommendation systems. A higher Precision\@K score means a larger share of the model's top K results are relevant, indicating strong early retrieval performance, which matters most when precision at the top-ranked positions is crucial.
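The formula above can be sketched in a few lines of plain Python. This is a minimal illustration of the metric itself, not the SDK's internal implementation; the function name and inputs are hypothetical.

```python
def precision_at_k(retrieved, relevant, k):
    """Compute Precision@K = (# relevant items in the top K) / K.

    retrieved: list of item IDs in ranked order (best first).
    relevant:  set of item IDs labeled as relevant for the query.
    k:         cutoff rank.
    """
    top_k = retrieved[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Two of the top three results are relevant, so Precision@3 = 2/3.
score = precision_at_k(["d1", "d2", "d3", "d4"], {"d1", "d3"}, k=3)
```

Note that the denominator is always K, so unretrieved relevant items lower recall but not this metric.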

**Required Columns in Dataset:**

`Prompt`, `Ranked Context`, `Labeled Text`
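To make the column roles concrete, here is a hypothetical example row using these column names; the values and exact storage format are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical dataset row; actual formats follow your own dataset.
row = {
    "Prompt": "What is the capital of France?",          # the query
    "Ranked Context": [                                   # retrieved items, best rank first
        "Paris is the capital of France.",
        "France is a country in Western Europe.",
        "The Eiffel Tower opened in 1889.",
    ],
    "Labeled Text": "Paris is the capital of France.",   # ground-truth relevant text
}
```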

**Interpretation:**

* **High Precision\@K**: Shows that the top K results are highly relevant to the query, indicating the model's effectiveness in prioritizing the most appropriate outputs early in the ranking.
* **Low Precision\@K**: Suggests that the top K results contain irrelevant information, reflecting poor retrieval performance in terms of precision.

**Execution via UI:**

<figure><img src="https://1811327582-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYbIiNdp1QbG4avl7VShw%2Fuploads%2FYurLyLH12c5YhfSWahdi%2Fimage.png?alt=media&#x26;token=c4b30ab4-c8b6-4aab-8745-1fa37d2623e3" alt=""><figcaption></figcaption></figure>

**Execution via SDK:**

Precision\_K does not require an LLM for computation.

```python
metrics=[
    {"name": "Precision_K", "column_name": "your-text", "schema_mapping": schema_mapping}
]
```
