# RAG Metrics

**RAG (Retrieval-Augmented Generation) Metrics** help you measure how well your retrieval pipelines and generation layers are performing inside **RagaAI Catalyst**. Since RAG systems combine search + LLM reasoning, monitoring both sides is critical to ensure reliability, accuracy, and efficiency.

## Why RAG Metrics matter

* **Detect gaps in retrieval**: Spot when your retriever fails to surface the most relevant passages.
* **Evaluate generated answers**: Check if the model’s outputs are grounded in retrieved context.
* **Compare retrievers and models**: Benchmark different embeddings, vector stores, or LLMs with the same dataset.
* **Optimize cost vs quality**: Find the right balance between broader retrieval and faster, cheaper responses.
---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.raga.ai/ragaai-catalyst/ragaai-metric-library/rag-metrics.md?ask=<question>
```

The question should be specific, self-contained, written in natural language, and URL-encoded.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
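As a minimal sketch of the request above, the snippet below builds the `ask` URL with proper URL encoding using only the Python standard library. The helper name `build_ask_url` is hypothetical; only the page URL and the `ask` parameter come from this documentation.

```python
from urllib.parse import urlencode

# Page URL taken from the documentation above.
BASE_URL = "https://docs.raga.ai/ragaai-catalyst/ragaai-metric-library/rag-metrics.md"

def build_ask_url(question: str) -> str:
    """Return the page URL with the question URL-encoded in the `ask` query parameter.

    (Hypothetical helper name; urlencode handles spaces and special characters.)
    """
    return f"{BASE_URL}?{urlencode({'ask': question})}"

print(build_ask_url("Which metrics measure retrieval relevance?"))
```

An actual lookup is then a plain HTTP GET on the returned URL (e.g. with `requests.get(url)` or any HTTP client); the response body contains the direct answer plus supporting excerpts.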
