# QAG Score

**Objective:**

QAG Score evaluates the quality of a generated summary or response by formulating questions from the generated output and checking whether the answers to those questions are supported by the original text. By combining question generation and question answering, the metric verifies factual alignment and information consistency, which makes it particularly useful for summarization and knowledge-extraction tasks. It focuses on whether the generated content is factually grounded in the source and can answer pertinent questions accurately.
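
The core loop behind the metric can be illustrated with a short sketch. The code below is a simplified, hypothetical illustration of the QAG idea, not the library's implementation; `generate_questions` and `answer_from_text` are placeholders for LLM-backed calls.

```python
# Illustrative sketch of the QAG idea (not the library's implementation).
# `generate_questions` and `answer_from_text` are hypothetical LLM-backed helpers.

def qag_score(summary: str, original_document: str) -> float:
    """Fraction of summary-derived questions whose answers are supported by the source."""
    # 1. Generate closed-ended (yes/no) questions from the claims made in the summary.
    questions = generate_questions(summary)
    if not questions:
        return 0.0

    # 2. Answer each question using only the original document as evidence.
    supported = 0
    for question in questions:
        answer = answer_from_text(question, original_document)
        if answer == "yes":  # the source confirms the claim
            supported += 1

    # 3. Score = proportion of claims confirmed by the source.
    return supported / len(questions)
```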

**Required Columns in Dataset:**

`LLM Summary`, `Original Document`
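
A row of such a dataset might look like the following; the values are illustrative, and your own columns can use different names as long as they are mapped via `schema_mapping` (see the SDK example below).

```python
# Illustrative dataset row; actual column names in your dataset may differ.
row = {
    "Original Document": "The Eiffel Tower, completed in 1889, stands 330 metres tall in Paris.",
    "LLM Summary": "The Eiffel Tower is a 330-metre structure in Paris, finished in 1889.",
}
```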

**Interpretation:**

* **High QAG Score**: Suggests that the generated output answers key questions accurately, meaning it is factually aligned with the original source.
* **Low QAG Score**: Implies that the generated content may fail to answer key questions or may misrepresent important information, reflecting factual inconsistencies with the source.

**Execution via UI:**

<figure><img src="/files/TMjEjPxhvWFM09RAxElQ" alt=""><figcaption></figcaption></figure>

**Execution via SDK:**

```python
metrics=[
    {"name": "QAGScore", "config": {"model": "gpt-4o-mini", "provider": "openai"}, "column_name": "your-text", "schema_mapping": schema_mapping}
]
```
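
The snippet assumes `schema_mapping` has already been defined. Below is a minimal sketch of what it might look like, assuming it is a plain dictionary linking your dataset's column names to the fields the metric expects; the field names on the right-hand side (`context`, `response`) are assumptions here, so consult your project's schema elements for the exact values.

```python
# Hypothetical schema mapping; adjust the keys to your dataset's column names
# and the values to the schema elements available in your project.
schema_mapping = {
    "Original Document": "context",   # source text the summary is checked against
    "LLM Summary": "response",        # generated output being evaluated
}
```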


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.raga.ai/ragaai-catalyst/ragaai-metric-library/text-summarization/qag-score.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question, along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
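
For example, a query issued with Python's standard library might look like this; the question text is only illustrative.

```python
# Illustrative documentation query; the question text is an example.
import urllib.parse
import urllib.request

base_url = "https://docs.raga.ai/ragaai-catalyst/ragaai-metric-library/text-summarization/qag-score.md"
question = "Which evaluator models are supported for QAG Score?"
url = f"{base_url}?ask={urllib.parse.quote(question)}"

with urllib.request.urlopen(url) as response:
    print(response.read().decode("utf-8"))
```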
