# Human Feedback & Annotations

**Human Feedback & Annotations** in **RagaAI Catalyst** let teams enrich AI evaluations with expert oversight, providing granular insight into correctness, quality, nuance, and domain-specific preferences. By combining human judgments with automated metrics, you get more reliable model assessments and a foundation for continuous improvement.

#### Why Human Feedback & Annotations Matter

* **Add qualitative judgment** where automated metrics fall short (e.g., tone, subjective quality, interpretability).
* **Validate model outputs** for sensitive use cases like customer support, legal, or brand-critical scenarios.
* **Enable data-driven training loops**—collect human annotations to fine-tune models, prompts, or retriever configurations.
* **Support A/B comparisons** with side-by-side human ratings of competing outputs.
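To make the A/B-comparison use case concrete, here is a minimal sketch of aggregating thumbs-up/down feedback per variant. The record schema and function names are illustrative, not the Catalyst API; in practice you would collect feedback through Catalyst and export it for analysis.

```python
from collections import defaultdict

# Hypothetical feedback records: each pairs a model variant with a
# human thumbs-up/down judgment (schema is illustrative only).
feedback = [
    {"variant": "prompt_a", "thumbs_up": True},
    {"variant": "prompt_a", "thumbs_up": True},
    {"variant": "prompt_a", "thumbs_up": False},
    {"variant": "prompt_b", "thumbs_up": True},
    {"variant": "prompt_b", "thumbs_up": False},
    {"variant": "prompt_b", "thumbs_up": False},
]

def approval_rates(records):
    """Compute the thumbs-up rate per variant for a side-by-side view."""
    ups, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["variant"]] += 1
        ups[r["variant"]] += int(r["thumbs_up"])
    return {v: ups[v] / totals[v] for v in totals}

print(approval_rates(feedback))  # prompt_a: 2/3, prompt_b: 1/3
```

A higher approval rate on one variant is a starting signal, not a verdict; pair it with the metric corrections and tagging workflows linked below to understand *why* annotators preferred it.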

{% content-ref url="human-feedback-and-annotations/thumbs-up-down" %}
[thumbs-up-down](https://docs.raga.ai/ragaai-catalyst/human-feedback-and-annotations/thumbs-up-down)
{% endcontent-ref %}

{% content-ref url="human-feedback-and-annotations/add-metric-corrections" %}
[add-metric-corrections](https://docs.raga.ai/ragaai-catalyst/human-feedback-and-annotations/add-metric-corrections)
{% endcontent-ref %}

{% content-ref url="human-feedback-and-annotations/corrections-as-few-shot-examples" %}
[corrections-as-few-shot-examples](https://docs.raga.ai/ragaai-catalyst/human-feedback-and-annotations/corrections-as-few-shot-examples)
{% endcontent-ref %}

{% content-ref url="human-feedback-and-annotations/tagging" %}
[tagging](https://docs.raga.ai/ragaai-catalyst/human-feedback-and-annotations/tagging)
{% endcontent-ref %}
