# Hallucination

The **Hallucination Metric** helps you evaluate whether an agent has produced fabricated or inaccurate responses. It is designed for trace-level evaluation. The steps below explain how to configure and execute the metric.

***

**About Hallucination Metric**

* **Type**: Trace-level Metric
* **Evaluation Level**: Can be run as a **trace evaluation**.
* **Purpose**: Identifies instances where the agent generates responses that deviate from the expected or factual outputs.

***

#### **How to Configure the Hallucination Metric**

**Step 1: Prerequisites**

To configure the hallucination metric, you need the following:

1. **Vertex AI Service Account**:
   * Ensure you have a Vertex AI-enabled project in Google Cloud.
   * Refer to [Create Service Accounts](https://cloud.google.com/iam/docs/service-accounts-create) for setup guidance.
2. **Service Account Role**:
   * Assign the role **"Vertex AI Administrator"** to the service account.
3. **Service Account Key**:
   * Create a key for the service account. Refer to [Create and Manage Service Account Keys](https://cloud.google.com/iam/docs/keys-create-delete).
   * You will receive a `.json` file as the key.

***

**Step 2: Adding Configuration to RagaAI Catalyst**

1. **Access Settings**:
   * Navigate to `Settings > API Key > Create New Parameter`.
2. **Add Keys**:
   * **Key**: `GOOGLE_APPLICATION_CREDENTIALS`
     * **Value**: Paste the content of the `.json` key file.
   * **Key**: `vertex_location`
     * **Value**: The location (region) of the Vertex AI project.
   * **Key**: `vertex_project`
     * **Value**: The Project ID where Vertex AI is enabled.

   <figure><img src="https://1811327582-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYbIiNdp1QbG4avl7VShw%2Fuploads%2FJLGDLXkerhumikggpgZj%2FScreenshot%202025-01-06%20at%2010.43.56%E2%80%AFAM.png?alt=media&#x26;token=07008739-061f-41b2-8386-df080fb6355f" alt=""><figcaption></figcaption></figure>

***

#### **How to Run the Hallucination Metric**

**Steps to Execute**

1. **Access the Dataset**:
   * Navigate to the dataset and click on **Evaluate**.
2. **Select the Metric**:
   * Choose **Hallucination-Alteryx** from the available metric options.
   * You can rename the metric if needed.
3. **Set Evaluation Type**:
   * Select the evaluation type:
     * **Trace Evaluation** or
     * **Conversation Evaluation**.
4. **Define the Schema**:
   * Specify the schema as `_trace`.
5. **Model Configuration**:
   * Choose the model configuration for the evaluation.
6. **Passing Criteria**:
   * Define pass/fail thresholds to set success criteria.
7. **Run Evaluation**:
   * Click on **Run** to initiate the hallucination metric evaluation.
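Step 6 above asks you to define a passing criterion. A sketch of how such a threshold could be applied to per-trace scores (the score scale, its direction, and the function name are assumptions here; check how your metric reports scores):

```python
def apply_passing_criteria(scores, threshold=0.5):
    """Mark each trace as passed when its hallucination score is at or
    below the threshold (assuming lower score = less hallucination).
    """
    return [{"score": s, "passed": s <= threshold} for s in scores]
```

A stricter threshold flags more traces as failures, so tune it against a few traces you have already reviewed by hand.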

***

#### **When to Run the Hallucination Metric?**

Run the hallucination metric:

* **Trace Evaluation**: When you need to evaluate individual traces for inaccurate or fabricated responses.

This metric is most effective when you want to identify and analyse instances where the agent's output deviates from expected or factual information.
