# Quickstart

#### **1. Sign Up and Authentication**

**1.1 Sign Up**

* Start by signing up for an account at <https://catalyst.raga.ai/>.

**1.2 Set Up RagaAI Access Keys in Your Code**

To generate a pair of Catalyst Access and Secret Keys, simply:

1. Navigate to Settings -> Authenticate.
2. Click on "Generate New Key".
3. Use the copy button next to each key and paste the values into your Python environment.

Then authenticate in your code:

* **Install the Catalyst SDK** (the leading `!` is for notebook environments; omit it in a shell)

```python
!pip install ragaai-catalyst
```

* **Import Required Modules**

```python
from ragaai_catalyst import RagaAICatalyst
from ragaai_catalyst import Tracer
```

* **Initialize RagaAI Catalyst**

```python
catalyst = RagaAICatalyst(
    access_key="access_key",
    secret_key="secret_key",
)
```
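
To avoid hardcoding credentials, you can read the keys from environment variables instead. A minimal sketch (the variable names below are illustrative, not mandated by the SDK):

```python
import os

# Illustrative variable names -- export whichever names you prefer.
catalyst = RagaAICatalyst(
    access_key=os.environ["CATALYST_ACCESS_KEY"],
    secret_key=os.environ["CATALYST_SECRET_KEY"],
)
```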


#### **2. Create a Project**

* Log in to the RagaAI Catalyst platform.
* Navigate to the **Projects** section and click **Create New Project**.
* Select the use case as **"Agentic Application"**.
* Provide a name for your project and click **Create** (or create it from code, as sketched below).
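
If you prefer to script this step, the SDK also exposes project creation. A minimal sketch, assuming the `create_project` method and its `usecase` parameter match your installed SDK version:

```python
# Sketch only -- verify the method and parameter names against your SDK version.
project = catalyst.create_project(
    project_name="my-agentic-app",
    usecase="Agentic Application",
)
```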

***

#### **3. Create a Dataset Using Tracing (Refer** [**Doc**](/ragaai-catalyst/agentic-testing/concepts/tracing.md)**)**

* Once your project is created, you'll be redirected to the **Dataset** page.
* Use tracing to instrument your application and record its interactions. Refer to the tracing documentation for setup details; a rough sketch follows this list.
* Commit the traces to create a dataset. A dataset represents an experiment and contains all the recorded interactions for evaluation.
* View the dataset columns, which include essential information such as TraceID, Timestamp, Trace URL, Feedback, Response, and Metrics.
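
For orientation, here is a hedged sketch of what instrumentation can look like with the `Tracer` class imported in step 1.2. Parameter names, valid `tracer_type` values, and the start/stop calls should be confirmed against the tracing documentation linked above:

```python
# Sketch only -- confirm parameter names and tracer_type values in the tracing docs.
tracer = Tracer(
    project_name="my-agentic-app",    # project from step 2
    dataset_name="experiment-1",      # dataset the traces are committed to
    tracer_type="agentic",            # assumed value for agentic applications
)

tracer.start()   # begin recording interactions
run_my_agent()   # placeholder for your instrumented application code
tracer.stop()    # stop recording and upload the traces
```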

***

#### **4. Run Evaluations and Metrics**

* Navigate to your dataset and click on the **Evaluate** button.
* Choose from pre-configured metrics such as **Hallucination**, **Cosine Similarity**, **Honesty**, or **Toxicity**.
* Configure the metric:
  * Select the evaluation type (e.g., LLM, Agent, or Tool).
  * Define the schema by choosing the span name and parameters to evaluate.
  * Configure the model and set pass/fail thresholds.
* Run the metric and analyze the results for insights into your application's performance (a scripted alternative is sketched below).
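
Evaluations can also be triggered from code. The sketch below is illustrative only: the `Evaluation` class and the exact shape of the `add_metrics` payload (field names, config options) should be verified against the SDK reference for your installed version:

```python
from ragaai_catalyst import Evaluation

# Sketch only -- verify the class and payload fields against the SDK reference.
evaluation = Evaluation(
    project_name="my-agentic-app",
    dataset_name="experiment-1",
)

evaluation.add_metrics(
    metrics=[
        {
            "name": "Hallucination",             # one of the pre-configured metrics
            "config": {"model": "gpt-4o-mini"},  # assumed config shape
            "column_name": "hallucination_v1",   # where results appear in the dataset
        },
    ]
)
```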

***

#### **5. Compare Traces (Refer** [**Doc**](/ragaai-catalyst/agentic-testing/compare-traces.md)**)**

* Within the dataset, click on the **Compare** button.
* Select up to 3 datapoints (traces) to compare.
* View the **diff view**, which highlights differences in code and attributes between traces.
* Use this feature to analyze performance variations across different spans of the application.

***

#### **6. Compare Experiments (Refer** [**Doc**](/ragaai-catalyst/agentic-testing/compare-experiments.md)**)**

* In the **Dataset** view, select **Compare Datasets**.
* Choose up to 3 experiments for comparison.
* Click **Compare Experiments** to open a new **diff view**:
  * Compare code versions and toggle between different configurations.
  * Change the baseline experiment for a more detailed comparison.
  * Analyze graphs such as:
    * **Tool Usage Count Bar Chart**: Understand patterns in tool usage.
    * **Time vs. Tool Calls Chart**: Compare execution times across experiments.
    * **Cost Analysis**: Evaluate cost efficiency between experiments.

***

