# Quickstart

#### **1. Sign Up and Authentication**

**1.1 Sign Up**

* Start by signing up for an account at <https://catalyst.raga.ai/>.

**1.2 Set up RagaAI access keys in your code**

To generate a pair of Catalyst access and secret keys:

1. Navigate to Settings -> Authenticate.
2. Click on "Generate New Key".
3. Use the copy button next to each key and paste the values into your Python environment.

Then authenticate in your code:

* **Install the Catalyst SDK**

```bash
pip install ragaai-catalyst
```

* **Import Required Modules**

```python
from ragaai_catalyst import RagaAICatalyst
from ragaai_catalyst import Tracer
```

* **Initialize RagaAI Catalyst**

```python
catalyst = RagaAICatalyst(
    access_key="access_key",  # paste your access key here
    secret_key="secret_key",  # paste your secret key here
)
```
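To avoid hard-coding credentials, you can also read the keys from environment variables. A minimal sketch, assuming you export `CATALYST_ACCESS_KEY` and `CATALYST_SECRET_KEY` (hypothetical variable names) in your shell beforehand:

```python
import os

from ragaai_catalyst import RagaAICatalyst

# CATALYST_ACCESS_KEY / CATALYST_SECRET_KEY are hypothetical names;
# export them in your shell before running this script.
catalyst = RagaAICatalyst(
    access_key=os.environ["CATALYST_ACCESS_KEY"],
    secret_key=os.environ["CATALYST_SECRET_KEY"],
)
```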

<figure><img src="https://1811327582-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FYbIiNdp1QbG4avl7VShw%2Fuploads%2FO4cbHHtZGnGWGQHUgv6h%2FScreenRecording2024-10-06at10.36.48AM-ezgif.com-video-to-gif-converter.gif?alt=media&#x26;token=9be2b8d4-6880-4ff0-8b4a-3e93136b9de2" alt=""><figcaption></figcaption></figure>

#### **2. Create a Project**

* Log in to the RagaAI Catalyst platform.
* Navigate to the **Projects** section and click **Create New Project**.
* Select **"Agentic Application"** as the use case.
* Provide a name for your project and click **Create**. (A programmatic alternative is sketched after this list.)
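If you prefer to stay in code, the SDK also exposes a project helper in recent versions. A sketch, assuming `create_project` is available in your installed `ragaai-catalyst` version and that the project name shown is your own choice:

```python
# Sketch: programmatic project creation; verify that create_project
# exists in your installed ragaai-catalyst version before relying on it.
project = catalyst.create_project(
    project_name="my-agentic-app",  # any project name you like
    usecase="Agentic Application",  # matches the use case chosen in the UI
)
```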

***

#### **3. Create a Dataset Using Tracing** (refer to the [tracing docs](https://docs.raga.ai/ragaai-catalyst/agentic-testing/concepts/tracing))

* Once your project is created, you'll be redirected to the **Dataset** page.
* Instrument your application with tracing to record interactions. Refer to the tracing documentation for setup details, and see the sketch after this list.
* Commit the traces to create a dataset. A dataset represents an experiment and contains all the recorded interactions for evaluation.
* View the dataset columns, which include essential information such as TraceID, Timestamp, Trace URL, Feedback, Response, and Metrics.
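As an illustration of the SDK side of this step, here is a minimal sketch of recording traces with the `Tracer` imported earlier. The project and dataset names are assumptions carried over from the sketches above, and the `tracer_type` value is illustrative; the tracing docs describe the types your SDK version supports:

```python
from ragaai_catalyst import Tracer

def run_my_application():
    """Placeholder for your instrumented agent/LLM code."""
    pass

# Argument values are illustrative; see the tracing docs linked above.
tracer = Tracer(
    project_name="my-agentic-app",
    dataset_name="quickstart-run-1",
    tracer_type="agentic",
)

tracer.start()        # begin recording interactions
run_my_application()  # your application runs while the tracer is active
tracer.stop()         # stop and upload the recorded traces
```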

***

#### **4. Run Evaluations and Metrics**

* Navigate to your dataset and click on the **Evaluate** button.
* Choose from pre-configured metrics such as **Hallucination**, **Cosine Similarity**, **Honesty**, or **Toxicity**.
* Configure the metric:
  * Select the evaluation type (e.g., LLM, Agent, or Tool).
  * Define the schema by choosing the span name and parameters to evaluate.
  * Configure the model and set pass/fail thresholds.
* Run the metric and analyze the results for insights into your application's performance. (An SDK-based sketch follows this list.)
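Metrics can also be scheduled from the SDK. A hedged sketch using the `Evaluation` helper, assuming the judge model, result column name, and schema mapping shown here fit your dataset; check the evaluation docs for the exact configuration your version expects:

```python
from ragaai_catalyst import Evaluation

# Sketch: run a Hallucination metric over an existing dataset.
evaluation = Evaluation(
    project_name="my-agentic-app",
    dataset_name="quickstart-run-1",
)

evaluation.add_metrics(
    metrics=[
        {
            "name": "Hallucination",             # pre-configured metric
            "config": {"model": "gpt-4o-mini"},  # judge model is an assumption
            "column_name": "hallucination_v1",   # result column in the dataset
            "schema_mapping": {                  # map dataset columns to metric inputs
                "prompt": "prompt",
                "response": "response",
                "context": "context",
            },
        }
    ]
)

evaluation.get_status()  # poll until the run completes
results = evaluation.get_results()
```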

***

#### **5. Compare Traces** (refer to the [compare traces docs](https://docs.raga.ai/ragaai-catalyst/agentic-testing/compare-traces))

* Within the dataset, click on the **Compare** button.
* Select up to 3 datapoints (traces) to compare.
* View the **diff view**, which highlights differences in code and attributes between traces.
* Use this feature to analyze performance variations across different spans of the application.

***

#### **6. Compare Experiments** (refer to the [compare experiments docs](https://docs.raga.ai/ragaai-catalyst/agentic-testing/compare-experiments))

* In the **Dataset** view, select **Compare Datasets**.
* Choose up to 3 experiments for comparison.
* Click **Compare Experiments** to open a new **diff view**:
  * Compare code versions and toggle between different configurations.
  * Change the baseline experiment for a more detailed comparison.
  * Analyze graphs such as:
    * **Tool Usage Count Bar Chart**: Understand patterns in tool usage.
    * **Time vs. Tool Calls Chart**: Compare execution times across experiments.
    * **Cost Analysis**: Evaluate cost efficiency between experiments.

***
