Langgraph (Agentic Tracing)


Introduction

This document provides a step-by-step guide to integrating RagaAI Catalyst with LangGraph for enhanced observability, tracing, and instrumentation in your LangGraph applications. By leveraging RagaAI Catalyst, you gain comprehensive insight into the execution of LangGraph components, including Langchain tools, custom methods, and more. The sections below walk through setup, tracing, and evaluation using a sample LangGraph application.

RagaAI introduces seamless tracing for key LangGraph components, including:

  • Langchain Tools: Automatically track the execution of Langchain tools within your LangGraph workflows.

  • Methods decorated as LangGraph Tools (@tool): Gain visibility into methods designed as LangGraph tools and marked with the @tool decorator.

  • Custom Methods with @trace_tool: Use the @trace_tool decorator to instrument and monitor any method within your LangGraph application, offering flexible and granular tracing.


Sample Code

Refer to the provided Google Colab notebook for the complete implementation.


Prerequisites

  1. API Keys: Access keys for RagaAI Catalyst, OpenAI, Anthropic, and Tavily (or whichever other providers your application depends on).

  2. Dependencies: Install the required libraries using the following command:

!pip install -U ragaai-catalyst

Step 1: Setting Up RagaAI Catalyst

1.1 Initialize RagaAI Catalyst

To begin, initialize the RagaAI Catalyst client with your access and secret keys:

from ragaai_catalyst import RagaAICatalyst

catalyst = RagaAICatalyst(
    access_key="access_key",  # replace with your Catalyst access key
    secret_key="secret_key"   # replace with your Catalyst secret key
)
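Hardcoding keys is fine for a quick experiment, but in shared code it is safer to read them from the environment. A small helper for that, sketched here with illustrative variable names (not an official Catalyst convention):

```python
import os

def require_env(name: str) -> str:
    """Fetch a required credential from the environment, failing loudly if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set {name} before initializing RagaAI Catalyst")
    return value

# Illustrative usage (variable names are assumptions, not a Catalyst convention):
# access_key = require_env("RAGAAI_ACCESS_KEY")
# secret_key = require_env("RAGAAI_SECRET_KEY")
```

The keys fetched this way are then passed to `RagaAICatalyst(...)` exactly as in the snippet above.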

1.2 Initialize the Tracer

Create a Tracer object to define the project and dataset for storing traces:

from ragaai_catalyst import init_tracing
from ragaai_catalyst.tracers import Tracer

tracer = Tracer(
    project_name="Langgraph_testing",
    dataset_name="customer_support1",
    tracer_type="agentic/langgraph",
)
init_tracing(catalyst=catalyst, tracer=tracer)


Step 2: Instrumenting LangGraph Components

RagaAI Catalyst provides decorators to trace specific components of your LangGraph application. Below are the key decorators:

  1. @trace_tool: Traces custom methods or LangGraph tools.

  2. @trace_llm: Traces LLM calls.

  3. @trace_agent: Traces agent executions.

  4. @trace_custom: Traces custom components.
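Conceptually, each of these decorators wraps the target callable and records a span (name, timing, inputs/outputs) around its execution. A toy, library-free illustration of that pattern — this is not the Catalyst implementation, just the general shape of a tracing decorator:

```python
import functools
import time

def traced(name):
    """Toy tracing decorator illustrating the wrap-and-record pattern."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            # A real tracer would ship this span to a backend; here we just attach it.
            wrapper.last_span = {
                "name": name,
                "duration_s": time.perf_counter() - start,
            }
            return result
        return wrapper
    return decorator

@traced("add")
def add(a, b):
    return a + b

add(2, 3)
```

Because the wrapper preserves the function's signature and return value, decorated tools behave identically to undecorated ones from the caller's perspective.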

2.1 Tracing LangGraph Tools

To trace a LangGraph tool, use the @trace_tool decorator. For example:

from langchain_core.tools import tool
from ragaai_catalyst import trace_tool

@tool
@trace_tool("lookup_policy")
def lookup_policy(query: str) -> str:
    """Consult the company policies to check whether certain options are permitted."""
    # `retriever` is defined in the sample notebook
    docs = retriever.query(query, k=2)
    return "\n\n".join([doc["page_content"] for doc in docs])
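The snippet above assumes a `retriever` object (defined in the sample notebook) with a `query(query, k)` method that returns documents as dicts carrying a `page_content` key. A minimal stand-in with that interface, useful for trying the tool locally (the class and its naive word-overlap ranking are illustrative, not part of the sample):

```python
class SimpleRetriever:
    """Toy stand-in for the policy retriever used in the sample notebook."""

    def __init__(self, docs):
        self.docs = docs

    def query(self, query: str, k: int = 2):
        # Naive relevance: rank documents by count of words shared with the query.
        words = set(query.lower().split())
        scored = sorted(
            self.docs,
            key=lambda d: len(words & set(d["page_content"].lower().split())),
            reverse=True,
        )
        return scored[:k]

retriever = SimpleRetriever([
    {"page_content": "Refunds are allowed within 30 days"},
    {"page_content": "Upgrades require manager approval"},
])
```

Swapping in a real vector-store retriever only requires keeping the same `query(query, k)` shape and `page_content` field.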

Step 3: Running the LangGraph Application

3.1 Define the LangGraph Workflow

Create a LangGraph workflow by defining nodes and edges. For example:

from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import tools_condition

# `State`, `Assistant`, `part_1_assistant_runnable`, `part_1_tools`, and
# `create_tool_node_with_fallback` are defined in the sample notebook
builder = StateGraph(State)
builder.add_node("assistant", Assistant(part_1_assistant_runnable))
builder.add_node("tools", create_tool_node_with_fallback(part_1_tools))
builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", tools_condition)
builder.add_edge("tools", "assistant")
graph = builder.compile()
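These edges encode a loop: the assistant node runs, the conditional edge routes to the tools node whenever the last message contains tool calls, and control returns to the assistant until no tool call is pending. A library-free sketch of that control flow (all names and message shapes here are illustrative, not the LangGraph API):

```python
def run_graph(assistant, tools, max_steps=10):
    """Minimal assistant<->tools loop mirroring the conditional edge above."""
    state = {"messages": []}
    for _ in range(max_steps):
        msg = assistant(state)            # "assistant" node
        state["messages"].append(msg)
        if not msg.get("tool_calls"):     # tools_condition routes to END
            return state
        for call in msg["tool_calls"]:    # "tools" node
            state["messages"].append({"role": "tool", "content": tools[call]()})
    return state

# Toy assistant: requests one tool call, then answers once a tool result exists.
def toy_assistant(state):
    if any(m.get("role") == "tool" for m in state["messages"]):
        return {"role": "ai", "content": "done", "tool_calls": []}
    return {"role": "ai", "content": "", "tool_calls": ["lookup"]}

final = run_graph(toy_assistant, {"lookup": lambda: "policy text"})
```

With the decorators from Step 2 in place, each pass through the real assistant and tools nodes produces a trace that Catalyst uploads to the configured dataset.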

Step 4: Viewing and Analyzing Traces

4.1 View Traces in RagaAI Catalyst

Once the traces are uploaded, navigate to your dataset in the RagaAI Catalyst dashboard to view individual traces.

4.2 Run Evaluations and Metrics

  1. Navigate to your dataset and click on the Evaluate button.

  2. Choose from pre-configured metrics such as Hallucination, Cosine Similarity, Honesty, or Toxicity.

  3. Configure the metric by selecting the evaluation type (e.g., LLM, Agent, or Tool) and defining the schema.

  4. Run the metric and analyze the results for insights into your application's performance.

4.3 Compare Traces

  1. Within the dataset, click on the Compare button.

  2. Select up to 3 datapoints (traces) to compare.

  3. View the diff view, which highlights differences in code and attributes between traces.

4.4 Compare Experiments

  1. In the Dataset view, select Compare Datasets.

  2. Choose up to 3 experiments for comparison.

  3. Analyze the resulting graphs to compare performance across experiments.


Conclusion

By integrating RagaAI Catalyst with LangGraph, you can achieve enhanced observability and traceability in your LangGraph applications. This documentation provides a comprehensive guide to setting up, instrumenting, and analyzing your application using RagaAI Catalyst.


For the complete code, refer to the Google Colab notebook.

For any queries or support, contact the RagaAI Catalyst team at support@ragaai.com.