Context Relevancy

The Context Relevancy metric evaluates the quality of the retriever used in the RAG pipeline. It is vital for ensuring that the documents retrieved by the retriever are relevant to answering the prompt and that the retrieval mechanism in the RAG pipeline is working as expected.

Required Parameters: prompt, context
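
For reference, a single evaluation record only needs these two fields. The record below is a hypothetical illustration of the expected inputs, not a required schema:

# Hypothetical example of the two required inputs for one evaluation record
record = {
    "prompt": "What is the capital of France?",  # the user query
    "context": "France is a country in Western Europe. Its capital is Paris.",  # retrieved document text
}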

Interpretation:

A lower metric score indicates one of the following (see the sketch after the list):

  • The retrieval mechanism is not working as expected.

  • The Knowledge Base does not contain sufficient data to supply relevant documents for the prompt.
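
Once per-row scores are available, a simple threshold check can surface the records that point to either failure mode. The snippet below is a minimal sketch, assuming scores are returned as floats in [0, 1]; the threshold value is illustrative.

# Sketch: flag low-scoring rows for retriever debugging.
# Assumes per-row scores are floats in [0, 1]; 0.5 is an illustrative cutoff.
scores = [0.91, 0.23, 0.78, 0.12]  # hypothetical Context Relevancy scores
THRESHOLD = 0.5

low_relevancy_rows = [i for i, score in enumerate(scores) if score < THRESHOLD]
print(f"Rows needing retriever review: {low_relevancy_rows}")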

Code Execution

from ragaai_catalyst import Experiment  # assuming the RagaAI Catalyst SDK; adjust if your package differs

# Point the experiment at an existing project and dataset
experiment_manager = Experiment(project_name="project_name",
                                experiment_name="experiment_name",
                                dataset_name="dataset_name")

# Attach the Context Relevancy metric with its evaluation configuration
response = experiment_manager.add_metrics(
    metrics=[
        {"name": "Context Relevancy", "config": {"reason": True, "model": "gpt-4o-mini", "batch_size": 5, "provider": "OpenAI"}}
    ]
)
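
After the metric is added, the evaluation runs and results can be fetched once it completes. The status and results calls below follow the common pattern for this SDK but are an assumption; verify the exact method names against your SDK version.

# Assumed follow-up calls; confirm these method names in your SDK version
status = experiment_manager.get_status()
results = experiment_manager.get_results()
print(status)
print(results)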

Refer to the Executing tests page to learn about Metric Configurations.
