Context Recall

Objective: This metric measures the ability of the retriever to fetch documents that contain the ground-truth facts. Simply put, it returns the proportion of the ground-truth response that is supported by the supplied context documents.

Required Parameters: Expected Response, Context

Interpretation: A higher score signifies that the contexts supplied to the LLM were helpful in answering the prompt question, i.e. they covered a larger share of the ground-truth response.
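Conceptually, the score can be thought of as the fraction of statements in the expected response that can be attributed to the supplied contexts. The snippet below is only a minimal, hand-rolled sketch of that idea for intuition; the actual metric uses an LLM judge to decide attribution, and the naive_context_recall function and its keyword-overlap check are illustrative assumptions, not part of the SDK.

# Minimal illustrative sketch (not part of the SDK): context recall as the
# fraction of expected-response statements that the contexts can support.
# The real metric delegates the "is this statement supported?" decision to an
# LLM judge; the keyword-overlap check below is only a crude stand-in.
def naive_context_recall(expected_response: str, contexts: list[str]) -> float:
    statements = [s.strip() for s in expected_response.split(".") if s.strip()]
    if not statements:
        return 0.0
    context_text = " ".join(contexts).lower()
    supported = 0
    for statement in statements:
        words = [w for w in statement.lower().split() if len(w) > 3]
        # Count a statement as supported if most of its content words occur
        # somewhere in the supplied contexts.
        if words and sum(w in context_text for w in words) / len(words) >= 0.7:
            supported += 1
    return supported / len(statements)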

Code Execution:

# Assumes the Experiment class has already been imported from the evaluation SDK.
experiment_manager = Experiment(project_name="project_name",
                                experiment_name="experiment_name",
                                dataset_name="dataset_name")

response = experiment_manager.add_metrics(
    metrics=[
        {"name":"Context Recall","config": {"reason": True, "model": "gpt-4o-mini", "batch_size" : 5, "provider": "OpenAI"}}
    ]
)
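In the configuration above, reason: True appears to request a textual justification alongside the score (as in the Metric Output shown in the example below), model and provider select the LLM used for evaluation, and batch_size sets how many records are scored per batch.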

Refer to the Executing tests page to learn more about metric configurations.

Example:

  • Prompt: What is the chemical formula for water and what are the different elements in it?

  • Expected Response: The chemical formula for water is H2O and it is composed of two elements: hydrogen and oxygen.

  • Context: ["Water is essential for all known forms of life and is a major component of the Earth's hydrosphere.", "Water chemical formula is H2O.", "The chemical formula for carbon dioxide is CO2, which is a greenhouse gas."]

  • Metric Output: {"score": 0.5, "reason": "Context does not contain any information about the elements of water."}
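Reading this output: the expected response asserts two facts, the formula H2O and the elements hydrogen and oxygen. Only the first is supported by the supplied contexts (the reason notes that the elements are not covered), which is consistent with a score of 0.5.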
