Context Precision
Last updated
Objective: This metric calculates the ratio of relevant documents retrieved to the total number of documents retrieved. It measures the proportion of the supplied contextual information that is actually useful for answering the prompt.
Required Parameters: Prompt, Expected Response, Context
Interpretation: A higher score indicates that a larger proportion of the contexts supplied to the LLM helped answer the prompt.
Code Execution:
The "schema_mapping" variable must be defined first and is a prerequisite for evaluation runs. Learn how to set this variable here.
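As a rough illustration only (the exact keys and values depend on your dataset's column names and on the platform's schema_mapping documentation linked above), such a mapping typically associates the metric's required parameters with dataset columns:

```python
# Hypothetical sketch: column names on the right are assumptions,
# not the platform's actual defaults. Consult the linked docs.
schema_mapping = {
    "prompt": "prompt",                        # the user question
    "expected_response": "expected_response",  # the reference answer
    "context": "context",                      # the retrieved documents
}
```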
Example:
Prompt: What is the tallest mountain in the world?
Expected Response: The tallest mountain in the world is Mount Everest, which has a peak that reaches 8,848 metres (29,029 feet) above sea level.
Context: ['Mount Everest is the tallest mountain in the world, with a peak that reaches 8,848 metres (29,029 feet) above sea level.', 'The Himalayas, where Mount Everest is located, is a mountain range in Asia, separating the plains of the Indian subcontinent from the Tibetan Plateau.', 'K2, also known as Mount Godwin-Austen, is the second-highest mountain in the world and is part of the Karakoram Range.']
Metric Output: {'score': 0.33, 'reason': 'Only one context is directly relevant to answering the prompt. The other two contexts, while related to mountains, do not directly address the question about the tallest mountain.'}
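The score above follows directly from the ratio described in the objective: one relevant context out of three retrieved. A minimal sketch of that arithmetic (the helper name is hypothetical; in the actual metric an LLM judge, not a boolean list, decides which contexts are relevant):

```python
def context_precision(relevance_flags):
    """Fraction of retrieved contexts judged relevant to the prompt."""
    if not relevance_flags:
        return 0.0  # no contexts retrieved
    return sum(relevance_flags) / len(relevance_flags)

# Everest example: only the first of the three contexts is relevant.
score = context_precision([True, False, False])
print(round(score, 2))  # 0.33
```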