Hallucination

Objective: This metric evaluates the factual overlap between the Response and the Context. It penalises fabricated, incorrect, or contradictory facts in the Response that are not supported by the Context.

Required Parameters: Prompt, Response, Context

Interpretation: A higher score indicates a greater degree of hallucination in the model response.

Code Execution:

metrics = [
    {
        "name": "Hallucination",
        "config": {"model": "gpt-4o-mini", "provider": "openai"},
        "column_name": "your-column-identifier",
        "schema_mapping": schema_mapping
    }
]

The "schema_mapping" variable needs to be defined first and is a pre-requisite for evaluation runs. Learn how to set this variable here.

Example:

  • Prompt: What is the capital of Brazil?

  • Context: Brazil is the largest country in South America, known for its diverse culture and the Amazon rainforest. Its official language is Portuguese and its capital is Brasília.

  • Response: The capital of Brazil is Rio de Janeiro, which is famous for its Copacabana beach, Christ the Redeemer statue, and vibrant carnival celebrations.

  • Metric Output: {'score': 1, 'reason': 'The capital of Brazil is Brasília, not Rio de Janeiro'}