Overall

Objective: This test checks the model's overall performance in generating text based on a given concept set. It combines winner_test, Pos_test, and Cover_test into a single score.

Required Parameters:

  • response: The sentence generated by the model using the words in concept_set.

  • expected_response: The sentence generated by the model you want to compare against, or the ground-truth sentence.

  • concept_set: A list of words in their root form, each tagged with its corresponding part of speech (e.g., "_V" for verb, "_N" for noun, "_A" for adjective). A minimal sketch for building such a list appears after the parameter descriptions below.

Optional Parameters:

  • model (str, optional): The name of the language model used to evaluate the responses. Defaults to "gpt-3.5-turbo".

  • temperature (float, optional): Adjusts the randomness of the response generated by the specified model.

  • max_tokens (int, optional): The maximum length, in tokens, of the generated response.
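
Because each entry in concept_set is simply the root word joined to a part-of-speech suffix, the list can be assembled programmatically. The snippet below is a minimal sketch; the make_concept_set helper is hypothetical and not part of the library.

# Hypothetical helper: build a concept_set from (root word, POS suffix) pairs.
# "_A" = adjective, "_N" = noun, "_V" = verb, as described above.
def make_concept_set(pairs):
    return [f"{word}_{pos}" for word, pos in pairs]

concept_set = make_concept_set([
    ("quick", "A"),  # adjective
    ("brown", "A"),  # adjective
    ("fox", "N"),    # noun
    ("jump", "V"),   # verb, root form (not "jumps")
    ("lazy", "A"),   # adjective
    ("dog", "N"),    # noun
])
# -> ["quick_A", "brown_A", "fox_N", "jump_V", "lazy_A", "dog_N"]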

Interpretation: A score of 1 indicates that the model response is better than the ground truth / other model's response.

"response" : "The quick brown fox jumps over the lazy dog.",
"expected_response" : "The quick brown fox beats the dog.",
"concept_set" : ["quick_A", "brown_A", "fox_N", "jump_V", "lazy_A", "dog_N"]

Here, winner_test will return a score of 1 because the response is better than the expected_response. Pos_test and Cover_test will each return 1 because all concepts are covered and every part of speech used in the response is correct. Hence the final output is 1 * 1 * 1 = 1.
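
As a rough illustration of how the final number is composed, the sketch below multiplies the three sub-scores; the variable names are illustrative, and the actual combination is performed internally by overall_test.

# Illustrative only: how the sub-scores described above combine.
winner_score = 1  # response judged better than expected_response
pos_score = 1     # every part of speech in the response is used correctly
cover_score = 1   # every concept in concept_set appears in the response
overall_score = winner_score * pos_score * cover_score  # 1 * 1 * 1 = 1
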
# Example with higher score.
# Make sure you pass all words in concept_set in their root form.
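# `evaluator` is assumed to be an evaluator instance initialized earlier in these docs.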

evaluator.add_test(
    test_names=["overall_test"],
    data={
        "response" : "The quick brown fox jumps over the lazy dog.",
        "expected_response" : "The quick brown fox beats the dog.",
        "concept_set" : ["quick_A", "brown_A", "fox_N", "jump_V", "lazy_A", "dog_N"]
    },
    arguments={"model": "gpt-4"},
).run()

evaluator.print_results()
