Prompt Readability

Objective: The Readability Test measures the readability of a text. It calculates several readability sub-metrics to assess how easy the text is to understand.

Required Parameters:

  • Prompt (str): The initial question or statement provided to the model.

  • Response (str): The model's generated answer or reaction to the prompt.

Interpretation:

  • A lower score indicates simpler and more understandable text.
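As an illustration of how a readability sub-metric can work, the sketch below approximates the Flesch-Kincaid grade level, a common formula where a lower grade means simpler text. This is a minimal, illustrative stand-in written for this page, not the library's internal implementation.

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level: lower means simpler text.

    Illustrative sketch only -- not the metric used internally by the library.
    """
    # Split into sentences and words with simple regex heuristics.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Count contiguous vowel groups as a rough syllable estimate.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    # Flesch-Kincaid grade formula: 0.39 * (words/sentences)
    # + 11.8 * (syllables/words) - 15.59
    return (0.39 * len(words) / max(1, len(sentences))
            + 11.8 * total_syllables / max(1, len(words))
            - 15.59)

# Simpler sentences produce lower grades than dense technical prose.
print(flesch_kincaid_grade("The cat sat on the mat."))
print(flesch_kincaid_grade(
    "Photosynthesis is the process by which plants convert "
    "sunlight into energy through chlorophyll."
))
```

Running this shows the short, plain sentence scoring well below the denser technical one, which is the intuition behind "lower score indicates simpler text."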

Code Execution:

experiment_manager = Experiment(project_name="project_name",
                                experiment_name="experiment_name",
                                dataset_name="dataset_name")
# Prompt Readability test
response = experiment_manager.add_metrics(
    metrics=[
        {"name":"prompt_readability", "config": {"model": "gpt-4o"}},
        {"name":"prompt_readability", "config": {"model": "gpt-4"}},
        {"name":"prompt_readability", "config": {"model": "gpt-3.5-turbo"}}
    ]
)
# Readability test (expected to pass when the score is at or below the threshold)
evaluator.add_test(
    test_names=["readability_test"],
    data={
        "prompt": "Though this be madness, yet there is method in Love looks not with the eyes, but with the mind. All the world's a stage, and all the men and women merely players.",
        "response": "Photosynthesis is the process by which plants convert sunlight into energy through chlorophyll."
    },
    arguments={"model": "gpt-4", "threshold": 6},
).run()

evaluator.print_results()
