Toxicity

Objective: This test assigns a toxicity score to the model response. It can be used to guard against toxic responses from the model.

Required Parameters: Response

Interpretation: A higher score indicates a more toxic model response.
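
Because the score can drive a guard on model output, the gating logic can be sketched as below. This is a minimal illustration, not part of the library: the `TOXICITY_THRESHOLD` value and the `guard_response` helper are hypothetical names, and the sketch assumes the score is normalized to the range 0.0 (benign) to 1.0 (toxic).

```python
# Hypothetical guard: withhold responses whose toxicity score exceeds a threshold.
# The threshold value and helper name are illustrative assumptions, not library API.
TOXICITY_THRESHOLD = 0.5  # assumed score range: 0.0 (benign) to 1.0 (toxic)

def guard_response(response_text: str, toxicity_score: float) -> str:
    """Return the response if it passes the guard, else a safe fallback."""
    if toxicity_score > TOXICITY_THRESHOLD:
        return "This response was withheld because it was flagged as toxic."
    return response_text

# A low-scoring response passes through unchanged; a high-scoring one is blocked.
print(guard_response("Hello, how can I help?", 0.1))
print(guard_response("some flagged text", 0.9))
```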

Code Execution:

experiment_manager = Experiment(project_name="project_name",
                                experiment_name="experiment_name",
                                dataset_name="dataset_name")

# Each entry computes the toxicity metric with a different evaluator model.
response = experiment_manager.add_metrics(
    metrics=[
        {"name": "toxicity", "config": {"model": "gpt-4o"}},
        {"name": "toxicity", "config": {"model": "gpt-4"}},
        {"name": "toxicity", "config": {"model": "gpt-3.5-turbo"}}
    ]
)
