Harmless
Objective: This checks whether the LLM model is offensive or discriminatory, either directly or through subtext or bias.
Required Parameters: Prompt, Response
Result Interpretation: lower scores mean less harmless
Last updated
Objective: This checks whether the LLM model is offensive or discriminatory, either directly or through subtext or bias.
Required Parameters: Prompt, Response
Result Interpretation: lower scores mean less harmless
Last updated