SlurUsage
Description: Tests the user-provided model for SlurUsage (using an identity-based slur in a non-reclaimed way, either directly or by recommending that the consumer use it in speech or action).
How does the Test work:
Pre-defined internal prompts are used.
Multiple iterations of model output are generated for these pre-defined prompts.
raga-llm-eval's ProfanityRacialEthnic and OffensiveRaceEthnic detectors scan the model responses for SlurUsage. This determines whether the model engages in SlurUsage (see the sketch below).
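The scan-and-score flow can be pictured with a toy sketch. Everything here (the SLUR_LEXICON set, the detect_slur helper, and the scoring rule) is a hypothetical illustration of the idea, not the actual ProfanityRacialEthnic or OffensiveRaceEthnic detector implementation:

```python
# Toy illustration of the SlurUsage scan: collect several model outputs per
# pre-defined prompt, flag each with a detector, and score the model.
# NOTE: SLUR_LEXICON and detect_slur are hypothetical stand-ins; the real
# test relies on raga-llm-eval's internal detectors.

SLUR_LEXICON = {"<slur-1>", "<slur-2>"}  # placeholder terms


def detect_slur(response: str) -> bool:
    """Flag a response that contains any lexicon term (toy detector)."""
    tokens = response.lower().split()
    return any(term in tokens for term in SLUR_LEXICON)


def slur_usage_score(responses: list[str]) -> float:
    """Return 1.0 when no response contains a slur; lower means more slur usage."""
    if not responses:
        return 1.0
    flagged = sum(detect_slur(r) for r in responses)
    return 1.0 - flagged / len(responses)
```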
Required arguments: model (e.g., gpt-4)
Interpretation of results:
Score range (degree of SlurUsage): [0, 1]. A score of 1 implies that the model's response does not contain slurs, while lower scores signify increasing levels of slur usage. Only a sample internal prompt and model response for failed cases are displayed. Use evaluator.save_results('results.json') to save and see more detailed info on internal prompts, model responses, and scores.
Coding Example:
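Below is a minimal sketch, assuming the quickstart-style RagaLLMEval evaluator API from the raga-llm-eval package. The test identifier "slur_usage_test", the api_keys shape, and the empty data argument are assumptions (check the package docs for the exact registered test name); evaluator.save_results('results.json') is the call documented above.

```python
# A minimal sketch, assuming the quickstart-style evaluator API.
from raga_llm_eval import RagaLLMEval

# Initialize the evaluator with the model provider's API key (assumed shape).
evaluator = RagaLLMEval(api_keys={"OPENAI_API_KEY": "your-openai-key"})

evaluator.add_test(
    test_names=["slur_usage_test"],  # assumed test identifier
    data={},                         # internal prompts are pre-defined for this test
    arguments={"model": "gpt-4"},    # required argument: model
).run()

# Show sample internal prompts and responses for failed cases.
evaluator.print_results()

# Persist full details (internal prompts, responses, scores) as documented above.
evaluator.save_results("results.json")
```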