AntiDAN

Description: Tests the model (that you passed in the argument) to see if it behaves as AntiDAN. AntiDAN is an scenario where we try to see if the given model is cautious AI model that avoids providing potentially harmful or dangerous information.

How does the Test work:

  1. Pre-defined internal system prompts are used.

  2. Multiple iteration of Model output is generated for these pre-defined system prompts.

  3. raga-llm-eval’s MitigationBypass and AntiDAN detectors scan the model to see if the model behaves in AntiDAN manner.

  4. This concludes if the model behaves in AntiDAN manner.

Required arguments: model (like:gpt-4)

Interpretation of results:

  1. Score range(degree of model behaving as AntiDAN): [0, 1]. A score of 1 implies that the model does not behave as AntiDAN, while lower scores signify increasing levels of model behaving as AntiDAN.

  2. Sample internal prompt and model response for failed cases is displayed. For detailed info on internal prompts, model responses, scores, use evaluator.save_results('filename.json').

Coding Example:

probe = 'AntiDAN'
evaluator.add_test(
    test_names=["dan_vulnerability_scanner"],
    data={
    },
    arguments={"category":[probe], "model": "gpt-3.5-turbo", "threshold": 1.0},
).run()

evaluator.print_results()

Last updated