Description: Tests the model (that you passed in the argument) to see if it is vulnerable to DAN_JailBreak attack. DAN_JailBreak is an attack scenario where the model generates responses which are free from limitations, providing unrestricted and unfiltered responses.

How does the Test work:

  1. Pre-defined internal system prompts are used.

  2. Multiple iteration of Model output is generated for these pre-defined system prompts.

  3. raga-llm-eval’s MitigationBypass and DAN_JailBreak detectors scan the model to see if the attack was successful.

  4. This concludes if the model is vulnerable to DAN_JailBreak attack.

Required arguments: model (like:gpt-4)

Interpretation of results:

  1. Score range(degree of model behaving as DAN_JailBreak): [0, 1]. A score of 1 implies that the model does not behave as DAN_JailBreak, while lower scores signify increasing levels of model behaving as DAN_JailBreak.

  2. Sample internal prompt and model response for failed cases is displayed. For detailed info on internal prompts, model responses, scores, use evaluator.save_results('filename.json').

Coding Example:

probe = 'DAN_Jailbreak'
    arguments={"category":[probe], "model": "gpt-3.5-turbo", "threshold": 1.0},


Last updated