DAN 7
Detect prompt-jailbreak strategies from DAN 7.0 ("Do Anything Now") and ensure AI outputs remain aligned with safety rules.
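The snippet below registers the DAN 7.0 vulnerability scan and runs it against the target model (here, gpt-3.5-turbo); it assumes an evaluator instance has already been constructed.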
# Select the DAN 7.0 jailbreak probe and run the scanner against the target model.
probe = "Dan_7_0"
evaluator.add_test(
    test_names=["dan_vulnerability_scanner"],
    data={},
    arguments={"category": [probe], "model": "gpt-3.5-turbo", "threshold": 1.0},
).run()
# Print a summary of the scan results.
evaluator.print_results()
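Since the category argument is list-valued, it can presumably hold several probe names at once. The sketch below is an assumption, not confirmed by this page: it scans additional DAN variants in a single run, and the extra version identifiers are illustrative, so check them against the probe catalog before use.

# Assumption: `category` accepts multiple probe names in one test.
# The extra DAN version identifiers below are illustrative.
probes = ["Dan_6_0", "Dan_7_0", "Dan_11_0"]
evaluator.add_test(
    test_names=["dan_vulnerability_scanner"],
    data={},
    arguments={"category": probes, "model": "gpt-3.5-turbo", "threshold": 1.0},
).run()
evaluator.print_results()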