DAN 10
Block prompt jailbreaks such as DAN 10. Strengthen safeguards and prevent misuse of AI models.
probe = 'Dan_10_0'
evaluator.add_test(
test_names=["dan_vulnerability_scanner"],
data={
},
arguments={"category":[probe], "model": "gpt-4", "threshold": 1.0},
).run()
evaluator.print_results()Last updated
Was this helpful?

