Secret
Safeguard against LLMs leaking secrets. Use this guardrail to prevent the exposure of sensitive information such as API keys and credentials.
Code Example:

```python
evaluator.add_test(
    test_names=["secrets_guardrail"],
    data={
        "prompt": """
my secret is OPEN_API_KEY
""",
    },
    arguments={"redact_mode": "all"},
).run()
```

Result = "my secret is OPEN_API_KEY"
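To illustrate what secret redaction does conceptually, here is a minimal, hypothetical sketch using regex patterns. The pattern list, the `redact` helper, and the `[REDACTED]` placeholder are all assumptions for illustration only, not the guardrail's actual detection rules or API:

```python
import re

# Hypothetical patterns for common secret formats (illustrative only;
# the real guardrail's detection rules are not shown in this document).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                   # AWS access key IDs
    re.compile(r"\b[A-Z][A-Z0-9_]*_(?:API_)?KEY\b"),   # env-var style key names
]

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("my secret is OPEN_API_KEY"))  # → my secret is [REDACTED]
```

A pattern-based approach like this catches well-known key formats but not arbitrary secrets, which is why production guardrails typically combine patterns with entropy checks or model-based detection.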

