# BanCode

The `BanCode` scanner is designed to detect and ban code in the prompt.
There are scenarios where the insertion of code into user prompts is undesirable, for example, when employees share proprietary code snippets or when users attempt to exploit vulnerabilities.
It relies on the following models:
- vishnun/codenlbert-tiny
- [DEFAULT] vishnun/codenlbert-sm
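A hedged configuration sketch, assuming `BanCode` follows the constructor conventions of other LLM Guard input scanners (a confidence `threshold` and a `use_onnx` toggle). The parameter names and default values shown here are assumptions, so verify them against the scanner's source:

```python
from llm_guard.input_scanners import BanCode

# Assumed parameters: `threshold` (classification confidence cutoff) and
# `use_onnx` (run the ONNX-exported model). Verify against the source.
scanner = BanCode(threshold=0.97, use_onnx=True)
```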
```python
from llm_guard.input_scanners import BanCode

prompt = "How do I reset my password?"

scanner = BanCode()
sanitized_prompt, is_valid, risk_score = scanner.scan(prompt)
```
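For illustration, scanning a prompt that embeds an obvious code snippet; the exact score depends on the model and threshold, so the values in the comments are indicative only:

```python
# A prompt carrying a code snippet should be flagged by the scanner.
code_prompt = "Please run: import os; os.system('echo hacked')"
sanitized_prompt, is_valid, risk_score = scanner.scan(code_prompt)

print(is_valid)    # expected: False for code-bearing input
print(risk_score)  # in [0, 1]; higher means more code-like
```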
Test setup:

- Platform: Amazon Linux 2
- Python version: 3.11.6
- Input length: 248 characters
- Test times: 5
Run the following script:

```sh
python benchmarks/run.py input BanCode
```
Results:

| Instance | Latency Variance | Latency 90th Percentile (ms) | Latency 95th Percentile (ms) | Latency 99th Percentile (ms) | Average Latency (ms) | QPS |
|---|---|---|---|---|---|---|
| AWS r6a.xlarge (AMD) | 0.00 | 23.37 | 23.97 | 24.45 | 21.71 | 11424.20 |
| AWS r6a.xlarge (AMD) with ONNX | 0.02 | 22.34 | 24.71 | 26.60 | 17.54 | 14142.09 |
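The QPS figures appear to be derived as input length divided by average latency, i.e., characters processed per second rather than requests per second. A minimal sanity check of that assumption against the table values:

```python
# Sanity check: QPS ≈ input_length / average_latency_in_seconds.
# This derivation is an inference from the numbers above, not documented
# behavior of benchmarks/run.py.
input_length = 248  # characters, per the test setup

for avg_latency_ms, reported_qps in [(21.71, 11424.20), (17.54, 14142.09)]:
    derived_qps = input_length / (avg_latency_ms / 1000)
    print(f"derived {derived_qps:.2f} vs reported {reported_qps}")
```

Under that reading, ONNX lowers average latency by roughly 19% (21.71 ms to 17.54 ms) and raises throughput by about 24%, at the cost of slightly heavier tail latency at the 95th and 99th percentiles.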