Sizhe Chen, Arman Zharmagambetov, Saeed Mahloujifar, Kamalika Chaudhuri, David Wagner, Chuan Guo
Large language models (LLMs) are becoming increasingly prevalent in modern software systems, interfacing between the user and the Internet to assist with tasks that require advanced language understanding. To accomplish these tasks, the LLM often uses external data sources such as user documents, web retrieval, results from API calls, etc. This opens up new avenues for attackers to manipulate the LLM via prompt injection. Adversarial prompts can be injected into external data sources to override the system's intended instruction and instead execute a malicious instruction. To mitigate this vulnerability, we propose a new defense called SecAlign based on the technique of preference optimization. Our defense first constructs a preference dataset with prompt-injected inputs, secure outputs (ones that respond to the legitimate instruction), and insecure outputs (ones that respond to the injection). We then perform preference optimization on this dataset to teach the LLM to prefer the secure output over the insecure one. This provides the first known method that reduces the success rates of various prompt injections to around 0%, even against attacks much more sophisticated than ones seen during training. This indicates our defense generalizes well against unknown and yet-to-come attacks. Also, our defended models are still practical with similar utility to the one before our defensive training.
SecAlign Overview | SecAlign Main Results |
---|---|
![]() |
![]() |
- Training SecAlign / StruQ LLMs requires 4 80G A100s. Testing utility and manual attacks requires 1 16G GPU. Testing GCG requires 1 80G A100. Testing AdvPrompter requires 2 80G A100s.
- Install environment dependencies
git clone https://github.com/facebookresearch/SecAlign
cd SecAlign
conda create -n secalign python==3.10
- Install package dependencies
pip install -r requirements.txt
- Download data dependencies
python setup.py
- Configure openai dependencies for utility evaluation: create
data/openai_configs.yaml
followingdata/openai_configs_examle.yaml
- [optional] Play with SecAlign Instruct models. Run
python setup.py --instruct
to download SecAlign Mistral-7B-Instruct (2.8GB) and Llama3-8B-Instruct (0.8GB) LoRA adapters. - [optional] Play with SecAlign Alpaca models. Run
python setup.py --alpaca
to download SFTed (on alpaca_data_cleaned.json) Llama-7B (26G), Mistral-7B (27G), Llama3-8B (30G) and the corresponding SecAlign LoRA adapters (0.4G) - [optional] Automatic and efficient testing by specifying your training/testing slurm configurations in the
slurm_prefix
variables inrun.py
, which generates slurm scripts, run them, and delete them. It supports an additional thread fromnohup
to moniter the training, and automatically tests after the training finishes if--do_test
is specified
- Get the slurm and python commands, and run by yourself. The
[model_path]
below stands for the huggingface Instruct model ID (currently supportingmistralai/Mistral-7B-v0.1-Instruct
andmeta-llama/Meta-Llama-3-8B-Instruct
) or your local SFTed (see next section) model path.
bash scripts/secalign.sh [model_path]
- Run the training, and test immediately after the training simultaneously on multiple GPUs, with the default
--test_attack none ignore completion_real completion_realcmb gcg
(none
for utility)
bash scripts/undefended.sh [model_path] run
- To tune the learning rate, change dpo_lr in LR_CONFIG in
run.py
- SecAlign starts on an SFTed model. If you do not want to use public SFTed models (as in the last section), you can SFT your own model from a base model.
- The below command SFTs on alpaca_data_cleaned.json. The
[model_path]
stands for the huggingface model ID (currently supportinghuggyllama/llama-7b
,mistralai/Mistral-7B-v0.1
, andmeta-llama/Meta-Llama-3-8B
).
bash scripts/undefended.sh [model_path]
bash scripts/undefended.sh [model_path] run
- We also support the reproduction of the previous SOTA defense StruQ by defensive SFT
bash scripts/struq.sh [model_path]
bash scripts/struq.sh [model_path] run
- To tune the learning rate, change lr in LR_CONFIG in
run.py
- All logs on training, utility evaluation, and security evaluation are saved to
[model_path]/summary.tsv
if you usebash [script_path] [model_path] run
- The default setting is
--test_attack none ignore completion_real completion_realcmb gcg
(none
for utility)
bash scripts/test.sh [model_path]
bash scripts/test.sh [model_path] run
- Customize the
--test_attack
, prompting-based--defense
, and testing--data_path
. The--defense
could be ['none', 'sandwich', 'instructional', 'reminder', 'isolation', 'incontext'], and--test_attack
could be ['naive', 'ignore', 'completion_real', 'completion_realcmb', 'gcg', 'advp']
python run.py --do_test --test_attack [test_attack1] [test_attack2] [test_attack3] -m [model_path1] [model_path2] [model_path3] -d [defense] --data_path [data_path]
- This triggers multiple below commands.
python test.py -a [test_attack] -m [model_path] --defense [defense] --data_path [data_path]
- Log the GCG and AdvPrompter testing results to
[model_path]/summary.tsv
. To support this automatic logging, AdvPrompter has to be run throughbash
orpython run.py
, which produces aadvp_jobID.out
in[model_path]
python test.py --log -m [model_path]
The majority of SecAlign and the included StruQ and AdvPrompter are licensed under CC-BY-NC, however portions of the project are available under separate license terms: Stanford Alpaca is licensed Apache 2.0; LLM Attacks is licensed MIT. Code under gcg/
is adapted from LLM Attacks. Code under advprompter/
is adapted from AdvPrompter. This software and/or data was deposited in the BAIR open research Commons repository in 2025.