Beyond Uniformity: Robust Backdoor Attacks on Deep Neural Networks with Trigger Selection

Shixiong Li1
Xingyu Lyu1
Ning Wang2
Tao Li3
Danjue Chen4
Yimin Chen1
1 University of Massachusetts Lowell
2 University of South Florida
3 Purdue University
4 North Carolina State University

Abstract


Backdoor attacks, which compromise deep neural networks (DNNs) by poisoning their training sets to cause targeted misclassification, have been extensively explored in recent years. Research on such attacks is critical for today's widespread DNN-based applications due to the attacks' low cost and high efficacy. While many backdoor attacks have been proposed, they usually rely on a single static, fixed trigger, which not only lacks adaptability but also makes them easier to detect. To address this limitation, we introduce OpenTrigger, a novel backdoor attack framework that employs dynamic triggers to enhance attack flexibility and robustness. Unlike traditional approaches that rely on one fixed trigger, our attack learns a generalized, consistent feature across a constructed trigger pool, enabling the use of triggers at test time that were never seen during training. To boost attack efficacy, we employ Particle Swarm Optimization (PSO) to select optimal triggers from a larger candidate set, maximizing the attack success rate (ASR) while preserving prediction accuracy on clean data. Extensive experiments across multiple datasets and model architectures confirm the high effectiveness and robustness of OpenTrigger against state-of-the-art and even adaptive backdoor defenses, establishing it as a versatile and practical backdoor attack strategy.

  • We propose OpenTrigger, a novel backdoor attack that learns dynamic, generalized backdoor triggers rather than a single fixed trigger.
  • We improve the effectiveness of OpenTrigger by designing a trigger selection method based on Particle Swarm Optimization (PSO); a minimal sketch of this step follows this list.
  • We perform comprehensive evaluations, performance comparisons, and ablation studies of OpenTrigger across various datasets and victim models. The results show that OpenTrigger consistently achieves a high attack success rate (ASR) without sacrificing prediction accuracy on clean inputs. Most importantly, our attack remains effective under state-of-the-art (SOTA) backdoor defenses, and we further evaluate it under a strong adaptive defense, confirming its robustness.
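As a rough illustration of the PSO-based selection step, below is a minimal sketch in Python. The encoding (real-valued scores decoded into a top-k trigger subset), the swarm hyperparameters, and the fitness callback are illustrative assumptions rather than the paper's exact formulation; in practice the fitness would query the substitute model with candidate-poisoned data and reward high ASR with little clean-accuracy loss.

    import numpy as np

    def pso_select_triggers(candidates, fitness, k=10, n_particles=20,
                            n_iters=50, w=0.7, c1=1.5, c2=1.5):
        # Each particle holds a real-valued score per candidate trigger;
        # the k highest-scored candidates form that particle's selection.
        n = len(candidates)
        pos = np.random.rand(n_particles, n)      # particle positions
        vel = np.zeros_like(pos)                  # particle velocities
        pbest, pbest_fit = pos.copy(), np.full(n_particles, -np.inf)
        gbest, gbest_fit = pos[0].copy(), -np.inf

        def decode(p):                            # scores -> trigger subset
            return [candidates[i] for i in np.argsort(p)[-k:]]

        for _ in range(n_iters):
            for i in range(n_particles):
                f = fitness(decode(pos[i]))       # e.g. ASR minus an ACC-drop penalty
                if f > pbest_fit[i]:
                    pbest_fit[i], pbest[i] = f, pos[i].copy()
                if f > gbest_fit:
                    gbest_fit, gbest = f, pos[i].copy()
            r1, r2 = np.random.rand(), np.random.rand()
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
        return decode(gbest)

The returned subset would then be divided into training and test trigger pools, as described in the workflow below.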

Workflow


The following figure illustrates the workflow of OpenTrigger, which consists of five steps.

  • Building a trigger pool. The attacker builds a trigger pool Ω with the help of a substitute model Θs and a custom PSO algorithm, then divides Ω into Ωtrain and Ωtest, dedicated to the training and test phases, respectively.
  • Trigger planting for training. To obtain x̃ for a given x ∈ Da,clean, the attacker randomly selects a trigger t from Ωtrain and computes x̃ following the equation x̃ = P(x,t) = α·t + (1−α)·x (see the sketch after this list). In the end, the attacker obtains the poisoned set Dpoison.
  • Victim model training. The victim model Θ is trained on Dtrain, into which the attacker has inserted Dpoison.
  • Trigger planting for test. Similarly, during the test phase, the attacker obtains x̃ for a given x by randomly selecting a trigger t from Ωtest and computing x̃ = P(x,t) = α·t + (1−α)·x.
  • Testing the backdoor attack. The attacker uses clean and poisoned samples to evaluate the attack performance.
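The two trigger-planting steps share the same blending operation x̃ = P(x,t) = α·t + (1−α)·x. Below is a minimal sketch, assuming inputs and triggers are same-shaped NumPy arrays; the blending ratio alpha and the relabeling to a single target class y_target are illustrative assumptions.

    import random

    def plant_trigger(x, t, alpha=0.2):
        # Blending equation: x_tilde = alpha * t + (1 - alpha) * x
        return alpha * t + (1.0 - alpha) * x

    def build_poison_set(clean_samples, trigger_pool, y_target, alpha=0.2):
        # For each clean (x, y), blend in a randomly drawn trigger and
        # relabel the result to the attacker's target class.
        poisoned = []
        for x, _ in clean_samples:
            t = random.choice(trigger_pool)   # draw from the trigger pool
            poisoned.append((plant_trigger(x, t, alpha), y_target))
        return poisoned

At training time the pool is Ωtrain and the blended samples are relabeled to the target class; at test time the attacker applies plant_trigger with triggers from Ωtest, which were never seen during training, and simply checks whether the victim model predicts the target class.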

Results


We summarize the ASR and ACC of our attack and the baselines across various datasets in the following table; for both metrics, higher indicates a better backdoor attack. As the results show, OpenTrigger achieves high ASR across all datasets investigated, demonstrating its wide applicability. Compared to the baselines, OpenTrigger achieves slightly lower yet comparable attack performance to CTRL, the best-performing backdoor attack in our experiments. As expected, the average ASR and ACC of OpenTrigger are lower than those of Blend, simply because Blend uses the same fixed trigger for all victim samples during both the training and test phases; as a result, the victim model learns the feature of Blend's fixed trigger better. However, as shown later, OpenTrigger is far more resilient to backdoor defenses than Blend, showcasing the advantage of our trigger-planting strategy. Overall, the experimental results indicate the effectiveness of our attack in most settings, rendering it a lightweight and effective backdoor attack.
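For clarity, the two metrics can be computed as in the short sketch below; model.predict and the sample formats are placeholders for illustration, not a specific framework's API.

    def attack_success_rate(model, poisoned_samples, y_target):
        # ASR: fraction of trigger-planted inputs classified as the target label.
        hits = sum(model.predict(x) == y_target for x, _ in poisoned_samples)
        return hits / len(poisoned_samples)

    def clean_accuracy(model, clean_samples):
        # ACC: standard prediction accuracy on unmodified inputs.
        hits = sum(model.predict(x) == y for x, y in clean_samples)
        return hits / len(clean_samples)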

Paper


Beyond Uniformity: Robust Backdoor Attacks on Deep Neural Networks with Trigger Selection

Shixiong Li, Xingyu Lyu, Ning Wang, Tao Li, Danjue Chen, Yimin Chen.

PDF (not available yet; will be added once the paper is officially published)
Code

Citation


@inproceedings{Li2025beyond,
  author={Li, Shixiong and Lyu, Xingyu and Wang, Ning and Li, Tao and Chen, Danjue and Chen, Yimin},
  booktitle={Pacific-Asia Conference on Knowledge Discovery and Data Mining},
  title={Beyond Uniformity: Robust Backdoor Attacks on Deep Neural Networks with Trigger Selection},
  year={2025},
  organization={Springer}
}