Backdoor attacks, which compromise deep neural networks (DNNs) by poisoning their training sets to cause targeted misclassification, have been extensively explored in recent years. Research on such attacks is critical for today's widespread DNN-based applications due to their low cost and high efficacy. While many backdoor attacks have been proposed, they usually rely on a static, fixed trigger, which not only lacks adaptability but also makes them easier to detect. To address this limitation, we introduce OpenTrigger, a novel backdoor attack framework that employs dynamic triggers to enhance attack flexibility and robustness. Unlike traditional approaches that rely on a single fixed trigger, our attack learns a generalized, consistent feature across a built trigger pool, enabling the use of triggers at test time that were unseen during training. To boost attack efficacy, we employ Particle Swarm Optimization (PSO) to select optimal triggers from a larger set, maximizing the attack success rate while preserving prediction accuracy on clean data. Extensive experiments across multiple datasets and model architectures confirm the high effectiveness and robustness of OpenTrigger against state-of-the-art and even adaptive backdoor defenses, establishing it as a versatile and practical backdoor attack strategy.
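To make the PSO-based trigger selection concrete, below is a minimal sketch of one plausible realization: a binary PSO over a pool-membership mask, with a fitness that rewards attack success rate (ASR) and clean accuracy (ACC). The `evaluate_asr`/`evaluate_acc` callbacks, hyperparameters, and the exact fitness form are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fitness(mask, trigger_pool, evaluate_asr, evaluate_acc, lam=1.0):
    """Score a candidate subset: higher ASR and higher clean ACC are better.
    `evaluate_asr`/`evaluate_acc` are hypothetical callbacks that train and
    evaluate a victim model with the selected triggers."""
    selected = [t for t, keep in zip(trigger_pool, mask) if keep]
    if not selected:
        return -np.inf  # an empty trigger set cannot mount the attack
    return evaluate_asr(selected) + lam * evaluate_acc(selected)

def pso_select_triggers(trigger_pool, evaluate_asr, evaluate_acc,
                        n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5,
                        seed=0):
    """Binary PSO over a {0,1} mask indicating which triggers to keep."""
    rng = np.random.default_rng(seed)
    dim = len(trigger_pool)

    pos = (rng.random((n_particles, dim)) > 0.5).astype(int)  # particle masks
    vel = rng.uniform(-1, 1, (n_particles, dim))               # real-valued velocities

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, trigger_pool, evaluate_asr, evaluate_acc)
                          for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    gbest_fit = pbest_fit.max()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # Sigmoid transfer turns velocities into per-bit inclusion probabilities.
        pos = (rng.random((n_particles, dim)) < 1 / (1 + np.exp(-vel))).astype(int)

        fit = np.array([fitness(p, trigger_pool, evaluate_asr, evaluate_acc)
                        for p in pos])
        improved = fit > pbest_fit
        pbest[improved] = pos[improved]
        pbest_fit[improved] = fit[improved]
        if fit.max() > gbest_fit:
            gbest = pos[np.argmax(fit)].copy()
            gbest_fit = fit.max()

    return [t for t, keep in zip(trigger_pool, gbest) if keep]
```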
The figure below illustrates the workflow of OpenTrigger, which consists of five steps.
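As a rough sketch of the poisoning stage of such a dynamic-trigger workflow, the snippet below draws a different trigger from the pool for each poisoned sample instead of reusing one fixed pattern. The blending-style `apply_trigger`, the function names, and the poison rate are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def apply_trigger(x, trigger, alpha=0.2):
    """Blend a trigger pattern into an image (both float arrays in [0, 1])."""
    return np.clip((1 - alpha) * x + alpha * trigger, 0.0, 1.0)

def poison_dataset(clean_data, trigger_pool, target_label,
                   poison_rate=0.1, seed=0):
    """Poison a fraction of samples, each with a trigger drawn at random
    from the pool, so no single fixed pattern dominates the backdoor."""
    rng = np.random.default_rng(seed)
    poisoned = []
    for x, y in clean_data:
        if rng.random() < poison_rate:
            t = trigger_pool[rng.integers(len(trigger_pool))]
            poisoned.append((apply_trigger(x, t), target_label))
        else:
            poisoned.append((x, y))
    return poisoned
```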
We summarize the ASR and ACC of our attack and the baselines across various datasets in the following table. Note that the higher the ASR and ACC, the better the backdoor attack. As the results show, OpenTrigger achieves a high ASR across all datasets investigated, demonstrating its wide applicability. Compared to the baselines, OpenTrigger achieves slightly lower yet comparable attack performance to CTRL, the best-performing backdoor attack in our experiments. As expected, the average ASR and ACC of OpenTrigger are lower than those of Blend, simply because Blend uses the same fixed trigger for all victim samples during both the training and test phases; as a result, the victim model learns the feature of Blend's fixed trigger better. However, as shown later, OpenTrigger demonstrates much better resilience against backdoor defenses than Blend, showcasing the advantage of our trigger-planting strategy. Overall, the experimental results indicate the effectiveness of our attack in most settings, rendering it a lightweight and effective backdoor attack.
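For concreteness, the two metrics can be computed as in the minimal sketch below, where `model` is assumed to map a batch of inputs to predicted labels (the helper names are hypothetical).

```python
import numpy as np

def attack_success_rate(model, triggered_inputs, target_label):
    """ASR: fraction of triggered inputs classified as the target label."""
    preds = model(triggered_inputs)
    return float(np.mean(preds == target_label))

def clean_accuracy(model, clean_inputs, true_labels):
    """ACC: plain accuracy on clean, untriggered inputs."""
    preds = model(clean_inputs)
    return float(np.mean(preds == true_labels))
```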
Beyond Uniformity: Robust Backdoor Attacks on Deep Neural Networks with Trigger Selection
Shixiong Li, Xingyu Lyu, Ning Wang, Tao Li, Danjue Chen, Yimin Chen.
@inproceedings{Li2025beyond,
author={Li, Shixiong and Lyu, Xingyu and Wang, Ning and Li, Tao and Chen, Danjue and Chen, Yimin},
booktitle={Pacific-Asia Conference on Knowledge Discovery and Data Mining},
title={Beyond Uniformity: Robust Backdoor Attacks on Deep Neural Networks with Trigger Selection},
year={2025},
organization={Springer}
}