Since training a large-scale backdoored model from scratch requires a large
training dataset, several recent attacks have instead considered injecting
backdoors into a trained clean model without altering its behavior on clean data.
Previous work has found that backdoors can be injected into a trained clean model
via Adversarial Weight Perturbation (AWP), where AWPs refer to parameter
variations that remain small during backdoor learning. In this work, we observe
an interesting phenomenon: when a trained clean model is tuned to inject
backdoors, the resulting parameter variations are always AWPs. We provide a
theoretical analysis to explain this phenomenon. We formulate the behavior of
maintaining accuracy on clean data as the consistency of backdoored models,
which includes both global consistency and instance-wise consistency. We
extensively analyze the effects of AWPs on the consistency of backdoored
models. To achieve better consistency, we propose a novel anchoring loss, with a
theoretical guarantee, that anchors or freezes the model's behavior on clean
data. Both analytical and empirical results validate the effectiveness of the
anchoring loss in improving consistency, especially instance-wise consistency.
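The abstract does not specify the exact form of the anchoring loss. As a minimal illustrative sketch (not the paper's actual formulation), one common way to anchor a model's behavior on clean data is to penalize the squared deviation between the outputs of the tuned (backdoored) model and a frozen copy of the original clean model on the same clean inputs; the names and the MSE form below are assumptions for illustration only:

```python
def anchoring_loss(clean_logits, backdoored_logits):
    """Hypothetical anchoring loss: mean squared deviation between the
    frozen clean model's outputs and the backdoored model's outputs on
    the same clean inputs. Each argument is a list of per-example logit
    vectors (plain Python lists of floats)."""
    assert len(clean_logits) == len(backdoored_logits)
    total = 0.0
    for clean_vec, backdoored_vec in zip(clean_logits, backdoored_logits):
        # Squared difference summed over logit dimensions for one example
        total += sum((c - b) ** 2 for c, b in zip(clean_vec, backdoored_vec))
    # Average over examples; minimizing this keeps the backdoored model's
    # clean-data behavior close to the original model's (consistency)
    return total / len(clean_logits)
```

In training, such a term would typically be added to the backdoor objective with a weighting coefficient, so that the trigger is learned while clean predictions stay anchored to the original model's.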