With the emergence of advanced backdoor defense methods, the success rate of backdoor attacks on Deep Neural Networks (DNNs) has dropped dramatically. This situation may lead to overconfidence in existing backdoor defenses. In view of this, we propose ADGR, an adversarial distillation strategy combined with a Gaussian reinforcement mechanism, to enhance the resilience of backdoor attacks in the face of defenses. The strategy uses Gaussian-blur-based backdoor reinforcement to strengthen the trigger's ability to survive backdoor defenses. Moreover, we design an adversarial distillation strategy that improves the consistency between the outputs of the backdoored model and the clean model, further strengthening the attack against defenses. Extensive experiments on the CIFAR-10, GTSRB, CIFAR-100, and Tiny-ImageNet datasets show that ADGR improves the attack success rate by an average of about 5% over eight mainstream backdoor attacks. In addition, after passing through 11 different backdoor defenses, ADGR still retains an average attack success rate above 90%. The code is available at: https://github.com/hubin111/ADGR.
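For concreteness, the sketch below illustrates the two components named in the abstract. It is a minimal, assumed rendering, not the authors' implementation: the blur parameters, function names, and the KL-divergence form of the distillation loss are all illustrative choices (see the released code for the actual method).

```python
# Hypothetical sketch of ADGR's two components (assumed, not the authors' code):
# (1) Gaussian-blur reinforcement of poisoned inputs, and
# (2) a distillation term aligning backdoor-model and clean-model outputs.
import torch
import torch.nn.functional as F
from torchvision.transforms import GaussianBlur

# Assumed blur parameters; the paper's settings may differ.
blur = GaussianBlur(kernel_size=3, sigma=1.0)

def reinforce_trigger(x_poisoned: torch.Tensor) -> torch.Tensor:
    """Apply Gaussian blur to poisoned inputs so the embedded trigger
    is learned in a form that survives blur-like preprocessing defenses
    (assumed reinforcement step)."""
    return blur(x_poisoned)

def distillation_loss(backdoor_logits: torch.Tensor,
                      clean_logits: torch.Tensor,
                      T: float = 4.0) -> torch.Tensor:
    """KL divergence between temperature-softened outputs of the backdoored
    model and a frozen clean model on benign inputs, encouraging output
    consistency (assumed loss form)."""
    p_clean = F.softmax(clean_logits / T, dim=1)
    log_p_bd = F.log_softmax(backdoor_logits / T, dim=1)
    return F.kl_div(log_p_bd, p_clean, reduction="batchmean") * (T * T)
```

Under this reading, the blur step hardens the trigger against input-purification defenses, while the distillation term keeps the backdoored model's benign behavior indistinguishable from a clean model's, which is what makes the attack harder for model-inspection defenses to flag.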