Adversarial/LitRev
Literature reviews on selected adversarial-robustness papers.
Contents
- 1 Adversarial Training for Free!
- 2 Fast is better than free: Revisiting adversarial training
- 3 Adversarial Training Can Hurt Generalization
- 4 Initializing Perturbations in Multiple Directions for Fast Adversarial Training
- 5 Towards Understanding Fast Adversarial Training
- 6 Overfitting in adversarially robust deep learning
- 7 Certified Adversarial Robustness with Additive Noise
- 8 Randomization matters: How to defend against strong adversarial attacks
Adversarial Training for Free!
- Conference: NIPS 2019
- URL: https://arxiv.org/abs/1904.12843
- Propose "recycling" of the gradients for adversarial training.
- Count each "replay" as one (non-true) epochs, therefore reducing time used.
Fast is better than free: Revisiting adversarial training
- URL: [2]
Adversarial Training Can Hurt Generalization
- Conference: ICML 2019 Workshop
- URL: [3]
Initializing Perturbations in Multiple Directions for Fast Adversarial Training
- Conference: N/A
- URL: [4]
Towards Understanding Fast Adversarial Training
- Conference: N/A
- URL: [5]
Overfitting in adversarially robust deep learning
- Conference: ICML 2020
- URL: [6]
Certified Adversarial Robustness with Additive Noise
- Conference: NIPS 2019
- URL: [7]
Randomization matters: How to defend against strong adversarial attacks
- Conference: ICML 2020
- URL: [8]