Literature Reviews on selected adversarial papers!

== Adversarial Training for Free! ==
* Conference: NeurIPS 2019
* URL: [https://arxiv.org/abs/1904.12843]

* Proposes "recycling" the gradients computed during training to generate adversarial examples (sketched after this list).
* Counts each "replay" of a minibatch as one (non-true) epoch, thereby reducing the total training time.
** The perturbation used for retraining is updated in every replay.
* Claims as a contribution that multiple adversarial attacks are mounted against each image during training.
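A minimal PyTorch sketch of the replay idea, assuming a classifier <code>model</code>, a data <code>loader</code>, and an <code>optimizer</code> (all names illustrative, not from the paper); for simplicity the perturbation here is reset per minibatch, whereas the paper carries a single perturbation tensor across batches:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def free_train_epoch(model, loader, optimizer, epsilon, replays=4):
    """One "free" pass over the data: every minibatch is replayed several
    times, recycling the input gradient of each backward pass to update
    the perturbation delta before the next replay."""
    for x, y in loader:
        delta = torch.zeros_like(x)      # perturbation persists across replays
        for _ in range(replays):         # each replay counts as a (non-true) epoch
            delta.requires_grad_(True)
            loss = F.cross_entropy(model(x + delta), y)
            optimizer.zero_grad()
            loss.backward()              # one backward pass serves both updates
            grad = delta.grad.detach()
            # recycle the input gradient: ascend on delta, clip to the ball
            delta = (delta.detach() + epsilon * grad.sign()).clamp(-epsilon, epsilon)
            optimizer.step()             # model update from the same gradients
</syntaxhighlight>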
+ | |||
+ | == Fast is better than free: Revisiting adversarial training == | ||
+ | * Conference: ICLR 2020 | ||
+ | * URL: [https://arxiv.org/abs/2001.03994] | ||
+ | |||
* FGSM training does work, provided it starts from a simple random initialisation (see the sketch after this list):
** <math>\delta = \mathrm{Uniform}(-\epsilon, \epsilon)</math>
** <math>\delta = \delta + \alpha \cdot \mathrm{FGSM}(\mathrm{model}, x, y)</math> (then clipped back into <math>[-\epsilon, \epsilon]</math>)
* A step-size parameter <math>\alpha</math> is introduced; its ideal value is slightly larger than <math>\epsilon</math>.
* Other techniques, such as early stopping, also contribute to better performance when applied to the training process.
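A minimal PyTorch sketch of one such training step, assuming a classifier <code>model</code> and an <code>optimizer</code> (names illustrative), with <math>\alpha</math> chosen slightly above <math>\epsilon</math> as noted above:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def fgsm_rs_step(model, x, y, optimizer, epsilon, alpha):
    """FGSM training with random start: delta ~ Uniform(-eps, eps),
    one FGSM step of size alpha, clip, then train on the result."""
    # random start inside the epsilon-ball
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
    delta.requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    # single FGSM step of size alpha, projected back into the ball
    delta = (delta + alpha * delta.grad.sign()).clamp(-epsilon, epsilon).detach()
    # update the model on the perturbed input
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x + delta), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
</syntaxhighlight>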
+ | |||
+ | == Adversarial Training Can Hurt Generalization == | ||
+ | * Conference: ICML 2019 Workshop | ||
+ | * URL: [https://arxiv.org/abs/1906.06032] | ||
+ | |||
* Adversarial training, instead of minimising the empirical risk over the training points, finds the model that performs best under worst-case perturbations of those points (formalised below).
* Adversarially robust models require a more complex model to learn, and therefore risk a loss in generalisation.
* The tradeoff can be reduced by adding more training data; it can even be eliminated by sampling further training points via robust self-training methods.
* On simple models, adversarial training can act as a regulariser and aid generalisation.
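In symbols, the two objectives differ only in the inner maximisation (a standard formulation, not taken verbatim from the paper, with loss <math>\ell</math>, model <math>f_\theta</math>, and perturbation budget <math>\epsilon</math>):
* Standard training: <math>\min_\theta \, \mathbb{E}_{(x,y)}\left[\ell(f_\theta(x), y)\right]</math>
* Adversarial training: <math>\min_\theta \, \mathbb{E}_{(x,y)}\left[\max_{\|\delta\|_\infty \le \epsilon} \ell(f_\theta(x+\delta), y)\right]</math>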
+ | |||
+ | == Initializing Perturbations in Multiple Directions for Fast Adversarial Training == | ||
+ | * Conference: N/A | ||
+ | * URL: [https://arxiv.org/abs/2005.07606] | ||
+ | |||
== Towards Understanding Fast Adversarial Training ==
* Conference: N/A
* URL: [https://arxiv.org/abs/2006.03089]
== Overfitting in adversarially robust deep learning ==
* Conference: ICML 2020
* URL: [https://arxiv.org/abs/2002.11569]
== Certified Adversarial Robustness with Additive Noise ==
* Conference: NeurIPS 2019
* URL: [https://papers.nips.cc/paper/9143-certified-adversarial-robustness-with-additive-noise.pdf]
== Randomization matters: How to defend against strong adversarial attacks ==
* Conference: ICML 2020
* URL: [https://proceedings.icml.cc/static/paper_files/icml/2020/2479-Paper.pdf]