Adversarial/LitRev

Literature Reviews on selected adversarial papers!

Adversarial Training for Free!

  • Conference: NeurIPS 2019
  • URL: [1]
  • Propose "recycling" of the gradients for adversarial training.
  • Count each "replay" as one (non-true) epochs, therefore reducing time used.
    • The perturbation for retraining is updated in every replay.
  • Claims contribution on providing multiple adversarial attacks against each images.
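A minimal PyTorch sketch of this replay loop, assuming a standard classifier trained with cross-entropy; the function name, the replays parameter, and the FGSM-style step size are illustrative choices rather than verbatim details from the paper:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def free_adversarial_training(model, loader, optimizer, epsilon, replays):
    """One "free" pass over the data: each minibatch is replayed
    `replays` times, and the single backward pass per replay yields
    both the weight gradient (descent step) and the input gradient
    (ascent step on the perturbation)."""
    delta = None  # the perturbation is carried across minibatches
    for x, y in loader:
        if delta is None or delta.shape != x.shape:
            delta = torch.zeros_like(x)
        for _ in range(replays):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model(x + delta), y)
            optimizer.zero_grad()
            loss.backward()  # one backward pass serves both updates
            grad = delta.grad.detach()
            optimizer.step()  # weight update, "for free"
            # FGSM-style ascent on the perturbation, projected back onto
            # the L-infinity ball (valid-pixel clipping omitted for brevity)
            delta = (delta.detach() + epsilon * grad.sign()).clamp(-epsilon, epsilon)
    return model
</syntaxhighlight>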

Fast is better than free: Revisiting adversarial training

  • Conference: ICLR 2020
  • URL: https://arxiv.org/abs/2001.03994
  • FGSM adversarial training does work, given a simple random initialisation of the perturbation (a sketch follows this list):
    • <math>\delta = \mathrm{Uniform}(-\epsilon, \epsilon)</math>
    • <math>\delta = \delta + \alpha \cdot \mathrm{FGSM}(\mathrm{model}, x, y)</math> (then capped properly)
  • A step-size parameter <math>\alpha</math> is introduced; its ideal value is slightly larger than <math>\epsilon</math>.
  • Other techniques, such as early stopping, also contribute to better performance when applied to the training process.
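A PyTorch sketch of this random initialisation; the function name is illustrative, and valid-pixel clipping of <math>x + \delta</math> is omitted for brevity:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def fgsm_with_random_init(model, x, y, epsilon, alpha):
    """Craft an FGSM adversarial example starting from a random point
    in the epsilon-ball instead of from zero."""
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon)  # Uniform(-eps, eps)
    delta.requires_grad_(True)
    model.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    # one signed-gradient step of size alpha, then cap back to the ball
    delta = (delta + alpha * delta.grad.sign()).clamp(-epsilon, epsilon)
    return (x + delta).detach()
</syntaxhighlight>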

Adversarial Training Can Hurt Generalization

  • Conference: ICML 2019 Workshop
  • URL: https://arxiv.org/abs/1906.06032
  • Adversarial training, instead of minimising the empirical risk over the training points, finds the model that performs best under worst-case perturbations of those points (both objectives are written out after this list).
  • Adversarially robust models are more complex to learn, which risks a loss in standard generalisation.
  • The trade-off can be paid for with more training data; it can also be eliminated by sampling more training points, e.g. via robust self-training methods.
  • On simple models, adversarial training can aid generalisation.
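For concreteness, the contrast in the first bullet can be written out in standard notation (this formulation is conventional, not quoted from the paper). Ordinary training minimises the empirical risk,

<math>\min_\theta \, \mathbb{E}_{(x,y)} \left[ \ell(f_\theta(x), y) \right],</math>

whereas adversarial training minimises the worst-case risk over an <math>\epsilon</math>-ball around each input,

<math>\min_\theta \, \mathbb{E}_{(x,y)} \left[ \max_{\|\delta\|_\infty \le \epsilon} \ell(f_\theta(x + \delta), y) \right].</math>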

Initializing Perturbations in Multiple Directions for Fast Adversarial Training

  • Conference: N/A
  • URL: [4]

Towards Understanding Fast Adversarial Training

  • Conference: N/A
  • URL: [5]

Overfitting in adversarially robust deep learning

  • Conference: ICML 2020
  • URL: [6]

Certified Adversarial Robustness with Additive Noise

  • Conference: NeurIPS 2019
  • URL: [7]

Randomization matters: How to defend against strong adversarial attacks

  • Conference: ICML 2020
  • URL: [8]