Literature Reviews on selected adversarial papers!

Adversarial Training for Free!

  • Conference: NeurIPS 2019
  • URL: [1]
  • Propose "recycling" of the gradients for adversarial training.
  • Count each "replay" as one (non-true) epochs, therefore reducing time used.
    • The perturbation for retraining is updated in every replay.
  • Claims contribution on providing multiple adversarial attacks against each images.
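
The gradient-recycling loop can be sketched roughly as below. This is a minimal PyTorch sketch assuming a standard image classifier, data loader, and SGD-style optimizer; the function name and the hyperparameters epsilon, m_replays, and passes are illustrative, not the paper's exact interface.

    import torch
    import torch.nn.functional as F

    def free_adversarial_training(model, loader, optimizer,
                                  epsilon=8/255, m_replays=4, passes=5):
        # Each minibatch is replayed m_replays times; the single backward pass
        # per replay is "recycled" to update both the model weights and the
        # perturbation delta.
        device = next(model.parameters()).device
        for _ in range(passes):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                # Perturbation for this minibatch (the paper retains delta
                # across minibatches; resetting here keeps the sketch shape-safe).
                delta = torch.zeros_like(x)
                for _ in range(m_replays):  # each replay counts as one (non-true) epoch
                    delta.requires_grad_(True)
                    # Inputs assumed to live in [0, 1].
                    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
                    optimizer.zero_grad()
                    loss.backward()  # one backward pass, reused twice below
                    grad = delta.grad.detach()
                    # FGSM-style step on the perturbation with the recycled gradient.
                    delta = (delta.detach() + epsilon * grad.sign()).clamp(-epsilon, epsilon)
                    optimizer.step()  # model update with the same gradients
        return model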

Fast is better than free: Revisiting adversarial training

  • Conference: ICLR 2020
  • URL: [2]

Adversarial Training Can Hurt Generalization

  • Conference: ICML 2019 Workshop
  • URL: [3]

Initializing Perturbations in Multiple Directions for Fast Adversarial Training

  • Conference: N/A
  • URL: [4]

Towards Understanding Fast Adversarial Training

  • Conference: N/A
  • URL: [5]

Overfitting in adversarially robust deep learning

  • Conference: ICML 2020
  • URL: [6]

Certified Adversarial Robustness with Additive Noise

  • Conference: NeurIPS 2019
  • URL: [7]

Randomization matters: How to defend against strong adversarial attacks

  • Conference: ICML 2020
  • URL: [8]