David Stutz
Superpixels: An evaluation of the state-of-the-art
D Stutz, A Hermans, B Leibe
Computer Vision and Image Understanding 166, 1-27, 2018
Cited by 307 · 2018
Understanding convolutional neural networks
D Stutz
Seminar Report, Visual Computing Institute, RWTH Aachen University, 2014
Cited by 136* · 2014
Disentangling adversarial robustness and generalization
D Stutz, M Hein, B Schiele
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2019
Cited by 102 · 2019
Learning 3D shape completion from laser scan data with weak supervision
D Stutz, A Geiger
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018
Cited by 100 · 2018
Learning 3D Shape Completion Under Weak Supervision
D Stutz, A Geiger
International Journal of Computer Vision, 2018
Cited by 37 · 2018
Superpixel segmentation: An evaluation
D Stutz
German conference on pattern recognition, 555-562, 2015
Cited by 32 · 2015
Confidence-calibrated adversarial training: Generalizing to unseen attacks
D Stutz, M Hein, B Schiele
International Conference on Machine Learning, 9155-9166, 2020
Cited by 31 · 2020
Superpixel segmentation using depth information
D Stutz
RWTH Aachen University, Aachen, Germany, 2014
Cited by 23 · 2014
Adversarial training against location-optimized adversarial patches
S Rao, D Stutz, B Schiele
European Conference on Computer Vision, 429-448, 2020
Cited by 8 · 2020
Bit error robustness for energy-efficient DNN accelerators
D Stutz, N Chandramoorthy, M Hein, B Schiele
Proceedings of Machine Learning and Systems 3, 2021
Cited by 6 · 2021
Learning Shape Completion from Bounding Boxes with CAD Shape Priors
D Stutz
RWTH Aachen University, 2017
Cited by 6 · 2017
Introduction to Neural Networks
D Stutz
Seminar Report, Human Language Technology and Pattern Recognition Group …, 2014
Cited by 6 · 2014
Disentangling adversarial robustness and generalization
D Stutz, M Hein, B Schiele
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 6969-6980, 2019
Cited by 5 · 2019
Neural Codes for Image Retrieval
D Stutz
Seminar Report, Visual Computing Institute, RWTH Aachen University, 2015
Cited by 5 · 2015
Confidence-calibrated adversarial training and detection: More robust models generalizing beyond the attack used during training
D Stutz, M Hein, B Schiele
arXiv preprint arXiv:1910.06259, 2019
Cited by 4 · 2019
Confidence-Calibrated Adversarial Training: Towards Robust Models Generalizing Beyond the Attack Used During Training
D Stutz, M Hein, B Schiele
Cited by 1 · 2019
A Closer Look at the Adversarial Robustness of Information Bottleneck Models
I Korshunova, D Stutz, AA Alemi, O Wiles, S Gowal
arXiv preprint arXiv:2107.05712, 2021
2021
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators
D Stutz, N Chandramoorthy, M Hein, B Schiele
arXiv preprint arXiv:2104.08323, 2021
2021
Relating Adversarially Robust Generalization to Flat Minima
D Stutz, M Hein, B Schiele
arXiv preprint arXiv:2104.04448, 2021
2021
On Mitigating Random and Adversarial Bit Errors
D Stutz, N Chandramoorthy, M Hein, B Schiele
arXiv preprint arXiv:2006.13977, 2020
2020