Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks Y Tsuzuku, I Sato, M Sugiyama Advances in Neural Information Processing Systems 31 (NeurIPS 2018), 2018 | 306 | 2018 |
Variance-based Gradient Compression for Efficient Distributed Deep Learning Y Tsuzuku, H Imachi, T Akiba International Conference on Learning Representations, Workshop Track (ICLR WS 2018), 2018 | 72 | 2018 |
Normalized Flat Minima: Exploring Scale Invariant Definition of Flat Minima for Neural Networks Using PAC-Bayesian Analysis Y Tsuzuku, I Sato, M Sugiyama International Conference on Machine Learning (ICML 2020), 2019 | 65 | 2019 |
On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions Y Tsuzuku, I Sato IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019 | 57 | 2018 |
Gradient compressing apparatus, gradient compressing method, and non-transitory computer readable medium Y Tsuzuku, H Imachi, T Akiba US Patent App. 16/171,340, 2019 | 4 | 2019 |