Jonathan Lacotte
Verified email at stanford.edu - Homepage
Title · Cited by · Year
The hidden convex optimization landscape of regularized two-layer ReLU networks: an exact characterization of optimal solutions
Y Wang, J Lacotte, M Pilanci
International Conference on Learning Representations, 2021
Cited by: 51* (2021)
A risk-sensitive finite-time reachability approach for safety of stochastic dynamic systems
MP Chapman, J Lacotte, A Tamar, D Lee, KM Smith, V Cheng, JF Fisac, ...
2019 American Control Conference (ACC), 2958-2963, 2019
Cited by: 47 (2019)
Risk-Sensitive Generative Adversarial Imitation Learning
J Lacotte, M Ghavamzadeh, Y Chow, M Pavone
The 22nd International Conference on Artificial Intelligence and Statistics …, 2018
Cited by: 32 (2018)
Optimal Randomized First-Order Methods for Least-Squares Problems
J Lacotte, M Pilanci
Proceedings of the 37th International Conference on Machine Learning (ICML), 2020
Cited by: 30 (2020)
Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization
J Lacotte, M Pilanci
Advances in Neural Information Processing Systems 33, 2020
Cited by: 27 (2020)
Optimal Iterative Sketching Methods with the Subsampled Randomized Hadamard Transform
J Lacotte, S Liu, E Dobriban, M Pilanci
Advances in Neural Information Processing Systems 33, 2020
Cited by: 26* (2020)
Risk-sensitive inverse reinforcement learning via semi- and non-parametric methods
S Singh, J Lacotte, A Majumdar, M Pavone
The International Journal of Robotics Research 37 (13-14), 1713-1740, 2018
Cited by: 26 (2018)
Faster least squares optimization
J Lacotte, M Pilanci
arXiv preprint arXiv:1911.02675, 2019
Cited by: 24 (2019)
Adaptive and oblivious randomized subspace methods for high-dimensional optimization: Sharp analysis and lower bounds
J Lacotte, M Pilanci
IEEE Transactions on Information Theory 68 (5), 3281-3303, 2022
Cited by: 22* (2022)
Newton-LESS: Sparsification without trade-offs for the sketched Newton update
M Derezinski, J Lacotte, M Pilanci, MW Mahoney
Advances in Neural Information Processing Systems 34, 2835-2847, 2021
Cited by: 22 (2021)
Adaptive Newton sketch: Linear-time optimization with quadratic convergence and effective Hessian dimensionality
J Lacotte, Y Wang, M Pilanci
International Conference on Machine Learning, 5926-5936, 2021
Cited by: 14 (2021)
Globally optimal training of neural networks with threshold activation functions
T Ergen, HI Gulluk, J Lacotte, M Pilanci
arXiv preprint arXiv:2303.03382, 2023
Cited by: 9 (2023)
Fast convex quadratic optimization solvers with adaptive sketching-based preconditioners
J Lacotte, M Pilanci
arXiv preprint arXiv:2104.14101, 2021
Cited by: 6 (2021)
Randomized Methods in Optimization: Fast Algorithms and Fundamental Limits
JP Lacotte
Stanford University, 2021
2021
Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update
J Lacotte, M Pilanci, MW Mahoney