Exploiting higher order smoothness in derivative-free optimization and continuous bandits. A. Akhavan, M. Pontil, A. Tsybakov. Advances in Neural Information Processing Systems 33, 9017-9027, 2020.

A gradient estimator via l1-randomization for online zero-order optimization with two point feedback. A. Akhavan, E. Chzhen, M. Pontil, A. Tsybakov. Advances in Neural Information Processing Systems 35, 7685-7696, 2022.

Distributed zero-order optimization under adversarial noise. A. Akhavan, M. Pontil, A. Tsybakov. Advances in Neural Information Processing Systems 34, 10209-10220, 2021.

Group meritocratic fairness in linear contextual bandits. R. Grazzi, A. Akhavan, J. I. F. Falk, L. Cella, M. Pontil. Advances in Neural Information Processing Systems 35, 24392-24404, 2022.

Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm. A. Akhavan, E. Chzhen, M. Pontil, A. B. Tsybakov. arXiv preprint arXiv:2306.02159, 2023.

Estimating the minimizer and the minimum value of a regression function under passive design. A. Akhavan, D. Gogolashvili, A. B. Tsybakov. Journal of Machine Learning Research 25 (11), 1-37, 2024.

Re-thinking high-dimensional mathematical statistics. F. Bunea, R. Nowak, A. B. Tsybakov. Oberwolfach Reports 19 (2), 1377-1430, 2023.

Derivative-free stochastic optimization, online learning and fairness. A. Akhavanfoomani. PhD thesis, Institut polytechnique de Paris, 2023.