Guannan Qu
Title · Cited by · Year
Harnessing smoothness to accelerate distributed optimization
G Qu, N Li
IEEE Transactions on Control of Network Systems 5 (3), 1245-1260, 2017
666 · 2017
Accelerated distributed Nesterov gradient descent
G Qu, N Li
IEEE Transactions on Automatic Control 65 (6), 2566-2581, 2019
269* · 2019
Reinforcement learning for selective key applications in power systems: Recent advances and future challenges
X Chen, G Qu, Y Tang, S Low, N Li
IEEE Transactions on Smart Grid 13 (4), 2935-2958, 2022
204* · 2022
Optimal scheduling of battery charging station serving electric vehicles based on battery swapping
X Tan, G Qu, B Sun, N Li, DHK Tsang
IEEE Transactions on Smart Grid 10 (2), 1372-1384, 2017
151 · 2017
On the exponential stability of primal-dual gradient dynamics
G Qu, N Li
IEEE Control Systems Letters 3 (1), 43-48, 2018
140 · 2018
Real-time decentralized voltage control in distribution networks
N Li, G Qu, M Dahleh
2014 52nd Annual Allerton Conference on Communication, Control, and …, 2014
136 · 2014
Optimal distributed feedback voltage control under limited reactive power
G Qu, N Li
IEEE Transactions on Power Systems 35 (1), 315-331, 2019
130 · 2019
Online optimization with predictions and switching costs: Fast algorithms and the fundamental limit
Y Li, G Qu, N Li
IEEE Transactions on Automatic Control 66 (10), 4761-4768, 2020
111* · 2020
A random forest method for real-time price forecasting in New York electricity market
J Mei, D He, R Harley, T Habetler, G Qu
2014 IEEE PES General Meeting | Conference & Exposition, 1-5, 2014
111 · 2014
Finite-Time Analysis of Asynchronous Stochastic Approximation and Q-Learning
G Qu, A Wierman
Conference on Learning Theory, 3185-3205, 2020
106 · 2020
Scalable reinforcement learning for multiagent networked systems
G Qu, A Wierman, N Li
Operations Research 70 (6), 3601-3628, 2022
99* · 2022
Distributed greedy algorithm for multi-agent task assignment problem with submodular utility functions
G Qu, D Brown, N Li
Automatica 105, 206-215, 2019
76* · 2019
Distributed optimal voltage control with asynchronous and delayed communication
S Magnússon, G Qu, N Li
IEEE Transactions on Smart Grid 11 (4), 3469-3482, 2020
67 · 2020
Learning optimal power flow: Worst-case guarantees for neural networks
A Venzke, G Qu, S Low, S Chatzivasileiadis
2020 IEEE International Conference on Communications, Control, and Computing …, 2020
65 · 2020
Scalable multi-agent reinforcement learning for networked systems with average reward
G Qu, Y Lin, A Wierman, N Li
Advances in Neural Information Processing Systems 33, 2074-2086, 2020
58 · 2020
Multi-agent reinforcement learning in stochastic networked systems
Y Lin, G Qu, L Huang, A Wierman
Advances in neural information processing systems 34, 7825-7837, 2021
52* · 2021
Voltage control using limited communication
S Magnússon, G Qu, C Fischione, N Li
IEEE Transactions on Control of Network Systems 6 (3), 993-1003, 2019
34 · 2019
Perturbation-based regret analysis of predictive control in linear time varying systems
Y Lin, Y Hu, G Shi, H Sun, G Qu, A Wierman
Advances in Neural Information Processing Systems 34, 5174-5185, 2021
31 · 2021
Combining model-based and model-free methods for nonlinear control: A provably convergent policy gradient approach
G Qu, C Yu, S Low, A Wierman
arXiv preprint arXiv:2006.07476, 2020
31* · 2020
Stability constrained reinforcement learning for real-time voltage control
Y Shi, G Qu, S Low, A Anandkumar, A Wierman
2022 American Control Conference (ACC), 2715-2721, 2022
30 · 2022
Articles 1–20