Pattern control for networks of Ginzburg-Landau oscillators via Markov Decision Processes
2016 IEEE 55th Conference on Decision and Control (CDC), 2016
This paper proposes a design methodology for pattern control in a network of identical oscillators. Patterns correspond to stable equilibria of the oscillator network under different coupling coefficients and available network topologies. We show that the discrete, graph-based version of the Ginzburg-Landau equation, referred to as the graph Ginzburg-Landau dynamics, exhibits n pattern equilibria for an n-node cycle graph, with the sign of the oscillator coupling coefficient dictating the stability of each pattern. The pattern control problem is cast as a discrete Markov Decision Process (MDP) whose state space is the set of patterns realizable on subgraphs of the network. Actions in the MDP correspond to the selection of coupling coefficients and edge switches in the network. Transition sampling is applied to generate the transition probabilities. Dynamic programming can then be used to compute a stochastic policy that maximizes the expected total reward over an infinite horizon.
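The dynamic-programming step described in the abstract can be sketched with standard value iteration on a discounted MDP. The states, actions, transition tensor, and rewards below are made-up stand-ins for the paper's patterns, coupling/edge-switch actions, and sampled transition probabilities; this is a minimal illustration of the solution machinery, not the paper's implementation.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Value iteration for a finite discounted MDP.

    P: (A, S, S) tensor, P[a, s, s'] = transition probability
       (in the paper these would come from transition sampling).
    R: (A, S) expected immediate rewards.
    Returns the optimal values and a greedy (deterministic) policy.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        # Q[a, s] = R[a, s] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * P @ V
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy 2-state, 2-action example (hypothetical numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.4, 0.6]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
V, policy = value_iteration(P, R)
```

The infinite-horizon total-reward objective in the abstract is handled here with a discount factor; the paper's stochastic policy could be obtained by softening the greedy `argmax` step.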