As part of the colloquium of the Graduiertenkolleg Algorithmic Optimization, the following talk will take place on
Monday, 11 December 2023,
at 4:00 p.m. (c.t.):
A regularized variance-reduced modified extragradient method for stochastic hierarchical games
Prof. Dr. Mathias Staudigl, University of Mannheim
The theory of learning in games has so far focused mainly on games with simultaneous moves. Recently, researchers in machine learning have started investigating learning dynamics in games involving hierarchical decision-making. We consider an N-player hierarchical game in which the i-th player's objective comprises an expectation-valued term, parametrized by rival decisions, and a hierarchical term. Such a framework captures a broad range of stochastic hierarchical optimization problems, Stackelberg equilibrium problems, and leader-follower games. We develop an iteratively regularized and smoothed variance-reduced modified extragradient framework for learning hierarchical equilibria in a stochastic setting. We equip our analysis with rate statements, complexity guarantees, and almost-sure convergence claims. We then extend these statements to settings where the lower-level problem is solved inexactly, and provide the corresponding rate and complexity statements.
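To give a flavor of the building blocks mentioned in the abstract, the following is a minimal, purely illustrative sketch of a single regularized extragradient step applied to a simple monotone game. All names, step sizes, and the bilinear example are assumptions for illustration; they do not reproduce the talk's stochastic, variance-reduced, or hierarchical components.

```python
import numpy as np

def extragradient_step(x, F, step, reg):
    """One extragradient step on the Tikhonov-regularized operator
    F_reg(z) = F(z) + reg * z  (reg > 0 is the regularization weight;
    values here are illustrative assumptions)."""
    def F_reg(z):
        return F(z) + reg * z
    # Extrapolation (leading) half-step
    y = x - step * F_reg(x)
    # Update step: re-evaluate the operator at the extrapolated point
    return x - step * F_reg(y)

# Toy example: bilinear saddle point min_u max_v u*v with x = (u, v).
# Its game operator F(u, v) = (v, -u) is monotone (a rotation field),
# so plain gradient play cycles, while extragradient converges.
def F(z):
    u, v = z
    return np.array([v, -u])

x = np.array([1.0, 1.0])
for _ in range(200):
    x = extragradient_step(x, F, step=0.1, reg=0.01)
# The iterates spiral inward toward the unique equilibrium (0, 0).
```

The extrapolation step is what distinguishes the extragradient scheme from ordinary gradient play: evaluating the operator at the look-ahead point y damps the rotation that otherwise prevents convergence in monotone games.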