Poster

State Dependent Performative Prediction with Stochastic Approximation

Qiang Li · Hoi-To Wai


Abstract:

This paper studies the performative prediction problem, in which a stochastic loss function is optimized over a data distribution that depends on the decision variable. We consider a setting where agents provide samples adapted to both the learner's and the agents' previous states. The learner then uses these samples to update its state so as to optimize a loss function. These closed-loop update dynamics are studied as a state-dependent stochastic approximation (SA) algorithm, which is shown to find a fixed point known as the performative stable solution. Our setting captures the unforgetful nature of agents and their reliance on past experiences. Our contributions are threefold. First, we present a framework for state-dependent performative prediction with biased stochastic gradients driven by a controlled Markov chain whose transition probability depends on the learner's state. Second, we present a new finite-time performance analysis of the SA algorithm, showing that the expected squared distance to the performative stable solution decreases as O(1/k), where k is the iteration number. Third, numerical experiments verify our findings.
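To make the closed-loop dynamics concrete, here is a minimal Python sketch of a state-dependent SA loop. The quadratic loss, the AR(1)-style controlled Markov kernel sample_next_state, and all constants are illustrative assumptions, not the paper's actual model or experiments.

```python
import numpy as np

# A minimal sketch of state-dependent stochastic approximation (SA) for
# performative prediction. The loss, the Markov transition kernel, and all
# constants below are illustrative assumptions, not the paper's setup.

rng = np.random.default_rng(0)

def sample_next_state(z, theta, mixing=0.5):
    # Controlled Markov chain: the agent's next sample depends on both its
    # previous state z and the learner's current decision theta.
    # Here: an AR(1)-style kernel drifting toward a theta-dependent mean.
    target_mean = 2.0 + 0.5 * theta          # distribution shifts with theta
    return mixing * z + (1 - mixing) * target_mean + rng.normal(scale=0.1)

def grad_loss(theta, z):
    # Stochastic gradient of a toy quadratic loss l(theta; z) = 0.5*(theta - z)^2.
    return theta - z

theta, z = 0.0, 0.0
for k in range(1, 5001):
    gamma = 1.0 / (k + 10)                       # O(1/k) diminishing step size
    z = sample_next_state(z, theta)              # agent responds to (theta, z)
    theta = theta - gamma * grad_loss(theta, z)  # learner's SA update

print(f"theta after {k} iterations: {theta:.3f}")
```

Under this toy kernel, the stationary mean of the agent's state for a fixed decision theta is 2 + 0.5 * theta, so the performative stable point solves theta = 2 + 0.5 * theta, i.e. theta = 4; the iterate converges toward it under the diminishing step size.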
