# Adversarial Scheduling Models for Game Theoretic Dynamics

Monday, November 3, 2003 - 3:40pm - 4:15pm

Keller 3-180

Gabriel Istrate (Los Alamos National Laboratory)

Game-theoretic equilibria are steady-state properties: given that all players' actions correspond to an equilibrium point, it would be irrational for any one of them to deviate from this behavior, provided the others stick to their strategies. A major weakness of this type of concept is that it fails to predict how players arrive at such an equilibrium in the first place, or how they choose among equilibria when several exist. One way to justify the emergence of such equilibria is provided by the theory of learning in games, which regards them as the result of an evolutionary learning process. Such models assume one (or several) populations of agents that interact by playing a certain game and update their behavior based on the outcome of this interaction.

In order for evolutionary results of this sort to offer convincing insights on equilibrium selection in real-life situations, they have to display robustness with respect to the various idealizations inherent in the mathematical model. One such idealization is random scheduling: agents that are given the chance to update are chosen according to a scheme that involves random choice. However, real social interaction is not random, and it is not clear whether the randomness assumption is essential for the validity of these results.
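The contrast between random and adversarial scheduling can be made concrete with a toy sketch (not from the talk itself; the majority-coordination game and scheduler policy here are illustrative assumptions). Agents repeatedly best-respond in a coordination game; a random scheduler picks the updating agent uniformly, while an adversarial scheduler, whenever possible, picks an agent whose strategy is already a best response, so that the update changes nothing and convergence stalls.

```python
import random

def best_response(strategies, i):
    # Best response in a majority coordination game:
    # match the strategy played by most of the other agents.
    others = strategies[:i] + strategies[i + 1:]
    return int(sum(others) * 2 > len(others))

def run(n=11, steps=500, adversarial=False, seed=0):
    rng = random.Random(seed)
    s = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        if adversarial:
            # The adversary schedules an agent already playing its best
            # response whenever one exists, so the state never changes.
            idle = [i for i in range(n) if s[i] == best_response(s, i)]
            i = idle[0] if idle else rng.randrange(n)
        else:
            i = rng.randrange(n)  # uniform random scheduling
        s[i] = best_response(s, i)
    return s
```

Under random scheduling every agent is eventually selected while out of equilibrium, so the population drifts to consensus; the adversarial scheduler can freeze a non-consensus profile indefinitely. Results proved only for the random scheduler therefore say nothing, by themselves, about such adversarial orderings.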

In this talk (based on results obtained in collaboration with M.V. Marathe and S.S. Ravi) we explicitly advocate a reexamination of the conclusions of the theory of learning in games under adversarial scheduling models, and present a couple of examples from the game-theoretic literature (e.g. Peyton Young's stochastically stable equilibria and the colearning model due to Shoham and Tennenholtz) showing that such an analysis is both feasible and interesting.
