Expected Motion in 2x2 Symmetric Games Played by Reinforcement Learners

Contributed by: Luis R. Izquierdo and Segismundo S. Izquierdo (April 2008)
Open content licensed under CC BY-NC-SA



Reinforcement learners tend to repeat actions that led to satisfactory outcomes in the past and to avoid actions that led to unsatisfactory ones. This behavior is one of the most widespread adaptation mechanisms in nature. This Demonstration shows the expected motion of a system in which two players using the Bush–Mosteller reinforcement learning algorithm play a 2x2 symmetric game. Mathematical analyses conducted by the contributors show that the expected motion displayed in the figure is especially relevant for characterizing the transient dynamics of the system, particularly for small learning rates; however, this expected motion can be misleading when studying the asymptotic behavior of the model. Further information is available at http://luis.izqui.org and http://segis.izqui.org.
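As a rough illustration of the mechanism described above, the following Python sketch implements a standard Bush–Mosteller update and the expected one-step motion it induces in a 2x2 symmetric game. This is not the Demonstration's notebook code: the payoff matrix (a Prisoner's Dilemma), the aspiration level, and the learning rate are illustrative assumptions chosen for the example.

```python
import itertools

# Illustrative assumptions (not taken from the Demonstration):
PAYOFFS = [[3.0, 0.0],   # row player's payoff: PAYOFFS[my_action][other_action]
           [4.0, 1.0]]   # a Prisoner's Dilemma (0 = cooperate, 1 = defect)
ASPIRATION = 2.0         # assumed aspiration level
LEARNING_RATE = 0.1      # assumed learning rate

# Largest possible |payoff - aspiration|, used to normalize the stimulus.
MAX_DEV = max(abs(x - ASPIRATION) for row in PAYOFFS for x in row)

def bm_update(p, action, payoff):
    """Bush-Mosteller update of p = Prob(action 0) after playing `action`."""
    stimulus = (payoff - ASPIRATION) / MAX_DEV        # normalized to [-1, 1]
    p_action = p if action == 0 else 1.0 - p          # prob. of the chosen action
    if stimulus >= 0:
        # satisfactory outcome: move toward repeating the chosen action
        p_action += LEARNING_RATE * stimulus * (1.0 - p_action)
    else:
        # unsatisfactory outcome: move away from the chosen action
        p_action += LEARNING_RATE * stimulus * p_action
    return p_action if action == 0 else 1.0 - p_action

def expected_motion(p1, p2):
    """Expected one-step change (E[dp1], E[dp2]) from state (p1, p2),
    where p_i = Prob(player i plays action 0)."""
    dp1 = dp2 = 0.0
    for a1, a2 in itertools.product((0, 1), repeat=2):
        prob = (p1 if a1 == 0 else 1 - p1) * (p2 if a2 == 0 else 1 - p2)
        dp1 += prob * (bm_update(p1, a1, PAYOFFS[a1][a2]) - p1)
        dp2 += prob * (bm_update(p2, a2, PAYOFFS[a2][a1]) - p2)  # symmetric game
    return dp1, dp2
```

Plotting `expected_motion` over the unit square gives the vector field shown in the figure; as the text notes, this field describes the transient drift well for small learning rates but need not match the long-run behavior of the stochastic process.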
