A Two-State, Discrete-Time Markov Chain


Consider a system that is always in one of two states, 1 or 2. Every time a clock ticks, the system updates itself according to a 2×2 matrix of transition probabilities, whose (i, j) entry gives the probability that the system moves from state i to state j at a clock tick. A two-state Markov chain is such a system, in which the next state depends only on the current state and not on previous states. Powers of the transition matrix approach a matrix with constant columns as the power increases; the number to which the entries of the i-th column converge is the asymptotic fraction of time the system spends in state i.
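
As a concrete illustration, here is a minimal Wolfram Language sketch, using an assumed transition matrix p (not the matrix set in the Demonstration itself):

    p = {{0.8, 0.2}, {0.3, 0.7}};     (* p[[i, j]] = probability of moving from state i to state j *)
    MatrixPower[p, 50] // MatrixForm  (* a large power: its columns are nearly constant *)

For this particular p, the entries of the first column approach 0.6 and those of the second approach 0.4, so in the long run the system spends about 60% of its time in state 1 and 40% in state 2.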


The colored circle marks the state of the system at the time step selected by the time slider. Transition probabilities are shown on the diagram at the left and can be changed with the "new transition matrix" slider. State 1 is colored yellow for "sunny" and state 2 gray for "not sunny," in deference to the classic weather example of a two-state Markov chain. The histogram shows the number of visits to each state over the number of time steps set by the time slider. Powers of the transition matrix are shown at the bottom.
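
The visit counts shown in the histogram can also be approximated by simulating the chain directly; a rough sketch, again with the assumed matrix p from above:

    path = RandomFunction[DiscreteMarkovProcess[1, p], {0, 200}]["Values"];  (* states at time steps 0 through 200, starting in state 1 *)
    Histogram[path, {1}]            (* number of visits to each state *)
    Counts[path]/N[Length[path]]    (* observed fraction of time in each state *)

For long runs the observed fractions settle near the column limits of the matrix powers (here about 0.6 and 0.4).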


Contributed by: Chris Boucher (March 2011)
Open content licensed under CC BY-NC-SA

