Finite-State, Discrete-Time Markov Chains

Consider a system that is always in one of n states, numbered 1 through n. Every time a clock ticks, the system updates itself according to an n×n matrix of transition probabilities, the (i, j) entry of which gives the probability that the system moves from state i to state j at any clock tick. A Markov chain is a system like this, in which the next state depends only on the current state and not on previous states. Powers of the transition matrix approach a matrix with constant columns as the power increases; the number to which the entries in the jth column converge is the asymptotic fraction of time the system spends in state j.
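
As a minimal sketch, the convergence of the matrix powers can be checked directly in the Wolfram Language; the 3-state matrix below is an arbitrary illustration, not the one used in the Demonstration:

    (* an arbitrary 3-state transition matrix; each row sums to 1 *)
    P = {{0.5, 0.3, 0.2},
         {0.1, 0.6, 0.3},
         {0.4, 0.2, 0.4}};

    (* a high power has (nearly) constant columns; the common value in
       column j is the long-run fraction of time spent in state j *)
    MatrixPower[P, 50] // MatrixForm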

The image on the upper left shows the states of the chain, with the current state, as determined by the time slider, colored red. The histogram tracks the number of visits to each state over the number of time steps set by the time slider. The transition probabilities can be changed using the "new transition matrix" control. For small chains, powers of the transition matrix are shown at the bottom.
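
The histogram's behavior can be mimicked with a short simulation sketch (again using an arbitrary example matrix): each step draws the next state from the row of the transition matrix belonging to the current state, and the empirical visit fractions approach the constant column values of the high matrix powers.

    P = {{0.5, 0.3, 0.2}, {0.1, 0.6, 0.3}, {0.4, 0.2, 0.4}};
    steps = 10000;
    state = 1;                               (* start in state 1 *)
    visits = ConstantArray[0, Length[P]];    (* visit counts per state *)
    Do[
      (* pick the next state using the probabilities in the current row *)
      state = RandomChoice[P[[state]] -> Range[Length[P]]];
      visits[[state]]++,
      {steps}];
    N[visits/steps]   (* empirical visit fractions; compare with a row of MatrixPower[P, 50] *)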

Contributed by: Chris Boucher (March 2011)
Open content licensed under CC BY-NC-SA

