Core

The core module contains the main abstractions used throughout the library.

Chain

class markovpy.chain.Chain(data=None, **attr)[source]

Bases: object

A discrete-time Markov chain.

States are arbitrary hashable python objects. Transitions are stored as adjacency dictionaries with attributes, rather than as a matrix.

This class only stores the chain structure; algorithms are implemented externally.
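
A minimal construction sketch (illustrative only), assuming Chain is importable from markovpy.chain as in the class path above; the state labels are arbitrary hashables chosen for the example:

from markovpy.chain import Chain

chain = Chain()
chain.add_state("sunny")
chain.add_state(("rainy", 1))  # any hashable object can be a state
chain.add_transition("sunny", ("rainy", 1), p=0.3)
print(chain.states)            # list of the states added so far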

add_state(s, **attr)[source]

Add a state to the chain

Parameters:
  • s – State to add

  • attr – Any other arguments

add_states_from(states, **attr)[source]

Add multiple states to the chain

Parameters:
  • states – Iterable of states to add

  • attr – Any other arguments

add_transition(u, v, p=None, **attr)[source]

Add a transition u -> v to the chain

Parameters:
  • u – Transition origin state

  • v – Transition target state

  • p – Probability (optional)

  • attr – Any other arguments
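
A hedged sketch of adding weighted transitions; the label keyword below is an arbitrary illustrative attribute, not part of the documented API:

from markovpy.chain import Chain

chain = Chain()
chain.add_states_from(["a", "b"])
chain.add_transition("a", "b", p=0.5, label="a to b")  # extra attributes are stored on the transition
chain.add_transition("a", "a", p=0.5)                  # a self-loop from "a" to itself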

property states

List of states

transitions(u=None)[source]

Returns transitions

If u is None:

iterate over (u, v, attr)

Else:

iterate over (u, v, attr) for fixed u

Parameters:

u – State to return transitions of
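
An iteration sketch; that the probability appears in attr under the key "p" is an assumption based on the weight="p" convention used by out_degree below:

from markovpy.chain import Chain

chain = Chain()
chain.add_states_from(["a", "b", "c"])
chain.add_transition("a", "b", p=0.7)
chain.add_transition("a", "c", p=0.3)

for u, v, attr in chain.transitions():     # every transition in the chain
    print(u, v, attr)

for u, v, attr in chain.transitions("a"):  # only transitions leaving "a"
    print(v, attr.get("p"))                # assumes p is stored under the "p" key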

add_transitions_from(data)[source]

Add transitions from an iterable.

Each element may be one of:
  • (u, v)

  • (u, v, p)

  • (u, v, attr)

  • (u, v, p, attr)

Parameters:

data – Iterable of transitions to add
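
A sketch of the four accepted element forms; the label attribute is illustrative only:

from markovpy.chain import Chain

chain = Chain()
chain.add_states_from(["a", "b", "c"])
chain.add_transitions_from([
    ("a", "b"),                           # (u, v): no probability given
    ("b", "c", 0.5),                      # (u, v, p)
    ("b", "a", {"label": "back"}),        # (u, v, attr)
    ("c", "a", 1.0, {"label": "reset"}),  # (u, v, p, attr)
])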

successors(u)[source]

Returns all v where u can transition to v

Parameters:

u – Origin state

Returns:

List of states that u can transition to

predecessors(v)[source]

Returns all u where u can transition to v

Parameters:

v – Target state whose incoming transitions to find
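
A sketch of neighbourhood queries on a small chain; the ordering of the returned states is not specified here:

from markovpy.chain import Chain

chain = Chain()
chain.add_states_from(["a", "b", "c"])
chain.add_transitions_from([("a", "b", 0.5), ("a", "c", 0.5), ("b", "c", 1.0)])

print(chain.successors("a"))    # states reachable from "a" in one step, e.g. ["b", "c"]
print(chain.predecessors("c"))  # states with a transition into "c", e.g. ["a", "b"]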

has_state(s)[source]

Returns True if s is a state of the chain

Parameters:

s – Possible state to check for

Returns:

True if the state exists in the chain

has_transition(u, v)[source]

Returns True if there is a transition from u to v

Parameters:
  • u – Origin of the transition to find

  • v – Target of the transition to find

Returns:

True if the transition exists
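
A sketch of membership checks:

from markovpy.chain import Chain

chain = Chain()
chain.add_states_from(["a", "b"])
chain.add_transition("a", "b", p=1.0)

chain.has_state("a")            # True
chain.has_state("z")            # False
chain.has_transition("a", "b")  # True
chain.has_transition("b", "a")  # False: transitions are directed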

transition_mass(u, v)[source]

Return the transition probability from state u to state v

If there is no outgoing edge from u to v, returns 0

Parameters:
  • u – Origin of the transition

  • v – Target of the transition

Returns:

Transition probability from u to v, or 0 if no transition exists
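
A sketch of reading probabilities back out of the chain:

from markovpy.chain import Chain

chain = Chain()
chain.add_states_from(["a", "b"])
chain.add_transition("a", "b", p=0.25)

chain.transition_mass("a", "b")  # 0.25
chain.transition_mass("b", "a")  # 0, since there is no b -> a transition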

out_degree(u, weight=None)[source]

Returns the number of outgoing edges if weight is None; otherwise returns the sum of the weights of the outgoing edges

Parameters:
  • u – State whose outgoing edges to count

  • weight – "p" returns the sum of the edge weights

Returns:

Count of outgoing edges, or the sum of their weights

in_degree(v, weight=None)[source]

Returns the number of entering edges if weight is None; otherwise returns the sum of the weights of the entering edges

Parameters:
  • v – Target state whose entering edges to count

  • weight – "p" returns the sum of the edge weights

Returns:

Count of entering edges, or the sum of their weights
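
A degree-query sketch; passing weight="p" switches from counting transitions to summing their probabilities:

from markovpy.chain import Chain

chain = Chain()
chain.add_states_from(["a", "b", "c"])
chain.add_transitions_from([("a", "b", 0.6), ("a", "c", 0.4)])

chain.out_degree("a")              # 2: number of outgoing transitions
chain.out_degree("a", weight="p")  # 1.0: sum of outgoing probabilities
chain.in_degree("c")               # 1: number of incoming transitions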

is_stochastic(tol=1e-12)[source]

Returns True if the chain is stochastic, i.e. each state's outgoing probabilities sum to 1 within tol

Parameters:

tol – Optional tolerance

Returns:

True if the chain is stochastic

normalise()[source]

Normalises the outgoing weights of each state so that they sum to 1
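
A sketch of repairing a chain whose weights do not sum to 1, assuming normalise rescales each state's outgoing weights as described above:

from markovpy.chain import Chain

chain = Chain()
chain.add_states_from(["a", "b"])
chain.add_transitions_from([("a", "a", 2.0), ("a", "b", 1.0), ("b", "b", 1.0)])

chain.is_stochastic()  # False: the weights out of "a" sum to 3.0
chain.normalise()      # rescale each state's outgoing weights to sum to 1
chain.is_stochastic()  # True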

classmethod from_adjacency_matrix(matrix, states=None, normalise=True, validate=True)[source]

Constructs a Markov chain from an adjacency matrix

The adjacency matrix is interpreted row-wise

Parameters:
  • matrix – Sequence of Sequences of non-negative numbers

  • states – Optional state labels

  • normalise – Optional; if True, each row of the matrix is normalised to sum to 1

  • validate – Optional; if True, the matrix is validated for correctness

Returns:

Chain
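
A construction sketch from a row-stochastic matrix, using explicit state labels:

from markovpy.chain import Chain

matrix = [
    [0.9, 0.1],
    [0.5, 0.5],
]
chain = Chain.from_adjacency_matrix(matrix, states=["sunny", "rainy"])

chain.transition_mass("sunny", "rainy")  # 0.1, read row-wise from the matrix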

to_adjacency_matrix(states=None, dense=True)[source]

Converts the chain to an adjacency matrix

Parameters:
  • states

  • dense

Returns:

Adjacency matrix of the chain

classmethod merge(chain1, chain2, merge_type='add', normalise=True)[source]

Merges two chains into a single chain

stationary_distribution(method='auto', tol=1e-12, max_iter=10000)[source]

Computes the stationary distribution of the chain
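
A hedged sketch of computing the stationary distribution; the return type is not documented here, so the interpretation in the comments is an assumption:

from markovpy.chain import Chain

chain = Chain.from_adjacency_matrix(
    [[0.9, 0.1],
     [0.5, 0.5]],
    states=["sunny", "rainy"],
)
pi = chain.stationary_distribution()
# For this chain the stationary distribution puts probability 5/6 on "sunny"
# and 1/6 on "rainy"; how pi exposes those values depends on the return type.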

Submodules