aeppl.dists.DiscreteMarkovChainFactory

class aeppl.dists.DiscreteMarkovChainFactory(inputs, outputs, inline=False, lop_overrides='default', grad_overrides='default', rop_overrides='default', connection_pattern=None, name=None, **kwargs)[source]

An Op constructed from an Aesara graph that represents a discrete Markov chain.

This “composite” Op allows us to mark a sub-graph as measurable and assign a _logprob dispatch implementation.
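A hedged sketch of that dispatch pattern is shown below. The ToyMeasurableOp class and its density are made up purely for illustration; only the _logprob.register idiom follows AePPL's source layout, and the exact import path is an assumption rather than part of this class's documented API:

    import aesara.tensor as at
    from aesara.graph.basic import Apply
    from aesara.graph.op import Op

    from aeppl.logprob import _logprob  # assumed location of the dispatch function


    class ToyMeasurableOp(Op):
        """Placeholder identity Op standing in for a factory-built measurable Op."""

        def make_node(self, x):
            x = at.as_tensor_variable(x)
            return Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage):
            (x,) = inputs
            output_storage[0][0] = x


    @_logprob.register(ToyMeasurableOp)
    def toy_logprob(op, values, *inputs, **kwargs):
        # AePPL calls the registered function with the value variables and the
        # Op's inputs, and expects back a graph for the log-density of ``values``.
        (value,) = values
        (loc,) = inputs
        return -0.5 * (value - loc) ** 2  # illustrative density only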

As far as broadcasting is concerned, this Op has the following RandomVariable-like properties:

ndim_supp = 1
ndims_params = (3, 1)

TODO: It would be nice to express this as a Blockwise Op.
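Concretely, ndims_params = (3, 1) says the first parameter carries three core dimensions (per-step transition matrices of shape steps × states × states) and the second carries one (the initial-state probabilities), while ndim_supp = 1 says the support is a 1-D sequence of state indices. The following is a self-contained, hedged sketch of the log-density such a chain encodes, written directly in Aesara; it is illustrative only and does not call this Op's actual _logprob implementation, and the assumption that the leading core dimension of the transition array indexes steps is ours:

    import aesara
    import aesara.tensor as at
    import numpy as np

    # Core parameter shapes matching ndims_params = (3, 1):
    # Gammas: (steps, n_states, n_states) transition matrices, one per step;
    # gamma_0: (n_states,) initial-state probabilities.
    Gammas = at.tensor3("Gammas")
    gamma_0 = at.vector("gamma_0")
    # Core support shape matching ndim_supp = 1: a 1-D sequence of state indices.
    states = at.lvector("states")

    # log p(s_0) + sum_t log P(s_t | s_{t-1}) under the step-t transition matrix.
    init_logp = at.log(gamma_0[states[0]])
    trans_logp = at.log(
        Gammas[at.arange(states.shape[0] - 1), states[:-1], states[1:]]
    ).sum()
    chain_logp = init_logp + trans_logp

    logp_fn = aesara.function([Gammas, gamma_0, states], chain_logp)

    # Two-state example: stay in the current state with probability 0.9.
    G = np.tile(np.array([[0.9, 0.1], [0.1, 0.9]]), (3, 1, 1))
    print(logp_fn(G, np.array([0.5, 0.5]), np.array([0, 0, 1, 1])))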

Parameters

Methods

DiscreteMarkovChainFactory.L_op(inputs, ...)

Construct a graph for the L-operator.

DiscreteMarkovChainFactory.R_op(inputs, ...)

Construct a graph for the R-operator.

DiscreteMarkovChainFactory.__init__(inputs, ...)

inputs : List[Variable]

DiscreteMarkovChainFactory.add_tag_trace([...])

Add tag.trace to a node or variable.

DiscreteMarkovChainFactory.clone()

Clone the Op and its inner-graph.

DiscreteMarkovChainFactory.connection_pattern(node)

Return the connection pattern of the subgraph defined by inputs and outputs.

DiscreteMarkovChainFactory.do_constant_folding(...)

Determine whether or not constant folding should be performed for the given node.

DiscreteMarkovChainFactory.get_lop_op()

DiscreteMarkovChainFactory.get_params(node)

Try to get parameters for the Op when Op.params_type is set to a ParamsType.

DiscreteMarkovChainFactory.get_rop_op()

DiscreteMarkovChainFactory.grad(inputs, ...)

Construct a graph for the gradient with respect to each input variable.

DiscreteMarkovChainFactory.infer_shape(...)

DiscreteMarkovChainFactory.make_node(*inputs)

Construct an Apply node that represents the application of this operation to the given inputs.

DiscreteMarkovChainFactory.make_py_thunk(...)

Make a Python thunk.

DiscreteMarkovChainFactory.make_thunk(node, ...)

Create a thunk.

DiscreteMarkovChainFactory.perform(node, ...)

Calculate the function on the inputs and put the variables in the output storage.

DiscreteMarkovChainFactory.prepare_node(...)

Make any special modifications that the Op needs before doing Op.make_thunk.

DiscreteMarkovChainFactory.set_grad_overrides(...)

Set gradient overrides.

DiscreteMarkovChainFactory.set_lop_overrides(...)

Set L_op overrides. This will completely remove any previously set L_op/gradient overrides.

DiscreteMarkovChainFactory.set_rop_overrides(...)

Set R_op overrides. This will completely remove any previously set R_op overrides.

Attributes

LOP_TYPE_ERR_MSG

OV_INP_LEN_ERR_MSG

STYPE_ERR_MSG

TYPE_ERR_MSG

default_output

An int that specifies which output Op.__call__ should return.

destroy_map

A dict that maps output indices to the input indices upon which they operate in-place.

fn

Lazily compile the inner function graph.

inner_inputs

The inner function's inputs.

inner_outputs

The inner function's outputs.

itypes

otypes

params_type

view_map

A dict that maps output indices to the input indices of which they are a view.