twodlearn.bayesnet.bayesnet module

Definitions of several Bayesian neural networks.

class twodlearn.bayesnet.bayesnet.AffineBernoulliLayer(units, *args, **kargs)[source]

Bases: twodlearn.feedforward.AffineLayer

Implements the layer y = dropout(x) W + b

class Output(model, inputs, options=None, name=None)[source]

Bases: twodlearn.feedforward.Output

affine[source]
property keep_prob[source]
value[source]
keep_prob[source]

Keep probability for dropout.

regularizer[source]

Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.
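
A minimal NumPy sketch of the computation this layer is documented to implement, y = dropout(x) W + b. The shapes, keep_prob value, and parameters below are illustrative placeholders, not the layer's own; the inverted-dropout scaling by 1/keep_prob follows the usual TensorFlow convention and is an assumption here.

    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.normal(size=(32, 10))      # batch of inputs (placeholder shapes)
    W = rng.normal(size=(10, 5))       # affine kernel
    b = np.zeros(5)                    # affine bias
    keep_prob = 0.8                    # probability of keeping each input unit

    # Bernoulli dropout mask on the inputs, then the affine map.
    mask = rng.binomial(1, keep_prob, size=x.shape) / keep_prob
    y = (x * mask) @ W + b             # y = dropout(x) W + b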

class twodlearn.bayesnet.bayesnet.AffineNormalLayer(units, *args, **kargs)[source]

Bases: twodlearn.bayesnet.bayesnet.LinearNormalLayer

class Output(**kargs)[source]

Bases: twodlearn.bayesnet.bayesnet.Output

bias[source]

Autoinit with arguments [‘initializer’, ‘trainable’]

class twodlearn.bayesnet.bayesnet.BayesianMlp(n_inputs, n_outputs, n_hidden, afunction=<function selu01>, options=None, name='BayesianMlp')[source]

Bases: twodlearn.feedforward.StackedModel

MLP composed of layers whose weights are sampled from a variational posterior distribution.

class BayesMlpOutput(**kargs)[source]

Bases: twodlearn.core.common.OutputModel

property kernels[source]
property shape[source]
evaluate(*args, **kargs)[source]
property kernels[source]

List of the weight distributions from the layers.

layers[source]
property n_inputs[source]

Size of the input vectors.

property n_outputs[source]

Size of the output vectors.

property parameters[source]
regularizer[source]

Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.
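
The docstring above says the weights are sampled from a variational posterior. A common realization of this idea, and a reasonable mental model (though not necessarily the library's exact implementation), is the Gaussian reparameterization trick, sketched here in NumPy for a single layer:

    import numpy as np

    rng = np.random.default_rng(0)

    # Variational posterior q(W) = Normal(mu, sigma) over one layer's kernel.
    mu = np.zeros((10, 5))             # posterior mean (trainable in practice)
    rho = -3.0 * np.ones((10, 5))      # unconstrained scale parameter
    sigma = np.log1p(np.exp(rho))      # softplus keeps the scale positive

    # Reparameterized sample: W = mu + sigma * eps, with eps ~ N(0, I).
    eps = rng.normal(size=mu.shape)
    W = mu + sigma * eps

    x = rng.normal(size=(32, 10))
    h = np.maximum(x @ W, 0.0)         # one hidden layer with a ReLU-style activation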

class twodlearn.bayesnet.bayesnet.BernoulliBayesianMlp(n_inputs, n_outputs, n_hidden, keep_prob=0.8, options=None, afunction=<function relu>, name='BernoulliBayesianMlp')[source]

Bases: twodlearn.bayesnet.bayesnet.BayesianMlp

class twodlearn.bayesnet.bayesnet.BoundedBayesianMlp(n_inputs, n_outputs, n_hidden, lower=1e-07, upper=None, afunction=<function selu01>, options=None, name='BayesianMlp')[source]

Bases: twodlearn.bayesnet.bayesnet.BayesianMlp

Multi-layer Bayesian neural network with bounded output.

layers[source]
class twodlearn.bayesnet.bayesnet.BoundedBernoulliBayesianMlp(n_inputs, n_outputs, n_hidden, keep_prob, lower=1e-07, upper=None, afunction=<function selu01>, options=None, name='BayesianMlp')[source]

Bases: twodlearn.bayesnet.bayesnet.BernoulliBayesianMlp

Multi-layer Bayesian neural network with bounded output.

layers[source]
class twodlearn.bayesnet.bayesnet.ConditionalNormal(loc=None, scale=None, shape=None, name='ConditionalNormal')[source]

Bases: twodlearn.bayesnet.bayesnet.Normal

class NormalOutput(model, inputs, name=None)[source]

Bases: twodlearn.bayesnet.bayesnet.McNormal

inputs[source]
loc[source]
scale[source]
loc[source]
class twodlearn.bayesnet.bayesnet.DenseBernoulliLayer(activation=<function relu>, name=None, **kargs)[source]

Bases: twodlearn.bayesnet.bayesnet.AffineBernoulliLayer

class Output(model, inputs, options=None, name=None)[source]

Bases: twodlearn.bayesnet.bayesnet.Output

property keep_prob[source]
value[source]
activation[source]
class twodlearn.bayesnet.bayesnet.DenseNormalLayer(activation=<function relu>, name=None, **kargs)[source]

Bases: twodlearn.bayesnet.bayesnet.AffineNormalLayer

class Output(**kargs)[source]

Bases: twodlearn.bayesnet.bayesnet.Output

property value[source]

Sample from the output distribution.

activation[source]
class twodlearn.bayesnet.bayesnet.Entropy(prob, name=None)[source]

Bases: twodlearn.core.common.TdlModel

property prob[source]
property value[source]
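
Assuming prob holds discrete (e.g. Bernoulli or categorical) probabilities, the quantity exposed by value is presumably the Shannon entropy; a NumPy sketch of that quantity:

    import numpy as np

    prob = np.array([0.1, 0.2, 0.7])            # illustrative categorical probabilities
    entropy = -np.sum(prob * np.log(prob))      # H(p) = -sum_i p_i log p_i
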
class twodlearn.bayesnet.bayesnet.GaussianKL(p, q, name='GaussianKL')[source]

Bases: twodlearn.losses.Loss

Evaluates KL(p||q) for Normal distributions p and q.

evaluate(p, q)[source]

Evaluate KL(p||q)

fromlist(p_list)[source]
classmethod fromstats(p_loc, p_scale, q_loc, q_scale)[source]
get_n_vars(p, q)[source]
property p[source]
property q[source]
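
For univariate (or diagonal) Gaussians, the divergence KL(p||q) has a closed form; the NumPy sketch below computes it elementwise, mirroring the argument names of fromstats above. It illustrates the quantity only, not the library's exact reduction over dimensions:

    import numpy as np

    def gaussian_kl(p_loc, p_scale, q_loc, q_scale):
        """Elementwise KL(N(p_loc, p_scale) || N(q_loc, q_scale))."""
        return (np.log(q_scale / p_scale)
                + (p_scale**2 + (p_loc - q_loc)**2) / (2.0 * q_scale**2)
                - 0.5)

    kl = gaussian_kl(p_loc=0.0, p_scale=1.0, q_loc=1.0, q_scale=2.0)
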
class twodlearn.bayesnet.bayesnet.GaussianNegLogLikelihood(y, labels=None, name='NegLogLikelihood')[source]

Bases: twodlearn.losses.EmpiricalLoss

define_fit_loss(y, labels)[source]
property labels[source]

Labels for computing the loss; if not provided, they are created automatically.

property n_outputs[source]
property y[source]
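
For reference, the per-element negative log-likelihood of a label under a Gaussian prediction with mean loc and standard deviation scale is the quantity this loss is named after; a NumPy sketch (the reduction over outputs and batch performed by the library is not shown):

    import numpy as np

    def gaussian_neg_log_likelihood(labels, loc, scale):
        """-log N(labels | loc, scale), evaluated elementwise."""
        return (0.5 * np.log(2.0 * np.pi * scale**2)
                + (labels - loc)**2 / (2.0 * scale**2))

    nll = gaussian_neg_log_likelihood(labels=1.3, loc=1.0, scale=0.5)
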
class twodlearn.bayesnet.bayesnet.HeteroscedasticNormalMlp(loc_args, scale_args, LocClass=<class 'twodlearn.bayesnet.bayesnet.BayesianMlp'>, ScaleClass=<class 'twodlearn.bayesnet.bayesnet.BoundedBayesianMlp'>, options=None, name='HeteroscedasticGaussianMlp')[source]

Bases: twodlearn.bayesnet.bayesnet.NormalMlp

Defines a conditional Gaussian N(loc=BayesianMlp(x), scale=BoundedBayesianMlp(x)).
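
A conceptual sketch of a heteroscedastic Gaussian of this form, where both the mean and the strictly positive (lower-bounded) scale are functions of the input; the two plain functions below are placeholders standing in for the BayesianMlp and BoundedBayesianMlp networks:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(32, 10))

    def loc_fn(x):                      # placeholder for the loc network
        return x.sum(axis=1, keepdims=True)

    def scale_fn(x, lower=1e-7):        # placeholder for the bounded scale network
        return lower + np.log1p(np.exp(x.mean(axis=1, keepdims=True)))  # softplus + lower bound

    loc, scale = loc_fn(x), scale_fn(x)
    y = loc + scale * rng.normal(size=loc.shape)   # sample from N(loc(x), scale(x))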

class twodlearn.bayesnet.bayesnet.LinearNormalLayer(units, *args, **kargs)[source]

Bases: twodlearn.feedforward.LinearLayer

Tdl autoinitialization with arguments:

kernel[source]

(ParameterInit) Autoinit with arguments [‘initializer’, ‘trainable’, ‘max_scale’]

input_shape[source]

(InputArgument)

regularizer[source]

(Regularizer) Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.

units[source]

(InputArgument) Number of output units (int).

tolerance[source]

(InputArgument)

class Output(**kargs)[source]

Bases: twodlearn.core.common.TdlModel

affine[source]

Normal distribution for the outputs.

inputs[source]
model[source]
property shape[source]
value[source]

Sample from the output distribution.

call(inputs, *args, **kargs)[source]

This is where the layer’s logic lives.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

kernel[source]

Autoinit with arguments [‘initializer’, ‘trainable’, ‘max_scale’]

regularizer[source]

Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.

tolerance[source]
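
The Output above exposes a Normal distribution over the layer's outputs (affine) and a value that samples from it. One standard construction consistent with that description, though not confirmed on this page, is local reparameterization: the moments of the Gaussian kernel are pushed through the linear map and the output is sampled directly. A NumPy sketch:

    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.normal(size=(32, 10))
    kernel_loc = 0.1 * rng.normal(size=(10, 5))    # mean of the Gaussian kernel
    kernel_scale = 0.05 * np.ones((10, 5))         # stddev of the Gaussian kernel

    # Moments of the output distribution y ~ Normal(out_loc, out_scale).
    out_loc = x @ kernel_loc
    out_scale = np.sqrt((x**2) @ (kernel_scale**2))

    # `value`: a sample from the output distribution.
    y = out_loc + out_scale * rng.normal(size=out_loc.shape)
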
class twodlearn.bayesnet.bayesnet.McEstimate(value, name='mc_estimate')[source]

Bases: twodlearn.core.common.TdlModel

mean[source]
stddev[source]
class twodlearn.bayesnet.bayesnet.McNormal(loc, scale, samples=None, name='McNormal', **kargs)[source]

Bases: twodlearn.core.common.TdlModel

loc[source]
resample(*args, **kargs)[source]
property samples[source]
scale[source]
property value[source]
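
McNormal appears to wrap Monte Carlo samples drawn from Normal(loc, scale). Drawing such samples via the reparameterization loc + scale * eps is sketched below; the mean/stddev computed at the end correspond to the statistics exposed by McEstimate and McNormalEstimate (the sample axis used by the library is an assumption here):

    import numpy as np

    rng = np.random.default_rng(0)

    loc = np.zeros(5)
    scale = np.ones(5)

    # 100 Monte Carlo samples, with the sample dimension first (an assumption).
    samples = loc + scale * rng.normal(size=(100,) + loc.shape)

    mc_mean = samples.mean(axis=0)     # cf. McEstimate.mean
    mc_stddev = samples.std(axis=0)    # cf. McEstimate.stddev
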
class twodlearn.bayesnet.bayesnet.McNormalEstimate(trainable=True, name=None, *args, **kwargs)[source]

Bases: twodlearn.core.layers.Layer

call(inputs)[source]

This is where the layer’s logic lives.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

input_shape[source]

Input tensor shape.

sample_dim[source]

Sample dimension across which mean and stddev are computed.

class twodlearn.bayesnet.bayesnet.McSample(**kargs)[source]

Bases: twodlearn.core.common.TdlModel

distribution[source]
sample_axis[source]
class twodlearn.bayesnet.bayesnet.Normal(loc=None, scale=None, shape=None, name='Normal', **kargs)[source]

Bases: twodlearn.core.common.TdlModel

class NormalOutput(model, inputs, name=None)[source]

Bases: twodlearn.bayesnet.bayesnet.McNormal

inputs[source]
loc[source]
scale[source]
evaluate(inputs=None, name=None)[source]
loc[source]
regularizer[source]

Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.

scale[source]
class twodlearn.bayesnet.bayesnet.NormalMlp(loc_args, scale_args={}, LocClass=<class 'twodlearn.bayesnet.bayesnet.BayesianMlp'>, ScaleClass=None, options=None, name='GaussianMlp')[source]

Bases: twodlearn.bayesnet.bayesnet.ConditionalNormal

class McNormalOutput(model, n_particles, inputs, name=None)[source]

Bases: twodlearn.bayesnet.bayesnet.NormalOutput

inputs[source]
property n_particles[source]
class NormalOutput(model, inputs, name=None)[source]

Bases: twodlearn.bayesnet.bayesnet.NormalOutput

inputs[source]
property n_inputs[source]
property n_outputs[source]
loc[source]
mc_evaluate(n_particles, x=None, name=None)[source]
property n_inputs[source]
property n_outputs[source]
scale[source]
class twodlearn.bayesnet.bayesnet.NormalModel(trainable=True, name=None, *args, **kwargs)[source]

Bases: twodlearn.core.layers.Layer

Tdl autoinitialization with arguments:

scale[source]

(SubmodelInit) Autoinit with arguments [‘initializer’, ‘trainable’, ‘tolerance’]

input_shape[source]

(InputArgument) Input tensor shape.

batch_shape[source]

(InputArgument)

loc[source]

(SubmodelInit) Autoinit with arguments [‘initializer’, ‘trainable’]

batch_shape[source]
build(input_shape=None)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(inputs, *args, **kargs)[source]

This is where the layer’s logic lives.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

input_shape[source]

Input tensor shape.

loc[source]

Autoinit with arguments [‘initializer’, ‘trainable’]

scale[source]

Autoinit with arguments [‘initializer’, ‘trainable’, ‘tolerance’]

class twodlearn.bayesnet.bayesnet.Particles(n_particles, base=None, shape=None, name='Particles', **kargs)[source]

Bases: twodlearn.core.common.TdlModel

base[source]
property base_shape[source]
property n_particles[source]
property value[source]
class twodlearn.bayesnet.bayesnet.ParticlesLayer(trainable=True, name=None, *args, **kwargs)[source]

Bases: twodlearn.core.layers.Layer

Tdl autoinitialization with arguments:

particles[source]

(InputArgument) Number of particles.

input_shape[source]

(InputArgument) Input tensor shape.

call(inputs)[source]

This is where the layer’s logic lives.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

compute_output_shape(input_shape=None)[source]

Computes the output shape of the layer.

Assumes that the layer will be built to match the input shape provided.

Parameters

input_shape – Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns

An output shape tuple.

input_shape[source]

Input tensor shape.

particles[source]

Number of particles.

class twodlearn.bayesnet.bayesnet.SampleLayer(trainable=True, name=None, *args, **kwargs)[source]

Bases: twodlearn.core.layers.Layer

Tdl autoinitialization with arguments:

sample_shape[source]

(InputModel)

distribution[source]

(InputModel)

input_shape[source]

(InputArgument)

call(inputs, *args, **kargs)[source]

This is where the layer’s logic lives.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

distribution[source]
input_shape[source]
sample_shape[source]