twodlearn.DBM module

This module defines a deep restricted Boltzmann machine.

class twodlearn.DBM.AutoencoderNetConf(inputs, labels, y, loss, xp_list, h_list, xp_losses)[source]

Bases: object

A wrapper for a network configuration: it holds references to the placeholders for the inputs and labels, and to the computation graph of the network.

inputs: placeholder for the inputs
labels: placeholder for the labels
y: output of the computation graph (logits)
loss: loss for the network

xp_list: list with the “reconstructed” outputs computed using the transpose of the autoencoder layers
h_list: list with the hidden layer outputs
xp_losses: list with the losses for each layer’s reconstruction
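
The wrapper carries no logic of its own. A minimal sketch consistent with the constructor signature above (the attribute names follow the field descriptions; the implementation itself is an assumption):

    class AutoencoderNetConf(object):
        """Container for one concrete instantiation of the network graph."""
        def __init__(self, inputs, labels, y, loss, xp_list, h_list, xp_losses):
            self.inputs = inputs        # placeholder for the input batch
            self.labels = labels        # placeholder for the targets
            self.y = y                  # output of the computation graph (logits)
            self.loss = loss            # training loss for the full network
            self.xp_list = xp_list      # per-layer "reconstructed" outputs
            self.h_list = h_list        # per-layer hidden outputs
            self.xp_losses = xp_losses  # per-layer reconstruction losses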

class twodlearn.DBM.RBM(n_inputs, n_units, afunction=None, name='')[source]

Bases: object

Standard restricted Boltzmann machine.

evaluate_cd_step(x, k=1, alpha=0.001)[source]

Runs one step of the contrastive divergence (CD-k) algorithm using k Gibbs sampling steps.
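
For reference, a minimal NumPy sketch of one CD-k update under the usual sigmoid-unit assumptions (the parameters k and alpha mirror the signature above; the names W, b_h, b_x are illustrative, not the class’s actual attributes):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cd_step(W, b_h, b_x, x, k=1, alpha=0.001, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        h0 = sigmoid(x @ W + b_h)            # positive phase: P(h=1 | data)
        h_prob = h0
        for _ in range(k):                   # negative phase: k Gibbs steps
            h = (rng.random(h_prob.shape) < h_prob).astype(float)
            x_neg = sigmoid(h @ W.T + b_x)
            h_prob = sigmoid(x_neg @ W + b_h)
        # Update: data statistics minus model statistics, scaled by alpha.
        W += alpha * (x.T @ h0 - x_neg.T @ h_prob)
        b_h += alpha * (h0 - h_prob).sum(axis=0)
        b_x += alpha * (x - x_neg).sum(axis=0)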

evaluate_h_given_x(input_mat)[source]
evaluate_x_given_h(input_mat)[source]
gibbs_sampling_given_h(h_prob, k=1)[source]

Generates a sample x after k Gibbs sampling steps. h is assumed to lie in [0, 1]; it is interpreted as a set of probabilities (see the sketch after gibbs_sampling_given_x below).

gibbs_sampling_given_x(x_prob, k=1)[source]

Generates a sample x after k Gibbs sampling steps. x is assumed to lie in [0, 1]; it is interpreted as a set of probabilities.
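
Both sampling methods run the same alternating chain, entered from different sides. A minimal NumPy sketch starting from visible probabilities, as in gibbs_sampling_given_x (the weight and bias names are illustrative assumptions):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def gibbs_sampling_given_x(W, b_h, b_x, x_prob, k=1, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        # Binarize the visible probabilities, then alternate h | x and x | h.
        x = (rng.random(x_prob.shape) < x_prob).astype(float)
        for _ in range(k):
            h_prob = sigmoid(x @ W + b_h)                    # P(h = 1 | x)
            h = (rng.random(h_prob.shape) < h_prob).astype(float)
            x_prob = sigmoid(h @ W.T + b_x)                  # P(x = 1 | h)
            x = (rng.random(x_prob.shape) < x_prob).astype(float)
        return x

The given-h variant is the same loop entered at the x | h step.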

class twodlearn.DBM.StackedAutoencoderNet(n_inputs, n_outputs, n_hidden, afunction=None, name='')[source]

Bases: object

add_noise(input_tensor, l_idx)[source]
compute_output_loss(y, labels)[source]
compute_pred_loss(x, xp)[source]
get_pred_optimizers(NetConf, learning_rate=0.0002, beta1=0.5)[source]
setup(batch_size, drop_prob=None, l2_reg_coef=None, inputs=None)[source]

Defines the computation graph of the neural network for a specific batch size.

drop_prob: placeholder used to specify the dropout probability.

If this parameter is set, dropout regularization is added between all fully connected layers (TODO: allow choosing which layers). A usage sketch follows this parameter list.

l2_reg_coef: coefficient for the l2 regularization.

loss_type: type of loss used for training the network; the options are:

  • ‘cross_entropy’: for classification tasks

  • ‘l2’: for regression tasks
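
A hypothetical usage sketch based only on the signatures above (TF 1.x style, since the module builds on placeholders; the list form of n_hidden, the activation choice, and the returned configuration object are assumptions):

    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    from twodlearn.DBM import StackedAutoencoderNet

    net = StackedAutoencoderNet(n_inputs=784, n_outputs=10,
                                n_hidden=[500, 200],       # assumed: list of layer sizes
                                afunction=tf.nn.sigmoid)
    drop_prob = tf.placeholder(tf.float32, shape=())
    net_conf = net.setup(batch_size=128,
                         drop_prob=drop_prob,  # enables dropout between FC layers
                         l2_reg_coef=1e-4)     # adds an l2 penalty to the loss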

twodlearn.DBM.bernoulli_sample_tf(x)[source]
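
The source gives no description; from the name, this presumably draws independent 0/1 samples treating each entry of x as a Bernoulli probability. A minimal TensorFlow sketch of that behavior (an assumption, not necessarily the module’s exact implementation):

    import tensorflow as tf

    def bernoulli_sample(x):
        # Draw u ~ Uniform(0, 1) elementwise; emit 1 where u < x, else 0.
        u = tf.random.uniform(tf.shape(x), dtype=x.dtype)
        return tf.cast(u < x, x.dtype)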