twodlearn.DBM module¶
This module defines a deep restricted Boltzmann machine.
class twodlearn.DBM.AutoencoderNetConf(inputs, labels, y, loss, xp_list, h_list, xp_losses)[source]¶
Bases: object
This is a wrapper for a network configuration; it holds references to the placeholders for inputs and labels, and to the computation graph for the network.
inputs: placeholder for the inputs
labels: placeholder for the labels
y: output of the computation graph (logits)
loss: loss for the network
xp_list: list with the "reconstructed" outputs computed using the transpose of the autoencoder layers
h_list: list with the hidden-layer outputs
xp_losses: list with the losses for each of the layer reconstructions
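The xp_list entries are reconstructions obtained through the transpose (tied) weights of each layer. A minimal NumPy sketch of one such tied-weight reconstruction and its per-layer loss; the function names here are illustrative, not part of the twodlearn API:

```python
import numpy as np

def tied_autoencoder_layer(x, W, b_h, b_v, afunction=np.tanh):
    """Encode with W, reconstruct with W.T (tied weights).

    x: (batch, n_inputs); W: (n_inputs, n_hidden).
    Returns (h, xp): the hidden output and the reconstructed input.
    """
    h = afunction(x @ W + b_h)     # hidden-layer output (one entry of h_list)
    xp = afunction(h @ W.T + b_v)  # reconstruction (one entry of xp_list)
    return h, xp

def reconstruction_loss(x, xp):
    """Per-layer reconstruction error (one entry of xp_losses)."""
    return ((x - xp) ** 2).mean()
```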
class twodlearn.DBM.RBM(n_inputs, n_units, afunction=None, name='')[source]¶
Bases: object
Standard restricted Boltzmann machine.
evaluate_cd_step(x, k=1, alpha=0.001)[source]¶
Runs one step of the contrastive divergence algorithm using k Gibbs samples.
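The method signature above is from the docs; as an illustration of what a CD-k update does (not the twodlearn implementation), here is a standalone NumPy sketch for a binary RBM:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd_step(W, b_v, b_h, x, k=1, alpha=0.001, rng=None):
    """One contrastive-divergence update with k Gibbs steps.

    W: (n_inputs, n_units) weights; b_v, b_h: visible/hidden biases;
    x: (batch, n_inputs) binary data. Returns updated (W, b_v, b_h).
    """
    rng = rng or np.random.default_rng(0)
    # positive phase: hidden probabilities given the data
    ph_data = sigmoid(x @ W + b_h)
    h = (rng.random(ph_data.shape) < ph_data).astype(x.dtype)
    v = x
    # k alternating Gibbs steps for the negative phase
    for _ in range(k):
        pv = sigmoid(h @ W.T + b_v)
        v = (rng.random(pv.shape) < pv).astype(x.dtype)
        ph = sigmoid(v @ W + b_h)
        h = (rng.random(ph.shape) < ph).astype(x.dtype)
    # CD-k gradient approximation: <v h>_data - <v h>_model
    batch = x.shape[0]
    dW = (x.T @ ph_data - v.T @ ph) / batch
    db_v = (x - v).mean(axis=0)
    db_h = (ph_data - ph).mean(axis=0)
    return W + alpha * dW, b_v + alpha * db_v, b_h + alpha * db_h
```

Increasing k trades more computation per update for a less biased estimate of the model expectation; k=1 is the common default, matching the signature above.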
class twodlearn.DBM.StackedAutoencoderNet(n_inputs, n_outputs, n_hidden, afunction=None, name='')[source]¶
Bases: object
setup(batch_size, drop_prob=None, l2_reg_coef=None, inputs=None)[source]¶
Defines the computation graph of the neural network for a specific batch size.
drop_prob: placeholder used to specify the probability for dropout. If this coefficient is set, dropout regularization is added between all fully connected layers (TODO: allow choosing which layers).
l2_reg_coef: coefficient for l2 regularization
loss_type: type of loss used for training the network; the options are:
‘cross_entropy’: for classification tasks
‘l2’: for regression tasks
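The two loss_type options correspond to the standard objectives for classification and regression. A minimal NumPy sketch of the distinction; the function names are illustrative, not part of the twodlearn API:

```python
import numpy as np

def cross_entropy_loss(logits, labels):
    """Softmax cross-entropy: the 'cross_entropy' option (classification)."""
    # numerically stable log-softmax
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(labels * log_softmax).sum(axis=1).mean()

def l2_loss(y, targets):
    """Mean squared error: the 'l2' option (regression)."""
    return ((y - targets) ** 2).sum(axis=1).mean()
```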