twodlearn.Autoencoder module

Definition of autoencoder layers and a stacked autoencoder network

class twodlearn.Autoencoder.AutoencoderClassifierNet(n_inputs, enc_hidden, enc_out, n_classes, hidden_afunction=<function relu>, dec_hidden=None, output_function=None, enc_output_function=None, tied_weights=False, name='autoencoder')[source]

Bases: twodlearn.Autoencoder.AutoencoderNet

Autoencoder with a linear classifier on the encoder output
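To illustrate the idea of a linear classifier attached to the encoder output, here is a minimal numpy sketch (not the twodlearn API; all shapes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes: 8 input features, 4-unit encoding, 3 classes.
n_inputs, enc_out, n_classes = 8, 4, 3
x = rng.normal(size=(5, n_inputs))          # batch of 5 samples

# Encoder: one affine layer with ReLU (the documented default activation).
W_enc = rng.normal(scale=0.1, size=(n_inputs, enc_out))
b_enc = np.zeros(enc_out)
h = np.maximum(0.0, x @ W_enc + b_enc)      # encoder output

# Linear classifier on top of the encoding (softmax over logits).
W_cls = rng.normal(scale=0.1, size=(enc_out, n_classes))
b_cls = np.zeros(n_classes)
logits = h @ W_cls + b_cls
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
```

The classifier shares the encoder, so a supervised loss on `logits` and a reconstruction loss on the decoder can be trained jointly.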

class AutoencoderClassifierNetSetup(model, batch_size=None, keep_prob=None, inputs=None, opt=None, name='train')[source]

Bases: twodlearn.Autoencoder.AutoencoderNetSetup

setup_classifier()[source]
define_classifier(n_classes)[source]
property parameters[source]
setup(batch_size=None, keep_prob=None, inputs=None, opt=None, name=None)[source]
class twodlearn.Autoencoder.AutoencoderNet(n_inputs, enc_hidden, enc_out, hidden_afunction=<function relu>, dec_hidden=None, output_function=None, enc_output_function=None, tied_weights=False, name='autoencoder')[source]

Bases: twodlearn.core.common.TdlModel

class AutoencoderNetSetup(model, batch_size=None, keep_prob=None, inputs=None, opt=None, name='train')[source]

Bases: twodlearn.core.common.TdlModel

add_noise(inputs, opt)[source]

adds noise to the input

Parameters
  • inputs – inputs to which noise will be added

  • opt – dictionary with the options for the autoencoder; the noise-related options are: ‘noise/type’: ‘bernoulli’ or ‘gaussian’; ‘noise/level’: float (typically 0.0–1.0) specifying how much noise will be added

Returns: noisy_inputs
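A minimal numpy sketch of the two documented corruption schemes (denoising-autoencoder style); the function name mirrors the method but this is not the twodlearn implementation:

```python
import numpy as np

def add_noise(inputs, opt, rng=None):
    """Corrupt `inputs` according to the 'noise/*' options."""
    if rng is None:
        rng = np.random.default_rng(0)
    level = opt['noise/level']
    if opt['noise/type'] == 'bernoulli':
        # Zero out each entry independently with probability `level`.
        mask = rng.random(inputs.shape) >= level
        return inputs * mask
    elif opt['noise/type'] == 'gaussian':
        # Additive zero-mean Gaussian noise with standard deviation `level`.
        return inputs + rng.normal(scale=level, size=inputs.shape)
    raise ValueError('unknown noise type')

x = np.ones((2, 4))
x_bern = add_noise(x, {'noise/type': 'bernoulli', 'noise/level': 0.5})
x_gauss = add_noise(x, {'noise/type': 'gaussian', 'noise/level': 0.1})
```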

property batch_size[source]
property contractive_loss[source]

mean squared Frobenius norm of the Jacobian
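For a single sigmoid encoder layer the Jacobian has a closed form, which makes the contractive penalty cheap to compute. A numpy sketch under that assumption (one sigmoid layer; not the twodlearn implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_hidden = 6, 3
W = rng.normal(scale=0.5, size=(n_inputs, n_hidden))
b = np.zeros(n_hidden)

x = rng.normal(size=(4, n_inputs))               # batch of 4 samples
h = 1.0 / (1.0 + np.exp(-(x @ W + b)))           # sigmoid encoder output

# For a sigmoid layer, dh_j/dx_i = h_j * (1 - h_j) * W[i, j], so the
# squared Frobenius norm of the Jacobian has a closed form per sample:
sq_fro = ((h * (1 - h)) ** 2) @ (W ** 2).sum(axis=0)   # shape (4,)
contractive_loss = sq_fro.mean()                        # mean over the batch
```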

property decoder[source]
define_decoder_forward(inputs)[source]
define_encoder_forward(inputs)[source]
define_opt(opt)[source]
define_reconstruction_loss(inputs, decoder, opt)[source]
define_regularizer()[source]
define_supervised_loss()[source]
property encoder[source]
property inputs[source]

inputs to the encoder

property jacobian_f[source]

mean Frobenius norm of the Jacobian

property keep_prob[source]
property loss[source]

Reconstruction loss + regularizers

property model[source]

model that holds the parameters

property n_inputs[source]
property name[source]

name used for the construction of the computation graph

property opt[source]

options used to build the graph

property reconstruction_loss[source]

loss that penalizes the reconstruction error with respect to the original inputs

setup_contractive_regularizer()[source]
property tied_weights[source]
property wd_loss[source]

weight decay, usually a Frobenius norm of the weights

property weights[source]
property y[source]

outputs from the decoder

define_decoder(n_inputs, dec_hidden, n_outputs, hidden_afunction, output_function)[source]
define_encoder(n_inputs, enc_hidden, n_outputs, hidden_afunction, output_function)[source]
property parameters[source]
setup(batch_size=None, keep_prob=None, inputs=None, opt=None, name=None)[source]
property tied_weights[source]
class twodlearn.Autoencoder.AutoencoderNetConf(inputs, labels, y, loss, xp_list, h_list, xp_losses)[source]

Bases: object

This is a wrapper for a network configuration; it holds references to the placeholders for the inputs and labels, and to the computation graph of the network.

  • inputs – placeholder for the inputs

  • labels – placeholder for the labels

  • y – output of the computation graph (logits)

  • loss – loss for the network

  • xp_list – list with the “reconstructed” outputs obtained using the transpose of each autoencoder layer

  • h_list – list with the hidden-layer outputs

  • xp_losses – list with the losses for each layer’s reconstruction
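The per-layer quantities above can be sketched in numpy for a two-layer stack; the layer sizes, tanh activation, and tied transposed reconstruction are illustrative assumptions, not the twodlearn code:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(5, 8))                 # batch of inputs
sizes = [8, 6, 4]                           # input -> hidden1 -> hidden2

h_list, xp_list, xp_losses = [], [], []
h = x
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    W = rng.normal(scale=0.1, size=(n_in, n_out))
    h_next = np.tanh(h @ W)                   # hidden-layer output
    xp = h_next @ W.T                         # reconstruction via the transpose
    h_list.append(h_next)
    xp_list.append(xp)
    xp_losses.append(np.mean((xp - h) ** 2))  # per-layer reconstruction loss
    h = h_next
```

Each entry of `xp_losses` can be minimized independently, which is the usual greedy layer-wise training setup for stacked autoencoders.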

class twodlearn.Autoencoder.StackedAutoencoderNet(n_inputs, n_outputs, n_hidden, afunction=None, name='')[source]

Bases: object

add_noise(input_tensor, l_idx)[source]
compute_output_loss(y, labels)[source]
compute_pred_loss(x, xp)[source]
get_pred_optimizers(NetConf, learning_rate=0.0002, beta1=0.5)[source]
setup(batch_size, drop_prob=None, l2_reg_coef=None, inputs=None)[source]

Defines the computation graph of the neural network for a specific batch size

drop_prob: placeholder used to specify the dropout probability. If this coefficient is set, dropout regularization is added between all fully connected layers (TODO: allow choosing which layers).

l2_reg_coef: coefficient for the l2 regularization

loss_type: type of the loss used for training the network; the options are:

  • ‘cross_entropy’: for classification tasks

  • ‘l2’: for regression tasks
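The two loss types can be sketched as follows; this is a hedged numpy illustration of the concepts (softmax cross-entropy and mean squared error), not the twodlearn code:

```python
import numpy as np

def output_loss(y, labels, loss_type):
    """'cross_entropy' for classification logits, 'l2' for regression outputs."""
    if loss_type == 'cross_entropy':
        # Softmax cross-entropy; `labels` are one-hot rows.
        logits = y - y.max(axis=1, keepdims=True)
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean((labels * log_probs).sum(axis=1))
    elif loss_type == 'l2':
        # Mean squared error for regression targets.
        return np.mean((y - labels) ** 2)
    raise ValueError('unknown loss type')
```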

class twodlearn.Autoencoder.TransposedAffine(reference_layer, name=None)[source]

Bases: twodlearn.feedforward.AffineLayer
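A transposed layer reuses the weight matrix of a reference encoder layer, transposed, with its own bias; this is what makes tied-weights decoders possible. A minimal numpy sketch of the idea (function names and shapes are illustrative, not the twodlearn API):

```python
import numpy as np

rng = np.random.default_rng(3)

# "Reference" encoder layer: h = x @ W + b
W = rng.normal(scale=0.1, size=(8, 4))
b = np.zeros(4)

def affine(x):
    return x @ W + b

def transposed_affine(h, c=np.zeros(8)):
    # Tied weights: reuse W from the reference layer, transposed,
    # with an independent bias `c`.
    return h @ W.T + c

x = rng.normal(size=(5, 8))
xp = transposed_affine(affine(x))   # round trip through the tied pair
```

Tying halves the number of weight parameters and couples the decoder's training signal to the encoder's weights.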

class twodlearn.Autoencoder.TransposedFullyConnected(reference_layer, afunction=None, name=None)[source]

Bases: twodlearn.feedforward.DenseLayer

class twodlearn.Autoencoder.TransposedMlpNet(encoder_net, output_function=None, name=None)[source]

Bases: twodlearn.feedforward.MlpNet

define_fullyconnected_layers()[source]

Defines the model for the fully connected layers

define_output_layer(output_function)[source]

Defines the model for the final layer