twodlearn.Autoencoder module¶
Definition of autoencoder layers and a stacked autoencoder network
-
class twodlearn.Autoencoder.AutoencoderClassifierNet(n_inputs, enc_hidden, enc_out, n_classes, hidden_afunction=<function relu>, dec_hidden=None, output_function=None, enc_output_function=None, tied_weights=False, name='autoencoder')[source]¶
Bases: twodlearn.Autoencoder.AutoencoderNet
Autoencoder with a linear classifier on the encoder output
-
class twodlearn.Autoencoder.AutoencoderNet(n_inputs, enc_hidden, enc_out, hidden_afunction=<function relu>, dec_hidden=None, output_function=None, enc_output_function=None, tied_weights=False, name='autoencoder')[source]¶
Bases: twodlearn.core.common.TdlModel
-
class AutoencoderNetSetup(model, batch_size=None, keep_prob=None, inputs=None, opt=None, name='train')[source]¶
Bases: twodlearn.core.common.TdlModel
-
add_noise(inputs, opt)[source]¶ Adds noise to the inputs.
- Parameters
inputs – inputs to which noise will be added
opt – dictionary with the options for the autoencoder; the options regarding noise are: ‘noise/type’: ‘bernoulli’ or ‘gaussian’; ‘noise/level’: float (typically 0.0 to 1.0) specifying how much noise will be added
- Returns
noisy_inputs
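A minimal NumPy sketch of the two noise options described above (this is an illustration of the semantics, not the library's TensorFlow implementation): 'bernoulli' noise randomly zeroes input entries (masking noise), while 'gaussian' noise adds zero-mean Gaussian perturbations scaled by 'noise/level'.

```python
import numpy as np

def add_noise(inputs, opt, rng=np.random.default_rng(0)):
    """Sketch of the noise options: 'bernoulli' masks entries,
    'gaussian' adds zero-mean noise with std opt['noise/level']."""
    level = opt['noise/level']
    if opt['noise/type'] == 'bernoulli':
        # keep each entry with probability (1 - level), zero the rest
        mask = rng.random(inputs.shape) >= level
        return inputs * mask
    elif opt['noise/type'] == 'gaussian':
        return inputs + level * rng.standard_normal(inputs.shape)
    raise ValueError('unknown noise type')

x = np.ones((2, 4))
noisy = add_noise(x, {'noise/type': 'bernoulli', 'noise/level': 0.5})
```

Denoising autoencoders are then trained to reconstruct the clean `x` from `noisy`, which forces the hidden representation to capture more than the identity map.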
-
class twodlearn.Autoencoder.AutoencoderNetConf(inputs, labels, y, loss, xp_list, h_list, xp_losses)[source]¶
Bases: object
This is a wrapper for a network configuration; it holds references to the placeholders for inputs and labels, and to the computation graph of the network.
inputs: placeholder for the inputs
labels: placeholder for the labels
y: output of the computation graph (logits)
loss: loss for the network
xp_list: list with the “reconstructed” outputs obtained using the transpose of each autoencoder layer
h_list: list with the hidden layer outputs
xp_losses: list with the losses for each of the layer reconstructions
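The relationship between `h_list`, `xp_list`, and `xp_losses` can be sketched in NumPy (a simplified illustration with tied weights and a ReLU activation; the library builds the equivalent TensorFlow graph): each layer's output is reconstructed through that layer's transposed weights, and a per-layer reconstruction loss is recorded.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 10))             # batch of inputs
weights = [rng.standard_normal((10, 6)) * 0.1,
           rng.standard_normal((6, 3)) * 0.1]

h_list, xp_list, xp_losses = [], [], []
h = x
for W in weights:
    h_next = relu(h @ W)                     # hidden-layer output
    xp = h_next @ W.T                        # reconstruction via transposed (tied) weights
    h_list.append(h_next)
    xp_list.append(xp)
    xp_losses.append(np.mean((h - xp) ** 2)) # per-layer reconstruction loss
    h = h_next
```

Per-layer losses of this form are what makes greedy layer-wise pretraining of a stacked autoencoder possible: each layer can be trained to reconstruct the output of the layer below it.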
-
class twodlearn.Autoencoder.StackedAutoencoderNet(n_inputs, n_outputs, n_hidden, afunction=None, name='')[source]¶
Bases: object
-
setup(batch_size, drop_prob=None, l2_reg_coef=None, inputs=None)[source]¶ Defines the computation graph of the neural network for a specific batch size.
- Parameters
drop_prob: placeholder used to specify the dropout probability. If this coefficient is set, dropout regularization is added between all fully connected layers (TODO: allow choosing which layers)
l2_reg_coef: coefficient for l2 regularization
loss_type: type of loss used for training the network; the options are:
‘cross_entropy’: for classification tasks
‘l2’: for regression tasks
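The two regularizers mentioned above can be sketched in NumPy (a hedged illustration of the ideas, not the library's graph-building code): inverted dropout masks activations between fully connected layers and rescales the survivors, and the l2 penalty adds `l2_reg_coef` times the sum of squared weights to the loss.

```python
import numpy as np

def dense_with_dropout(x, W, drop_prob, rng):
    """One fully connected layer followed by inverted dropout."""
    h = np.maximum(0.0, x @ W)
    if drop_prob:
        keep = 1.0 - drop_prob
        mask = rng.random(h.shape) < keep
        h = h * mask / keep          # rescale so the expected activation is unchanged
    return h

def l2_penalty(weights, l2_reg_coef):
    # added to the training loss; discourages large weights
    return l2_reg_coef * sum(np.sum(W ** 2) for W in weights)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((10, 6)) * 0.1
W2 = rng.standard_normal((6, 3)) * 0.1
x = rng.standard_normal((4, 10))

h = dense_with_dropout(x, W1, drop_prob=0.5, rng=rng)
y = h @ W2
reg = l2_penalty([W1, W2], l2_reg_coef=1e-3)
```

At evaluation time dropout is disabled (`drop_prob=None` here), which is why the library exposes it as a placeholder rather than a constant.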
-
class twodlearn.Autoencoder.TransposedFullyConnected(reference_layer, afunction=None, name=None)[source]¶
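A transposed fully connected layer reuses the weight matrix of a reference encoder layer, transposed, so that encoder and decoder share (tie) their weights. A minimal NumPy sketch of this idea follows; the `FullyConnected` helper and its attributes are hypothetical stand-ins for the library's TensorFlow layers.

```python
import numpy as np

class FullyConnected:
    """Hypothetical encoder layer: y = relu(x @ W + b)."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.standard_normal((n_in, n_out)) * 0.1
        self.b = np.zeros(n_out)

    def __call__(self, x):
        return np.maximum(0.0, x @ self.W + self.b)

class TransposedFullyConnected:
    """Decoder layer that ties its weights to reference_layer.W, transposed."""
    def __init__(self, reference_layer, afunction=None):
        self.ref = reference_layer
        self.afunction = afunction or (lambda z: z)
        self.b = np.zeros(reference_layer.W.shape[0])  # own bias, shared weights

    def __call__(self, h):
        return self.afunction(h @ self.ref.W.T + self.b)

rng = np.random.default_rng(0)
enc = FullyConnected(10, 4, rng)
dec = TransposedFullyConnected(enc)
x = rng.standard_normal((2, 10))
xp = dec(enc(x))                     # reconstruction through tied weights
```

Weight tying halves the number of weight parameters and is a common regularizer for autoencoders; it is what the `tied_weights` flag on the network classes above controls.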
-
class twodlearn.Autoencoder.TransposedMlpNet(encoder_net, output_function=None, name=None)[source]¶
Bases: twodlearn.feedforward.MlpNet