twodlearn.feedforward module
class twodlearn.feedforward.AffineLayer(units, *args, **kargs)
    Bases: twodlearn.feedforward.LinearLayer

    Standard affine (W*X + b) fully connected layer.

    Tdl autoinitialization with arguments:

    kernel (ParameterInit)
        Autoinit with arguments ['initializer', 'trainable'].

    regularizer (Regularizer)
        Decorator used to specify a regularizer for a model. The decorator
        works similarly to @property, but the decorated method corresponds to
        the initialization of the regularizer.

    units (InputArgument)
        Number of output units (int).

    bias (ParameterInit)
        Autoinit with arguments ['initializer', 'trainable'].
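    A minimal usage sketch, assuming the TF1-style placeholder workflow and
    that layers are callable on tensors (as LinearLayer.call below suggests);
    the shapes and unit count are illustrative:

        import tensorflow as tf
        import twodlearn.feedforward as tdlf

        # Hypothetical example: a 10-unit affine layer (W*X + b) applied to
        # batches of 4-dimensional inputs.
        inputs = tf.placeholder(tf.float32, shape=[None, 4])
        layer = tdlf.AffineLayer(units=10)
        outputs = layer(inputs)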
class twodlearn.feedforward.AlexNet(input_shape, n_outputs, n_filters, filter_sizes, pool_sizes, n_hidden, output_function=None, name='AlexNet')
    Bases: twodlearn.core.common.TdlModel

    class AlexNetSetup(model, inputs=None, batch_size=None, options=None, name='AlexNet')
class twodlearn.feedforward.AlexNetClassifier(input_shape, n_classes, n_filters, filter_sizes, pool_sizes, n_hidden, name='AlexNetClassifier')
class twodlearn.feedforward.AlexnetLayer(filter_size, n_maps, pool_size, name=None)
    Bases: twodlearn.core.common.TdlModel

    Creates a layer like the one used in "ImageNet Classification with Deep
    Convolutional Neural Networks".

    The format for filter_size is [filter_size_dim0, filter_size_dim1]; the
    layer performs a 2D convolution.
    The format for n_maps is [num_input_maps, num_output_maps].
    The format for pool_size is [pool_size_dim0, pool_size_dim1].
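    A minimal construction sketch following the formats above (the values are
    illustrative):

        import twodlearn.feedforward as tdlf

        # Hypothetical example: 5x5 convolution kernels mapping 3 input maps
        # to 64 output maps, followed by 2x2 pooling.
        layer = tdlf.AlexnetLayer(filter_size=[5, 5], n_maps=[3, 64],
                                  pool_size=[2, 2])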
class twodlearn.feedforward.BoundedOutput(lower=1e-07, upper=None, name='BoundedOutput')
class twodlearn.feedforward.DenseLayer(activation=<function relu>, name=None, **kargs)
    Bases: twodlearn.feedforward.AffineLayer

    Standard fully connected layer.

    Tdl autoinitialization with arguments:

    kernel (ParameterInit)
        Autoinit with arguments ['initializer', 'trainable'].

    regularizer (Regularizer)
        Decorator used to specify a regularizer for a model. The decorator
        works similarly to @property, but the decorated method corresponds to
        the initialization of the regularizer.

    units (InputArgument)
        Number of output units (int).

    bias (ParameterInit)
        Autoinit with arguments ['initializer', 'trainable'].

    activation
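    A minimal usage sketch, under the same callable-layer assumption as
    AffineLayer above; tf.nn.tanh is just an illustrative alternative to the
    default relu:

        import tensorflow as tf
        import twodlearn.feedforward as tdlf

        # Hypothetical example: a 32-unit dense layer with tanh activation.
        # `units` is inherited from AffineLayer's autoinit arguments.
        inputs = tf.placeholder(tf.float32, shape=[None, 8])
        layer = tdlf.DenseLayer(units=32, activation=tf.nn.tanh)
        outputs = layer(inputs)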
class twodlearn.feedforward.LinearClassifier(n_inputs, n_classes, name='linear_classifier', **kargs)
    Bases: twodlearn.core.common.TdlModel

    class LinearClassifierSetup(**kargs)
class twodlearn.feedforward.LinearLayer(units, *args, **kargs)
    Bases: twodlearn.core.layers.Layer

    Standard linear (W*X) fully connected layer.

    Tdl autoinitialization with arguments:

    kernel (ParameterInit)
        Autoinit with arguments ['initializer', 'trainable'].

    units (InputArgument)
        Number of output units (int).

    regularizer (Regularizer)
        Decorator used to specify a regularizer for a model. The decorator
        works similarly to @property, but the decorated method corresponds to
        the initialization of the regularizer.

    call(inputs, *args, **kargs)
        This is where the layer's logic lives.

        Parameters:
            inputs -- Input tensor, or list/tuple of input tensors.
            **kwargs -- Additional keyword arguments.

        Returns:
            A tensor or list/tuple of tensors.

    compute_output_shape(input_shape=None)
        Computes the output shape of the layer. Assumes that the layer will
        be built to match the input shape provided.

        Parameters:
            input_shape -- Shape tuple (tuple of integers) or list of shape
                tuples (one per output tensor of the layer). Shape tuples can
                include None for free dimensions, instead of an integer.

        Returns:
            An output shape tuple.

    input_shape
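    A minimal sketch of the call/compute_output_shape interface (TF1-style
    placeholders; the shapes are illustrative):

        import tensorflow as tf
        import twodlearn.feedforward as tdlf

        # Hypothetical example: a linear (W*X, no bias) layer with 3 output
        # units.
        inputs = tf.placeholder(tf.float32, shape=[None, 6])
        layer = tdlf.LinearLayer(units=3)
        outputs = layer(inputs)                        # runs LinearLayer.call
        shape = layer.compute_output_shape([None, 6])  # expected: (None, 3)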
class twodlearn.feedforward.MlpClassifier(n_inputs, n_classes, n_hidden, afunction=<function relu>, name=None)
    Bases: twodlearn.feedforward.MlpNet
class twodlearn.feedforward.MlpNet(n_inputs, n_outputs, n_hidden, afunction=<function relu>, output_function=None, name='MlpNet')
    Bases: twodlearn.feedforward.StackedModel

    full_layers: list of fully connected layers.
    out_layer: output layer; for the moment, a linear layer.

    class Output(model, inputs=None, keep_prob=None, name=None)
        Bases: twodlearn.core.common.TdlModel
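    A minimal construction sketch, assuming n_hidden is a list of
    hidden-layer sizes (consistent with full_layers being a list of fully
    connected layers); the sizes are illustrative:

        import twodlearn.feedforward as tdlf

        # Hypothetical example: 10 inputs, two hidden layers of 64 units
        # each, and 2 outputs, with the default relu activation.
        model = tdlf.MlpNet(n_inputs=10, n_outputs=2, n_hidden=[64, 64])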
class twodlearn.feedforward.MultiLayer2DConvolution(input_shape, n_filters, filter_sizes, pool_sizes, name='MultiConv2D')
    Bases: twodlearn.core.common.TdlModel

    Creates a convolutional neural network. It performs a series of 2D
    convolutions and pooling operations.

    input_size: size of the input maps, [size_dim0, size_dim1].
    n_outputs: number of outputs.
    n_input_maps: number of input maps.
    n_filters: list with the number of filters for each layer.
    filter_size: list with the size of the kernel for each layer; the format
        for each layer is [filter_size_dim0, filter_size_dim1].
    pool_size: list with the size of the pooling kernel for each layer; the
        format for each layer is [pool_size_dim0, pool_size_dim1].

    class Output(model, inputs=None, batch_size=None, options=None, name='MultiConv2D')
        Bases: twodlearn.core.common.TdlModel
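    A minimal construction sketch using the per-layer lists described above
    (the input_shape format is an assumption; the values are illustrative):

        import twodlearn.feedforward as tdlf

        # Hypothetical example: two conv/pool stages with 32 and 64 filters,
        # 5x5 kernels, and 2x2 pooling over 28x28 single-channel inputs.
        model = tdlf.MultiLayer2DConvolution(
            input_shape=[28, 28, 1],
            n_filters=[32, 64],
            filter_sizes=[[5, 5], [5, 5]],
            pool_sizes=[[2, 2], [2, 2]])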
class twodlearn.feedforward.NetConf(inputs, labels, y, loss)
    Bases: object

    This is a wrapper for a network configuration. It holds references to the
    placeholders for inputs and labels, and a reference to the computation
    graph for the network.

    inputs: placeholder for the inputs.
    labels: placeholder for the labels.
    y: output of the computation graph, usually a linear map from the last
        layer (logits).
    loss: loss for the network.
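    A minimal sketch of assembling such a configuration (the graph pieces are
    illustrative stand-ins; any model producing logits and a loss would do):

        import tensorflow as tf
        import twodlearn.feedforward as tdlf

        inputs = tf.placeholder(tf.float32, shape=[None, 4])
        labels = tf.placeholder(tf.float32, shape=[None, 3])
        logits = tf.layers.dense(inputs, 3)  # stand-in for a twodlearn model
        loss = tf.losses.softmax_cross_entropy(labels, logits)
        net = tdlf.NetConf(inputs, labels, logits, loss)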
class twodlearn.feedforward.Options(weight_initialization, weight_initialization_alpha)
    Bases: object
class twodlearn.feedforward.StackedModel(layers=None, return_layers=None, options=None, name='Stacked')
class twodlearn.feedforward.StridedDeconvNet(n_inputs, input_size, n_input_maps, n_filters, filter_size, upsampling, name='')
    Bases: object

    Creates a deconvolutional neural network using upsampling.
    TODO: implement this using the new format.

    It builds a 'deconvolutional' neural network similar to the one used in
    "Unsupervised Representation Learning with Deep Convolutional Generative
    Adversarial Networks" (http://arxiv.org/pdf/1511.06434v2.pdf).

    The network maps a vector of size n_inputs to a 2D map with several
    channels. First a linear mapping is performed, then a reshape to form an
    initial tensor of 2D maps with channels, then a series of upscaling and
    convolution operations are performed.

    n_inputs: size of the input vectors.
    input_size: size of the maps after the linear stage, [size_dim0, size_dim1].
    n_input_maps: number of maps after the linear stage.
    n_filters: list with the number of filters for each layer.
    filter_size: list with the size of the kernel for each layer; the format
        for each layer is [filter_size_dim0, filter_size_dim1].
    upsampling: list with the size of the upsampling for each deconv layer:
        [upsampling_dim0, upsampling_dim1].
    in_layer: input layer, a linear layer that maps the inputs to the desired
        output.

    setup(batch_size, drop_prob=None)
        Defines the computation graph of the neural network for a specific
        batch size.

        drop_prob: placeholder used to specify the probability for dropout.
        If this coefficient is set, dropout regularization is added between
        all fully connected layers (TODO: allow choosing which layers).
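    A minimal construction sketch (the class is marked TODO above, so the
    exact call pattern may differ; the values and the setup return value are
    assumptions):

        import twodlearn.feedforward as tdlf

        # Hypothetical example: map 100-d vectors to initial 4x4 maps with
        # 128 channels, then two deconv stages with 5x5 kernels and 2x2
        # upsampling.
        model = tdlf.StridedDeconvNet(
            n_inputs=100, input_size=[4, 4], n_input_maps=128,
            n_filters=[64, 32], filter_size=[[5, 5], [5, 5]],
            upsampling=[[2, 2], [2, 2]])
        conf = model.setup(batch_size=16)  # assumed to return the built graph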
twodlearn.feedforward.leaky_relu(x, leaky_slope=0.01)
    Leaky ReLU, with a 0.01 slope for negative values.
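    A minimal equivalent sketch of the documented behavior (twodlearn's own
    implementation may differ in form):

        import tensorflow as tf

        def leaky_relu(x, leaky_slope=0.01):
            # x for positive values, leaky_slope * x for negative values
            return tf.maximum(x, leaky_slope * x)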
twodlearn.feedforward.options = <twodlearn.feedforward.Options object>
twodlearn.feedforward.selu01(x)
    Self-normalizing activation function proposed by Günter Klambauer et al.,
    "Self-Normalizing Neural Networks", https://arxiv.org/abs/1706.02515.
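    For reference, the SELU activation from the cited paper is sketched
    below; the exact constants used by selu01 are not documented here and are
    an assumption:

        import tensorflow as tf

        # Standard SELU constants from Klambauer et al. (2017); selu01's
        # constants may differ.
        ALPHA = 1.6732632423543772
        LAMBDA = 1.0507009873554805

        def selu(x):
            # lambda * x for x > 0; lambda * alpha * (exp(x) - 1) otherwise
            return LAMBDA * tf.where(x > 0.0, x, ALPHA * (tf.exp(x) - 1.0))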