twodlearn.feedforward module

class twodlearn.feedforward.AffineLayer(units, *args, **kargs)[source]

Bases: twodlearn.feedforward.LinearLayer

Standard affine (W*X+b) fully connected layer

Tdl autoinitialization with arguments:

kernel[source]

(ParameterInit) Autoinit with arguments [‘initializer’, ‘trainable’]

input_shape[source]

(InputArgument)

regularizer[source]

(Regularizer) Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.

units[source]

(InputArgument) Number of output units (int).

bias[source]

(ParameterInit) Autoinit with arguments [‘initializer’, ‘trainable’]

class Output(model, inputs, options=None, name=None)[source]

Bases: twodlearn.feedforward.Output

property bias[source]
value[source]
bias[source]

Autoinit with arguments [‘initializer’, ‘trainable’]
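
A minimal usage sketch (not from the library docs): it assumes the TF1-style workflow twodlearn targets and the Keras-style call protocol implied by LinearLayer.call below; shapes and unit counts are illustrative:

    import tensorflow as tf            # TF1-era API, as used by twodlearn
    import twodlearn.feedforward as tdlf

    x = tf.placeholder(tf.float32, shape=[None, 10])  # batch of 10-dim inputs
    layer = tdlf.AffineLayer(units=5)                 # computes W*X + b
    y = layer(x)          # assumed: returns an Output instance; y.value is the tensor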

class twodlearn.feedforward.AlexNet(input_shape, n_outputs, n_filters, filter_sizes, pool_sizes, n_hidden, output_function=None, name='AlexNet')[source]

Bases: twodlearn.core.common.TdlModel

class AlexNetSetup(model, inputs=None, batch_size=None, options=None, name='AlexNet')[source]

Bases: twodlearn.core.common.TdlModel

conv[source]
property input_shape[source]
inputs[source]
loss[source]

Decorator used to specify an optional property inside a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the property.

mlp[source]
output[source]
property value[source]
property weights[source]
conv[source]
evaluate(inputs=None, options=None, name=None)[source]
property input_shape[source]
mlp[source]
property n_outputs[source]
regularizer[source]

Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.

property weights[source]
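
A construction sketch using the documented signature; the hyper-parameter values and the [dim0, dim1, n_maps] interpretation of input_shape are illustrative assumptions:

    import tensorflow as tf
    import twodlearn.feedforward as tdlf

    model = tdlf.AlexNet(
        input_shape=[28, 28, 1],        # assumed [dim0, dim1, n_maps] format
        n_outputs=10,
        n_filters=[32, 64],             # filters per convolutional layer
        filter_sizes=[[5, 5], [3, 3]],  # [filter_size_dim0, filter_size_dim1]
        pool_sizes=[[2, 2], [2, 2]],    # [pool_size_dim0, pool_size_dim1]
        n_hidden=[128])                 # widths of the fully connected layers

    x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
    out = model.evaluate(inputs=x)      # AlexNetSetup; out.value is the output tensor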
class twodlearn.feedforward.AlexNetClassifier(input_shape, n_classes, n_filters, filter_sizes, pool_sizes, n_hidden, name='AlexNetClassifier')[source]

Bases: twodlearn.feedforward.AlexNet

class AlexNetOutput(model, inputs=None, batch_size=None, options=None, name='AlexNet')[source]

Bases: twodlearn.feedforward.AlexNetSetup

property logits[source]
loss[source]

Decorator used to specify an optional property inside a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the property.

evaluate(inputs=None, options=None, name=None)[source]
mlp[source]
property n_classes[source]
class twodlearn.feedforward.AlexnetLayer(filter_size, n_maps, pool_size, name=None)[source]

Bases: twodlearn.core.common.TdlModel

Creates a layer like the one used in AlexNet (Krizhevsky et al., "ImageNet Classification with Deep Convolutional Neural Networks").

The format for filter_size is:

[filter_size_dim0, filter_size_dim1]; it performs a 2D convolution

The format for n_maps is:

[num_input_maps, num_output_maps]

The format for pool_size is:

[pool_size_dim0, pool_size_dim1]
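
For example, the following (hypothetical values) builds a layer with 5x5 kernels that maps 3 input maps to 64 output maps, followed by 2x2 pooling:

    layer = twodlearn.feedforward.AlexnetLayer(
        filter_size=[5, 5], n_maps=[3, 64], pool_size=[2, 2])
    out = layer.evaluate(inputs=images)   # images: a rank-4 image tensor (assumed)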

class AlexnetLayerSetup(**kargs)[source]

Bases: twodlearn.core.common.TdlModel

property bias[source]
eval_layer(inputs)[source]
inputs[source]
model[source]
property pool_size[source]
value[source]
property weights[source]
property y[source]
bias[source]
evaluate(inputs, name=None)[source]
property filter_size[source]
property n_maps[source]
property pool_size[source]
regularizer[source]

Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.

weights[source]
class twodlearn.feedforward.BoundedOutput(lower=1e-07, upper=None, name='BoundedOutput')[source]

Bases: twodlearn.core.common.TdlModel

class Output(**kargs)[source]

Bases: twodlearn.core.common.OutputModel

property lower[source]
property upper[source]
evaluate(*args, **kargs)[source]
lower[source]
upper[source]
class twodlearn.feedforward.Concat(axis, name='Concat')[source]

Bases: twodlearn.core.common.TdlModel

property axis[source]
evaluate(*args, **kargs)[source]
class twodlearn.feedforward.DenseLayer(activation=<function relu>, name=None, **kargs)[source]

Bases: twodlearn.feedforward.AffineLayer

Standard fully connected layer

Tdl autoinitialization with arguments:

kernel[source]

(ParameterInit) Autoinit with arguments [‘initializer’, ‘trainable’]

input_shape[source]

(InputArgument)

regularizer[source]

(Regularizer) Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.

activation[source]

(InputArgument)

units[source]

(InputArgument) Number of output units (int).

bias[source]

(ParameterInit) Autoinit with arguments [‘initializer’, ‘trainable’]

class Output(model, inputs, options=None, name=None)[source]

Bases: twodlearn.feedforward.Output

property activation[source]
property affine[source]

affine output before the non-linearity is applied

value[source]
activation[source]
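
A short sketch with illustrative shapes, assuming the same call protocol as AffineLayer:

    import tensorflow as tf
    import twodlearn.feedforward as tdlf

    x = tf.placeholder(tf.float32, shape=[None, 10])
    layer = tdlf.DenseLayer(units=5, activation=tf.nn.relu)  # relu is the default
    h = layer(x)     # h.affine: pre-activation tensor; h.value: after activation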
class twodlearn.feedforward.LinearClassifier(n_inputs, n_classes, name='linear_classifier', **kargs)[source]

Bases: twodlearn.core.common.TdlModel

class LinearClassifierSetup(**kargs)[source]

Bases: twodlearn.core.common.OutputModel

property labels[source]
loss[source]

Decorator used to specify an optional property inside a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the property.

property n_inputs[source]
property n_outputs[source]
property weights[source]
evaluate(*args, **kargs)[source]
linear_layer[source]
property n_classes[source]
property n_inputs[source]
property n_outputs[source]
regularizer[source]

Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.

property weights[source]
class twodlearn.feedforward.LinearLayer(units, *args, **kargs)[source]

Bases: twodlearn.core.layers.Layer

Standard linear (W*X) fully connected layer

Tdl autoinitialization with arguments:

kernel[source]

(ParameterInit) Autoinit with arguments [‘initializer’, ‘trainable’]

input_shape[source]

(InputArgument)

units[source]

(InputArgument) Number of output units (int).

regularizer[source]

(Regularizer) Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.

class Output(model, inputs, options=None, name=None)[source]

Bases: twodlearn.core.common.TdlModel

inputs[source]
property kernel[source]
property shape[source]
value[source]
call(inputs, *args, **kargs)[source]

This is where the layer’s logic lives.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

compute_output_shape(input_shape=None)[source]

Computes the output shape of the layer.

Assumes that the layer will be built to match the input shape provided.

Parameters

input_shape – Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns

An output shape tuple.

input_shape[source]
kernel[source]

Autoinit with arguments [‘initializer’, ‘trainable’]

regularizer[source]

Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.

units[source]

Number of output units (int).
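
A sketch of shape inference and evaluation (illustrative values; with units=5, the expected output shape follows directly):

    layer = twodlearn.feedforward.LinearLayer(units=5)
    layer.compute_output_shape(input_shape=(None, 10))  # expected: (None, 5)
    y = layer(x)  # y.kernel, y.shape and y.value per the Output attributes above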

class twodlearn.feedforward.MlpClassifier(n_inputs, n_classes, n_hidden, afunction=<function relu>, name=None)[source]

Bases: twodlearn.feedforward.MlpNet

class Output(model, inputs=None, keep_prob=None, name=None)[source]

Bases: twodlearn.feedforward.Output

property logits[source]
loss[source]

Decorator used to specify an optional property inside a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the property.

evaluate(inputs=None, keep_prob=None, name=None)[source]
property n_classes[source]
class twodlearn.feedforward.MlpNet(n_inputs, n_outputs, n_hidden, afunction=<function relu>, output_function=None, name='MlpNet')[source]

Bases: twodlearn.feedforward.StackedModel

full_layers: list of fully connected layers
out_layer: output layer; for the moment, a linear layer

class Output(model, inputs=None, keep_prob=None, name=None)[source]

Bases: twodlearn.core.common.TdlModel

hidden[source]
inputs[source]
keep_prob[source]

probability of not dropping an activation during dropout

loss[source]

Decorator used to specify an optional property inside a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the property.

property n_inputs[source]
property n_outputs[source]
output[source]

output from the network, after output_function is applied

property shape[source]
value[source]
property weights[source]
add(layer)[source]
evaluate(inputs=None, keep_prob=None, name=None)[source]
layers[source]
property n_hidden[source]
property n_inputs[source]
property n_outputs[source]
property weights[source]
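
A minimal sketch with illustrative sizes; keep_prob feeds the dropout behavior described above:

    import tensorflow as tf
    import twodlearn.feedforward as tdlf

    model = tdlf.MlpNet(n_inputs=10, n_outputs=2,
                        n_hidden=[64, 64],      # two hidden layers of width 64
                        afunction=tf.nn.relu)
    x = tf.placeholder(tf.float32, shape=[None, 10])
    keep_prob = tf.placeholder(tf.float32)      # probability of keeping a unit
    out = model.evaluate(inputs=x, keep_prob=keep_prob)
    # out.hidden: hidden-layer outputs; out.value: the network output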
class twodlearn.feedforward.MultiLayer2DConvolution(input_shape, n_filters, filter_sizes, pool_sizes, name='MultiConv2D')[source]

Bases: twodlearn.core.common.TdlModel

Creates a convolutional neural network.

It performs a series of 2D convolutions and pooling operations.

input_size: size of the input maps, [size_dim0, size_dim1]
n_outputs: number of outputs
n_input_maps: number of input maps
n_filters: list with the number of filters for each layer
filter_size: list with the size of the kernel for each layer; the format for each layer is [filter_size_dim0, filter_size_dim1]
pool_size: list with the size of the pooling kernel for each layer; the format for each layer is [pool_size_dim0, pool_size_dim1]
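
For example (hypothetical sizes, assuming the list formats above; see setup() below):

    conv = twodlearn.feedforward.MultiLayer2DConvolution(
        input_shape=[28, 28, 1],            # assumed [dim0, dim1, n_maps] format
        n_filters=[32, 64],
        filter_sizes=[[5, 5], [3, 3]],
        pool_sizes=[[2, 2], [2, 2]])
    out = conv.setup(inputs=x)              # out.value holds the resulting tensor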

class Output(model, inputs=None, batch_size=None, options=None, name='MultiConv2D')[source]

Bases: twodlearn.core.common.TdlModel

property hidden[source]
property input_shape[source]
inputs[source]
model[source]
setup_conv_layers(inputs, layers)[source]
setup_inputs(batch_size, input_shape)[source]
value[source]
property weights[source]
property y[source]
property filter_sizes[source]
property input_shape[source]
layers[source]
property n_filters[source]
property output_shape[source]
property pool_sizes[source]
regularizer[source]

Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.

setup(inputs=None, batch_size=None, options=None, name=None)[source]
property weights[source]
class twodlearn.feedforward.NetConf(inputs, labels, y, loss)[source]

Bases: object

This is a wrapper for any network configuration; it contains references to the placeholders for inputs and labels, and a reference to the computation graph of the network.

inputs: placeholder for the inputs
labels: placeholder for the labels
y: output of the computation graph, usually a linear map from the last layer (logits)
loss: loss for the network
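
A sketch of how the pieces fit together; some_model, the shapes, and the loss choice are hypothetical:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None, 10])
    labels = tf.placeholder(tf.float32, shape=[None, 2])
    logits = some_model.evaluate(inputs=x).value   # hypothetical model output
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
        labels=labels, logits=logits))
    conf = twodlearn.feedforward.NetConf(inputs=x, labels=labels,
                                         y=logits, loss=loss)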

class twodlearn.feedforward.Options(weight_initialization, weight_initialization_alpha)[source]

Bases: object

class twodlearn.feedforward.ParallelModel(models, name='Parallel')[source]

Bases: twodlearn.core.common.TdlModel

evaluate(*args, **kargs)[source]
property models[source]
class twodlearn.feedforward.StackedModel(layers=None, return_layers=None, options=None, name='Stacked')[source]

Bases: twodlearn.core.common.TdlModel

class StackedOutput(**kargs)[source]

Bases: twodlearn.core.common.OutputModel

add(layer, name=None)[source]
evaluate(*args, **kargs)[source]
get_save_data()[source]
layers[source]
regularizer[source]

Decorator used to specify a regularizer for a model. The decorator works similarly to @property, but the specified method corresponds to the initialization of the regularizer.

return_layers[source]

True if the return value of the stacked model is the list of layers
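
A sketch of stacking layers via add() (illustrative widths):

    model = twodlearn.feedforward.StackedModel()
    model.add(twodlearn.feedforward.DenseLayer(units=64))
    model.add(twodlearn.feedforward.DenseLayer(units=64))
    model.add(twodlearn.feedforward.AffineLayer(units=2))  # linear output head
    out = model.evaluate(x)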

class twodlearn.feedforward.StridedDeconvNet(n_inputs, input_size, n_input_maps, n_filters, filter_size, upsampling, name='')[source]

Bases: object

Creates a deconvolutional neural network using upsampling. TODO: implement this using the new format. It builds a 'deconvolutional' neural network similar to the one used in "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" (http://arxiv.org/pdf/1511.06434v2.pdf).

The network maps a vector of size n_inputs to a 2D map with several channels.

First a linear mapping is performed, then a reshape forms an initial tensor of 2D maps with channels, and finally a series of upscaling and convolution operations is applied.

n_inputs: size of the input vectors
input_size: size of the maps after the linear stage, [size_dim0, size_dim1]
n_input_maps: number of maps after the linear stage
n_filters: list with the number of filters for each layer
filter_size: list with the size of the kernel for each layer; the format for each layer is [filter_size_dim0, filter_size_dim1]
upsampling: list with the size of the upsampling in each deconv layer: [upsampling_dim0, upsampling_dim1]
in_layer: input layer, a linear layer that maps the inputs to the desired output

setup(batch_size, drop_prob=None)[source]
Defines the computation graph of the neural network for a specific batch size.

drop_prob: placeholder used to specify the probability for dropout. If this coefficient is set, then dropout regularization is added between all fully connected layers (TODO: allow choosing which layers).
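
Since the class is marked TODO, here is a conceptual sketch of the described pipeline in plain TF1 ops (all sizes illustrative): a linear stage, a reshape into an initial stack of 2D maps, then repeated upsampling and convolution:

    import tensorflow as tf

    z = tf.placeholder(tf.float32, [None, 100])   # n_inputs = 100
    h = tf.layers.dense(z, 4 * 4 * 256)           # linear stage
    h = tf.reshape(h, [-1, 4, 4, 256])            # input_size=[4, 4], n_input_maps=256
    size = 4
    for n_filt in [128, 64]:                      # n_filters, one entry per layer
        size *= 2                                 # upsampling = [2, 2]
        h = tf.image.resize_nearest_neighbor(h, [size, size])
        h = tf.layers.conv2d(h, n_filt, [5, 5], padding='same')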

class twodlearn.feedforward.Transpose(**kargs)[source]

Bases: twodlearn.core.common.TdlModel

inputs[source]
perm[source]
rightmost[source]
shape[source]
value[source]
class twodlearn.feedforward.TransposeLayer(**kargs)[source]

Bases: twodlearn.core.common.TdlModel

perm[source]

Autoinit with arguments [‘inputs’]

rightmost[source]
twodlearn.feedforward.leaky_relu(x, leaky_slope=0.01)[source]

Leaky ReLU, with a 0.01 slope for negative values.
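
Reference semantics for a scalar input (a sketch of the documented behavior, not the library implementation):

    def leaky_relu_ref(x, leaky_slope=0.01):
        # Identity for positive inputs, a small linear slope otherwise.
        return x if x > 0 else leaky_slope * x

    leaky_relu_ref(2.0)    # -> 2.0
    leaky_relu_ref(-2.0)   # -> -0.02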

twodlearn.feedforward.options = <twodlearn.feedforward.Options object>[source]

------------------------- Activation functions -------------------------

twodlearn.feedforward.selu01(x)[source]

Self-normalizing activation function, proposed by Günter Klambauer et al., "Self-Normalizing Neural Networks", https://arxiv.org/abs/1706.02515.
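
For reference, the SELU from the cited paper is defined as follows; whether selu01 uses exactly these constants is an assumption:

    import math

    LAMBDA = 1.0507009873554805  # scale constant from the SELU paper
    ALPHA = 1.6732632423543772   # alpha constant from the SELU paper

    def selu_ref(x):
        # lambda * x for x > 0; lambda * alpha * (exp(x) - 1) otherwise.
        return LAMBDA * x if x > 0 else LAMBDA * ALPHA * (math.exp(x) - 1.0)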

twodlearn.feedforward.selu01_disc(x)[source]

Discontinuous version of SELU.

twodlearn.feedforward.selu01_disc2(x)[source]

Another version of the discontinuous SELU.