twodlearn.recurrent module

class twodlearn.recurrent.BaseCell(trainable=True, name=None, *args, **kwargs)[source]

Bases: twodlearn.core.layers.Layer

build(input_shape)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(inputs)[source]

This is where the layer’s logic lives.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments.

Returns

A tensor or list/tuple of tensors.

call_cell(inputs, state)[source]
get_initial_state(inputs=None, batch_size=None, initializer=None)[source]
input_shape[source]
state_shape[source]
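
The methods above define the cell interface used by the recurrent models in this module. Below is a minimal, hypothetical subclass sketch; the (output, next_state) return convention assumed for call_cell and the keras-style add_weight call in build are assumptions, not guarantees of the documented API.

import tensorflow as tf
import twodlearn.recurrent as tdlr

class EchoCell(tdlr.BaseCell):
    """Hypothetical cell: a single dense map over [inputs, state]."""

    def __init__(self, units, **kwargs):
        self.units = units
        super(EchoCell, self).__init__(**kwargs)

    def build(self, input_shape):
        # state-creation step between instantiation and call (see build above);
        # add_weight is assumed to be provided by the keras-style base Layer
        self.kernel = self.add_weight(
            'kernel', shape=[int(input_shape[-1]) + self.units, self.units])
        super(EchoCell, self).build(input_shape)

    def get_initial_state(self, inputs=None, batch_size=None, initializer=None):
        # zero state of assumed shape [batch_size, units]
        return tf.zeros([batch_size, self.units])

    def call_cell(self, inputs, state):
        # one recurrent step; the output equals the next state in this sketch
        concat = tf.concat([inputs, state], axis=-1)
        next_state = tf.tanh(tf.matmul(concat, self.kernel))
        return next_state, next_state
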
class twodlearn.recurrent.DenseCell(trainable=True, name=None, *args, **kwargs)[source]

Bases: twodlearn.recurrent.BaseCell

Base RNN cell for which the inputs and states are represented using a dense tensor.

get_initial_state(batch_size=None, initializer=None)[source]
get_inputs(batch_size=None, initializer=None)[source]
class twodlearn.recurrent.Lstm(n_inputs, n_outputs, n_hidden, name=None, **kargs)[source]

Bases: twodlearn.recurrent.Rnn

class LstmSetup(**kargs)[source]

Bases: twodlearn.recurrent.RnnSetup

class LstmStateAndOutput(hidden, y)[source]

Bases: twodlearn.core.common.TdlModel

hidden[source]
y[source]
property labels[source]
ModelOutput[source]

alias of Lstm.LstmSetup

define_cell(n_inputs, n_outputs, n_hidden)[source]
n_hidden[source]
property parameters[source]
property weights[source]
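
Hedged usage sketch: Lstm inherits evaluate from Rnn (documented below). The value passed for n_hidden and the attributes read from the returned setup object are assumptions.

import twodlearn.recurrent as tdlr

# n_hidden is assumed here to be a single width; it may also accept a list
lstm = tdlr.Lstm(n_inputs=3, n_outputs=2, n_hidden=64, name='lstm')
# unroll for 10 steps; x0 and inputs are assumed to be created internally
# (variables or placeholders) when omitted, see RnnSetup.inputs and x0 below
setup = lstm.evaluate(n_unrollings=10)
outputs = setup.outputs   # RnnSetup.outputs (documented property)
loss = setup.loss         # RnnSetup.loss (documented property)
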
class twodlearn.recurrent.Lstm2Lstm(n_inputs, n_outputs, n_hidden, afunction=<function tanh>, encoder_afunction=<function tanh>, name=None)[source]

Bases: twodlearn.core.common.TdlModel

Uses an Lstm to convert a fixed-length sequence into the initial state of an LSTM sequential model. See the usage sketch at the end of this class entry.

ModelOutput[source]

alias of Lstm2Lstm.Output

class Output(model, n_unrollings=1, encoder_n_unrollings=1, batch_size=None, inputs=None, encoder_inputs=None, compute_loss=True, options=None, name='lstm2lstm')[source]

Bases: twodlearn.core.common.TdlModel

property encoder[source]
property fit_loss[source]
property inputs[source]
property labels[source]
property loss[source]
property lstm[source]
property x0[source]

Inputs to the encoder

property y[source]
encoder[source]

Autoinit with arguments [‘n_inputs’, ‘n_outputs’, ‘n_hidden’, ‘afunction’, ‘name’]

lstm[source]

Autoinit with arguments [‘n_inputs’, ‘n_outputs’, ‘n_hidden’, ‘afunction’, ‘name’]

property parameters[source]
property weights[source]
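
Lstm2Lstm does not list an evaluate method above; one hedged way to build its graph is to instantiate the documented Output class directly, as sketched below. The handling of omitted inputs/encoder_inputs is an assumption.

import twodlearn.recurrent as tdlr

model = tdlr.Lstm2Lstm(n_inputs=3, n_outputs=2, n_hidden=64)
# signature taken from Lstm2Lstm.Output above
out = tdlr.Lstm2Lstm.Output(model, n_unrollings=10,
                            encoder_n_unrollings=5, batch_size=32)
y = out.y          # decoder outputs (documented property)
loss = out.loss    # available when compute_loss=True (assumption)
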
class twodlearn.recurrent.LstmCellOptimized(n_inputs, n_units, afunction=<function tanh>, name='LstmCell')[source]

Bases: twodlearn.core.common.TdlModel

Single LSTM cell defined as in: “Generating Sequences with Recurrent Neural Networks”, Alex Graves, 2014. See the instantiation sketch at the end of this class entry.

n_inputs[source]

number of inputs

n_nodes[source]

number of nodes

afunction[source]

activation function

name[source]

name used in all TensorFlow variables’ names

class LstmCellSetup(**kargs)[source]

Bases: twodlearn.core.common.OutputModel

input_state[source]
inputs[source]
value[source]
afunction[source]

activation function for the cell

evaluate(*args, **kargs)[source]
n_inputs[source]
property n_outputs[source]
n_units[source]
parameters[source]
property weights[source]
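
Hedged instantiation sketch. The positional arguments accepted by evaluate are not documented above, so the (inputs, state) call pattern shown here is an assumption modelled on LstmCellSetup.inputs and input_state.

import tensorflow as tf
import twodlearn.recurrent as tdlr

cell = tdlr.LstmCellOptimized(n_inputs=3, n_units=64, afunction=tf.tanh)
# dummy batch of 8 samples; shapes are assumptions
inputs = tf.zeros([8, 3])
state = tdlr.LstmState(h=tf.zeros([8, 64]), x=tf.zeros([8, 64]))
# assumed call pattern: evaluate(inputs, state) -> LstmCellSetup
step = cell.evaluate(inputs, state)
y = step.value   # documented attribute of LstmCellSetup
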
class twodlearn.recurrent.LstmState(h, x)[source]

Bases: twodlearn.core.common.TdlModel

h[source]
x[source]
class twodlearn.recurrent.Mlp2Lstm(n_inputs, n_outputs, window_size=1, name=None, **kargs)[source]

Bases: twodlearn.core.common.TdlModel

Uses an MLP to convert a fixed-length sequence into the initial state of an LSTM sequential model. See the usage sketch at the end of this class entry.

class Mlp2LstmOutput(**kargs)[source]

Bases: twodlearn.core.common.OutputModel

batch_size[source]
inputs[source]

Autoinit with arguments [‘n_unrollings’, ‘Type’, ‘batch_size’]

lstm[source]
lstm_x0[source]
property n_unrollings[source]
property window_size[source]
x0[source]

Autoinit with arguments [‘Type’, ‘batch_size’]

evaluate(x0=None, inputs=None, **kargs)[source]
lstm[source]

Autoinit with arguments [‘n_hidden’]

mlp[source]

Autoinit with arguments [‘n_hidden’]

n_inputs[source]
n_outputs[source]
property parameters[source]
property weights[source]
window_size[source]
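
Hedged usage sketch based on the documented constructor and evaluate signature. Passing n_unrollings through **kargs and the internal creation of x0/inputs when they are omitted are assumptions.

import twodlearn.recurrent as tdlr

model = tdlr.Mlp2Lstm(n_inputs=3, n_outputs=2, window_size=5)
# x0 is the fixed-length window consumed by the MLP encoder (see x0 above);
# placeholders are assumed to be created when x0/inputs are omitted
out = model.evaluate(n_unrollings=10)
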
class twodlearn.recurrent.MlpNarx(n_inputs, n_outputs, window_size, n_hidden, afunction=<function relu>, name='mlp_narx', **kargs)[source]

Bases: twodlearn.recurrent.Narx

Narx that uses an Mlp as its cell. See the usage sketch at the end of this class entry.

CellModel[source]

alias of twodlearn.feedforward.MlpNet

ModelOutput[source]

alias of MlpNarx.Output

class Output(model, x0=None, n_unrollings=1, batch_size=None, inputs=None, compute_loss=True, options=None, name=None)[source]

Bases: twodlearn.recurrent.NarxSetup

property labels[source]
property afunction[source]

activation function for the MLP

define_cell(n_inputs, n_outputs, window_size)[source]
property n_hidden[source]

Number of hidden layers

setup(*args, **kargs)[source]
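
MlpNarx inherits evaluate from Rnn; a hedged sketch follows. The list form of n_hidden (one width per hidden layer) is an assumption.

import twodlearn.recurrent as tdlr

narx = tdlr.MlpNarx(n_inputs=3, n_outputs=2, window_size=4,
                    n_hidden=[64, 64])
# unroll for 20 steps; x0/inputs default to internally created tensors
# (assumption, see Rnn.evaluate below)
setup = narx.evaluate(n_unrollings=20)
predictions = setup.outputs
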
class twodlearn.recurrent.MultilayerLstmCell(n_inputs, n_hidden, n_outputs=None, output_layer=None, name=None, **kargs)[source]

Bases: twodlearn.core.common.TdlModel

class MultilayerLstmCellSetup(**kargs)[source]

Bases: twodlearn.core.common.OutputModel

hidden[source]
input_h[source]
input_x[source]
output[source]
property state[source]
evaluate(*args, **kargs)[source]
hidden_layers[source]
n_hidden[source]
n_inputs[source]
property n_outputs[source]
output_layer[source]
property parameters[source]
property weights[source]
class twodlearn.recurrent.Narx(n_inputs, n_outputs, window_size=1, name='narx', **kargs)[source]

Bases: twodlearn.recurrent.Rnn

class NarxSetup(**kargs)[source]

Bases: twodlearn.recurrent.RnnSetup

property window_size[source]
cell[source]
define_cell(n_inputs, n_outputs, window_size)[source]
property window_size[source]
class twodlearn.recurrent.Rnn(n_inputs, n_outputs, n_states=None, name='rnn', **kargs)[source]

Bases: twodlearn.core.common.TdlModel

class RnnSetup(**kargs)[source]

Bases: twodlearn.core.common.TdlModel

inputs[source]

exogenous inputs

property loss[source]
model[source]
property n_inputs[source]

number of exogenous inputs

property n_outputs[source]

number of outputs from the model

n_unrollings[source]
property outputs[source]
reset_inputs[source]
property states[source]

State of the network

property unrolled[source]

list of unrolled networks

x0[source]

initial state

cell[source]
define_cell(n_inputs, n_outputs, n_states)[source]
evaluate(x0=None, inputs=None, n_unrollings=None, options=None, name=None, **kargs)[source]
n_inputs[source]
n_outputs[source]
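
New recurrent models are typically derived from Rnn by overriding define_cell, as Lstm and Narx above do. The sketch below is hypothetical; the kind of object define_cell is expected to return is an assumption (MultilayerLstmCell is used only as an illustration of a cell-like TdlModel).

import twodlearn.recurrent as tdlr

class MyRnn(tdlr.Rnn):
    """Hypothetical Rnn subclass with an LSTM-based cell."""
    def define_cell(self, n_inputs, n_outputs, n_states):
        return tdlr.MultilayerLstmCell(
            n_inputs=n_inputs, n_hidden=[n_states], n_outputs=n_outputs)

model = MyRnn(n_inputs=3, n_outputs=2, n_states=16)
setup = model.evaluate(n_unrollings=10)   # returns an RnnSetup (assumption)
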
class twodlearn.recurrent.SimpleRnn(cell, name='SimpleRnn', options=None, **kargs)[source]

Bases: twodlearn.core.common.TdlModel

class RnnOutput(model, x0=None, inputs=None, n_unrollings=None, options=None, name=None)[source]

Bases: twodlearn.core.common.TdlModel

inputs[source]

Sets up either tf variables or placeholders for the external (control) inputs.

loss[source]

Decorator used to specify an optional property inside a model. It works similarly to @property, but the decorated method corresponds to the initialization of the property.

property n_unrollings[source]
reset_inputs[source]
property unrolled[source]

list of unrolled networks

property x[source]
x0[source]
property y[source]
cell[source]
evaluate(x0=None, inputs=None, n_unrollings=None, options=None, name=None, **kargs)[source]
regularizer[source]

Decorator used to specify a regularizer for a model. It works similarly to @property, but the decorated method corresponds to the initialization of the regularizer.

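SimpleRnn unrolls an externally supplied cell. Hedged usage sketch below; whether MultilayerLstmCell satisfies the cell interface SimpleRnn expects is an assumption.

import twodlearn.recurrent as tdlr

cell = tdlr.MultilayerLstmCell(n_inputs=3, n_hidden=[32], n_outputs=2)
rnn = tdlr.SimpleRnn(cell)
out = rnn.evaluate(n_unrollings=10)
y = out.y     # documented property of RnnOutput
x = out.x     # documented property of RnnOutput
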
class twodlearn.recurrent.StateSpaceCell(trainable=True, name=None, *args, **kwargs)[source]

Bases: twodlearn.recurrent.BaseCell

call_cell(inputs, state)[source]
input_shape[source]
output_model[source]

Callable model with signature: outputs = call(state)

state_model[source]

Callable model with signature: next_state = call(inputs, state)

state_shape[source]
class twodlearn.recurrent.StateSpaceDense(trainable=True, name=None, *args, **kwargs)[source]

Bases: twodlearn.recurrent.StateSpaceCell, twodlearn.recurrent.DenseCell
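
StateSpaceCell splits each step into a state transition (state_model) and an observation map (output_model), as documented above. The sketch below assumes these callables can be supplied as constructor keyword arguments named after the documented attributes, and that call_cell returns an (output, next_state) pair; both are assumptions.

import tensorflow as tf
import twodlearn.recurrent as tdlr

def transition(inputs, state):
    # next_state = call(inputs, state), matching state_model above
    return 0.9 * state + 0.1 * inputs

def observation(state):
    # outputs = call(state), matching output_model above
    return tf.reduce_sum(state, axis=-1, keepdims=True)

# hypothetical keyword arguments named after the documented attributes
cell = tdlr.StateSpaceDense(state_model=transition, output_model=observation)
# assumed to return (output, next_state)
output, next_state = cell.call_cell(inputs=tf.ones([8, 4]),
                                    state=tf.zeros([8, 4]))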

twodlearn.recurrent.explicit_call_wrapper(model)[source]

Refactors the call function of a model/layer to have the signature: call(inputs, state)

twodlearn.recurrent.same_state_output_wrapper(cell)[source]

Refactors the call function of a model/layer to have the signature outputs, state = call(inputs, state), where the state is returned as the output as well (outputs == state).
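
Hedged illustration of the two wrappers. How explicit_call_wrapper routes inputs and state into the wrapped call is not documented above, so only the resulting signatures are shown; the keras Dense layer is used purely as a placeholder model.

import tensorflow as tf
import twodlearn.recurrent as tdlr

# hypothetical single-input model to be adapted
dense = tf.keras.layers.Dense(16)

# after wrapping, the call signature is: next_state = call(inputs, state)
state_model = tdlr.explicit_call_wrapper(dense)

# after wrapping, the cell returns its state as the output as well:
# outputs, state = call(inputs, state) with outputs == state
cell = tdlr.same_state_output_wrapper(state_model)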