twodlearn.losses module

class twodlearn.losses.AddLoss(loss1, loss2, name='AddLoss')[source]

Bases: twodlearn.losses.Loss

Tdl autoinitialization with arguments:

Attributes:

property loss1[source]
property loss2[source]
class twodlearn.losses.AddNLosses(losses, name='AddNLosses')[source]

Bases: twodlearn.losses.Loss

Tdl autoinitialization with arguments:

Attributes:

property losses[source]
mean()[source]
class twodlearn.losses.ClassificationLoss(logits, labels=None, name=None)[source]

Bases: twodlearn.losses.EmpiricalLoss

Tdl autoinitialization with arguments:

correct_prediction[source]

(LazzyProperty)

accuracy[source]

(LazzyProperty)

value[source]

(OutputValue)

labels[source]

(InputArgument) Labels for computing the loss; if not provided, they are created automatically

logits[source]

(InputArgument)

cross_entropy[source]

(Submodel)

accuracy[source]
correct_prediction[source]
cross_entropy[source]
labels[source]

Labels for computing the loss; if not provided, they are created automatically

logits[source]
property n_classes[source]
property n_outputs[source]
value[source]
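The `cross_entropy`, `correct_prediction`, and `accuracy` attributes above compute standard classification quantities from `logits` and one-hot `labels`. A minimal NumPy sketch of what they evaluate (an illustration of the math, not the twodlearn implementation; the `softmax` helper and variable names are this example's own):

```python
import numpy as np

def softmax(logits):
    # subtract the row max for numerical stability
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 3.0, 0.2]])
labels = np.array([[1.0, 0.0, 0.0],    # one-hot targets
                   [0.0, 0.0, 1.0]])

probs = softmax(logits)
# cross entropy averaged over the batch
cross_entropy = -np.mean(np.sum(labels * np.log(probs), axis=1))
# correct_prediction: predicted class matches the label class
correct_prediction = np.argmax(logits, axis=1) == np.argmax(labels, axis=1)
# accuracy: fraction of correct predictions
accuracy = correct_prediction.mean()
```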
class twodlearn.losses.EmpiricalLoss(**kargs)[source]

Bases: twodlearn.losses.Loss

Tdl autoinitialization with arguments:

Attributes:

property labels[source]

Labels for computing the loss; if not provided, they are created automatically

class twodlearn.losses.EmpiricalLossWrapper(loss, labels, name='EmpiricalLoss')[source]

Bases: twodlearn.losses.EmpiricalLoss

Tdl autoinitialization with arguments:

Attributes:

property loss[source]
property value[source]
class twodlearn.losses.EmpiricalWithRegularization(empirical, regularizer, alpha=None, name='ERLoss')[source]

Bases: twodlearn.losses.Loss

Linear combination of an Empirical and a Regularizer loss: loss = empirical + alpha * regularizer. If alpha is None, the result is the plain sum of the empirical and regularizer losses: loss = empirical + regularizer.
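A minimal sketch of the combination rule described above, using plain Python scalars in place of actual loss tensors (the values are illustrative only):

```python
empirical = 0.42      # stand-in for an empirical (data-fit) loss value
regularizer = 1.25    # stand-in for a regularization penalty
alpha = 0.1

# with alpha given: weighted combination
total = empirical + alpha * regularizer
# with alpha=None the class reduces to a plain sum
total_plain = empirical + regularizer
```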

Tdl autoinitialization with arguments:

Attributes:

property alpha[source]
property empirical[source]
property labels[source]
property regularizer[source]
class twodlearn.losses.GreaterThan(x, reference, mask=None, func=<function softplus>, name='GreaterThanLoss')[source]

Bases: twodlearn.losses.LessThan

Loss that penalizes values that are larger than a given reference.

\[loss = func(x - reference)\]
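A NumPy sketch of this penalty with the default softplus for `func` (an illustration of the formula above, not the twodlearn implementation; the `softplus` helper is this example's own):

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.logaddexp(0.0, x)

x = np.array([0.5, 1.5, 3.0])
reference = 1.0
# GreaterThan penalizes x exceeding the reference: func(x - reference)
penalty = softplus(x - reference)
# entries further above the reference incur a larger penalty
```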

Tdl autoinitialization with arguments:

reference[source]

(InputArgument)

x[source]

(InputArgument)

mask[source]

(InputArgument)

loss_eval(x, reference, mask)[source]
class twodlearn.losses.L1Regularizer(weights, scale=None, name='L2Regularizer')[source]

Bases: twodlearn.losses.L2Regularizer

Tdl autoinitialization with arguments:

Attributes:

define_loss(weights, scale)[source]
class twodlearn.losses.L2Loss(y, labels=None, name='L2Loss')[source]

Bases: twodlearn.losses.EmpiricalLoss

Computes the mean squared error:

\[loss = \frac{1}{M}\sum (y - labels)^2\]
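The same quantity written out in NumPy (illustrative values; not the twodlearn implementation):

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0])       # model outputs
labels = np.array([1.5, 2.0, 2.0])  # targets
M = y.size
# (1/M) * sum((y - labels)**2)
loss = np.sum((y - labels) ** 2) / M
```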

Tdl autoinitialization with arguments:

Attributes:

define_fit_loss(y, labels)[source]
property n_outputs[source]
property y[source]
class twodlearn.losses.L2Regularizer(weights, scale=None, name='L2Regularizer')[source]

Bases: twodlearn.losses.Loss

Tdl autoinitialization with arguments:

Attributes:

define_loss(weights, scale)[source]
property scale[source]
property weights[source]
class twodlearn.losses.LessThan(x, reference, mask=None, func=<function softplus>, name='LessThanLoss')[source]

Bases: twodlearn.losses.Loss

Defines a loss that penalizes the variable being smaller than a given value:

\[loss = func(reference - x)\]
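A NumPy sketch of this penalty, including the optional `mask` to exclude entries (an illustration of the formula, not the twodlearn implementation; the `softplus` helper is this example's own):

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.logaddexp(0.0, x)

x = np.array([0.2, 0.8, 1.5])
reference = 1.0
mask = np.array([1.0, 1.0, 0.0])  # zero out entries to ignore
# LessThan penalizes x falling below the reference: func(reference - x)
penalty = softplus(reference - x) * mask
```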

Tdl autoinitialization with arguments:

reference[source]

(InputArgument)

x[source]

(InputArgument)

mask[source]

(InputArgument)

loss_eval(x, reference, mask)[source]
mask[source]
mean(*args, **kargs)[source]
reference[source]
x[source]
class twodlearn.losses.Loss(**kargs)[source]

Bases: twodlearn.core.common.TdlModel

Tdl autoinitialization with arguments:

Attributes:

property value[source]
class twodlearn.losses.LossMethod(output_vars, input_vars, OutputClass=None)[source]

Bases: twodlearn.core.common.ModelMethod

Decorator used to specify an operation for a loss inside a model. The decorator works similarly to @property, but the decorated method corresponds to the definition of the operation

Examples

Usage of the decorator:

class MyModel(tdl.TdlModel):
    _submodels = ['evaluate']
    @tdl.LossMethod(['y'], # list of outputs
                    ['x']  # list of inputs
                    )
    def mean_loss(self, x):
        return tf.reduce_mean(x)
class twodlearn.losses.MultipliedLosses(loss1, loss2, name='MultipliedLosses')[source]

Bases: twodlearn.losses.Loss

property loss1[source]
property loss2[source]
class twodlearn.losses.QuadraticLoss(x, q=None, target=None, name='QuadraticLoss')[source]

Bases: twodlearn.losses.Loss

Defines a quadratic loss that takes the form:

\[loss = (X-target) q (X-target)^T\]
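The quadratic form above, evaluated directly in NumPy for a row vector (illustrative values; not the twodlearn implementation):

```python
import numpy as np

x = np.array([[1.0, 2.0]])       # row vector X
target = np.array([[0.5, 1.0]])
q = np.array([[2.0, 0.0],        # quadratic weight matrix
              [0.0, 1.0]])

d = x - target
# (X - target) q (X - target)^T  ->  1x1 result
loss_value = (d @ q @ d.T).item()
```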

Tdl autoinitialization with arguments:

q[source]

(SimpleParameter)

target[source]

(InputArgument)

x[source]

(InputArgument)

mean(*args, **kargs)[source]
q[source]
target[source]
x[source]
class twodlearn.losses.ScaledLoss(alpha, loss, name='ScaledLoss')[source]

Bases: twodlearn.losses.Loss

property alpha[source]
property pre_scaled[source]
twodlearn.losses.convert_loss_to_tensor(value, dtype=None, name=None, as_ref=False)[source]