twodlearn.losses module

class twodlearn.losses.AddLoss(loss1, loss2, name='AddLoss')
    Bases: twodlearn.losses.Loss
    Tdl autoinitialization with arguments:
    Attributes:

class twodlearn.losses.AddNLosses(losses, name='AddNLosses')
    Bases: twodlearn.losses.Loss
    Tdl autoinitialization with arguments:
    Attributes:
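
AddLoss and AddNLosses represent the sum of two losses and of a list of losses, respectively. The sketch below only illustrates that arithmetic in plain TensorFlow; the stand-in tensors are made up and no twodlearn objects are constructed:

    import tensorflow as tf

    # Two stand-in loss terms (arbitrary values, purely illustrative).
    loss_a = tf.reduce_mean(tf.square(tf.constant([1.0, 2.0])))   # e.g. an empirical term
    loss_b = tf.reduce_sum(tf.abs(tf.constant([0.5, -0.5])))      # e.g. a regularization term

    # AddLoss corresponds to the pairwise sum, AddNLosses to the sum of a list.
    total_pair = loss_a + loss_b
    total_list = tf.add_n([loss_a, loss_b])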

class twodlearn.losses.ClassificationLoss(logits, labels=None, name=None)
    Bases: twodlearn.losses.EmpiricalLoss
    Tdl autoinitialization with arguments:
        labels: (InputArgument) Labels for computing the loss; if not provided, they are created automatically.
    Attributes:
        accuracy
        correct_prediction
        cross_entropy
        labels: Labels for computing the loss; if not provided, they are created automatically.
        logits
        value
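
The cross_entropy, correct_prediction and accuracy attributes correspond to the usual classification quantities computed from logits and labels. A minimal sketch of those conventional formulas in plain TensorFlow, with hypothetical logits and one-hot labels (the exact reductions used by the class are not shown on this page):

    import tensorflow as tf

    # Hypothetical logits for a batch of 2 examples and 3 classes, with one-hot labels.
    logits = tf.constant([[2.0, 0.5, -1.0],
                          [0.1, 1.5,  0.3]])
    labels = tf.constant([[1.0, 0.0, 0.0],
                          [0.0, 0.0, 1.0]])

    # Conventional softmax cross-entropy, per-example correctness, and accuracy.
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))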

class twodlearn.losses.EmpiricalLoss(**kargs)
    Bases: twodlearn.losses.Loss
    Tdl autoinitialization with arguments:
    Attributes:

class twodlearn.losses.EmpiricalLossWrapper(loss, labels, name='EmpiricalLoss')
    Bases: twodlearn.losses.EmpiricalLoss
    Tdl autoinitialization with arguments:
    Attributes:

class twodlearn.losses.EmpiricalWithRegularization(empirical, regularizer, alpha=None, name='ERLoss')
    Bases: twodlearn.losses.Loss
    Linear combination of an Empirical and a Regularizer loss:
        loss = empirical + alpha * regularizer
    If alpha is None, the result is the sum of the empirical and regularizer losses:
        loss = empirical + regularizer
    Tdl autoinitialization with arguments:
    Attributes:
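
A minimal sketch of the combination described above, written in plain TensorFlow with made-up empirical and regularization terms:

    import tensorflow as tf

    # Stand-in empirical and regularization terms (values are arbitrary).
    empirical = tf.reduce_mean(tf.square(tf.constant([0.2, -0.4, 0.1])))
    regularizer = tf.reduce_sum(tf.square(tf.constant([0.3, 0.7])))
    alpha = 0.01

    # With alpha given, the combined objective is empirical + alpha * regularizer;
    # with alpha left as None, the two terms are simply added.
    total = empirical + alpha * regularizer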

class twodlearn.losses.GreaterThan(x, reference, mask=None, func=<function softplus>, name='GreaterThanLoss')
    Bases: twodlearn.losses.LessThan
    Loss that punishes values being larger than a given value.
        \[loss = func(x - reference)\]
    Tdl autoinitialization with arguments:
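
A rough illustration of this penalty with the default softplus, in plain TensorFlow; LessThan further down is the mirror image, func(reference - x):

    import tensorflow as tf

    x = tf.constant([0.5, 1.0, 2.5])
    reference = 1.0

    # Softplus penalty that grows once x exceeds the reference (GreaterThan),
    # and its mirror image for values falling below it (LessThan).
    greater_penalty = tf.nn.softplus(x - reference)
    less_penalty = tf.nn.softplus(reference - x)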

class twodlearn.losses.L1Regularizer(weights, scale=None, name='L2Regularizer')
    Bases: twodlearn.losses.L2Regularizer
    Tdl autoinitialization with arguments:
    Attributes:

class twodlearn.losses.L2Loss(y, labels=None, name='L2Loss')
    Bases: twodlearn.losses.EmpiricalLoss
    Computes (1/M) sum( (y - labels)**2 )
    Tdl autoinitialization with arguments:
    Attributes:
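
A plain-TensorFlow sketch of the documented formula, with hypothetical predictions and targets:

    import tensorflow as tf

    y = tf.constant([1.0, 2.0, 3.0])        # predictions (hypothetical)
    labels = tf.constant([1.5, 1.5, 2.5])   # targets (hypothetical)

    # (1/M) * sum((y - labels)**2), i.e. the mean squared error over M samples.
    l2 = tf.reduce_mean(tf.square(y - labels))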

class twodlearn.losses.L2Regularizer(weights, scale=None, name='L2Regularizer')
    Bases: twodlearn.losses.Loss
    Tdl autoinitialization with arguments:
    Attributes:
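
Neither regularizer documents its formula on this page; the sketch below assumes the conventional penalties (sum of squared entries for L2, sum of absolute values for L1, multiplied by the optional scale) and does not call the twodlearn classes:

    import tensorflow as tf

    weights = [tf.constant([[0.2, -0.5], [0.1, 0.3]]),
               tf.constant([0.7, -0.1])]
    scale = 0.001

    # Conventional penalties (assumed, not taken from the twodlearn source):
    # L2 sums the squared entries of every weight tensor, L1 their absolute values.
    l2_penalty = scale * tf.add_n([tf.reduce_sum(tf.square(w)) for w in weights])
    l1_penalty = scale * tf.add_n([tf.reduce_sum(tf.abs(w)) for w in weights])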

class twodlearn.losses.LessThan(x, reference, mask=None, func=<function softplus>, name='LessThanLoss')
    Bases: twodlearn.losses.Loss
    Defines a loss that punishes the variable being smaller than a given value.
        \[loss = func(reference - x)\]
    Tdl autoinitialization with arguments:
        mask
        reference

class twodlearn.losses.Loss(**kargs)
    Bases: twodlearn.core.common.TdlModel
    Tdl autoinitialization with arguments:
    Attributes:

class twodlearn.losses.LossMethod(output_vars, input_vars, OutputClass=None)
    Bases: twodlearn.core.common.ModelMethod
    Decorator used to specify an operation for a loss inside a model. The
    decorator works similarly to @property, but the specified method
    corresponds to the definition of the operation.

    Examples

    Usage of the decorator:

        class MyModel(tdl.TdlModel):
            _submodels = ['evaluate']

            @tdl.LossMethod(['y'],  # list of outputs
                            ['x'])  # list of inputs
            def mean_loss(self, x):
                return tf.reduce_mean(x)

class twodlearn.losses.MultipliedLosses(loss1, loss2, name='MultipliedLosses')
    Bases: twodlearn.losses.Loss

class twodlearn.losses.QuadraticLoss(x, q=None, target=None, name='QuadraticLoss')
    Bases: twodlearn.losses.Loss
    Defines a quadratic loss that takes the form:
        \[loss = (X - target)\, q\, (X - target)^T\]
    Tdl autoinitialization with arguments:
        target
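
A plain-TensorFlow sketch of the documented quadratic form, with hypothetical x, target and q:

    import tensorflow as tf

    x = tf.constant([[1.0, 2.0]])            # row vector X (hypothetical)
    target = tf.constant([[0.5, 1.0]])       # target row vector (hypothetical)
    q = tf.constant([[2.0, 0.0],
                     [0.0, 1.0]])            # weighting matrix q

    # (X - target) q (X - target)^T, following the documented form.
    diff = x - target
    quadratic = tf.matmul(tf.matmul(diff, q), diff, transpose_b=True)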

class twodlearn.losses.ScaledLoss(alpha, loss, name='ScaledLoss')
    Bases: twodlearn.losses.Loss
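
ScaledLoss and MultipliedLosses do not document their formulas here; going by the constructor arguments and names, they presumably represent alpha * loss and loss1 * loss2. A purely illustrative sketch of that arithmetic, not a call into twodlearn:

    import tensorflow as tf

    loss1 = tf.reduce_mean(tf.square(tf.constant([0.3, -0.6])))
    loss2 = tf.reduce_sum(tf.abs(tf.constant([0.1, 0.4])))
    alpha = 0.5

    # Presumed semantics given the class names (not confirmed by the docstrings):
    scaled = alpha * loss1            # ScaledLoss(alpha, loss1)
    multiplied = loss1 * loss2        # MultipliedLosses(loss1, loss2)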