twodlearn.bayesnet.gaussian_process module¶
- class twodlearn.bayesnet.gaussian_process.BasisBase(**kargs)[source]¶
- class twodlearn.bayesnet.gaussian_process.ExplicitVGP(**kargs)[source]¶
  Bases: twodlearn.core.common.TdlModel
  - class EVGPPrediction(**kargs)[source]¶
    Bases: twodlearn.bayesnet.gaussian_process.EVGPInference, twodlearn.bayesnet.distributions.MVN
- class twodlearn.bayesnet.gaussian_process.GPNegLogEvidence(cov, cov_inv=None, labels=None, loc=None, name=None)[source]¶
  Bases: twodlearn.losses.EmpiricalLoss
  Negative log evidence for a Gaussian process with training covariance cov = K(train, train).
  The loss takes the following form:

    loss = 0.5 * (y^T inv(cov) y + log|cov| + n log(2 pi))

  cov_inv can be provided when instantiating the loss to avoid computing the inverse multiple times.
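The loss above can be sketched in plain NumPy (a minimal illustration of the formula only, not the library's TensorFlow implementation; the function name is hypothetical):

```python
import numpy as np

def gp_neg_log_evidence(y, cov, cov_inv=None):
    """Negative log marginal likelihood of y under N(0, cov):
    0.5 * (y^T inv(cov) y + log|cov| + n log(2 pi))."""
    n = y.shape[0]
    if cov_inv is None:
        cov_inv = np.linalg.inv(cov)
    # slogdet is more stable than log(det(cov)) for large matrices
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (y @ cov_inv @ y + logdet + n * np.log(2 * np.pi))
```

Passing a precomputed `cov_inv` skips the inversion, mirroring the `cov_inv` argument of the class.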
- class twodlearn.bayesnet.gaussian_process.GaussianProcess(xm, ym, y_scale=0.1, kernel=None, options=None, name=None, **kargs)[source]¶
  Bases: twodlearn.core.common.TdlModel
  Gaussian process with zero mean.

  By default, the covariance kernel takes the following form:

    K(i, j) = (f_scale**2) exp(-0.5 (x1(i) - x2(j))^T (l_scale**-2) I (x1(i) - x2(j)))

  where x1 and x2 are matrices whose rows represent samples.

  By default, when float values of l_scale, f_scale and y_scale are provided, trainable variables are created. If you want them fixed, provide a kernel with parameters defined as tf.Variable(value, trainable=False).
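The default squared-exponential kernel above can be sketched in NumPy (an illustration of the formula, not the module's kernel object; the function name is an assumption):

```python
import numpy as np

def se_kernel(x1, x2, l_scale=1.0, f_scale=1.0):
    """K(i, j) = f_scale**2 * exp(-0.5 * ||x1[i] - x2[j]||^2 / l_scale**2).

    x1: (n1, d) matrix whose rows are samples; x2: (n2, d) likewise.
    Returns the (n1, n2) covariance matrix.
    """
    diff = x1[:, None, :] - x2[None, :, :]          # (n1, n2, d) pairwise differences
    sqdist = np.sum(diff ** 2, axis=-1)             # squared Euclidean distances
    return (f_scale ** 2) * np.exp(-0.5 * sqdist / l_scale ** 2)
```

The diagonal of `se_kernel(X, X)` equals `f_scale**2`, and the matrix is symmetric, as expected of a valid covariance.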
  - class GPPosterior(**kargs)[source]¶
    Bases: twodlearn.bayesnet.gaussian_process.GpOutput, twodlearn.bayesnet.distributions.MVN

    Compute the posterior distribution p(f* | x*, ym, Xm), where {ym, Xm} is the training dataset of the GP model.
    - predict(inputs)[source]¶
      Compute the posterior distribution p(f* | x*, y, X), where {y, X} is the training dataset of the GP model.

      Parameters: inputs (tf.Tensor, TdlModel) – matrix with test inputs x*.

      Returns: TdlModel with the value of p(f* | x*, y, X):
        loc: mean for f*
        scale: scale for f*
        posterior: distributions for y* and f*
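The posterior computed by predict follows the standard zero-mean GP equations, sketched here in NumPy (a hedged illustration under standard assumptions, not the module's implementation; the function name is hypothetical):

```python
import numpy as np

def gp_posterior(x_star, xm, ym, kernel, y_scale=0.1):
    """Posterior p(f* | x*, ym, Xm) for a zero-mean GP:
      loc = K(x*, Xm) inv(Kmm + y_scale^2 I) ym
      cov = K(x*, x*) - K(x*, Xm) inv(Kmm + y_scale^2 I) K(Xm, x*)
    """
    Kmm = kernel(xm, xm) + (y_scale ** 2) * np.eye(len(xm))
    Ksm = kernel(x_star, xm)
    Kss = kernel(x_star, x_star)
    alpha = np.linalg.solve(Kmm, ym)                # inv(Kmm) @ ym via a solve
    loc = Ksm @ alpha
    cov = Kss - Ksm @ np.linalg.solve(Kmm, Ksm.T)
    return loc, cov
```

Evaluated at the training inputs with small `y_scale`, the posterior mean reproduces the training targets.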
- class twodlearn.bayesnet.gaussian_process.GpWithExplicitMean(gp_model, prior_scale=None, explicit_basis=None, **kargs)[source]¶
  Bases: twodlearn.core.common.TdlModel
  - class EGPPosterior(**kargs)[source]¶
    Bases: twodlearn.bayesnet.gaussian_process.EGpOutput, twodlearn.bayesnet.distributions.MVN
    - predict(inputs)[source]¶
      Compute the posterior distribution p(f* | x*, y, X), where {y, X} is the training dataset of the GP model.

      Parameters: inputs (tf.Tensor, TdlModel) – matrix with test inputs x*.

      Returns: TdlModel with the value of p(f* | x*, y, X):
        loc: mean for f*
        scale: scale for f*
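For a GP with an explicit (parametric) mean, the posterior mean is commonly obtained by fitting the basis weights by generalized least squares and running a zero-mean GP on the residuals. The sketch below illustrates that standard construction in NumPy; it is an assumption about the math, not the module's code, and the function and argument names are hypothetical:

```python
import numpy as np

def egp_posterior_mean(x_star, xm, ym, kernel, basis, y_scale=0.1):
    """Posterior mean of a GP with an explicit mean, with the basis
    weights integrated out under a vague prior:
      beta = inv(H K^-1 H^T) H K^-1 y
      mean = H*^T beta + K(x*, Xm) K^-1 (y - H^T beta)
    where H holds the basis functions evaluated at the training inputs.
    """
    K = kernel(xm, xm) + (y_scale ** 2) * np.eye(len(xm))
    H = basis(xm).T                                  # (n_basis, n_train)
    Hs = basis(x_star).T                             # (n_basis, n_test)
    Kinv_y = np.linalg.solve(K, ym)
    Kinv_Ht = np.linalg.solve(K, H.T)
    beta = np.linalg.solve(H @ Kinv_Ht, H @ Kinv_y)  # GLS weight estimate
    resid = ym - H.T @ beta                          # residuals modeled by the GP
    return Hs.T @ beta + kernel(x_star, xm) @ np.linalg.solve(K, resid)
```

On data that lies exactly in the span of the basis, the residual term vanishes and the explicit mean alone reproduces the targets.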
- class twodlearn.bayesnet.gaussian_process.VariationalGP(m=None, input_shape=None, name=None, **kargs)[source]¶
  Bases: twodlearn.core.common.TdlModel
  - class VGPEstimate(**kargs)[source]¶
    Bases: twodlearn.bayesnet.gaussian_process.VGPInference, twodlearn.bayesnet.distributions.MVN
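Variational GPs typically form their predictive distribution from a set of inducing inputs and a Gaussian variational posterior over the inducing values. The NumPy sketch below shows that generic inducing-point predictive; the names `z`, `q_mu`, `q_cov` and the function itself are assumptions for illustration, not this module's parameterization:

```python
import numpy as np

def vgp_predict(x_star, z, q_mu, q_cov, kernel):
    """Predictive q(f*) given inducing inputs z and a variational
    posterior q(u) = N(q_mu, q_cov) over the inducing values f(z):
      A   = K(x*, z) inv(Kzz)
      loc = A q_mu
      cov = K(x*, x*) - A Kzz A^T + A q_cov A^T
    """
    Kzz = kernel(z, z) + 1e-8 * np.eye(len(z))      # jitter for stability
    Ksz = kernel(x_star, z)
    A = np.linalg.solve(Kzz, Ksz.T).T               # K(x*, z) inv(Kzz)
    loc = A @ q_mu
    cov = kernel(x_star, x_star) - A @ Kzz @ A.T + A @ q_cov @ A.T
    return loc, cov
```

Evaluating at the inducing inputs themselves recovers the variational posterior: `A` becomes the identity, so the predictive mean and covariance reduce to `q_mu` and `q_cov`.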