The mean function
A set of basis functions
Returns the coefficients of the sample that best explains the given training data. It is assumed that the training data (values) are subject to zero-mean Gaussian noise.
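A minimal sketch of how such coefficients can be estimated, in Python/NumPy. The function name, the basis matrix `Q`, and the regularized least-squares formulation are illustrative assumptions, not this library's API: for a low-rank GP `f(x) = mu(x) + sum_i c_i * sqrt(lambda_i) * phi_i(x)` with `c ~ N(0, I)` and zero-mean Gaussian observation noise, the MAP coefficients solve a ridge problem.

```python
import numpy as np

def map_coefficients(Q, y, mu, sigma2):
    """MAP estimate of the N(0, I) coefficients c of a low-rank GP,
    given noisy observations y = f(X) + eps, eps ~ N(0, sigma2 * I).

    Q[i, j] = sqrt(lambda_j) * phi_j(x_i)  (scaled basis at the points).
    Minimizing ||y - mu - Q c||^2 / sigma2 + ||c||^2 gives
    c = (Q^T Q + sigma2 * I)^{-1} Q^T (y - mu).
    """
    r = Q.shape[1]
    A = Q.T @ Q + sigma2 * np.eye(r)
    return np.linalg.solve(A, Q.T @ (y - mu))
```

With near-zero noise this reduces to ordinary least squares on the basis, so noiseless data generated from known coefficients is recovered almost exactly.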
The covariance function. Needs to be positive definite.
Discretize the Gaussian process at the given points.
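Discretizing a GP at finitely many points yields a multivariate normal over the function values at those points. A sketch under assumed scalar-valued mean and covariance functions (the real library likely handles vector-valued outputs):

```python
import numpy as np

def discretize(mean_fn, cov_fn, pts):
    """Discretize a GP at `pts`: the result is the mean vector and
    covariance matrix of the induced multivariate normal over the
    function values at those points."""
    mu = np.array([mean_fn(x) for x in pts])
    K = np.array([[cov_fn(a, b) for b in pts] for a in pts])
    return mu, K
```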
An instance of the Gaussian process, formed as a linear combination of the KLT basis functions using the given coefficients c.
Coefficients that determine the linear combination. They are assumed to be N(0, 1) distributed.
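The instance is the Karhunen-Loève expansion truncated to the available basis. A hedged sketch (the eigenpair representation is an assumption about the internal form):

```python
import numpy as np

def instance(mean_fn, eigpairs, c):
    """GP sample determined by coefficients c (assumed N(0, 1)):
    f(x) = mean(x) + sum_i c_i * sqrt(lambda_i) * phi_i(x),
    where eigpairs = [(lambda_i, phi_i), ...]."""
    def f(x):
        return mean_fn(x) + sum(
            ci * np.sqrt(lam) * phi(x)
            for ci, (lam, phi) in zip(c, eigpairs))
    return f
```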
A set of basis functions
Returns the log of the probability density of the instance produced by the x coefficients. If you are interested only in ordinal comparisons of densities, use this method, as it is numerically more stable.
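Since the coefficients are standard-normal, the log density has a closed form; working in log space avoids the underflow that `exp(logpdf(x))` suffers for large `||x||`, which is why ordinal comparisons stay reliable. A sketch (function name assumed):

```python
import numpy as np

def logpdf(x):
    """Log density of standard-normal coefficients x ~ N(0, I):
    log p(x) = -0.5 * (r * log(2*pi) + ||x||^2), r = len(x).
    The pdf itself underflows to 0.0 for large ||x||; the log does not."""
    x = np.asarray(x, dtype=float)
    return -0.5 * (x.size * np.log(2 * np.pi) + x @ x)
```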
Compute the marginal distribution at a single point.
Compute the marginal distribution for the given points. The result is again a Gaussian process, whose domain is defined by the given points.
The mean function
Returns the probability density of the instance produced by the x coefficients.
The posterior distribution of the Gaussian process, with respect to the given trainingData. It is computed using Gaussian process regression.
The posterior distribution of the Gaussian process, with respect to the given trainingData. It is computed using Gaussian process regression. We assume that the trainingData is subject to isotropic Gaussian noise with variance sigma2.
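The posterior under isotropic noise follows the standard GP regression equations. A self-contained sketch in NumPy (scalar-valued, function names illustrative): with kernel matrix `K` over the training points and `A = K + sigma2 * I`, the posterior mean is `m(x) + k(x, X) A^{-1} (y - m(X))` and the posterior covariance is `k(x, x') - k(x, X) A^{-1} k(X, x')`.

```python
import numpy as np

def posterior(mean_fn, cov_fn, X, y, sigma2):
    """Posterior GP via standard GP regression with isotropic
    noise variance sigma2. Returns posterior mean and covariance
    functions."""
    X = list(X)
    K = np.array([[cov_fn(a, b) for b in X] for a in X])
    A = K + sigma2 * np.eye(len(X))
    resid = np.array(y) - np.array([mean_fn(a) for a in X])
    alpha = np.linalg.solve(A, resid)

    def post_mean(x):
        kx = np.array([cov_fn(x, a) for a in X])
        return mean_fn(x) + kx @ alpha

    def post_cov(x, xp):
        kx = np.array([cov_fn(x, a) for a in X])
        kxp = np.array([cov_fn(xp, a) for a in X])
        return cov_fn(x, xp) - kx @ np.linalg.solve(A, kxp)

    return post_mean, post_cov
```

As a sanity check, with near-zero noise the posterior mean interpolates the training values and the posterior variance collapses at the training points.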
Returns the sample of the Gaussian process that best explains the given training data. It is assumed that the training data (values) are subject to zero-mean Gaussian noise.
Point/value pairs that the sample should approximate, together with the variance of the noise model at each point.
Returns the sample of the Gaussian process that best explains the given training data. It is assumed that the training data (values) are subject to zero-mean Gaussian noise.
Point/value pairs that the sample should approximate.
Variance of the Gaussian noise that is assumed at every training point.
The rank (i.e. the number of basis functions)
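The rank controls how many orthonormal basis functions the low-rank representation keeps. A discrete analogue of the Mercer decomposition can be sketched by eigendecomposing the kernel matrix on a point grid and retaining the leading eigenpairs (function name and discretization assumed, not the library's method):

```python
import numpy as np

def low_rank_basis(cov_fn, pts, rank):
    """Approximate the GP covariance at `pts` with `rank` basis
    functions: keep the `rank` largest eigenpairs of the kernel
    matrix, a discrete analogue of a Mercer decomposition."""
    K = np.array([[cov_fn(a, b) for b in pts] for a in pts])
    w, V = np.linalg.eigh(K)            # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:rank]    # take the `rank` largest
    return w[idx], V[:, idx]
```

When `rank` equals the number of points, the truncated decomposition reconstructs the kernel matrix exactly; smaller ranks trade accuracy for a cheaper representation.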
A random sample of the Gaussian process
A random sample evaluated at the given points
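Drawing a sample at finitely many points amounts to sampling from the discretized multivariate normal, typically via a Cholesky factor of the covariance matrix. A sketch (the jitter term and function names are illustrative assumptions):

```python
import numpy as np

def sample_at(mean_fn, cov_fn, pts, rng=None):
    """One random GP sample evaluated at `pts`:
    f = mu + L z, where K = L L^T (Cholesky) and z ~ N(0, I)."""
    rng = rng or np.random.default_rng(0)
    mu = np.array([mean_fn(x) for x in pts])
    K = np.array([[cov_fn(a, b) for b in pts] for a in pts])
    # small jitter keeps the factorization stable for near-singular K
    L = np.linalg.cholesky(K + 1e-10 * np.eye(len(pts)))
    return mu + L @ rng.standard_normal(len(pts))
```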
A Gaussian process which is represented in terms of a (small) finite set of basis functions. The basis functions are the orthonormal basis functions given by a Mercer decomposition.
The dimensionality of the input space
The dimensionality of the output space