@@ -92,15 +92,15 @@ class HSGP(Base):

The `gp.HSGP` class is an implementation of the Hilbert Space Gaussian process. It is a
reduced rank GP approximation that uses a fixed set of basis vectors whose coefficients are
- random functions of a stationary covariance function's power spectral density. It's usage
+ random functions of a stationary covariance function's power spectral density. Its usage
is largely similar to `gp.Latent`. Like `gp.Latent`, it does not assume a Gaussian noise model
and can be used with any likelihood, or as a component anywhere within a model. Also like
`gp.Latent`, it has `prior` and `conditional` methods. It supports any sum of covariance
functions that implement a `power_spectral_density` method. (Note, this excludes the
`Periodic` covariance function, which uses a different set of basis functions for a
low rank approximation, as described in `HSGPPeriodic`.)

- For information on choosing appropriate `m`, `L`, and `c`, refer Ruitort-Mayol et al. or to
+ For information on choosing appropriate `m`, `L`, and `c`, refer to Ruitort-Mayol et al. or to
the PyMC examples that use HSGP.

To work with the HSGP in its "linearized" form, as a matrix of basis vectors and a vector of
@@ -117,14 +117,14 @@ class HSGP(Base):
`active_dim`.
c: float
The proportion extension factor. Used to construct L from X. Defined as `S = max|X|` such
- that `X` is in `[-S, S]`. `L` is the calculated as `c * S`. One of `c` or `L` must be
+ that `X` is in `[-S, S]`. `L` is calculated as `c * S`. One of `c` or `L` must be
provided. Further information can be found in Ruitort-Mayol et al.
drop_first: bool
Default `False`. Sometimes the first basis vector is quite "flat" and very similar to
the intercept term. When there is an intercept in the model, ignoring the first basis
vector may improve sampling. This argument will be deprecated in future versions.
parameterization: str
- Whether to use `centred ` or `noncentered` parameterization when multiplying the
+ Whether to use the `centered` or `noncentered` parameterization when multiplying the
basis by the coefficients.
cov_func: Covariance function, must be an instance of `Stationary` and implement a
`power_spectral_density` method.
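
As a quick orientation for these parameters, here is a minimal sketch of constructing an HSGP and drawing a prior, analogous to `gp.Latent`. The 1-D inputs, the hyperpriors, and the choice of `Matern52` (a `Stationary` kernel that implements `power_spectral_density`) are illustrative assumptions, not part of the diff:

import numpy as np
import pymc as pm

# Illustrative 1-D inputs
X = np.linspace(0, 10, 100)[:, None]

with pm.Model():
    ell = pm.Gamma("ell", alpha=2, beta=1)   # lengthscale prior
    eta = pm.HalfNormal("eta", sigma=1)      # amplitude prior
    # Matern52 is stationary and implements `power_spectral_density`
    cov_func = eta**2 * pm.gp.cov.Matern52(input_dim=1, ls=ell)

    # m basis vectors per active dimension; L is derived from X via the
    # proportion extension factor c (S = max|X|, L = c * S)
    gp = pm.gp.HSGP(m=[100], c=1.5, parameterization="noncentered", cov_func=cov_func)
    f = gp.prior("f", X=X)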
@@ -245,16 +245,16 @@ def prior_linearized(self, Xs: TensorLike):
"""Linearized version of the HSGP. Returns the Laplace eigenfunctions and the square root
of the power spectral density needed to create the GP.

- This function allows the user to bypass the GP interface and work directly with the basis
+ This function allows the user to bypass the GP interface and work with the basis
and coefficients directly. This format allows the user to create predictions using
`pm.set_data` similarly to a linear model. It also enables computational speed ups in
- multi-GP models since they may share the same basis. The return values are the Laplace
+ multi-GP models, since they may share the same basis. The return values are the Laplace
eigenfunctions `phi`, and the square root of the power spectral density.

Correct results when using `prior_linearized` in tandem with `pm.set_data` and
`pm.MutableData` require two conditions. First, one must specify `L` instead of `c` when
the GP is constructed. If not, a RuntimeError is raised. Second, the `Xs` needs to be
- zero-centered, so it's mean must be subtracted. An example is given below.
+ zero-centered, so its mean must be subtracted. An example is given below.

Parameters
----------
@@ -286,9 +286,9 @@ def prior_linearized(self, Xs: TensorLike):
# L = [10] means the approximation is valid from Xs = [-10, 10]
gp = pm.gp.HSGP(m=[200], L=[10], cov_func=cov_func)

- # Order is important. First calculate the mean, then make X a shared variable,
- # then subtract the mean. When X is mutated later, the correct mean will be
- # subtracted.
+ # Order is important.
+ # First calculate the mean, then make X a shared variable, then subtract the mean.
+ # When X is mutated later, the correct mean will be subtracted.
X_mean = np.mean(X, axis=0)
X = pm.MutableData("X", X)
Xs = X - X_mean
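
(The lines elided between these two hunks presumably obtain the basis and coefficients; given the signature `prior_linearized(self, Xs: TensorLike)` above, the call would look roughly like this hypothetical bridge:)

# Hypothetical bridge for the elided lines: get the Laplace eigenfunctions
# and the square root of the power spectral density from the centered inputs.
phi, sqrt_psd = gp.prior_linearized(Xs=Xs)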
@@ -301,9 +301,14 @@ def prior_linearized(self, Xs: TensorLike):
# as m_star.
beta = pm.Normal("beta", size=gp._m_star)

- # The (non-centered) GP approximation is given by
+ # The (non-centered) GP approximation is given by:
f = pm.Deterministic("f", phi @ (beta * sqrt_psd))

+ # The centered approximation can be more efficient when
+ # the GP is stronger than the noise.
+ # beta = pm.Normal("beta", sigma=sqrt_psd, size=gp._m_star)
+ # f = pm.Deterministic("f", phi @ beta)
+
...
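
A hedged sketch of how the elided remainder of this example might continue, attaching a likelihood to `f` and predicting via `pm.set_data` as the docstring describes. The names `model`, `y_obs`, and `X_new` are hypothetical:

# Hypothetical continuation (assumes the enclosing `with pm.Model() as model:`
# block and observed values `y_obs`):
sigma = pm.HalfNormal("sigma", sigma=1)
pm.Normal("y", mu=f, sigma=sigma, observed=y_obs, shape=X.shape[0])
idata = pm.sample()

# Predict at hypothetical new inputs X_new by mutating the shared X; the
# stored X_mean is subtracted by the `Xs = X - X_mean` line above, and L
# (not c) was fixed at construction, so the basis stays valid.
with model:
    pm.set_data({"X": X_new})
    ppc = pm.sample_posterior_predictive(idata, var_names=["f"])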