
Commit a06081e: Improve docs
AlexAndorra authored and ricardoV94 committed
1 parent 509b6b5

2 files changed (+23, -18 lines)

pymc/distributions/mixture.py

Lines changed: 7 additions & 7 deletions
@@ -629,7 +629,7 @@ class ZeroInflatedPoisson:
     Parameters
     ----------
     psi : tensor_like of float
-        Expected proportion of Poisson variates (0 < psi < 1)
+        Expected proportion of Poisson draws (0 < psi < 1)
     mu : tensor_like of float
         Expected number of occurrences during the given interval
         (mu >= 0).
@@ -692,7 +692,7 @@ class ZeroInflatedBinomial:
     Parameters
     ----------
     psi : tensor_like of float
-        Expected proportion of Binomial variates (0 < psi < 1)
+        Expected proportion of Binomial draws (0 < psi < 1)
     n : tensor_like of int
         Number of Bernoulli trials (n >= 0).
     p : tensor_like of float
@@ -779,7 +779,7 @@ def ZeroInfNegBinom(a, m, psi, x):
     Parameters
     ----------
     psi : tensor_like of float
-        Expected proportion of NegativeBinomial variates (0 < psi < 1)
+        Expected proportion of NegativeBinomial draws (0 < psi < 1)
     mu : tensor_like of float
         Poisson distribution parameter (mu > 0).
     alpha : tensor_like of float
@@ -867,7 +867,7 @@ class HurdlePoisson:
     Parameters
     ----------
     psi : tensor_like of float
-        Expected proportion of Poisson variates (0 < psi < 1)
+        Expected proportion of Poisson draws (0 < psi < 1)
     mu : tensor_like of float
         Expected number of occurrences (mu >= 0).
     """
@@ -911,7 +911,7 @@ class HurdleNegativeBinomial:
     Parameters
     ----------
     psi : tensor_like of float
-        Expected proportion of Negative Binomial variates (0 < psi < 1)
+        Expected proportion of Negative Binomial draws (0 < psi < 1)
     alpha : tensor_like of float
         Gamma distribution shape parameter (alpha > 0).
     mu : tensor_like of float
@@ -963,7 +963,7 @@ class HurdleGamma:
     Parameters
     ----------
     psi : tensor_like of float
-        Expected proportion of Gamma variates (0 < psi < 1)
+        Expected proportion of Gamma draws (0 < psi < 1)
     alpha : tensor_like of float, optional
         Shape parameter (alpha > 0).
     beta : tensor_like of float, optional
@@ -1015,7 +1015,7 @@ class HurdleLogNormal:
     Parameters
     ----------
     psi : tensor_like of float
-        Expected proportion of LogNormal variates (0 < psi < 1)
+        Expected proportion of LogNormal draws (0 < psi < 1)
     mu : tensor_like of float, default 0
         Location parameter.
     sigma : tensor_like of float, optional
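The wording change above clarifies what `psi` means in all of these mixtures: it is the weight on the non-degenerate component, i.e. the expected proportion of draws that come from the named distribution, with the remainder being structural zeros. A minimal NumPy sketch of that generative process for the zero-inflated Poisson case (independent of PyMC; the parameter values are chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
psi, mu, n = 0.7, 3.0, 100_000  # psi = P(a draw comes from the Poisson component)

# With probability psi, draw from Poisson(mu); otherwise emit a structural zero.
from_poisson = rng.random(n) < psi
draws = np.where(from_poisson, rng.poisson(mu, size=n), 0)

# The mixture mean is psi * mu, since the zero component contributes nothing.
print(draws.mean())  # close to psi * mu = 2.1
```

The hurdle variants differ only in that the non-zero component is additionally truncated away from zero, so `psi` there is the proportion of non-zero draws.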

pymc/gp/hsgp_approx.py

Lines changed: 16 additions & 11 deletions
@@ -92,15 +92,15 @@ class HSGP(Base):
 
     The `gp.HSGP` class is an implementation of the Hilbert Space Gaussian process. It is a
     reduced rank GP approximation that uses a fixed set of basis vectors whose coefficients are
-    random functions of a stationary covariance function's power spectral density. It's usage
+    random functions of a stationary covariance function's power spectral density. Its usage
     is largely similar to `gp.Latent`. Like `gp.Latent`, it does not assume a Gaussian noise model
     and can be used with any likelihood, or as a component anywhere within a model. Also like
     `gp.Latent`, it has `prior` and `conditional` methods. It supports any sum of covariance
     functions that implement a `power_spectral_density` method. (Note, this excludes the
     `Periodic` covariance function, which uses a different set of basis functions for a
     low rank approximation, as described in `HSGPPeriodic`.).
 
-    For information on choosing appropriate `m`, `L`, and `c`, refer Ruitort-Mayol et al. or to
+    For information on choosing appropriate `m`, `L`, and `c`, refer to Ruitort-Mayol et al. or to
     the PyMC examples that use HSGP.
 
     To work with the HSGP in its "linearized" form, as a matrix of basis vectors and a vector of
@@ -117,14 +117,14 @@ class HSGP(Base):
         `active_dim`.
     c: float
         The proportion extension factor. Used to construct L from X. Defined as `S = max|X|` such
-        that `X` is in `[-S, S]`. `L` is the calculated as `c * S`. One of `c` or `L` must be
+        that `X` is in `[-S, S]`. `L` is calculated as `c * S`. One of `c` or `L` must be
         provided. Further information can be found in Ruitort-Mayol et al.
     drop_first: bool
         Default `False`. Sometimes the first basis vector is quite "flat" and very similar to
         the intercept term. When there is an intercept in the model, ignoring the first basis
         vector may improve sampling. This argument will be deprecated in future versions.
     parameterization: str
-        Whether to use `centred` or `noncentered` parameterization when multiplying the
+        Whether to use the `centered` or `noncentered` parameterization when multiplying the
         basis by the coefficients.
     cov_func: Covariance function, must be an instance of `Stationary` and implement a
         `power_spectral_density` method.
@@ -245,16 +245,16 @@ def prior_linearized(self, Xs: TensorLike):
     """Linearized version of the HSGP. Returns the Laplace eigenfunctions and the square root
     of the power spectral density needed to create the GP.
 
-    This function allows the user to bypass the GP interface and work directly with the basis
+    This function allows the user to bypass the GP interface and work with the basis
     and coefficients directly. This format allows the user to create predictions using
     `pm.set_data` similarly to a linear model. It also enables computational speed ups in
-    multi-GP models since they may share the same basis. The return values are the Laplace
+    multi-GP models, since they may share the same basis. The return values are the Laplace
     eigenfunctions `phi`, and the square root of the power spectral density.
 
     Correct results when using `prior_linearized` in tandem with `pm.set_data` and
     `pm.MutableData` require two conditions. First, one must specify `L` instead of `c` when
     the GP is constructed. If not, a RuntimeError is raised. Second, the `Xs` needs to be
-    zero-centered, so it's mean must be subtracted. An example is given below.
+    zero-centered, so its mean must be subtracted. An example is given below.
 
     Parameters
     ----------
@@ -286,9 +286,9 @@ def prior_linearized(self, Xs: TensorLike):
     # L = [10] means the approximation is valid from Xs = [-10, 10]
     gp = pm.gp.HSGP(m=[200], L=[10], cov_func=cov_func)
 
-    # Order is important. First calculate the mean, then make X a shared variable,
-    # then subtract the mean. When X is mutated later, the correct mean will be
-    # subtracted.
+    # Order is important.
+    # First calculate the mean, then make X a shared variable, then subtract the mean.
+    # When X is mutated later, the correct mean will be subtracted.
     X_mean = np.mean(X, axis=0)
     X = pm.MutableData("X", X)
     Xs = X - X_mean
@@ -301,9 +301,14 @@ def prior_linearized(self, Xs: TensorLike):
     # as m_star.
     beta = pm.Normal("beta", size=gp._m_star)
 
-    # The (non-centered) GP approximation is given by
+    # The (non-centered) GP approximation is given by:
     f = pm.Deterministic("f", phi @ (beta * sqrt_psd))
 
+    # The centered approximation can be more efficient when
+    # the GP is stronger than the noise
+    # beta = pm.Normal("beta", sigma=sqrt_psd, size=gp._m_star)
+    # f = pm.Deterministic("f", phi @ beta)
+
     ...
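The comment added in the last hunk contrasts the non-centered form (standard-normal `beta` scaled by `sqrt_psd` after sampling) with the centered form (`beta` drawn directly with standard deviation `sqrt_psd`). The two parameterizations define identically distributed coefficients; they differ only in posterior geometry, which is why one or the other can sample better depending on how strong the GP signal is relative to the noise. A quick NumPy sketch of that equivalence, using an arbitrary stand-in for the PSD square root (the sizes and seed here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 200_000
sqrt_psd = np.linspace(0.5, 2.0, m)  # stand-in for the power spectral density's square root

# Non-centered: standard-normal coefficients, scaled by sqrt_psd afterwards.
beta_noncentered = rng.standard_normal((n, m)) * sqrt_psd

# Centered: coefficients drawn directly with standard deviation sqrt_psd.
beta_centered = rng.normal(0.0, sqrt_psd, size=(n, m))

# Both parameterizations yield coefficients whose std matches sqrt_psd.
print(beta_noncentered.std(axis=0))
print(beta_centered.std(axis=0))
```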
