Commit f6b5650

Merge branch 'glossary-update-1' of github.com:CloudChaoszero/pymc3 into glossary-update-1
2 parents: b2c5abf + f39b985

File tree: 7 files changed (+25, -25 lines)

.github/ISSUE_TEMPLATE/config.yml

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@ blank_issues_enabled: false
 contact_links:
   - name: PyMC Discourse
     url: https://discourse.pymc.io/
-    about: Ask usage questions about PyMC3
+    about: Ask usage questions about PyMC/PyMC3
   - name: Example notebook error report
     url: https://github.com/pymc-devs/pymc-examples/issues
     about: Please report errors or desired extensions to the tutorials and examples here.

.pre-commit-config.yaml

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ repos:
   - id: isort
     name: isort
 - repo: https://github.com/asottile/pyupgrade
-  rev: v2.26.0
+  rev: v2.29.0
   hooks:
     - id: pyupgrade
       args: [--py37-plus]
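
For context on the version bump: pyupgrade rewrites older Python syntax to the newest form the target version allows (here --py37-plus). Below is a minimal, hypothetical sketch of one rewrite it performs. Note that pyupgrade only modernizes the explicit two-argument super() form, so the Mean.__init__(self) -> super().__init__() changes further down in this commit are a related but separate cleanup.

    class Mean:
        def __init__(self):
            self.ready = True

    # Before: legacy two-argument super() call
    class Constant(Mean):
        def __init__(self):
            super(Constant, self).__init__()

    # After `pyupgrade --py37-plus`: zero-argument form, same behavior
    class Constant(Mean):
        def __init__(self):
            super().__init__()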

docs/source/glossary.md

Lines changed: 13 additions & 13 deletions
@@ -20,44 +20,44 @@ Dispatching
 
 Underdispersion
   In statistics, underdispersion is the presence of lower {term}`variability <dispersion>` in a data set than would be expected based on a given statistical model.
-  Choosing which function or method implementation to use based on the type of the input variables (usually just the first variable). For some examples, see Python's documentation for the [singledispatch](https://docs.python.org/3/library/functools.html#functools.singledispatch) decorator.
-
+  Choosing which function or method implementation to use based on the type of the input variables (usually just the first variable). For some examples, see Python's documentation for the [singledispatch](https://docs.python.org/3/library/functools.html#functools.singledispatch) decorator.
+
 Bayesian Workflow
   Bayesian workflow is the overall iterative procedure towards model refinement. It often includes the two related tasks of {term}`inference` and the exploratory analysis of models.
   - For a compact overview, see Bayesian statistics and modelling by van de Schoot, R., Depaoli, S., King, R., et al., Nat Rev Methods Primers 1, 1 (2021).
   - For an in-depth overview, see Bayesian Workflow by Andrew Gelman, Aki Vehtari, Daniel Simpson, Charles C. Margossian, Bob Carpenter, Yuling Yao, Lauren Kennedy, Jonah Gabry, Paul-Christian Bürkner, and Martin Modrák.
-  - For exercise-based material, see Think Bayes 2e: Bayesian Statistics Made Simple by Allen B. Downey.
+  - For exercise-based material, see Think Bayes 2e: Bayesian Statistics Made Simple by Allen B. Downey.
   - For an upcoming textbook that uses the PyMC3, TensorFlow Probability, and ArviZ libraries, see Bayesian Modeling and Computation by Osvaldo A. Martin, Ravin Kumar, and Junpeng Lao.
 
 Bayesian inference
   Once we have defined the statistical model, Bayesian inference processes the data and model to produce a {term}`posterior` distribution. That is a joint distribution of all parameters in the model. This distribution is used to represent plausibility, and is the logical consequence of the model and data.
 
 Bayesian model
   A Bayesian model is a composite of variables and distributional definitions for these variables. Fundamentally, it tells you all the ways that the observed data could have been produced.
-
+
 Prior
   Bayesian statistics allow us, in principle, to include all information we have about the structure of the problem in the model. We can do this by assuming prior distributions for the model's parameters. Priors represent the plausibility of parameter values before accounting for the data. Priors multiplied by the {term}`likelihood` produce the {term}`posterior`.
-
+
   Priors' informativeness can fall anywhere on the continuum from complete uncertainty to relative certainty. An informative prior might encode known restrictions on the possible range of values of that parameter.
-
+
   To understand the implications of a prior, as well as of the model itself, we can simulate predictions from the model using only the prior distribution instead of the {term}`posterior` distribution, a process sometimes referred to as prior predictive simulation.
-
+
   - For an in-depth guide to priors, consider Statistical Rethinking 2nd Edition by Richard McElreath, especially chapter 2.3.
 
 Likelihood
   There are many perspectives on likelihood, but conceptually we can think of it as the relative number of ways the model could have produced the data; in other words, the probability of the data given the parameters.
-
+
   - For an in-depth unfolding of the concept, refer to Statistical Rethinking 2nd Edition by Richard McElreath, particularly chapter 2.
-  - For problem-based material, see Think Bayes 2e: Bayesian Statistics Made Simple by Allen B. Downey.
-  - For univariate, continuous scenarios, see the calibr8 paper: Bayesian calibration, process modeling and uncertainty quantification in biotechnology by Laura Marie Helleckes, Michael Osthege, Wolfgang Wiechert, Eric von Lieres, and Marco Oldiges.
-
+  - For problem-based material, see Think Bayes 2e: Bayesian Statistics Made Simple by Allen B. Downey.
+  - For univariate, continuous scenarios, see the calibr8 paper: Bayesian calibration, process modeling and uncertainty quantification in biotechnology by Laura Marie Helleckes, Michael Osthege, Wolfgang Wiechert, Eric von Lieres, and Marco Oldiges.
+
 Posterior
   The outcome of a Bayesian model is the posterior distribution, which describes the relative plausibilities of every possible combination of parameter values. We can think of the posterior as the updated {term}`priors` after the model has seen the data.
-
+
   When the posterior is obtained using numerical methods, we first need to check how adequately the model fits the data. By sampling from the posterior distribution we can simulate observations, i.e. the implied predictions of the model. This posterior predictive distribution can then be compared to the observed data, a process known as a posterior predictive check.
 
   Once you are satisfied with the model, the posterior distribution can be summarized and interpreted. Common questions for the posterior include: intervals of defined boundaries, intervals of defined probability mass, and point estimates. When the posterior is very similar to the prior, the available data does not contain much information about a parameter of interest.
-
+
   - For more on generating and interpreting posterior samples, see Statistical Rethinking 2nd Edition by Richard McElreath, chapter 3.
 
 :::::
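
To illustrate the Dispatching entry in the diff above, here is a minimal sketch using the functools.singledispatch decorator that the glossary links to; the function names are invented for illustration.

    from functools import singledispatch

    @singledispatch
    def describe(value):
        # Fallback used when no registered implementation matches the type.
        return f"unhandled type: {type(value).__name__}"

    @describe.register
    def _(value: int):
        # Selected when the first argument is an int.
        return f"integer with {value.bit_length()} significant bits"

    @describe.register
    def _(value: list):
        # Selected when the first argument is a list.
        return f"list of {len(value)} items"

    print(describe(42))      # integer with 6 significant bits
    print(describe([1, 2]))  # list of 2 items
    print(describe(3.5))     # unhandled type: float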
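
The Prior and Posterior entries mention prior predictive simulation and posterior predictive checks; the sketch below shows how that loop might look against the pymc3 3.x API. The data, priors, and draw counts are invented for illustration, and function names may differ in other PyMC versions.

    import numpy as np
    import pymc3 as pm

    data = np.random.normal(1.0, 2.0, size=100)  # toy observations

    with pm.Model():
        # Priors: plausibility of parameter values before seeing the data.
        mu = pm.Normal("mu", mu=0.0, sigma=10.0)
        sigma = pm.HalfNormal("sigma", sigma=5.0)

        # Likelihood: probability of the data given the parameters.
        pm.Normal("y", mu=mu, sigma=sigma, observed=data)

        # Prior predictive simulation: what data the priors alone imply.
        prior_pred = pm.sample_prior_predictive()

        # Inference: draw samples from the posterior distribution.
        trace = pm.sample(1000, tune=1000)

        # Posterior predictive check: simulate data from the fitted model
        # and compare it to the observed data.
        post_pred = pm.sample_posterior_predictive(trace)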

pymc/gp/mean.py

Lines changed: 4 additions & 4 deletions
@@ -60,7 +60,7 @@ class Constant(Mean):
     """
 
     def __init__(self, c=0):
-        Mean.__init__(self)
+        super().__init__()
         self.c = c
 
     def __call__(self, X):
@@ -80,7 +80,7 @@ class Linear(Mean):
     """
 
     def __init__(self, coeffs, intercept=0):
-        Mean.__init__(self)
+        super().__init__()
         self.b = intercept
         self.A = coeffs
 
@@ -90,7 +90,7 @@ def __call__(self, X):
 
 class Add(Mean):
     def __init__(self, first_mean, second_mean):
-        Mean.__init__(self)
+        super().__init__()
         self.m1 = first_mean
         self.m2 = second_mean
 
@@ -100,7 +100,7 @@ def __call__(self, X):
 
 class Prod(Mean):
     def __init__(self, first_mean, second_mean):
-        Mean.__init__(self)
+        super().__init__()
         self.m1 = first_mean
         self.m2 = second_mean
 
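
This file, along with pymc/tests/helpers.py, pymc/util.py, and pymc/variational/operators.py below, makes the same mechanical change: hard-coded base-class calls become super() calls. The zero-argument super() resolves the next class in the method resolution order instead of naming the base explicitly, which keeps cooperative multiple inheritance working and survives base-class renames. A minimal, hypothetical sketch of the difference:

    class Base:
        def __init__(self):
            self.base_ready = True

    class Mixin(Base):
        def __init__(self):
            super().__init__()
            self.mixin_ready = True

    class Child(Mixin):
        def __init__(self):
            # super() follows the MRO: Child -> Mixin -> Base.
            # Calling Base.__init__(self) here would skip Mixin.__init__.
            super().__init__()

    c = Child()
    print(c.base_ready, c.mixin_ready)  # True True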

pymc/tests/helpers.py

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ def __init__(self, matcher):
         # shouldFlush anyway, we can set a capacity of zero.
         # You can call flush() manually to clear out the
         # buffer.
-        BufferingHandler.__init__(self, 0)
+        super().__init__(0)
         self.matcher = matcher
 
     def shouldFlush(self):

pymc/util.py

Lines changed: 2 additions & 2 deletions
@@ -89,11 +89,11 @@ def __setitem__(self, key, value):
     # This is my best guess about what this should do. I might be happier
     # to kill both of these if they are not used.
     def __mul__(self, other) -> "treelist":
-        return cast("treelist", list.__mul__(self, other))
+        return cast("treelist", super().__mul__(other))
 
     def __imul__(self, other) -> "treelist":
         t0 = len(self)
-        list.__imul__(self, other)
+        super().__imul__(other)
         if self.parent is not None:
             self.parent.extend(self[t0:])
         return self  # python spec says should return the result.

pymc/variational/operators.py

Lines changed: 3 additions & 3 deletions
@@ -49,7 +49,7 @@ class KL(Operator):
     """
 
     def __init__(self, approx, beta=1.0):
-        Operator.__init__(self, approx)
+        super().__init__(approx)
         self.beta = pm.floatX(beta)
 
     def apply(self, f):
@@ -73,7 +73,7 @@ class KSDObjective(ObjectiveFunction):
     def __init__(self, op, tf):
         if not isinstance(op, KSD):
             raise opvi.ParametrizationError("Op should be KSD")
-        ObjectiveFunction.__init__(self, op, tf)
+        super().__init__(op, tf)
 
     @aesara.config.change_flags(compute_test_value="off")
     def __call__(self, nmc, **kwargs):
@@ -127,7 +127,7 @@ class KSD(Operator):
     objective_class = KSDObjective
 
     def __init__(self, approx, temperature=1):
-        Operator.__init__(self, approx)
+        super().__init__(approx)
         self.temperature = temperature
 
     def apply(self, f):
