
Commit 17d04dd

Added link and put definitions above 5 colons as mentioned
1 parent da1fae4 commit 17d04dd

File tree: 1 file changed (+8, −8 lines)


docs/source/glossary.md

Lines changed: 8 additions & 8 deletions
@@ -17,19 +17,19 @@ Functional Programming
 This contrasts with functions or methods that depend on variables that are not explicitly passed as an input (such as accessing `self.variable` inside a method) or that alter the inputs or other state variables in-place, instead of returning new distinct variables as outputs.
 Dispatching
 Choosing which function or method implementation to use based on the type of the input variables (usually just the first variable). For some examples, see Python's documentation for the [singledispatch](https://docs.python.org/3/library/functools.html#functools.singledispatch) decorator.
-:::::
 
-Equidispersion
-Equidispersion exists when data exhibits variation similar to what you would expect based on a binomial distribution (for defectives) or a Poisson distribution (for defects). Traditional P charts and U charts assume that your rate of defectives or defects remains constant over time.
+[Equidispersion](http://www.ce.memphis.edu/7012/L20_CountDataModels_v2.pdf) (http://cursos.leg.ufpr.br/rmcd/introduction.html)
+If the variance of a Poisson distribution equals its mean, the distribution is referred to as equidispersed.
 
 Generalized Poisson PMF
-A generalization of the Poisson distribution, with two parameters X1 and X2, is obtained as a limiting form of the generalized negative binomial distribution. The variance of the distribution is greater than, equal to, or smaller than the mean according as X2 is positive, zero, or negative.
+A generalization of the Poisson distribution, with two parameters X1 and X2, is obtained as a limiting form of the generalized negative binomial distribution. The variance of the distribution is greater than, equal to, or smaller than the mean according as X2 is positive, zero, or negative. For the formula and more detail, see https://www.sciencedirect.com/science/article/pii/S0047259X14000256
 
-Bayes' theorem
+[Bayes' theorem](https://www.investopedia.com/terms/b/bayes-theorem.asp)
 Describes the probability of an event, based on prior knowledge of conditions that might be related to the event. For example, if the risk of developing health problems is known to increase with age, Bayes' theorem allows the risk to an individual of a known age to be assessed more accurately (by conditioning it on their age) than simply assuming that the individual is typical of the population as a whole.
 
 Markov Chain (MC)
-A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.
+A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. For a visual explanation, visit https://setosa.io/ev/markov-chains/
 
-Markov Chain Monte Carlo (MCMC)
-Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a probability distribution. By constructing a Markov chain that has the desired distribution as its equilibrium distribution, one can obtain a sample of the desired distribution by recording states from the chain. Various algorithms exist for constructing chains, including the Metropolis–Hastings algorithm.
+[Markov Chain Monte Carlo (MCMC)](https://machinelearningmastery.com/markov-chain-monte-carlo-for-probability/)
+Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a probability distribution. By constructing a Markov chain that has the desired distribution as its equilibrium distribution, one can obtain a sample of the desired distribution by recording states from the chain. Various algorithms exist for constructing chains, including the Metropolis–Hastings algorithm.
+:::::
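The Dispatching entry points at `functools.singledispatch`; a minimal sketch of type-based dispatch (the `describe` function and its return strings are illustrative, not from the glossary's codebase):

```python
from functools import singledispatch

@singledispatch
def describe(x):
    # Fallback implementation for types with no registered overload.
    return f"object: {x!r}"

@describe.register
def _(x: int):
    # Chosen when the first argument is an int (dispatch on the annotation).
    return f"int with {x.bit_length()} significant bits"

@describe.register
def _(x: list):
    # Chosen when the first argument is a list.
    return f"list of {len(x)} items"

print(describe(8))       # → int with 4 significant bits
print(describe([1, 2]))  # → list of 2 items
print(describe(3.5))     # → object: 3.5 (falls back to the generic version)
```

Note that dispatch happens on the type of the first argument only, which is exactly the "usually just the first variable" caveat in the definition.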
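The equidispersion entry (variance equal to the mean) can be checked empirically. A standard-library-only sketch, using Knuth's Poisson sampler with an illustrative rate of λ = 4:

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    # Knuth's algorithm: count uniform draws until their running
    # product falls below exp(-lam).
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(0)
draws = [poisson_sample(4.0, rng) for _ in range(50_000)]
mean = statistics.fmean(draws)
var = statistics.pvariance(draws)
# For equidispersed (Poisson) data the dispersion ratio var/mean is near 1;
# ratios well above 1 would indicate overdispersion.
print(mean, var, var / mean)
```

Knuth's method is O(λ) per draw, which is fine for a small rate like this; for large λ a library sampler such as `numpy.random.Generator.poisson` would be the usual choice.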
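For the Generalized Poisson PMF entry, one common parameterization is Consul's generalized Poisson with parameters θ and λ (assumed here to play the roles of X1 and X2): P(X = k) = θ(θ + kλ)^(k−1) e^(−θ−kλ) / k!, with mean θ/(1−λ), so λ > 0 gives variance greater than the mean. A sketch under that assumption:

```python
import math

def gen_poisson_pmf(k, theta, lam):
    # Consul's generalized Poisson:
    # P(X = k) = theta * (theta + k*lam)**(k-1) * exp(-theta - k*lam) / k!
    # Computed in log space (lgamma for log k!) to avoid overflow at large k.
    log_p = (math.log(theta)
             + (k - 1) * math.log(theta + k * lam)
             - (theta + k * lam)
             - math.lgamma(k + 1))
    return math.exp(log_p)

theta, lam = 2.0, 0.2          # illustrative parameters
probs = [gen_poisson_pmf(k, theta, lam) for k in range(400)]
total = sum(probs)             # should be ~1.0 (probabilities normalize)
mean = sum(k * p for k, p in enumerate(probs))
print(total, mean)             # mean should be theta / (1 - lam) = 2.5
```

Setting λ = 0 recovers the ordinary Poisson(θ) distribution, which is one way to sanity-check the formula.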
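The Bayes' theorem entry can be made concrete with a standard worked example; the disease/test numbers below are illustrative assumptions, not from the glossary:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B),
# with P(B) expanded via the law of total probability.
prior = 0.01        # P(disease): assumed base rate in the population
sensitivity = 0.90  # P(positive test | disease)
false_pos = 0.05    # P(positive test | no disease)

evidence = sensitivity * prior + false_pos * (1 - prior)  # P(positive test)
posterior = sensitivity * prior / evidence                # P(disease | positive)
print(round(posterior, 4))  # → 0.1538
```

Conditioning on the test result moves the probability from the 1% prior to about 15%, not to 90%: the same mechanism as the age example in the definition, where conditioning on extra information refines a population-wide estimate.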
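The Markov chain entry can be illustrated with the kind of two-state weather chain the setosa.io visualization uses; the transition probabilities here are illustrative assumptions:

```python
# Two-state Markov chain: the next state depends only on the current one.
P = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

# Power iteration: push a distribution through the chain repeatedly
# until it converges to the stationary distribution.
dist = {"sunny": 1.0, "rainy": 0.0}
for _ in range(200):
    dist = {s: sum(dist[t] * P[t][s] for t in P) for s in P}

print(dist)  # stationary distribution: sunny 5/6, rainy 1/6
```

The fixed point solves π = πP: here 0.1·π_sunny = 0.5·π_rainy, giving π_sunny = 5/6 regardless of the starting state.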
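The MCMC entry names the Metropolis–Hastings algorithm; a minimal random-walk sketch targeting a standard normal (step size, burn-in length, and the target are illustrative choices):

```python
import math
import random

def metropolis_hastings(log_density, n_steps, step=1.0, seed=0):
    # Random-walk Metropolis-Hastings: propose x' = x + N(0, step),
    # accept with probability min(1, p(x') / p(x)); only the
    # unnormalized log density of the target is needed.
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)  # rejected proposals repeat the current state
    return samples

# Target: standard normal, log p(x) = -x^2/2 up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 50_000)
kept = samples[5_000:]  # discard burn-in before the chain equilibrates
mean = sum(kept) / len(kept)
print(mean)  # close to 0, the mean of the standard normal target
```

Recording states from the equilibrated chain, as the definition says, yields (correlated) samples from the target; the acceptance rule is what makes the target the chain's equilibrium distribution.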
