Convert zero-inflated distributions to Mixture subclasses #2231


Closed
wants to merge 40 commits
Changes from all commits
Commits
40 commits
b407bdb
Added zero-inflated binomial distribution
fonnesbeck May 27, 2017
73694ef
Raise NotImplementedError for SplineWrapper gradient operation (#2211)
a-rodin May 27, 2017
524b88e
Fixed syntax error
fonnesbeck May 27, 2017
17b27b0
Added ZeroInflatedBinomial to global namespace
fonnesbeck May 27, 2017
4fd07b8
Fixed attribute error in call to scipy.stats
fonnesbeck May 27, 2017
da5401f
Use setup.py to create test environment
ColCarroll May 27, 2017
cfd29aa
Mock is also used
ColCarroll May 27, 2017
d35f0e5
Imported but untested requirements
ColCarroll May 27, 2017
0f7f6b5
Use conda numpy/scipy/mkl-service
ColCarroll May 27, 2017
b505f47
Merge pull request #2229 from ColCarroll/update_create_testenv
twiecki May 28, 2017
c3afa00
Vi summary (#2230)
ferrine May 28, 2017
52761c2
Return identity matrix if no scaling provided
a-rodin May 27, 2017
c4659b4
Simplify calculation of identity scaling matrix
a-rodin May 29, 2017
eb6042f
Merge pull request #2232 from a-rodin/scaling-default-to-identity
May 29, 2017
4a8f64f
scaling for NUTS using 'advi_map'
May 29, 2017
1aea645
Revert "Merge pull request #2232 from a-rodin/scaling-default-to-iden…
twiecki May 29, 2017
9b0f4fc
Merge pull request #2234 from junpenglao/master
May 29, 2017
999be1e
Imported ZeroInflatedBinomial in test_distributions
fonnesbeck May 29, 2017
e478fe3
Converted zero-inflated distributions to Mixture subclasses
fonnesbeck May 29, 2017
8bc7cbd
Changed docstring of zero-inflated distributions to specify two-compo…
fonnesbeck May 29, 2017
e7b7462
Fixing typo
KarinKnudson May 29, 2017
69e223b
Merge pull request #2237 from karink520/mixture_typo
May 29, 2017
a750e3d
Fixed dimension-matching issue with mixture weights in zero-inflated …
fonnesbeck May 29, 2017
4515977
Reversed order of psi, 1-psi to make them consistent with the definit…
fonnesbeck May 29, 2017
3d83c6d
Change sample() to use live_plot_kwargs instead of **kwargs. (#2235)
twiecki May 30, 2017
9327794
Clean up GLM doc (#2241)
May 30, 2017
9cb6d53
Don't broadcast conditions in Mixture logp
AustinRochford May 31, 2017
482e751
statsmodels is required for some examples
AustinRochford May 31, 2017
9047dfa
Revert "statsmodels is required for some examples"
AustinRochford May 31, 2017
541afd8
Merge pull request #2243 from AustinRochford/mixture-bound-dont-broad…
AustinRochford May 31, 2017
504fd60
Added zero-inflated binomial distribution
fonnesbeck May 27, 2017
8a9936b
Fixed syntax error
fonnesbeck May 27, 2017
9f7e8ee
Added ZeroInflatedBinomial to global namespace
fonnesbeck May 27, 2017
16576f9
Fixed attribute error in call to scipy.stats
fonnesbeck May 27, 2017
b7cfd1f
Imported ZeroInflatedBinomial in test_distributions
fonnesbeck May 29, 2017
b4494c6
Converted zero-inflated distributions to Mixture subclasses
fonnesbeck May 29, 2017
e847161
Changed docstring of zero-inflated distributions to specify two-compo…
fonnesbeck May 29, 2017
4e8df1c
Fixed dimension-matching issue with mixture weights in zero-inflated …
fonnesbeck May 29, 2017
1e05632
Reversed order of psi, 1-psi to make them consistent with the definit…
fonnesbeck May 29, 2017
0217259
Merge branch 'zero_inflated_binomial' of github.com:pymc-devs/pymc3 i…
fonnesbeck May 31, 2017
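
The distribution code touched by these commits is not reproduced in the diff excerpt below, so here is a minimal illustrative sketch (not code from this PR) of the idea behind "Converted zero-inflated distributions to Mixture subclasses" and "Reversed order of psi, 1-psi": a zero-inflated count distribution is a two-component mixture of a point mass at zero and a count distribution, with weights (1 - psi, psi), where psi is the probability of the count component. The toy data, priors, and variable names below are invented for the example.

    # Illustrative only: a hand-rolled two-component mixture, not the PR's implementation.
    import numpy as np
    import pymc3 as pm
    import theano.tensor as tt

    counts = np.array([0, 0, 0, 1, 0, 2, 0, 0, 3, 0, 1, 0])  # toy data with excess zeros
    n_trials = 10                                             # known number of trials

    with pm.Model():
        psi = pm.Beta('psi', 1., 1.)   # probability of the Binomial (non-zero) component
        p = pm.Beta('p', 1., 1.)       # Binomial success probability

        comp_dists = [pm.Constant.dist(0),                    # point mass at zero
                      pm.Binomial.dist(n=n_trials, p=p)]      # count component
        w = tt.stack([1. - psi, psi])                         # weights in the same order

        obs = pm.Mixture('zib', w=w, comp_dists=comp_dists, observed=counts)

        # After this PR the same model can be written with the built-in subclass:
        # pm.ZeroInflatedBinomial('zib', psi=psi, n=n_trials, p=p, observed=counts)

        trace = pm.sample(1000)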
94 changes: 35 additions & 59 deletions docs/source/notebooks/GLM-linear.ipynb

Large diffs are not rendered by default.

1,677 changes: 297 additions & 1,380 deletions docs/source/notebooks/GLM-logistic.ipynb

Large diffs are not rendered by default.

108 changes: 31 additions & 77 deletions docs/source/notebooks/GLM-model-selection.ipynb
@@ -96,9 +96,7 @@
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"from collections import OrderedDict\n",
@@ -135,9 +133,7 @@
{
"cell_type": "code",
"execution_count": 43,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"def generate_data(n=20, p=0, a=1, b=1, c=0, latent_sigma_y=20):\n",
@@ -267,7 +263,7 @@
" with pm.Model() as models[nm]:\n",
"\n",
" print('\\nRunning: {}'.format(nm))\n",
" pm.glm.glm(fml, df, family=pm.glm.families.Normal())\n",
" pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Normal())\n",
"\n",
" # For speed, we're using Metropolis here\n",
" traces[nm] = pm.sample(5000, pm.Metropolis())[1000::5]\n",
@@ -356,9 +352,7 @@
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -407,9 +401,7 @@
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"n = 12\n",
@@ -427,9 +419,7 @@
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -467,9 +457,7 @@
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"dfs_lin = df_lin.copy()\n",
@@ -489,9 +477,7 @@
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"dfs_lin_xlims = (dfs_lin['x'].min() - np.ptp(dfs_lin['x'])/10,\n",
@@ -530,9 +516,7 @@
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"name": "stderr",
@@ -573,9 +557,7 @@
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -620,9 +602,7 @@
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"name": "stderr",
@@ -639,7 +619,7 @@
"source": [
"with pm.Model() as mdl_ols_glm:\n",
" # setup model with Normal likelihood (which uses HalfCauchy for error prior)\n",
" pm.glm.glm('y ~ 1 + x', df_lin, family=pm.glm.families.Normal())\n",
" pm.glm.GLM.from_formula('y ~ 1 + x', df_lin, family=pm.glm.families.Normal())\n",
" \n",
" traces_ols_glm = pm.sample(2000)"
]
@@ -654,9 +634,7 @@
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -728,9 +706,7 @@
{
"cell_type": "code",
"execution_count": 44,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -815,9 +791,7 @@
{
"cell_type": "code",
"execution_count": 45,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -916,9 +890,7 @@
{
"cell_type": "code",
"execution_count": 46,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"dfll = pd.DataFrame(index=['k1','k2','k3','k4','k5'], columns=['lin','quad'])\n",
@@ -941,9 +913,7 @@
{
"cell_type": "code",
"execution_count": 47,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -987,9 +957,7 @@
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -1038,9 +1006,7 @@
{
"cell_type": "code",
"execution_count": 48,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -1063,9 +1029,7 @@
{
"cell_type": "code",
"execution_count": 49,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -1086,9 +1050,7 @@
{
"cell_type": "code",
"execution_count": 50,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -1116,9 +1078,7 @@
{
"cell_type": "code",
"execution_count": 51,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -1154,9 +1114,7 @@
{
"cell_type": "code",
"execution_count": 52,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -1222,9 +1180,7 @@
{
"cell_type": "code",
"execution_count": 53,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -1260,9 +1216,7 @@
{
"cell_type": "code",
"execution_count": 54,
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -1399,14 +1353,14 @@
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python [default]",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3.0
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
@@ -1420,14 +1374,14 @@
"87b986ac3e5a43ec859cf10e013f2955": {
"views": [
{
"cell_index": 9.0
"cell_index": 9
}
]
},
"f1f05f8da738419e8e2c54ee1809c61c": {
"views": [
{
"cell_index": 47.0
"cell_index": 47
}
]
}
@@ -1436,5 +1390,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 0
}
"nbformat_minor": 1
}
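
Several of the notebook hunks above (for example in GLM-model-selection.ipynb) replace the deprecated pm.glm.glm(...) call with pm.glm.GLM.from_formula(...). The snippet below is a minimal self-contained illustration of that change; the toy data frame is invented for the example and is not taken from the notebooks.

    import numpy as np
    import pandas as pd
    import pymc3 as pm

    # toy data for illustration only
    df = pd.DataFrame({'x': np.linspace(0., 1., 50)})
    df['y'] = 2. + 3. * df['x'] + np.random.normal(scale=0.5, size=50)

    with pm.Model():
        # old API (removed in these notebooks):
        #   pm.glm.glm('y ~ 1 + x', df, family=pm.glm.families.Normal())
        # new API:
        pm.glm.GLM.from_formula('y ~ 1 + x', df, family=pm.glm.families.Normal())
        trace = pm.sample(2000)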
171 changes: 91 additions & 80 deletions docs/source/notebooks/GLM-negative-binomial-regression.ipynb

Large diffs are not rendered by default.

192 changes: 88 additions & 104 deletions docs/source/notebooks/GLM-robust-with-outlier-detection.ipynb

Large diffs are not rendered by default.

101 changes: 50 additions & 51 deletions docs/source/notebooks/GLM-robust.ipynb

Large diffs are not rendered by default.
