Commit 57bf09d

change tabbing in rst files to sphinx-design
1 parent 2ec2cbc · commit 57bf09d

File tree: 14 files changed, +2266 −1974 lines


doc/guide/algorithms.rst

Lines changed: 81 additions & 65 deletions
@@ -10,30 +10,30 @@ Algorithm DML1
 
 The algorithm ``dml_procedure='dml1'`` can be summarized as
 
-1. **Inputs:** Choose a model (PLR, PLIV, IRM, IIVM), provide data :math:`(W_i)_{i=1}^{N}`, a Neyman-orthogonal score function :math:`\psi(W; \theta, \eta)` and specify machine learning method(s) for the nuisance function(s) :math:`\eta`.
-
-2. **Train ML predictors on folds:** Take a :math:`K`-fold random partition :math:`(I_k)_{k=1}^{K}` of observation indices :math:`[N] = \lbrace 1, \ldots, N\rbrace` such that the size of each fold :math:`I_k` is :math:`n=N/K`. For each :math:`k \in [K] = \lbrace 1, \ldots, K\rbrace`, construct a high-quality machine learning estimator
+#. **Inputs:** Choose a model (PLR, PLIV, IRM, IIVM), provide data :math:`(W_i)_{i=1}^{N}`, a Neyman-orthogonal score function :math:`\psi(W; \theta, \eta)` and specify machine learning method(s) for the nuisance function(s) :math:`\eta`.
 
-   .. math::
+#. **Train ML predictors on folds:** Take a :math:`K`-fold random partition :math:`(I_k)_{k=1}^{K}` of observation indices :math:`[N] = \lbrace 1, \ldots, N\rbrace` such that the size of each fold :math:`I_k` is :math:`n=N/K`. For each :math:`k \in [K] = \lbrace 1, \ldots, K\rbrace`, construct a high-quality machine learning estimator
+
+   .. math::
 
-       \hat{\eta}_{0,k} = \hat{\eta}_{0,k}\big((W_i)_{i\not\in I_k}\big)
+       \hat{\eta}_{0,k} = \hat{\eta}_{0,k}\big((W_i)_{i\not\in I_k}\big)
 
-   of :math:`\eta_0`, where :math:`x \mapsto \hat{\eta}_{0,k}(x)` depends only on the subset of data :math:`(W_i)_{i\not\in I_k}`.
+   of :math:`\eta_0`, where :math:`x \mapsto \hat{\eta}_{0,k}(x)` depends only on the subset of data :math:`(W_i)_{i\not\in I_k}`.
 
-3. **Estimate causal parameter:** For each :math:`k \in [K]`, construct the estimator :math:`\check{\theta}_{0,k}` as the solution to the equation
+#. **Estimate causal parameter:** For each :math:`k \in [K]`, construct the estimator :math:`\check{\theta}_{0,k}` as the solution to the equation
 
-   .. math::
+   .. math::
 
-       \frac{1}{n} \sum_{i \in I_k} \psi(W_i; \check{\theta}_{0,k}, \hat{\eta}_{0,k}) = 0.
+       \frac{1}{n} \sum_{i \in I_k} \psi(W_i; \check{\theta}_{0,k}, \hat{\eta}_{0,k}) = 0.
 
-   The estimate of the causal parameter is obtained via aggregation
+   The estimate of the causal parameter is obtained via aggregation
 
-   .. math::
+   .. math::
 
-       \tilde{\theta}_0 = \frac{1}{K} \sum_{k=1}^{K} \check{\theta}_{0,k}.
+       \tilde{\theta}_0 = \frac{1}{K} \sum_{k=1}^{K} \check{\theta}_{0,k}.
 
 
-4. **Outputs:** The estimate of the causal parameter :math:`\tilde{\theta}_0` as well as the values of the evaluated score function are returned.
+#. **Outputs:** The estimate of the causal parameter :math:`\tilde{\theta}_0` as well as the values of the evaluated score function are returned.
 
 Algorithm DML2
 ++++++++++++++
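
To see the two estimation steps of the DML1 recipe above concretely, here is a minimal standalone sketch for the PLR model. It is illustrative only and not part of this commit or of the DoubleML package API: the helper name dml1_plr is hypothetical, and it assumes the PLR partialling-out score :math:`\psi(W; \theta, \eta) = (Y - \ell(X) - \theta (D - m(X)))(D - m(X))`, for which the per-fold score equation has a closed-form solution.

import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold

def dml1_plr(y, d, x, learner, n_folds=5, seed=3141):
    # Step 2: K-fold partition; nuisance learners fit on each fold's complement.
    thetas = []
    for train, test in KFold(n_splits=n_folds, shuffle=True, random_state=seed).split(x):
        l_hat = clone(learner).fit(x[train], y[train])  # estimate of l_0(X) = E[Y|X]
        m_hat = clone(learner).fit(x[train], d[train])  # estimate of m_0(X) = E[D|X]
        # Step 3: solve (1/n) * sum_{i in I_k} psi(W_i; theta, eta_hat_k) = 0;
        # for this linear-in-theta score, theta_check_k is a no-intercept OLS coefficient.
        u = y[test] - l_hat.predict(x[test])
        v = d[test] - m_hat.predict(x[test])
        thetas.append(np.sum(v * u) / np.sum(v * v))
    # Aggregation: theta_tilde is the mean of the K fold-wise estimates.
    return np.mean(thetas)
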
@@ -44,17 +44,17 @@ The algorithm ``dml_procedure='dml2'`` can be summarized as
 
 2. **Train ML predictors on folds:** Take a :math:`K`-fold random partition :math:`(I_k)_{k=1}^{K}` of observation indices :math:`[N] = \lbrace 1, \ldots, N\rbrace` such that the size of each fold :math:`I_k` is :math:`n=N/K`. For each :math:`k \in [K] = \lbrace 1, \ldots, K\rbrace`, construct a high-quality machine learning estimator
 
-   .. math::
+   .. math::
 
-       \hat{\eta}_{0,k} = \hat{\eta}_{0,k}\big((W_i)_{i\not\in I_k}\big)
+       \hat{\eta}_{0,k} = \hat{\eta}_{0,k}\big((W_i)_{i\not\in I_k}\big)
 
-   of :math:`\eta_0`, where :math:`x \mapsto \hat{\eta}_{0,k}(x)` depends only on the subset of data :math:`(W_i)_{i\not\in I_k}`.
+   of :math:`\eta_0`, where :math:`x \mapsto \hat{\eta}_{0,k}(x)` depends only on the subset of data :math:`(W_i)_{i\not\in I_k}`.
 
 3. **Estimate causal parameter:** Construct the estimator for the causal parameter :math:`\tilde{\theta}_0` as the solution to the equation
 
-   .. math::
+   .. math::
 
-       \frac{1}{N} \sum_{k=1}^{K} \sum_{i \in I_k} \psi(W_i; \tilde{\theta}_0, \hat{\eta}_{0,k}) = 0.
+       \frac{1}{N} \sum_{k=1}^{K} \sum_{i \in I_k} \psi(W_i; \tilde{\theta}_0, \hat{\eta}_{0,k}) = 0.
 
 
 4. **Outputs:** The estimate of the causal parameter :math:`\tilde{\theta}_0` as well as the values of the evaluated score function are returned.
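
The DML1 sketch adapts to DML2 by pooling: cross-fitting of the nuisance estimates is unchanged, but the score equation is solved once over all :math:`N` observations instead of fold by fold followed by averaging. Again a hypothetical helper under the same assumptions as the DML1 sketch above, not the package implementation.

import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold

def dml2_plr(y, d, x, learner, n_folds=5, seed=3141):
    # Step 2 as in DML1: out-of-fold residuals from cross-fitted nuisance learners.
    u, v = np.empty(len(y)), np.empty(len(d))
    for train, test in KFold(n_splits=n_folds, shuffle=True, random_state=seed).split(x):
        u[test] = y[test] - clone(learner).fit(x[train], y[train]).predict(x[test])
        v[test] = d[test] - clone(learner).fit(x[train], d[train]).predict(x[test])
    # Step 3: single score equation (1/N) * sum_k sum_{i in I_k} psi(W_i; theta, eta_hat_k) = 0.
    return np.sum(v * u) / np.sum(v * v)
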
@@ -73,89 +73,105 @@ As an example we consider a partially linear regression model (PLR)
 implemented in ``DoubleMLPLR``.
 The DML algorithm can be selected via parameter ``dml_procedure='dml1'`` vs. ``dml_procedure='dml2'``.
 
-.. tabbed:: Python
+.. tab-set::
 
-    .. ipython:: python
+    .. tab-item:: Python
+        :sync: py
 
-        import doubleml as dml
-        from doubleml.datasets import make_plr_CCDDHNR2018
-        from sklearn.ensemble import RandomForestRegressor
-        from sklearn.base import clone
+        .. ipython:: python
 
-        np.random.seed(3141)
-        learner = RandomForestRegressor(n_estimators=100, max_features=20, max_depth=5, min_samples_leaf=2)
-        ml_l = clone(learner)
-        ml_m = clone(learner)
-        data = make_plr_CCDDHNR2018(alpha=0.5, return_type='DataFrame')
-        obj_dml_data = dml.DoubleMLData(data, 'y', 'd')
-        dml_plr_obj = dml.DoubleMLPLR(obj_dml_data, ml_l, ml_m, dml_procedure='dml1')
-        dml_plr_obj.fit();
+            import doubleml as dml
+            from doubleml.datasets import make_plr_CCDDHNR2018
+            from sklearn.ensemble import RandomForestRegressor
+            from sklearn.base import clone
 
-.. tabbed:: R
+            np.random.seed(3141)
+            learner = RandomForestRegressor(n_estimators=100, max_features=20, max_depth=5, min_samples_leaf=2)
+            ml_l = clone(learner)
+            ml_m = clone(learner)
+            data = make_plr_CCDDHNR2018(alpha=0.5, return_type='DataFrame')
+            obj_dml_data = dml.DoubleMLData(data, 'y', 'd')
+            dml_plr_obj = dml.DoubleMLPLR(obj_dml_data, ml_l, ml_m, dml_procedure='dml1')
+            dml_plr_obj.fit();
 
-    .. jupyter-execute::
+    .. tab-item:: R
+        :sync: r
 
-        library(DoubleML)
-        library(mlr3)
-        library(mlr3learners)
-        library(data.table)
-        lgr::get_logger("mlr3")$set_threshold("warn")
+        .. jupyter-execute::
 
-        learner = lrn("regr.ranger", num.trees = 100, mtry = 20, min.node.size = 2, max.depth = 5)
-        ml_l = learner$clone()
-        ml_m = learner$clone()
-        set.seed(3141)
-        data = make_plr_CCDDHNR2018(alpha=0.5, return_type='data.table')
-        obj_dml_data = DoubleMLData$new(data, y_col="y", d_cols="d")
-        dml_plr_obj = DoubleMLPLR$new(obj_dml_data, ml_l, ml_m, dml_procedure="dml1")
-        dml_plr_obj$fit()
+            library(DoubleML)
+            library(mlr3)
+            library(mlr3learners)
+            library(data.table)
+            lgr::get_logger("mlr3")$set_threshold("warn")
+
+            learner = lrn("regr.ranger", num.trees = 100, mtry = 20, min.node.size = 2, max.depth = 5)
+            ml_l = learner$clone()
+            ml_m = learner$clone()
+            set.seed(3141)
+            data = make_plr_CCDDHNR2018(alpha=0.5, return_type='data.table')
+            obj_dml_data = DoubleMLData$new(data, y_col="y", d_cols="d")
+            dml_plr_obj = DoubleMLPLR$new(obj_dml_data, ml_l, ml_m, dml_procedure="dml1")
+            dml_plr_obj$fit()
 
 
 The ``fit()`` method of ``DoubleMLPLR``
 stores the estimate :math:`\tilde{\theta}_0` in its ``coef`` attribute.
 
-.. tabbed:: Python
+.. tab-set::
+
+    .. tab-item:: Python
+        :sync: py
 
-    .. ipython:: python
+        .. ipython:: python
 
-        dml_plr_obj.coef
+            dml_plr_obj.coef
 
-.. tabbed:: R
+    .. tab-item:: R
+        :sync: r
 
-    .. jupyter-execute::
+        .. jupyter-execute::
 
-        dml_plr_obj$coef
+            dml_plr_obj$coef
 
 Let :math:`k(i) = \lbrace k: i \in I_k \rbrace`.
 The values of the score function :math:`(\psi(W_i; \tilde{\theta}_0, \hat{\eta}_{0,k(i)}))_{i \in [N]}`
 are stored in the attribute ``psi``.
 
 
-.. tabbed:: Python
+.. tab-set::
 
-    .. ipython:: python
+    .. tab-item:: Python
+        :sync: py
 
-        dml_plr_obj.psi[:5]
+        .. ipython:: python
 
-.. tabbed:: R
+            dml_plr_obj.psi[:5]
 
-    .. jupyter-execute::
+    .. tab-item:: R
+        :sync: r
 
-        dml_plr_obj$psi[1:5, ,1]
+        .. jupyter-execute::
+
+            dml_plr_obj$psi[1:5, ,1]
 
 
 For the DML1 algorithm, the estimates for the different folds
 :math:`\check{\theta}_{0,k}`, :math:`k \in [K]` are stored in the attribute ``all_dml1_coef``.
 
-.. tabbed:: Python
+.. tab-set::
+
+    .. tab-item:: Python
+        :sync: py
 
-    .. ipython:: python
+        .. ipython:: python
 
-        dml_plr_obj.all_dml1_coef
+            dml_plr_obj.all_dml1_coef
 
-.. tabbed:: R
+    .. tab-item:: R
+        :sync: r
 
-    .. jupyter-execute::
+        .. jupyter-execute::
 
-        dml_plr_obj$all_dml1_coef
+            dml_plr_obj$all_dml1_coef

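For orientation, the converted example above extends directly to comparing both procedures. The following sketch reuses only classes, arguments, and attributes that appear in this diff (``DoubleMLPLR``, ``dml_procedure``, ``coef``, ``all_dml1_coef``); the side-by-side loop itself is illustrative and not part of the commit.

import numpy as np
import doubleml as dml
from doubleml.datasets import make_plr_CCDDHNR2018
from sklearn.ensemble import RandomForestRegressor
from sklearn.base import clone

np.random.seed(3141)
learner = RandomForestRegressor(n_estimators=100, max_features=20,
                                max_depth=5, min_samples_leaf=2)
data = make_plr_CCDDHNR2018(alpha=0.5, return_type='DataFrame')
obj_dml_data = dml.DoubleMLData(data, 'y', 'd')

# Fit the same PLR data once per DML procedure and inspect the stored results.
for proc in ('dml1', 'dml2'):
    obj = dml.DoubleMLPLR(obj_dml_data, clone(learner), clone(learner),
                          dml_procedure=proc)
    obj.fit()
    print(proc, obj.coef)             # aggregated estimate theta_tilde
    if proc == 'dml1':
        print(obj.all_dml1_coef)      # fold-wise estimates theta_check_{0,k}
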
doc/guide/basics.rst

Lines changed: 667 additions & 639 deletions
Large diffs are not rendered by default.
