Commit ceec03c (parent 6897fde)

drop meaningless parameter in bonus example and add brief note on the ml_l vs. ml_g

File tree: 1 file changed (+9, −3 lines)

doc/intro/intro.rst

@@ -154,7 +154,7 @@ For details on the specification of learners and their hyperparameters we refer
     # surpress messages from mlr3 package during fitting
     lgr::get_logger("mlr3")$set_threshold("warn")
 
-    learner = lrn("regr.ranger", num.trees=500, mtry=floor(sqrt(n_vars)), max.depth=5, min.node.size=2)
+    learner = lrn("regr.ranger", num.trees=500, max.depth=5, min.node.size=2)
     ml_l_bonus = learner$clone()
     ml_m_bonus = learner$clone()
@@ -172,9 +172,15 @@ of repetitions when applying repeated cross-fitting ``n_rep`` (defaults to ``n_r
 Additionally, one can choose between the algorithms ``'dml1'`` and ``'dml2'`` via ``dml_procedure`` (defaults to
 ``'dml2'``).
 Depending on the causal model, one can further choose between different Neyman-orthogonal score / moment functions.
-For the PLR model the default ``score`` is ``'partialling out'``.
+For the PLR model the default ``score`` is ``'partialling out'``, i.e.,
 
-The user guide provides details about the :ref:`resampling`, the :ref:`algorithms`
+.. math::
+
+    \psi(W; \theta, \eta) := [Y - \ell(X) - \theta (D - m(X))] [D - m(X)].
+
+Note that with this score, we do not estimate :math:`g_0(X)` directly, but the conditional expectation of :math:`Y` given :math:`X`, :math:`\ell = \mathbb{E}[Y|X]`. The user guide provides details about the :ref:`resampling`, the :ref:`algorithms`
 and the :ref:`scores`.
 
 Estimate double/debiased machine learning models
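As context for the score added in this commit: setting the sample average of :math:`\psi(W; \theta, \eta)` to zero and solving for :math:`\theta` yields a residual-on-residual estimate. The sketch below (not part of the commit; all names and the polynomial nuisance fits are illustrative stand-ins for the cross-fitted ML learners DoubleML actually uses) checks this on synthetic data.

```python
# Hedged illustration of the 'partialling out' score on synthetic data.
# The polynomial fits stand in for ML nuisance learners; a real analysis
# would use cross-fitting as in the DoubleML package.
import numpy as np

rng = np.random.default_rng(42)
n, theta = 5000, 0.5
x = rng.normal(size=n)
d = np.sin(x) + rng.normal(scale=0.5, size=n)              # D = m_0(X) + V
y = theta * d + np.cos(x) + rng.normal(scale=0.5, size=n)  # Y = theta*D + g_0(X) + U

# Stand-ins for nuisance estimates ell_0(X) = E[Y|X] and m_0(X) = E[D|X].
ell_hat = np.polyval(np.polyfit(x, y, 3), x)
m_hat = np.polyval(np.polyfit(x, d, 3), x)

# Solving sum_i psi(W_i; theta, eta_hat) = 0 for theta gives the
# residual-on-residual regression coefficient.
res_y, res_d = y - ell_hat, d - m_hat
theta_hat = (res_y @ res_d) / (res_d @ res_d)
print(f"theta_hat = {theta_hat:.3f}")
```

Note that only :math:`\ell_0 = \mathbb{E}[Y|X]` and :math:`m_0 = \mathbb{E}[D|X]` enter the score, which is exactly why the commit renames the corresponding learner argument from ``ml_g`` to ``ml_l`` for this score.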
