Commit c19105f

committed: add sensitivity to workflow and hide nb cell
1 parent b42d729 commit c19105f

File tree

3 files changed: +32 −6 lines changed


doc/examples/py_double_ml_sensitivity.ipynb

Lines changed: 4 additions & 2 deletions
@@ -370,7 +370,9 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "metadata": {},
+  "metadata": {
+   "nbsphinx": "hidden"
+  },
   "outputs": [],
   "source": [
    "fig = dml_irm_obj.sensitivity_plot()\n",
@@ -400,7 +402,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.11.2"
+  "version": "3.10.10"
  },
  "orig_nbformat": 4
 },

doc/guide/se_confint.rst

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 .. _se_confint:
 
 Variance estimation and confidence intervals
---------------------------------------------- 
+--------------------------------------------
 
 Variance estimation
 +++++++++++++++++++

doc/workflow/workflow.rst

Lines changed: 27 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -65,7 +65,7 @@ of the main regression equation and the first stage.
6565
1. Data-Backend
6666
---------------
6767

68-
In step 1., we initialize the data-backend and thereby declare the role of the outcome, the treatment, and the confounding variables.
68+
In Step 1., we initialize the data-backend and thereby declare the role of the outcome, the treatment, and the confounding variables.
6969

7070
We use data from the 1991 Survey of Income and Program Participation which is available via the function
7171
`fetch_401K (Python) <https://docs.doubleml.org/stable/api/generated/doubleml.datasets.fetch_401K.html>`_
@@ -141,7 +141,7 @@ individuals. To keep the presentation short, we will choose a partially linear m
141141
3. ML Methods
142142
-------------
143143

144-
In Step 3. we can specify the machine learning tools used for estimation of the nuisance parts.
144+
In Step 3., we can specify the machine learning tools used for estimation of the nuisance parts.
145145
We can generally choose any learner from `scikit learn <https://scikit-learn.org>`_ in Python and from the `mlr3 <https://mlr3.mlr-org.com>`_ ecosystem in R.
146146

147147
There are two nuisance parts in the PLR, :math:`g_0(X)=\mathbb{E}(Y|X)` and :math:`m_0(X)=\mathbb{E}(D|X)`.
@@ -296,7 +296,7 @@ corresponding fields or via a summary.
296296
6. Inference
297297
------------
298298

299-
In Step 6. we can perform further inference methods and finally interpret our findings. For example, we can set up confidence intervals
299+
In Step 6., we can perform further inference methods and finally interpret our findings. For example, we can set up confidence intervals
300300
or, in case multiple causal parameters are estimated, adjust the analysis for multiple testing. :ref:`DoubleML <doubleml_package>`
301301
supports various approaches to perform :ref:`valid simultaneous inference <sim_inf>`
302302
which are partly based on a multiplier bootstrap.
@@ -341,3 +341,27 @@ If we did not control for the confounding variables, the average treatment effec
341341

342342
# Simultaneous confidence bands
343343
dml_plr_forest$confint(joint = TRUE)
344+
345+
346+
7. Sensitivity Analysis
347+
------------------------
348+
349+
In Step 7., we can analyze the sensitivity of the estimated parameters. In the :ref:`plr-model` the causal interpretation
350+
relies on conditional exogeneity, which requires to control for confounding variables. The :ref:`DoubleML <doubleml_package>` python package
351+
implements :ref:`sensitivity` with respect to omitted confounders.
352+
353+
Analyzing the sensitivity of the intent-to-treat effect in the 401(k) example, we find that the effect remains positive even after adjusting for
354+
omitted confounders with a lower bound of :math:`$4,611` for the point estimate and :math:`$2,359` including statistical uncertainty.
355+
356+
.. tab-set::
357+
358+
.. tab-item:: Python
359+
:sync: py
360+
361+
.. ipython:: python
362+
363+
# Sensitivity analysis
364+
dml_plr_tree.sensitivity_analysis(cf_y=0.04, cf_d=0.03)
365+
366+
# Sensitivity summary
367+
print(dml_plr_tree.sensitivity_summary)

0 commit comments