@@ -161,6 +161,46 @@ For example:
# wrong
from common import test_base
+ Testing
+ =======
+
+ Failing tests
+ --------------
+
+ See https://docs.pytest.org/en/latest/skipping.html for background.
+
+ Do not use ``pytest.xfail``
+ ---------------------------
+
+ Do not use this method. It has the same behavior as ``pytest.skip``, namely
+ it immediately stops the test and does not check if the test will fail. If
+ this is the behavior you desire, use ``pytest.skip`` instead.
+
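+ For illustration, a minimal sketch of skipping inside a test (the test name
+ and reason below are placeholders):
+
+ .. code-block:: python
+
+     import pytest
+
+     def test_not_ready():
+         pytest.skip("explain here why the test is being skipped")
+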
+ Using ``pytest.mark.xfail``
+ ---------------------------
+
+ Use this method if a test is known to fail but the manner in which it fails
+ is not meant to be captured. It is common to use this method for a test that
+ exhibits buggy behavior or a feature that is not yet implemented. If
+ the failing test has flaky behavior, use the argument ``strict=False``. This
+ way, pytest will not report a failure if the test happens to pass.
+
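+ For example, a minimal sketch of the decorator form with ``strict=False``
+ (the test name and reason below are placeholders):
+
+ .. code-block:: python
+
+     import pytest
+
+     @pytest.mark.xfail(reason="Indicate why here", strict=False)
+     def test_flaky_behavior():
+         # body of the flaky or buggy test goes here
+         ...
+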
+ Prefer the decorator ``@pytest.mark.xfail`` and the argument ``pytest.param``
+ over usage within a test so that the test is appropriately marked during the
+ collection phase of pytest. For xfailing a test that involves multiple
+ parameters, a fixture, or a combination of these, it is only possible to
+ xfail during the testing phase. To do so, use the ``request`` fixture:
+
+ .. code-block:: python
+
+     import pytest
+
+     def test_xfail(request):
+         request.node.add_marker(pytest.mark.xfail(reason="Indicate why here"))
+
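+ With ``pytest.param``, an individual parametrized case can be marked as xfail
+ during collection; a minimal sketch (the parameter values and reason below
+ are placeholders):
+
+ .. code-block:: python
+
+     import pytest
+
+     @pytest.mark.parametrize(
+         "value",
+         [
+             0,
+             1,
+             pytest.param(2, marks=pytest.mark.xfail(reason="Indicate why here")),
+         ],
+     )
+     def test_value_in_range(value):
+         assert value < 2
+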
+ Do not use xfail for tests that fail because of invalid user arguments.
+ For these tests, we need to verify that the correct exception type and error
+ message are raised, using ``pytest.raises`` instead.
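+
+ For example, a minimal sketch of such a check (the ``divide`` function and the
+ error message below are hypothetical):
+
+ .. code-block:: python
+
+     import pytest
+
+     def divide(a, b):
+         if b == 0:
+             raise ValueError("b must be nonzero")
+         return a / b
+
+     def test_divide_invalid_argument():
+         # verify both the exception type and (part of) the error message
+         with pytest.raises(ValueError, match="nonzero"):
+             divide(1, 0)
+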
Miscellaneous
=============