bpo-40331: Increase test coverage for the statistics module #19608
@@ -1004,6 +1004,9 @@ def test_nan(self):
        x = statistics._convert(nan, type(nan))
        self.assertTrue(_nan_equal(x, nan))

    def test_raise_type_error(self):
Review comment: Let's rename this …

Reply: Done.
        self.assertRaises(TypeError, statistics._convert, None, float)
Review comment: I find using …

Suggested change: …

(Also in following new tests in this PR.)

Reply: Sure, I will change it here, as well as in the …

Review comment: Sorry to mention it just after you made the change already, but I find that using … So, it may not be necessary in the above case, but it's worth some consideration.

Review comment: That may be true in general, but I think that in this case it's most appropriate to simply check that a TypeError is thrown.

Review comment: Yeah, looking at the above test a bit more closely, it seems highly unlikely for any other …

Reply: Thanks for your suggestion @aeros. As you have both already said, it may not be useful in this particular test, but maybe it would be useful in the … If you don't have any objections, I will go and modify the …

Review comment: Go ahead.
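Editor's note: the suggested change above was not captured in this excerpt. Purely as a hedged illustration of the kind of alternative usually discussed in such threads, the sketch below contrasts the callable form of assertRaises used in this PR with the context-manager form, reusing the same _convert call as the test above; the class name ConvertRaisesSketch is invented for the example, not part of the PR. (unittest also offers assertRaisesRegex when the error message itself should be checked.)

import statistics
import unittest

class ConvertRaisesSketch(unittest.TestCase):

    def test_callable_form(self):
        # One-liner form: assertRaises invokes the callable with the given arguments.
        self.assertRaises(TypeError, statistics._convert, None, float)

    def test_context_manager_form(self):
        # Context-manager form: the exception must be raised inside the block,
        # which stays precise even if the test later grows more setup code.
        with self.assertRaises(TypeError):
            statistics._convert(None, float)

if __name__ == '__main__':
    unittest.main()

Both forms assert the same behavior here; the context-manager form mainly pays off when the tested call is surrounded by additional statements.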

class FailNegTest(unittest.TestCase):
    """Test _fail_neg private function."""

@@ -1033,6 +1036,45 @@ def test_error_msg(self):
        self.assertEqual(errmsg, msg)

class FindLteqTest(unittest.TestCase):
    # Test _find_lteq private function.

    def test_raise_value_error(self):
Review comment: Let's also find a better name for this, e.g. …

Reply: Done.
        for a, x in [
            ([], 1),      # empty a triggers `i != len(a)`
Review comment: I took a while to think about this, but I've decided that we should remove all of the comments referencing the specific statements that this is intended to exercise. My main reason is that the referenced code will likely change over time, and these comments are very likely to become stale.

Review comment: The more general reason is that tests must not test the implementation, but only the desired behavior. We might take inspiration for relevant test cases from the code, but the tests should not need to be changed when the code is changed in ways that don't affect its behavior, such as refactoring and optimization.

Reply: Thank you for describing your reasoning behind this decision. I will remove those comments. I agree that good tests assert the desired behavior and not the actual implementation. I put them in only to help the reader understand why each case was added, but all of them trigger the same behavior (throwing the ValueError).

Review comment: Comments explaining why, conceptually, each of the test cases should result in such an error, or perhaps which edge case it is testing, could be welcome. This is true in general, and specifically in this case. They're not required though; I'm just adding this note for completeness :)
            ([1, 2], 3),  # non-existing max value triggers `i != len(a)`
            ([1, 3], 2)   # non-existing value triggers `a[i] == x`
        ]:
            self.assertRaises(ValueError, statistics._find_lteq, a, x)

    def test_locate_successfully(self):
        for a, x, expected_i in [
            ([1, 1, 1, 2, 3], 1, 0),
            ([0, 1, 1, 1, 2, 3], 1, 1),
            ([1, 2, 3, 3, 3], 3, 2)
        ]:
            self.assertEqual(expected_i, statistics._find_lteq(a, x))
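Editor's note: as background for why the first test's inputs raise ValueError, and only as a sketch inferred from the conditions quoted in the test comments (`i != len(a)` and `a[i] == x`), not the module's actual source: `_find_lteq` behaves like a bisect_left lookup that insists on an exact match at the found position, roughly as follows.

from bisect import bisect_left

def find_lteq_sketch(a, x):
    # Locate the leftmost index in the sorted list a whose value equals x;
    # raise ValueError when no such element exists.
    i = bisect_left(a, x)
    if i != len(a) and a[i] == x:
        return i
    raise ValueError

# The successful cases exercised by test_locate_successfully above:
assert find_lteq_sketch([1, 1, 1, 2, 3], 1) == 0
assert find_lteq_sketch([0, 1, 1, 1, 2, 3], 1) == 1
assert find_lteq_sketch([1, 2, 3, 3, 3], 3) == 2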

class FindRteqTest(unittest.TestCase):
    # Test _find_rteq private function.

    def test_raise_value_error(self):
Review comment: I think this test method could also be renamed, similarly to the previous ones.

Reply: Sorry, I should have changed this too.
        for a, l, x in [
            ([1], 2, 1),    # when l=len(a)+1 triggers `i != (len(a)+1)`
            ([1, 3], 0, 2)  # non-existing value triggers `a[i-1] == x`
        ]:
            self.assertRaises(ValueError, statistics._find_rteq, a, l, x)

    def test_locate_successfully(self):
        for a, l, x, expected_i in [
            ([1, 1, 1, 2, 3], 0, 1, 2),
            ([0, 1, 1, 1, 2, 3], 0, 1, 3),
            ([1, 2, 3, 3, 3], 0, 3, 4)
        ]:
            self.assertEqual(expected_i, statistics._find_rteq(a, l, x))
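Editor's note: similarly, and again only as a sketch inferred from the conditions named in the test comments (`i != (len(a)+1)` and `a[i-1] == x`), `_find_rteq` looks like a bisect_right lookup starting at position l that requires an exact match just left of the insertion point.

from bisect import bisect_right

def find_rteq_sketch(a, l, x):
    # Locate the rightmost index in the sorted list a whose value equals x,
    # searching from position l onward; raise ValueError when x is absent.
    i = bisect_right(a, x, lo=l)
    if i != (len(a) + 1) and a[i - 1] == x:
        return i - 1
    raise ValueError

# The successful cases exercised by test_locate_successfully above:
assert find_rteq_sketch([1, 1, 1, 2, 3], 0, 1) == 2
assert find_rteq_sketch([0, 1, 1, 1, 2, 3], 0, 1) == 3
assert find_rteq_sketch([1, 2, 3, 3, 3], 0, 3) == 4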

# === Tests for public functions ===

class UnivariateCommonMixin:

@@ -1454,6 +1496,9 @@ class TestHarmonicMean(NumericTestCase, AverageMixin, UnivariateTypeMixin):
    def setUp(self):
        self.func = statistics.harmonic_mean

    def test_single_value_unsupported_type(self):
Review comment: This should come after the … Also, why test only an array with a single value of an unsupported type? This would be a good opportunity for other unsupported cases, such as longer arrays, and arrays with values of mixed types, some supported and some not.

Reply: You are right, I will put it right after the … I tested it with a single value of an unsupported type because the statement that explicitly raises this … If you think that it would be useful to have a test with longer lists (only with unsupported values, both supported/unsupported values) and assert that a …

Review comment: I suggest adding two assertions: …

Reply: Thanks for the suggestions @taleinat. I have added a test in the latest commit that asserts those cases.
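Editor's note: the two suggested assertions and the test eventually committed are not shown in this excerpt. Purely as an illustrative sketch of what such additional checks could look like, following the reviewer's description of a longer all-unsupported list and a mixed supported/unsupported list; the class name, test name, and data values below are assumptions, not the actual commit.

import statistics
import unittest

class HarmonicMeanUnsupportedTypesSketch(unittest.TestCase):

    def test_unsupported_types(self):
        # A longer list containing only unsupported (string) values.
        self.assertRaises(TypeError, statistics.harmonic_mean, ['1', '2', '3'])
        # A mixed list of supported and unsupported values.
        self.assertRaises(TypeError, statistics.harmonic_mean, [2.5, '3.14', 4])

if __name__ == '__main__':
    unittest.main()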
        self.assertRaises(TypeError, self.func, ['3.14'])

    def prepare_data(self):
        # Override mixin method.
        values = super().prepare_data()