TST: Add unmatched warning messages to assert_produces_warning() error #42181


Merged

Conversation

calvinsomething (Contributor)

  • closes #42103
  • tests added / passed
  • Ensure all linting tests pass, see here for how to run them
  • whatsnew entry

I didn't add anything to the whatsnew file, as it seems it's not always necessary depending on the change?

The error message for a failed assert_produces_warning(match=". . .") now looks like, for example:

Did not see warning 'DtypeWarning' matching 'Warning'. The emitted warning messages are [DtypeWarning('This is not a match.'), DtypeWarning('Another unmatched warning.')]
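The shape of the new message can be reproduced with logic along these lines (a minimal standalone sketch, not the actual pandas implementation; the function name check_match is illustrative):

```python
import re
import warnings

# Illustrative sketch of the improved check: if no caught warning of the
# expected category matches the pattern, report the warning messages that
# actually were emitted instead of failing silently on the details.
def check_match(caught, expected_category, match):
    matched = any(
        issubclass(w.category, expected_category)
        and re.search(match, str(w.message))
        for w in caught
    )
    if not matched:
        unmatched = [
            w.message for w in caught if issubclass(w.category, expected_category)
        ]
        raise AssertionError(
            f"Did not see warning {expected_category.__name__!r} "
            f"matching {match!r}. The emitted warning messages are {unmatched}"
        )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.warn("This is not a match.", RuntimeWarning)
    warnings.warn("Another unmatched warning.", RuntimeWarning)

try:
    check_match(caught, RuntimeWarning, "Did not see this warning")
except AssertionError as exc:
    error_message = str(exc)

print(error_message)
```

Running this prints a message in the same shape as the example above, with both unmatched warning reprs listed.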

Thanks for any feedback if I've done something wrong here.

@MarcoGorelli (Member) left a comment:

This is off to a good start!

I'm not a fan of logic in tests. I think test_fail_to_match is rather complicated, and the parametrisation doesn't add much. Could you split it up into three distinct tests, so that you don't have to substitute category.__name__ in the unmatched error message?

def test_fail_to_match(category, match):
    msg1 = "This is not a match."
    msg2 = "Another unmatched warning."
    unmatched = rf"{category.__name__}\('{msg1}'\), {category.__name__}\('{msg2}'\)"
Member


can you make three distinct tests here, and match the full unmatched error message each time?

Contributor Author


Thank you for the feedback! Yes, I can do that. I think I see the reasoning for the original three test cases that I could preserve.

@simonjayhawkins added the Error Reporting (incorrect or improved errors from pandas) and Testing (pandas testing functions or related to the test suite) labels on Jun 22, 2021
@pep8speaks

pep8speaks commented Jun 22, 2021

Hello @calvinsomething! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2021-06-24 13:13:28 UTC

@calvinsomething (Contributor, Author) left a comment:

So I've split the test into 3 and removed the logic. I used three different warning categories in the tests, since the full range is no longer passed as a parameter. How does this look?

@MarcoGorelli (Member):

Nice! Looks like there's going to be a pre-commit failure, though. Did you run it locally? See https://pandas.pydata.org/docs/development/contributing.html#pre-commit

Can the test names be slightly more descriptive, e.g. test_fail_to_match_future_warning for the first and likewise for the others?

Other than that, looks good to me - thanks for doing this, it'll be useful to anyone testing warnings!

@calvinsomething (Contributor, Author):

Thanks! No, I didn't run pre-commit this time. I will make the changes and run the checks before adding the next commit.

@MarcoGorelli (Member):

"I didn't run pre-commit this time"

If you do pre-commit install, then it'll run automatically for you before each commit.

@calvinsomething (Contributor, Author):

Hopefully that looks better!

@calvinsomething
Copy link
Contributor Author

Sorry. I ran pre-commit manually before, but now I have it installed and have corrected the flake8 error. Everything should be perfect... hopefully.

@ivanovmg (Member) left a comment:

This is a nice improvement to the current assertion!
There is one minor comment from my side regarding the readability of the message in the test function.

category = RuntimeWarning
match = "Did not see this warning"
unmatched = (
    r"Did not see warning 'RuntimeWarning' matching 'Did not see this warning'. "
Member


Would it be more readable if you split the message into lines this way:

    r"Did not see warning 'RuntimeWarning' matching 'Did not see this warning'. "
    r"The emitted warning messages are "
    r"\[RuntimeWarning\('This is not a match.'\), "
    r"RuntimeWarning\('Another unmatched warning.'\)\]"
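As a quick sanity check (a standalone sketch, with the message text taken from this thread), the split raw strings still concatenate into one regex that matches the full error message:

```python
import re

# The suggested multi-line raw strings concatenate into a single pattern.
unmatched = (
    r"Did not see warning 'RuntimeWarning' matching 'Did not see this warning'. "
    r"The emitted warning messages are "
    r"\[RuntimeWarning\('This is not a match.'\), "
    r"RuntimeWarning\('Another unmatched warning.'\)\]"
)

# The full error message the test expects to see.
emitted = (
    "Did not see warning 'RuntimeWarning' matching 'Did not see this warning'. "
    "The emitted warning messages are "
    "[RuntimeWarning('This is not a match.'), "
    "RuntimeWarning('Another unmatched warning.')]"
)

assert re.search(unmatched, emitted) is not None
```

Adjacent string literals are joined at compile time, so the line breaks change only the source layout, not the pattern being matched.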

Contributor Author


Yeah, I think that is better. I'll do that.

Member


Great, now it looks much cleaner here and in other places as well!

@calvinsomething (Contributor, Author):

Made the last suggested change, and now all the warning messages are more consistent.

@ivanovmg (Member) left a comment:

Looks good to me.

@MarcoGorelli MarcoGorelli added this to the 1.4 milestone Jul 11, 2021
