Add "Run All/Debug All" dropdown picker for parametrized tests #8757

Merged: 5 commits into microsoft:master from run-all-dropdown-picker-5608 on Dec 5, 2019

Conversation

phloose commented Nov 23, 2019

Related to #5608

This pull request adds a picker option (Run All/Debug All) to the dropdown that shows when clicking the code lens of a parametrized test. That way it is possible to run/debug all test variants that belong to a parametrized test.

Run All: [demo: run_all]

Debug All: [demo: debug_all]
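For illustration, here is a minimal sketch of how such aggregate entries could be prepended to the QuickPick, assuming the standard VS Code API; the item shape, labels, and function name are hypothetical and are not the actual diff in src/client/testing/display/picker.ts:

```typescript
import { QuickPickItem, window } from 'vscode';

// Hypothetical picker entry; the real items carry more test context.
interface ParametrizedTestItem extends QuickPickItem {
    action: 'runAll' | 'debugAll' | 'runSingle';
    variant?: string;
}

async function showParametrizedTestPicker(variants: string[]): Promise<ParametrizedTestItem | undefined> {
    const items: ParametrizedTestItem[] = [
        { label: 'Run All', description: 'Run every variant of this parametrized test', action: 'runAll' },
        { label: 'Debug All', description: 'Debug every variant of this parametrized test', action: 'debugAll' },
        // One entry per variant keeps the existing single-variant flow intact.
        ...variants.map<ParametrizedTestItem>(v => ({ label: v, action: 'runSingle', variant: v }))
    ];
    return window.showQuickPick(items, { placeHolder: 'Select a test variant to run or debug' });
}
```

Returning the chosen item and dispatching on its action afterwards keeps the run and debug paths symmetric.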

  • Pull request represents a single change (i.e. not fixing disparate/unrelated things in a single PR)
  • Title summarizes what is changing
  • Has a news entry file (remember to thank yourself!)
  • Appropriate comments and documentation strings in the code
  • Has sufficient logging.
  • Has telemetry for enhancements.
  • Unit tests & system/integration tests are added/updated
  • Test plan is updated as appropriate
  • package-lock.json has been regenerated by running npm install (if dependencies have changed)
  • The wiki is updated with any design decisions/details.

codecov-io commented Nov 23, 2019

Codecov Report

Merging #8757 into master will decrease coverage by 0.02%.
The diff coverage is 22.22%.

@@            Coverage Diff            @@
##           master   #8757      +/-   ##
=========================================
- Coverage   58.82%   58.8%   -0.03%     
=========================================
  Files         526     526              
  Lines       24563   24580      +17     
  Branches     3974    3976       +2     
=========================================
+ Hits        14450   14454       +4     
- Misses       9178    9191      +13     
  Partials      935     935
Impacted Files                          Coverage Δ
src/client/testing/main.ts              15.38% <0%> (-0.31%) ⬇️
src/client/common/constants.ts          100% <100%> (ø) ⬆️
src/client/testing/display/picker.ts    31.65% <25%> (-0.91%) ⬇️
src/client/linters/flake8.ts            91.66% <0%> (+0.75%) ⬆️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0c5b2fd...89a9fa5. Read the comment docs.

phloose (Author) commented Nov 23, 2019

While trying to write a unit test for the added picker option, I stumbled over something in picker.unit.test.ts. In this line an object is built whose value property should be of type CommandSource but gets a Type object as input. Is this intentional? The resulting combinations do not make sense to me: you get different types of commands invoked from different TestItem types, not different types invoked from different command sources... @DonJayamanne could you clarify that?

Philipp Loose added 2 commits November 24, 2019 13:40
In the picker unit tests, two calls to getNamesAndValues are used to extract names and values from enums. The second call used 'CommandSource' as its generic type parameter but received a Type object as input. To be meaningful, the input has been changed to CommandSource, so that the test checks different TestItem types against different command sources; otherwise it would test TestItem types against TestItem types.

Furthermore, extracting the name and value of CommandSource.commandpalette with getNamesAndValues was not successful and returned undefined. This has been fixed by assigning CommandSource.commandpalette directly to commandSource.value when this command source is evaluated.
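To make the mix-up concrete, here is a hedged sketch of the cross-product pattern being discussed; the enum members and the body of getNamesAndValues are illustrative assumptions, not the extension's actual definitions:

```typescript
// Illustrative enums; the real TestItem/Type and CommandSource enums differ.
enum TestType { testFile, testFolder, testSuite, testFunction }
enum CommandSource { ui = 'ui', codelens = 'codelens', commandpalette = 'commandpalette' }

// Typical shape of such a helper: collect each enum member's name and value.
// A loosely typed parameter is why passing the wrong enum object
// (Type instead of CommandSource) compiled without complaint.
function getNamesAndValues<T>(e: any): { name: string; value: T }[] {
    return Object.keys(e)
        .filter(k => isNaN(Number(k))) // drop the numeric reverse-mapping keys of numeric enums
        .map(name => ({ name, value: e[name] as T }));
}

// The intended cross-product: every TestItem type paired with every command source.
const cases: [TestType, CommandSource][] = [];
for (const item of getNamesAndValues<TestType>(TestType)) {
    for (const source of getNamesAndValues<CommandSource>(CommandSource)) {
        cases.push([item.value, source.value]); // one test case per pair
    }
}
```

Feeding TestType to both calls would silently produce TestType-against-TestType pairs, which is the behavior the commit above corrects.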
phloose force-pushed the run-all-dropdown-picker-5608 branch from 22290c6 to 3818e14 on November 24, 2019 12:45
phloose force-pushed the run-all-dropdown-picker-5608 branch from 5b5d645 to 89a9fa5 on November 24, 2019 13:06
luabud (Member) commented Nov 27, 2019

In terms of how it looks, it's great :) No clue about the code part though 😝
Also, Don is now working on the data science features, so you may want to ask those questions to @karthiknadig instead 😊

phloose (Author) commented Nov 29, 2019

@karthiknadig, since @luabud told me that you can also look at the changes: what do you think?

karthiknadig (Member) commented

@phloose I am taking a look at this. For the unit tests you are right: it should be passing CommandSource, but it passes Type instead.

karthiknadig (Member) commented

@phloose let me know if you plan on updating the tests.

phloose (Author) commented Dec 3, 2019

> @phloose let me know if you plan on updating the tests.

What do you mean exactly? Should I enhance the existing ones? Or also test the TestDisplay methods that are used? I realized that there are no tests covering the TestDisplay methods at all...

I thought I would wait for your feedback; if you are concerned about something, I would refactor the code following your suggestions or add parts I missed/didn't think of.

karthiknadig self-assigned this on Dec 4, 2019
karthiknadig (Member) commented

@phloose Sorry, I thought you were waiting to do the Type to CommandSource change. I missed that. These changes look good.

karthiknadig self-requested a review on December 4, 2019 06:48
phloose (Author) commented Dec 4, 2019

> @phloose Sorry, I thought you were waiting to do the Type to CommandSource change. I missed that. These changes look good.

Actually I already made the change in 98f3316 ;)

Thanks for your approval! In general I think there should be more tests for the TestDisplay class, which does the real work. But in #8647 I already developed a small test case for that class and will enhance it to be a base for future TestDisplay tests. If I added something here it would be duplicate work and would eventually lead to conflicts. Maybe there should be a follow-up issue to this pull request that addresses writing proper tests for the additions to TestDisplay made here.
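Purely as a sketch of what such future tests could look like, assuming mocha's TDD globals; the AppShell interface, TestDisplayLike class, and labels are hypothetical stand-ins, not the real TestDisplay API:

```typescript
import * as assert from 'assert';

// Hypothetical stand-ins; the real TestDisplay in
// src/client/testing/display/picker.ts has a larger surface.
interface AppShell {
    showQuickPick(items: string[]): Promise<string | undefined>;
}

class TestDisplayLike {
    constructor(private readonly appShell: AppShell) {}
    // Offer the aggregate actions ahead of the individual variants.
    pickParametrizedAction(variants: string[]): Promise<string | undefined> {
        return this.appShell.showQuickPick(['Run All', 'Debug All', ...variants]);
    }
}

suite('Parametrized test picker (sketch)', () => {
    test('offers Run All and Debug All before the variants', async () => {
        let offered: string[] = [];
        const fakeShell: AppShell = {
            showQuickPick: async items => { offered = items; return 'Run All'; }
        };
        const display = new TestDisplayLike(fakeShell);
        const choice = await display.pickParametrizedAction(['test_foo[1]', 'test_foo[2]']);
        assert.deepStrictEqual(offered.slice(0, 2), ['Run All', 'Debug All']);
        assert.strictEqual(choice, 'Run All');
    });
});
```

A hand-rolled fake keeps the sketch dependency-free; the extension's own suites could use their existing mocking utilities instead.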

karthiknadig merged commit 1523dd6 into microsoft:master on Dec 5, 2019
karthiknadig (Member) commented

The smoke test failure is unrelated to this PR.

karthiknadig (Member) commented

@phloose We are beginning to address test gaps in the coming sprints. I agree, we need tests covering TestDisplay; we are using analysis tools to find gaps and address them. If you are planning on addressing TestDisplay or are already working on it, please create an issue for it. You could also submit a draft PR so we know you are looking at it and we can prevent overlap.

lock bot locked as resolved and limited conversation to collaborators on Dec 12, 2019