operator test tool proposal - add updates from comments #2018
@@ -1,5 +1,5 @@
 ---
-title: operator-testing-tool
+title: Tooling for Testing Operators
 authors:
 - "@jmccormick2001"
 - "@joelanford"
@@ -19,7 +19,7 @@ superseded-by:
 - "/enhancements/our-past-effort.md"
 ---
 
-# operator-testing-tool
+# Tooling for Testing Operators
 
 
 ## Release Signoff Checklist
@@ -58,7 +58,11 @@ not in scope: Operator Developers can use the same tool to run custom, functiona
 #### Story 1 - Show pass/fail in Scorecard Output
 Today, the scorecard output shows a percentage of tests that were successful to the end user. This story is to change the scorecard output to show a pass or fail for each test that is run in the output instead of a success percentage.
 
-The exit code of scorecard would be 0 if all tests passed. The exit code would be non-zero if tests failed. With this change scores are essentially replaced with a list of pass/fail(s).
+The exit code of scorecard would be 0 if all required tests passed. The exit code would be non-zero if any of the required tests failed. With this change, scores are essentially replaced with a list of pass/fail results.
+
+A message produced by the scorecard will show whether or not the required tests fully passed. Tests will be labeled in such a way as to specify whether they are required or not.
 
 The scorecard would by default run all tests regardless of whether a single test fails. Using a CLI flag such as the one below would cause test execution to stop on a failure:
 * operator-sdk scorecard -l 'testgroup=required' --fail-fast
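Not part of the proposal text: a minimal sketch of the pass/fail and exit-code behavior described above, assuming a hypothetical `TestResult` type whose name and fields are not taken from the proposal:

```go
package main

import (
	"fmt"
	"os"
)

// TestResult is a hypothetical record of one scorecard test run;
// the real v1alpha2 output type may differ.
type TestResult struct {
	Name     string
	Required bool
	Passed   bool
}

func main() {
	results := []TestResult{
		{Name: "Spec Block Exists", Required: true, Passed: true},
		{Name: "Writing into CRs has an effect", Required: false, Passed: false},
	}

	exitCode := 0
	for _, r := range results {
		status := "pass"
		if !r.Passed {
			status = "fail"
			if r.Required {
				// Any failing required test makes the run exit non-zero.
				exitCode = 1
			}
		}
		// Scores are replaced with a pass/fail line per test.
		fmt.Printf("%s: %s\n", r.Name, status)
	}
	os.Exit(exitCode)
}
```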
@@ -112,7 +116,8 @@ Tasks:
 * Document changes to CLI
 * Document new output formats
 * Document changes to configuration
-* Document breaking changes and removals
+* Update the CHANGELOG and Migration files with breaking changes and removals
@@ -124,6 +129,12 @@ The “source of truth” for validation would need to be established such as wh
 
 If no runtime tests are specified, then the scorecard would only run the static tests and not depend on a running cluster.
 
+Allow tests to declare a set of labels that can be introspected by the scorecard at runtime.

Review thread:
- Comment: This might be obvious to everyone, and sorry if it is, but how does a test declare a set of labels?
- Reply: Here is my current thinking and what I've been prototyping so far: to add labels to the CheckSpec Basic test, I add the labels to the `TestInfo` struct for that test like so:
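The snippet itself was not captured on this page; the following is a minimal sketch of what that declaration might look like, where the `Labels` field, its type, and all values are assumptions rather than the actual prototype:

```go
package scorecard

// TestInfo describes a scorecard test. Name and Description mirror
// common scorecard test metadata; the Labels field is the assumed
// addition being discussed here.
type TestInfo struct {
	Name        string
	Description string
	// Labels are introspected at runtime and matched against the
	// user-supplied label selector (e.g. -l 'testgroup=required').
	Labels map[string]string
}

// checkSpecTest shows labels statically declared in code for the
// "Check Spec" basic test (names are illustrative).
var checkSpecTest = TestInfo{
	Name:        "Spec Block Exists",
	Description: "Custom Resource has a Spec Block",
	Labels: map[string]string{
		"suite":     "basic",
		"testgroup": "required",
	},
}
```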
- Reply: For now, they're statically defined in code. In the future, where we may support Kubernetes […]
+
+Allow users to filter the set of tests to run based on a label selector string.
+
+Reuse apimachinery for parsing and matching.
+
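As an illustration of reusing apimachinery here (not from the proposal), a minimal sketch using `k8s.io/apimachinery/pkg/labels` to parse a selector string and match it against a test's labels:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// Parse the selector string supplied via the -l flag.
	selector, err := labels.Parse("testgroup=required")
	if err != nil {
		panic(err)
	}

	// Labels a test declared (see the TestInfo sketch above).
	testLabels := labels.Set{"suite": "basic", "testgroup": "required"}

	// Run the test only if its labels match the selector.
	if selector.Matches(testLabels) {
		fmt.Println("running test")
	}
}
```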
 
 ### Risks and Mitigations
 
 The scorecard would implement a version flag in the CLI to allow users to migrate from current functionality to the proposed functionality (e.g. v1alpha2):
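The example invocation after this colon falls outside the captured hunk; based on the --version flag described below, it was presumably along the lines of:
* operator-sdk scorecard --version v1alpha1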
@@ -187,8 +198,14 @@ end to end tests.**
 
 ##### Removing a deprecated feature
 
 - Announce deprecation and support policy of the existing feature
 - Deprecate the feature
+- We are adding a new --version flag to allow users to switch between
+  v1alpha1 and the proposed v1alpha2, or vice-versa, for backward compatibility

Review thread:
- Comment: How long will we keep v1alpha1?
- Reply: Maybe only keep it for the next release/fixes (0.12 -> 0.12.x), then remove it following any release after that?
- Reply: @dmesser @joelanford wdyt?
- Reply: Yeah, I think keeping it for 1 release would be enough. So, for example: […]
+
+- The output spec for v1alpha2 is added and the v1alpha1 spec is
+  retained to support the existing output format
+- The default spec version will be v1alpha2; users will need to modify
+  their usage to specify --version v1alpha1 to retain the older output
+- In a subsequent release, the v1alpha1 support will be removed.
 
 ### Upgrade / Downgrade Strategy
 