operator test tool proposal - add updates from comments #2018

Merged (5 commits, Oct 21, 2019)
29 changes: 23 additions & 6 deletions doc/proposals/openshift-4.3/operator-testing-tool.md
@@ -1,5 +1,5 @@
---
title: operator-testing-tool
title: Tooling for Testing Operators
authors:
- "@jmccormick2001"
- "@joelanford"
@@ -19,7 +19,7 @@ superseded-by:
- "/enhancements/our-past-effort.md"
---

# operator-testing-tool
# Tooling for Testing Operators


## Release Signoff Checklist
@@ -58,7 +58,11 @@ not in scope: Operator Developers can use the same tool to run custom, functiona
#### Story 1 - Show pass/fail in Scorecard Output
Today, the scorecard output shows the end user the percentage of tests that were successful. This story is to change the scorecard output to show a pass or fail for each test that is run, instead of a success percentage.

The exit code of scorecard would be 0 if all tests passed. The exit code would be non-zero if tests failed. With this change scores are essentially replaced with a list of pass/fail(s).
The exit code of the scorecard would be 0 if all required tests passed and non-zero if any required test failed. With this change, scores are essentially replaced with a list of pass/fail results.

A message produced by the scorecard will show whether all of the required
tests passed. Tests will be labeled in a way that specifies whether or not
they are required.

The scorecard would by default run all tests regardless of whether a single test fails. Using a CLI flag such as the one below would cause test execution to stop on the first failure (a sketch of this behavior follows the example):
* operator-sdk scorecard -l 'testgroup=required' --fail-fast
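
Below is a minimal sketch of that fail-fast and exit-code behavior, assuming an in-process test runner; the scorecardTest type, the test names, and the flag wiring are illustrative assumptions, not the actual operator-sdk implementation.

    package main

    import (
        "fmt"
        "os"
    )

    // scorecardTest is a hypothetical stand-in for a scorecard test with labels.
    type scorecardTest struct {
        name     string
        required bool // derived from a "required" label on the test
        run      func() bool
    }

    func main() {
        failFast := true // would come from the proposed --fail-fast flag

        tests := []scorecardTest{
            {name: "Spec Block Exists", required: true, run: func() bool { return true }},
            {name: "Status Block Exists", required: true, run: func() bool { return false }},
            {name: "Optional Check", required: false, run: func() bool { return true }},
        }

        exitCode := 0
        for _, t := range tests {
            passed := t.run()
            status := "pass"
            if !passed {
                status = "fail"
                if t.required {
                    exitCode = 1 // any required failure makes the run exit non-zero
                }
            }
            fmt.Printf("%-25s %s\n", t.name, status)

            if failFast && !passed {
                break // stop on the first failure when --fail-fast is set
            }
        }
        os.Exit(exitCode)
    }
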
@@ -112,7 +116,8 @@ Tasks:
* Document changes to CLI
* Document new output formats
* Document changes to configuration
* Document breaking changes and removals
* Update the CHANGELOG and Migration files with breaking changes and removals




@@ -124,6 +129,12 @@ The “source of truth” for validation would need to be established such as wh

If no runtime tests are specified, then the scorecard would only run the static tests and not depend on a running cluster.

Allow tests to declare a set of labels that can be introspected by the scorecard at runtime.
Member:
This might be obvious to everyone, and sorry if it is, but how does a test declare a set of labels?

Contributor Author:
Here is my current thinking and what I've been prototyping so far: to add labels to the CheckSpec basic test, I add the labels to the 'TestInfo' struct for that test like so:

    return &CheckSpecTest{
        BasicTestConfig: conf,
        TestInfo: schelpers.TestInfo{
            Name:        "Spec Block Exists",
            Description: "Custom Resource has a Spec Block",
            Cumulative:  false,
            Labels:      labels.Set{"required": "true", "suite": "basic"},
        },
    }

Member:
For now, they're statically defined in code. In the future, if we support Kubernetes Job-based tests, they would be declared in metadata.labels.


Allow users to filter the set of tests to run based on a label selector string.

Reuse apimachinery for parsing and matching.
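
As a rough sketch of reusing apimachinery for this, the snippet below parses a selector string and matches it against the label set from the TestInfo example above; the selector value and surrounding wiring are assumptions for illustration, while the k8s.io/apimachinery/pkg/labels calls are the real library API.

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/labels"
    )

    func main() {
        // Labels declared on a test, mirroring the TestInfo example above.
        testLabels := labels.Set{"required": "true", "suite": "basic"}

        // A selector string as it might arrive from a -l/--selector flag.
        selector, err := labels.Parse("suite=basic,required=true")
        if err != nil {
            fmt.Println("invalid selector:", err)
            return
        }

        // Matches reports whether the test's labels satisfy the selector,
        // so the scorecard can decide whether to run this test.
        fmt.Println("run this test:", selector.Matches(testLabels))
    }

Reusing the library keeps the selector syntax identical to the one kubectl's -l flag accepts, so users do not have to learn a new filtering format.
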

### Risks and Mitigations

The scorecard would implement a version flag in the CLI to allow users to migrate from current functionality to the proposed functionality (e.g. v1alpha2):
@@ -187,8 +198,14 @@ end to end tests.**

##### Removing a deprecated feature

- Announce deprecation and support policy of the existing feature
- Deprecate the feature
- We are adding a new --version flag to allow users to switch between
v1alpha1 and the proposed v1alpha2 or vice versa for backward compatibility
Member:
How long will we keep v1alpha1?

Contributor Author:
Maybe only keep it for the next release and its fixes (0.12 -> 0.12.x), then remove it in any release after that?

Member:

Yeah I think keeping it for 1 release would be enough.

So, for example:

- 0.11.x has v1alpha1 only
- 0.12.x has v1alpha1 and v1alpha2
- 0.13.x has v1alpha2 only

- The output spec for v1alpha2 is added, and the v1alpha1 spec is
retained to support the existing output format
- The default spec version will be v1alpha2; users will need to modify
their usage to specify --version v1alpha1 to retain the older output (a sketch follows below)
- In a subsequent release, v1alpha1 support will be removed.
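
A minimal sketch of how the proposed --version flag might choose between the two output formats; the result type, the printer functions, and the hard-coded values are assumptions made for illustration, and only the v1alpha1/v1alpha2 behavior described above comes from the proposal.

    package main

    import (
        "fmt"
        "os"
    )

    type result struct {
        name   string
        passed bool
    }

    // printV1Alpha1 stands in for the existing percentage-style output.
    func printV1Alpha1(results []result) {
        passed := 0
        for _, r := range results {
            if r.passed {
                passed++
            }
        }
        fmt.Printf("score: %d%%\n", passed*100/len(results))
    }

    // printV1Alpha2 stands in for the proposed pass/fail list output.
    func printV1Alpha2(results []result) {
        for _, r := range results {
            status := "fail"
            if r.passed {
                status = "pass"
            }
            fmt.Printf("%s: %s\n", r.name, status)
        }
    }

    func main() {
        version := "v1alpha2" // proposed default; --version v1alpha1 keeps the old format
        results := []result{{"check-spec", true}, {"check-status", false}}

        switch version {
        case "v1alpha1":
            printV1Alpha1(results)
        case "v1alpha2":
            printV1Alpha2(results)
        default:
            fmt.Fprintf(os.Stderr, "unsupported scorecard version %q\n", version)
            os.Exit(1)
        }
    }
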


### Upgrade / Downgrade Strategy
