scorecard - Add list flag #2137
Conversation
internal/scorecard/scorecard.go
Outdated
if scViper.GetBool(ListOpt) {
	fmt.Printf("\nTests that would be executed include:\n")
}
I tried this out in ./test/test-framework and got:
$ cd ./test/test-framework
$ operator-sdk scorecard -L
Tests that would be executed include:
Error: error validating plugin config: revert to v1alpha1 to use external plugins: external plugins are not currently supported with v1alpha2
We should print this after we've validated the config, or maybe we don't need it at all?
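The ordering suggested above can be sketched roughly as follows. This is a minimal, self-contained sketch, not the PR's actual code: validateConfig and runScorecard are hypothetical stand-ins, and the boolean parameters stand in for the real viper-backed flags.

```go
package main

import (
	"errors"
	"fmt"
)

// validateConfig is a hypothetical stand-in for the real plugin-config
// validation; here it always rejects external plugins under v1alpha2,
// mirroring the error seen in the transcript above.
func validateConfig(externalPlugins bool) error {
	if externalPlugins {
		return errors.New("external plugins are not currently supported with v1alpha2")
	}
	return nil
}

// runScorecard sketches the suggested ordering: validate the config
// first, and print the --list banner only once the config is known good.
func runScorecard(listFlag, externalPlugins bool) (string, error) {
	if err := validateConfig(externalPlugins); err != nil {
		return "", fmt.Errorf("error validating plugin config: %v", err)
	}
	if listFlag {
		return "\nTests that would be executed include:\n", nil
	}
	return "", nil
}

func main() {
	// With an invalid config, the error surfaces before any banner.
	if _, err := runScorecard(true, true); err != nil {
		fmt.Println("Error:", err)
	}
}
```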
I removed that in this next update.
if err != nil {
	basicTests.Log = fmt.Sprintf("failed to read log buffer: %v", err)

if config.ListOpt {
	basicTests.List()
Rather than using a new List() function on TestSuite, I was thinking we would build the pluginOutputs without actually running the tests, and then just let printPluginOutputs() do its thing. Then the output formatting would be the same regardless of run vs. list, and we'd be able to list in JSON or text.
var pluginOutputs []scapiv1alpha1.ScorecardOutput
for _, plugin := range plugins {
if scViper.GetBool(ListOpt) {
pluginOutputs = append(pluginOutputs, plugin.List())
} else {
pluginOutputs = append(pluginOutputs, plugin.Run())
}
}
I think we also need to print the labels in the output somewhere so users know how to build their label selectors.
Maybe something like the verbose text format I mocked up here?
Another alternative I like is a column-based format that looks like kubectl output, something like this, using a tab writer:
TEST NAME LABELS STATUS
Spec Block Exists "necessity"="required", "suite"="basic" pass
Status Block Exists "necessity"="required", "suite"="basic" pass
Writing into CRs has an effect "necessity"="required", "suite"="basic" fail
We'd leave off STATUS when using --list. But when actually running the tests, if we did choose something like this, we'd have to figure out how to output the suggestions, errors, and logs, and make it obvious which test and which CR they're for. Maybe initially, we'd just append them at the end like we do now?
In this version, the text output with the --list flag looks like this:
basic:
Spec Block Exists
Labels:
necessity:required
suite:basic
Status Block Exists
Labels:
necessity:required
suite:basic
Writing into CRs has an effect
Labels:
necessity:required
suite:basic
olm:
Provided APIs have validation
Labels:
necessity:required
suite:olm
Owned CRDs have resources listed
Labels:
necessity:required
suite:olm
CRs have at least 1 example
Labels:
necessity:required
suite:olm
Spec fields with descriptors
Labels:
necessity:required
suite:olm
Status fields with descriptors
Labels:
necessity:required
suite:olm
This is the output when --list is not specified:
basic:
Spec Block Exists : pass
Labels:
necessity:required
suite:basic
Status Block Exists : pass
Labels:
necessity:required
suite:basic
Writing into CRs has an effect : pass
Labels:
suite:basic
necessity:required
olm:
Spec fields with descriptors : pass
Labels:
necessity:required
suite:olm
Status fields with descriptors : fail
Labels:
suite:olm
necessity:required
Suggestions:
Add a status descriptor for status
Provided APIs have validation : pass
Labels:
necessity:required
suite:olm
Owned CRDs have resources listed : pass
Labels:
necessity:required
suite:olm
Suggestions:
If it would be helpful to an end-user to understand or troubleshoot your CR, consider adding resources [configmaps/v1 memcacheds/v1alpha1 replicasets/v1 deployments/v1 services/v1] to the resources section for owned CRD Memcached
CRs have at least 1 example : pass
Labels:
necessity:required
suite:olm
for _, suggestion := range result.Suggestions {

sb.WriteString("\tLabels: \n")
for labelKey, labelValue := range result.Labels {
	sb.WriteString(fmt.Sprintf("\t\t%s:%s\n", labelKey, labelValue))
Should we put labels and values in quotes in case they have spaces in them? I think for our own labels this should never be a problem, but if/when we add support back in for external plugins with v1alpha2, we'll have less control.
-	sb.WriteString(fmt.Sprintf("\t\t%s:%s\n", labelKey, labelValue))
+	sb.WriteString(fmt.Sprintf("\t\t%q: %q\n", labelKey, labelValue))
I'll add this; it renders pretty well, I think.
/test e2e-aws-subcommand
/test marker
LGTM
/lgtm
/approved
@@ -20,7 +20,7 @@ operator-sdk scorecard --version v1alpha2 --config "$CONFIG_PATH" |& grep '^.*er

# test to see if v1alpha2 is used from the command line
commandoutput="$(operator-sdk scorecard --version v1alpha2 --config "$CONFIG_PATH_V1ALPHA2" 2>&1)"
-failCount=`echo $commandoutput | grep -o ": fail" | wc -l`
+failCount=`echo $commandoutput | grep -o "fail" | wc -l`
expectedFailCount=3
Hi @jmccormick2001,
I could not find any test here that passes a selector via --selector and checks that only the tests with label X were executed, and I also could not find a test for the --list flag. Should we not add them here?
Besides this, all looks OK to me. /LGTM
I am approving, but please check my comment above.
I think the tests are missing, and we should add them before merging.
2c5604f
@@ -28,6 +28,26 @@ then

exit 1
fi

# test to see if list flag works
🥇 Great
/lgtm
Description of the change:
This change adds a new 'list' flag to the scorecard CLI's set of flags. The flag causes only the test names to be printed, without actually running the scorecard tests. The tests that 'would' be run are determined by the selector filter being applied, giving the end user a chance to see which tests would run under a particular selector.
Motivation for the change:
This feature is part of a larger effort to build the v1alpha2 version of the scorecard.