<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [OLM Integration](#olm-integration)
- [Extending the Scorecard with Plugins](#extending-the-scorecard-with-plugins)
- [JSON format](#json-format)
- [Running the scorecard with an OLM-managed operator](#running-the-scorecard-with-an-olm-managed-operator)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

## Overview

The scorecard works by creating all resources required by CRs and the operator.

The scorecard will create another container in the operator deployment which is used to record calls to the API server and run many of the tests. The tests performed will also examine some of the fields in the CRs.

The scorecard also supports plugins, which allow you to extend the scorecard's functionality and add additional tests.

## Requirements

The following are some requirements for the operator project that will be checked by the scorecard:

- Access to a Kubernetes v1.11.3+ cluster

1. Set up the `.osdk-scorecard.yaml` configuration file in your project. See [Config file](#config-file).
2. Create the namespace defined in the RBAC files (`role_binding`).
3. Run the scorecard, for example `$ operator-sdk scorecard`. See [Command args](#command-args) for its options.

**NOTE:** If your operator was not built with the SDK, some additional steps will be required in order to meet the scorecard's requirements.

## Configuration
The config file support is provided by the `viper` package. For more info on how viper configuration works, see [`viper`'s README][viper].
**NOTE:** The config file can be in any of the `json`, `yaml`, or `toml` formats as long as the file has the correct extension. As the config file may be extended to allow configuration of all `operator-sdk` subcommands in the future, the scorecard's configuration must be under a `scorecard` subsection.

### Command Args

While most configuration is done via a config file, there are a few important args that can be used as follows.

| Flag | Type | Description |
| -------- | -------- | -------- |
| `--config` | string | Path to config file (default `<project_dir>/.osdk-scorecard`; file type and extension can be any of `.yaml`, `.json`, or `.toml`). If a config file is not provided and a config file is not found at the default location, the scorecard will exit with an error. |
| `--output`, `-o` | string | Output format. Valid options are: `text` and `json`. The default format is `text`, which is designed to be a simple, human-readable format. The `json` format uses the JSON schema output format used for plugins defined later in this document. |
### Config File Options

| Option | Type | Description |
| -------- | -------- | -------- |
| `output` | string | equivalent of the `--output` flag. If this option is defined by both the config file and the flag, the flag's value takes priority |
| `kubeconfig` | string | equivalent of the `--kubeconfig` flag. If this option is defined by both the config file and the flag, the flag's value takes priority |
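
For illustration, a minimal `.osdk-scorecard.yaml` using just the two options above could look like the following sketch; the layout beyond the required `scorecard` subsection and the kubeconfig path are assumptions:

```yaml
scorecard:
  # Equivalent of --output; `text` or `json`.
  output: json
  # Equivalent of --kubeconfig; the path below is hypothetical.
  kubeconfig: /path/to/kubeconfig
```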
Note that each plugin type has different configuration options and is named by a different field in the config. Only one of these fields can be set per plugin.

#### Basic and OLM

The `basic` and `olm` internal plugins have the same configuration fields:

| Option | Type | Description |
| -------- | -------- | -------- |
| `cr-manifest` | \[\]string | path(s) for CRs being tested (required if `olm-deployed` is not set or false) |
| `csv-path` | string | path to CSV for the operator (required for OLM tests or if `olm-deployed` is set to true) |
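
As a rough sketch of how these fields might appear in `.osdk-scorecard.yaml` (the exact nesting of the `plugins` list is an assumption, and the manifest paths are placeholders):

```yaml
scorecard:
  plugins:
    # Internal basic plugin: only needs the CRs to test.
    - basic:
        cr-manifest:
          - deploy/crds/example_cr.yaml
    # Internal olm plugin: also needs the operator's CSV.
    - olm:
        cr-manifest:
          - deploy/crds/example_cr.yaml
        csv-path: deploy/olm-catalog/example-operator/example-operator.clusterserviceversion.yaml
```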

The scorecard allows developers to write their own plugins that can be run via an executable binary or script. For more information on developing external plugins, please see the [Extending the Scorecard with Plugins](#extending-the-scorecard-with-plugins) section. These are the options available to configure external plugins:

| Option | Type | Description |
| -------- | -------- | -------- |
| `command` | string | (required) path to the plugin binary or script. The path can either be absolute or relative to the operator project's root directory. All external plugins are run from the operator project's root directory |
| `args` | \[\]string | arguments to pass to the plugin |
| `env` | array | `env var` objects, which consist of a `name` and `value` field. If a `KUBECONFIG` env var is declared in this array as well as via the top-level `kubeconfig` option, the `KUBECONFIG` from the env array for the plugin is used |
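
A sketch of an external plugin entry follows; the `external` key name is assumed rather than taken from this document, and the script path, argument, and env values are placeholders:

```yaml
scorecard:
  plugins:
    - external:
        # Path relative to the operator project's root directory.
        command: scripts/custom-scorecard-tests.sh
        args:
          - --verbose
        env:
          # Overrides the top-level kubeconfig option for this plugin.
          - name: KUBECONFIG
            value: /path/to/kubeconfig
```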

## Tests Performed

Following is a description of each internal [plugin](#plugins). Note that there are 8 internal tests across 2 internal plugins that the scorecard can run. If multiple CRs are specified for a plugin, the test environment is fully cleaned up after each CR so each CR gets a clean testing environment.
### Basic Operator

| Test | Description |
| -------- | -------- |
| Spec Block Exists | This test checks the Custom Resource(s) created in the cluster to make sure that all CRs have a spec block. This test has a maximum score of 1 |
| Status Block Exists | This test checks the Custom Resource(s) created in the cluster to make sure that all CRs have a status block. This test has a maximum score of 1 |
| Writing Into CRs Has An Effect | This test reads the scorecard proxy's logs to verify that the operator is making `PUT` and/or `POST` requests to the API server, indicating that it is modifying resources. This test has a maximum score of 1 |

### OLM Integration

| Test | Description |
| -------- | -------- |
| Provided APIs have validation | This test verifies that the CRDs for the provided CRs contain a validation section and that there is validation for each spec and status field detected in the CR. This test has a maximum score equal to the number of CRs provided via the `cr-manifest` option. |
| Owned CRDs Have Resources Listed | This test makes sure that the CRDs for each CR provided via the `cr-manifest` option have a `resources` subsection in the [`owned` CRDs section][owned-crds] of the CSV. If the test detects used resources that are not listed in the resources section, it will list them in the suggestions at the end of the test. This test has a maximum score equal to the number of CRs provided via the `cr-manifest` option. |
**NOTE:** The `ScorecardOutput.Log` field is only intended to be used to log the scorecard's output, and the scorecard will ignore that field if a plugin provides it. To add logs to the main `ScorecardOutput.Log` field, a plugin can output the logs to `stderr`.

## Running the scorecard with an OLM-managed operator

The scorecard can be run using a [Cluster Service Version (CSV)][olm-csv], providing a way to test cluster-ready and non-SDK operators.
Running with a CSV alone requires both the `csv-path: <CSV manifest path>` and `olm-deployed` options to be set. The scorecard assumes your CSV and relevant CRD's have been deployed onto the cluster using OLM when using `olm-deployed`.
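
As a sketch (assuming the same plugin nesting used in the configuration examples above, with a placeholder CSV path), the relevant options for a CSV-only run might look like:

```yaml
scorecard:
  plugins:
    - olm:
        # Path to the CSV that OLM has already deployed; placeholder value.
        csv-path: deploy/olm-catalog/example-operator/example-operator.clusterserviceversion.yaml
        # Tell the scorecard the CSV and CRDs are already on-cluster via OLM.
        olm-deployed: true
```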
The scorecard requires a proxy container in the operator's `Deployment` pod to read operator logs. A few modifications to your CSV and creation of one extra object are required to run the proxy _before_ deploying your operator with OLM:
1. Create a proxy server secret containing a local Kubeconfig:
    1. Generate a username using the scorecard proxy's namespaced owner reference.

        ```sh
        # Substitute "$your_namespace" for the namespace your operator will be deployed in (if any).
        ```

    1. Write a `Config` manifest `scorecard-config.yaml` using the following template, substituting `${your_username}` for the base64 username generated above:
    1. Create a `Secret` manifest `scorecard-secret.yaml` containing the operator's namespace (if any) and the `Config`'s base64 encoding as a `data` value under the key `kubeconfig`:

        ```yaml
        apiVersion: v1
        kind: Secret
        metadata:
          name: scorecard-kubeconfig
          namespace: ${your_namespace}
        data:
          kubeconfig: ${kubeconfig_base64}
        ```

    1. Apply the secret in-cluster:

        ```sh
        $ kubectl apply -f scorecard-secret.yaml
        ```

1. Insert a volume referring to the `Secret` into the operator's `Deployment`:

    ```yaml
    spec:
      install:
        spec:
          deployments:
          - name: memcached-operator
            spec:
              ...
              template:
                ...
                spec:
                  containers:
                  ...
                  volumes:
                  # scorecard kubeconfig volume
                  - name: scorecard-kubeconfig
                    secret:
                      secretName: scorecard-kubeconfig
                      items:
                      - key: kubeconfig
                        path: config
    ```

1. Insert a volume mount and `KUBECONFIG` environment variable into each container in your operator's `Deployment`:

    ```yaml
    spec:
      install:
        spec:
          deployments:
          - name: memcached-operator
            spec:
              ...
              template:
                ...
                spec:
                  containers:
                  - name: container1
                    ...
                    volumeMounts:
                    # scorecard kubeconfig volume mount
                    - name: scorecard-kubeconfig
                      mountPath: /scorecard-secret
                    env:
                    # scorecard kubeconfig env
                    - name: KUBECONFIG
                      value: /scorecard-secret/config
                  - name: container2
                    # Do the same for this and all other containers.
                    ...
    ```

1. Insert the scorecard proxy container into the operator's `Deployment`:

Once done, follow the steps in this [document][olm-deploy-operator] to bundle your CSV and CRD's, deploy OLM on minikube or [OKD][okd], and deploy your operator. Once these steps have been completed, run the scorecard with both the `csv-path: <CSV manifest path>` and `olm-deployed` options set.

**NOTES:**
- As of now, using the scorecard with a CSV does not permit multiple CR manifests to be set through the CLI/config/CSV annotations. You will have to tear down your operator in the cluster, re-deploy, and re-run the scorecard for each CR being tested. In the future the scorecard will fully support testing multiple CR's without requiring users to teardown/standup each time.
- You can either set `cr-manifest` or your CSV's [`metadata.annotations['alm-examples']`][olm-csv-alm-examples] to provide CR's to the scorecard, but not both (see the annotation sketch below).
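
To illustrate the annotation route, below is a hedged sketch of an `alm-examples` annotation in a CSV; the group/version/kind and spec values are placeholders:

```yaml
metadata:
  annotations:
    # A JSON array of example CRs, provided as a string.
    alm-examples: |-
      [
        {
          "apiVersion": "cache.example.com/v1alpha1",
          "kind": "Memcached",
          "metadata": { "name": "example-memcached" },
          "spec": { "size": 3 }
        }
      ]
```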