| balancer.containerSecurityContext.enabled | bool |`true`| If true, sets the securityContext on the created containers. This is required for the podSecurityPolicy to work |
| balancer.cookie.cookieParserSecret | string |`nil`| Set this to a fixed random alphanumeric string (recommended length: 24 characters). If not set, it is randomly regenerated on every helm upgrade; each rotation invalidates all active cookies/sessions, requiring users to log in again. |
| balancer.cookie.name | string |`"balancer"`| Changes the cookie name used to identify teams. Note: it is automatically prefixed with "__Secure-" when balancer.cookie.secure is set to `true` |
| balancer.cookie.secure | bool |`false`| Sets the secure attribute on the cookie so that it is only sent over HTTPS |
| balancer.metrics.dashboards.enabled | bool |`false`| if true, creates a Grafana Dashboard Config Map. (also requires metrics.enabled to be true). These will automatically be imported by Grafana when using the Grafana helm chart, see: https://github.com/helm/charts/tree/main/stable/grafana#sidecar-for-dashboards|
| balancer.metrics.enabled | bool |`true`| Enables Prometheus metrics for the balancer. If set to true, you should change the prometheus-scraper password |
| balancer.metrics.serviceMonitor.enabled | bool |`false`| If true, creates a Prometheus Operator ServiceMonitor (also requires metrics.enabled to be true). This will also deploy a servicemonitor which monitors metrics from the Wrongsecrets instances |
| balancer.metrics.serviceMonitor.path | string |`"/balancer/metrics"`| Path to scrape for metrics |
| balancer.metrics.serviceMonitor.targetPort | int |`3000`| Target port for the ServiceMonitor to scrape |
| balancer.podSecurityContext.enabled | bool |`true`| If true, sets the securityContext on the created pods. This is required for the podSecurityPolicy to work |
| balancer.podSecurityContext.fsGroup | int |`2000`||
| balancer.podSecurityContext.runAsGroup | int |`3000`||
| balancer.podSecurityContext.runAsUser | int |`1000`||
| balancer.readinessProbe | object |`{"httpGet":{"path":"/balancer/","port":"http"}}`| readinessProbe: Checks if the balancer pod is ready to receive traffic |
| balancer.replicas | int |`2`| Number of replicas of the wrongsecrets-balancer deployment. Changing this in a commit? PLEASE UPDATE THE GITHUB WORKFLOWS THEN! (NUMBER OF "TRUE") |
| balancer.resources | object |`{"limits":{"cpu":"1000m","memory":"1024Mi"},"requests":{"cpu":"400m","memory":"256Mi"}}`| Resource limits and requests for the balancer pods |
| balancer.service.clusterIP | string |`nil`| internal cluster service IP |
| balancer.service.externalIPs | string |`nil`| External IP addresses to assign to the service (if supported) |
| balancer.service.loadBalancerIP | string |`nil`| IP address to assign to load balancer (if supported) |
| balancer.service.loadBalancerSourceRanges | string |`nil`| List of IP CIDRs allowed to access the load balancer (if supported) |
| balancer.service.type | string |`"ClusterIP"`| Kubernetes service type |
| balancer.skipOwnerReference | bool |`false`| If set to true, skips setting ownerReferences on the teams' wrongsecrets Deployments and Services. This lets the balancer run in older Kubernetes clusters which don't support the reference type or the apps/v1 Deployment type |
| balancer.tag | string |`"1.6.6aws"`||
| balancer.tolerations | list |`[]`| Optional: configure Kubernetes tolerations for the created wrongsecrets instances (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
| balancer.volumeMounts[0] | object |`{"mountPath":"/home/app/config/","name":"config-volume"}`| volumeMount for the created pods. This is required for the podSecurityPolicy to work |
| balancer.volumes[0] | object |`{"configMap":{"name":"wrongsecrets-balancer-config"},"name":"config-volume"}`| Volume for the created pods. This is required for the podSecurityPolicy to work |
| imagePullPolicy | string |`"IfNotPresent"`||
| ingress.annotations | object |`{}`| Annotations to be added to the ingress object. |
| ingress.enabled | bool |`false`| If true, Wrongsecrets creates an Ingress object for the balancer service. Useful if you want to expose the balancer service externally, for example behind a load balancer, in order to view the web pages hosted on it. |
| ingress.hosts | list |`[{"host":"wrongsecrets-ctf-party.local","paths":["/"]}]`| Hostnames to your Wrongsecrets balancer installation. |
| ingress.tls | list |`[]`| TLS configuration for Wrongsecrets balancer |
| nodeSelector | object |`{}`||
| service.port | int |`3000`||
| service.portName | string |`"web"`||
| service.type | string |`"ClusterIP"`||
| vaultContainer.affinity | object |`{}`||
| vaultContainer.envFrom | list |`[]`||
| vaultContainer.image | string |`"hashicorp/vault"`| Vault image to use |
| vaultContainer.maxInstances | int |`500`| Specifies the maximum number of instances to start. Set to -1 to remove the instance cap |
| virtualdesktop.image | string |`"jeroenwillemsen/wrongsecrets-desktop-k8s"`| Wrongsecrets Image to use |
| virtualdesktop.maxInstances | int |`500`| Specifies the maximum number of Wrongsecrets instances the balancer should start. Set to -1 to remove the instance cap |
| wrongsecrets.affinity | object |`{}`| Optional: configure Kubernetes scheduling affinity for the created Wrongsecrets instances (see: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) |
| wrongsecrets.config | string | See values.yaml for full details | Specify a custom Wrongsecrets config.yaml. See the Wrongsecrets Docs for any needed ENVs: https://github.com/OWASP/wrongsecrets|
| wrongsecrets.ctfKey | string |`"[email protected]!9uR_K!NfkkTr"`| Change the key when hosting a CTF event. This key gets used to generate the challenge flags. See: https://github.com/OWASP/wrongsecrets#ctf|
| wrongsecrets.env | list |`[{"name":"K8S_ENV","value":"k8s"},{"name":"SPECIAL_K8S_SECRET","valueFrom":{"configMapKeyRef":{"key":"funny.entry","name":"secrets-file"}}},{"name":"SPECIAL_SPECIAL_K8S_SECRET","valueFrom":{"secretKeyRef":{"key":"funnier","name":"funnystuff"}}}]`| Optional environment variables to set for each Wrongsecrets instance (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/)|
| wrongsecrets.envFrom | list |`[]`| Optional: mount environment variables from ConfigMaps or Secrets (see: https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#configure-all-key-value-pairs-in-a-secret-as-container-environment-variables) |
| wrongsecrets.image | string |`"jeroenwillemsen/wrongsecrets"`| Wrongsecrets Image to use |
| wrongsecrets.maxInstances | int |`500`| Specifies the maximum number of Wrongsecrets instances to start. Set to -1 to remove the instance cap |
| wrongsecrets.nodeEnv | string |`"wrongsecrets-ctf-party"`| Specify a custom NODE_ENV for Wrongsecrets. If value is changed to something other than 'wrongsecrets-ctf-party' it's not possible to set a custom config via `wrongsecrets-balancer-config`. |
| wrongsecrets.resources | object |`{"requests":{"cpu":"256Mi","memory":"300Mi"}}`| Optional resources definitions to set for each Wrongsecrets instance |
| wrongsecrets.runtimeClassName | string |`nil`| Optional: configure the runtime class for the Wrongsecrets instance pods to add an additional layer of isolation and reduce the impact of potential container escapes (see: https://kubernetes.io/docs/concepts/containers/runtime-class/) |
| wrongsecrets.securityContext | object |`{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":true,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}}`| Optional securityContext definitions to set for each Wrongsecrets instance |
| wrongsecrets.tolerations | list |`[]`| Optional: configure Kubernetes tolerations for the created Wrongsecrets instances (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
| wrongsecrets.volumes | list |`[]`| Optional: volumes to set for each Wrongsecrets instance (see: https://kubernetes.io/docs/concepts/storage/volumes/) |
| wrongsecretsCleanup.affinity | object |`{}`| Optional: configure Kubernetes scheduling affinity for the wrongsecretsCleanup Job (see: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) |
| wrongsecretsCleanup.containerSecurityContext.enabled | bool |`true`| If true, sets the securityContext on the created containers. This is required for the podSecurityPolicy to work |
| wrongsecretsCleanup.cron | string |`"0,15,30,45 * * * *"`| Cron schedule on which the cleanup job runs. Defaults to every quarter of an hour. Change this if your grace period is shorter than 15 minutes. See https://crontab.guru/#0,15,30,45_*_*_*_* for details. |
| wrongsecretsCleanup.failedJobsHistoryLimit | int |`1`||
| wrongsecretsCleanup.podSecurityContext.enabled | bool |`true`| If true, sets the securityContext on the created pods. This is required for the podSecurityPolicy to work |
| wrongsecretsCleanup.podSecurityContext.fsGroup | int |`2000`||
| wrongsecretsCleanup.podSecurityContext.runAsGroup | int |`3000`||
| wrongsecretsCleanup.podSecurityContext.runAsUser | int |`1000`||
| wrongsecretsCleanup.successfulJobsHistoryLimit | int |`1`||
| wrongsecretsCleanup.tag | float |`0.4`||
| wrongsecretsCleanup.tolerations | list |`[]`| Optional: configure Kubernetes tolerations for the wrongsecretsCleanup Job (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
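The values above can be overridden with a custom values file passed to Helm. A minimal sketch, assuming an Ingress-capable cluster; the hostname, secret, and release name below are illustrative placeholders, not chart defaults:

```yaml
# my-values.yaml — example overrides for this chart
balancer:
  replicas: 3
  cookie:
    secure: true  # cookie name becomes "__Secure-balancer"
    # Fixed 24-char secret so sessions survive helm upgrades (placeholder value)
    cookieParserSecret: "aBcD1234eFgH5678iJkL9012"

wrongsecrets:
  maxInstances: 100  # cap the number of team instances

ingress:
  enabled: true
  hosts:
    - host: ctf.example.com  # placeholder hostname
      paths:
        - "/"
```

Apply it with, for example, `helm upgrade --install wrongsecrets-ctf-party . -f my-values.yaml` (release name and chart path are assumptions for this sketch).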
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.7.0](https://github.com/norwoodj/helm-docs/releases/v1.7.0)