
Commit 700cd13

Merge pull request #132 from OWASP/quick_fixes
Quick fixes for future usage
2 parents 52148a1 + 3fb9c64 commit 700cd13


7 files changed: +41 additions, -18 deletions


aws/README.md

Lines changed: 12 additions & 10 deletions
@@ -32,6 +32,7 @@ terraform apply
 ```
 
 The bucket name should be in the output. Please use that to configure the Terraform backend in `main.tf`.
+The bucket ARN will be printed, make a note of this as it will be used in the next steps.
 
 ## Installation
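A side note on the added line: if the ARN scrolls past during `apply`, you can re-print it later. A minimal sketch, assuming the shared-state module exposes the bucket details as Terraform outputs and that you run it from the directory where that module was applied:

```bash
# Re-print the Terraform outputs of the shared-state bucket module.
# Assumption: the output names are whatever that module defines; adjust as needed.
cd shared-state
terraform output
```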

@@ -44,11 +45,12 @@ The terraform code is loosely based on [this EKS managed Node Group TF example](
 1. export your AWS credentials (`export AWS_PROFILE=awsuser`)
 2. check whether you have the right profile by doing `aws sts get-caller-identity` and make sure you have enough rights with the caller its identity and that the actual accountnumber displayed is the account designated for you to apply this TF to.
 3. Do `terraform init` (if required, use tfenv to select TF 0.13.1 or higher )
-4. Do `terraform plan`
-5. Do `terraform apply`. Note: the apply will take 10 to 20 minutes depending on the speed of the AWS backplane.
-6. When creation is done, do `aws eks update-kubeconfig --region eu-west-1 --name wrongsecrets-exercise-cluster --kubeconfig ~/.kube/wrongsecrets`
-7. Do `export KUBECONFIG=~/.kube/wrongsecrets`
-8. Run `./build-an-deploy-aws.sh` to install all the required materials (helm for calico, secrets management, autoscaling, etc.)
+4. The bucket ARN will be asked for in the next 2 steps. Take the one provided to you and add `arn:aws:s3:::` to the start. e.g. ``arn:aws:s3:::terraform-20221208123456789100000001`
+5. Do `terraform plan`
+6. Do `terraform apply`. Note: the apply will take 10 to 20 minutes depending on the speed of the AWS backplane.
+7. When creation is done, do `aws eks update-kubeconfig --region eu-west-1 --name wrongsecrets-exercise-cluster --kubeconfig ~/.kube/wrongsecrets`
+8. Do `export KUBECONFIG=~/.kube/wrongsecrets`
+9. Run `./build-an-deploy-aws.sh` to install all the required materials (helm for calico, secrets management, autoscaling, etc.)
 
 Your EKS cluster should be visible in [EU-West-1](https://eu-west-1.console.aws.amazon.com/eks/home?region=eu-west-1#/clusters) by default. Want a different region? You can modify `terraform.tfvars` or input it directly using the `region` variable in plan/apply.
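Condensing the renumbered steps above into one session, a rough sketch (assuming the shared-state bucket already exists and that plan/apply prompt for the bucket ARN as described; the bucket suffix is the example value from step 4, not a real bucket):

```bash
export AWS_PROFILE=awsuser
aws sts get-caller-identity        # confirm you are applying to the intended account
terraform init                     # TF 0.13.1 or higher
terraform plan                     # enter arn:aws:s3:::terraform-20221208123456789100000001 when prompted
terraform apply                    # roughly 10 to 20 minutes
aws eks update-kubeconfig --region eu-west-1 --name wrongsecrets-exercise-cluster --kubeconfig ~/.kube/wrongsecrets
export KUBECONFIG=~/.kube/wrongsecrets
./build-an-deploy-aws.sh
```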

@@ -81,17 +83,17 @@ Now visit the CTFd instance and setup your CTF. If you haven't set up a load bal
 _!!NOTE:_ **The following can be dangerous if you use CTFd `>= 3.5.0` with wrongsecrets `< 1.5.11`. Check the `challenges.json` and make sure it's 1-indexed - a 0-indexed file will break CTFd!** _/NOTE!!_
 
 Then use the administrative backup function to import the zipfile you created with the juice-shop-ctf command.
-After that you will still need to override the flags with their actual values if you do use the 2-domain configuration.
+After that you will still need to override the flags with their actual values if you do use the 2-domain configuration. For a guide on how to do this see the 2-domain setup steps in the general [README](../readme.md)
 Want to setup your own? You can! Watch out for people finding your key though, so secure it properly: make sure the running container with the actual ctf-key is not exposed to the audience, similar to our heroku container.
 
-Want to make the CTFD instance look pretty? Include the fragment logated at [./k8s/ctfd_resources/index_fragment.html](/k8s/ctfd_resources/index_fragment.html) in your index.html via the admin panel.
+Want to make the CTFD instance look pretty? Include the fragment located at [./k8s/ctfd_resources/index_fragment.html](/k8s/ctfd_resources/index_fragment.html) in your index.html via the admin panel.
 
 ### Clean it up
 
 When you're done:
 
 1. Kill the port forward.
-2. Run the cleanup script: `cleanup-aws-autoscaling-and-helm.sh`
+2. Run the cleanup script: `./cleanup-aws-autoscaling-and-helm.sh`
 3. Run `terraform destroy` to clean up the infrastructure.
 1. If you've deployed the `shared-state` s3 bucket, also `cd shared-state` and `terraform destroy` there.
 4. Run `unset KUBECONFIG` to unset the KUBECONFIG env var.
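For reference, the teardown steps listed above as one sketch (assuming you run it from the `aws` directory with the same AWS profile and KUBECONFIG still exported):

```bash
# 1. stop the kubectl port-forward you started for CTFd (Ctrl-C or kill the background job)
./cleanup-aws-autoscaling-and-helm.sh
terraform destroy
(cd shared-state && terraform destroy)   # only if you deployed the shared-state bucket
unset KUBECONFIG
```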
@@ -112,8 +114,8 @@ We added additional scripts for adding an ALB and ingress so that you can use yo
 Do the following:
 
 1. Follow the installation section first.
-2. Run `k8s-aws-alb-script.sh` and the script will return the url at which you can reach the application.
-3. When you are done, before you do cleanup, first run `k8s-aws-alb-script-cleanup.sh`.
+2. Run `./k8s-aws-alb-script.sh` and the script will return the url at which you can reach the application. (Be aware this opens the url's to the internet in general, if you'd like to limit the access please do this using the security groups in AWS)
+3. When you are done, before you do cleanup, first run `./k8s-aws-alb-script-cleanup.sh`.
 
 Note that you might have to do some manual cleanups after that.
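On the new warning about the ALB being open to the internet: one hedged way to limit access is to replace the 0.0.0.0/0 ingress rule on the ALB's security group with your own IP. The group id below is a placeholder; look up the real one in the EC2 console or via `aws elbv2 describe-load-balancers`.

```bash
MY_IP=$(curl -s https://checkip.amazonaws.com)
SG_ID=sg-0123456789abcdef0   # placeholder: the security group attached to the ALB
aws ec2 revoke-security-group-ingress    --group-id "$SG_ID" --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 80 --cidr "${MY_IP}/32"
```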

aws/build-an-deploy-aws.sh

Lines changed: 18 additions & 4 deletions
@@ -28,6 +28,20 @@ else
   echo "CLUSTERNAME is not set or empty, defaulting to ${CLUSTERNAME}"
 fi
 
+echo "Checking for compatible shell"
+case "$SHELL" in
+*bash*)
+  echo "BASH detected"
+  ;;
+*zsh*)
+  echo "ZSH detected"
+  ;;
+*)
+  echo "🛑🛑 Unknown shell $SHELL, this script has only been tested on BASH and ZSH. Please be aware there may be some issues 🛑🛑"
+  sleep 2
+  ;;
+esac
+
 ACCOUNT_ID=$(aws sts get-caller-identity | jq '.Account' -r)
 echo "ACCOUNT_ID=${ACCOUNT_ID}"

@@ -128,8 +142,8 @@ helm upgrade --install mj ../helm/wrongsecrets-ctf-party \
 export HELM_EXPERIMENTAL_OCI=1
 kubectl create namespace ctfd
 helm -n ctfd install ctfd oci://ghcr.io/bman46/ctfd/ctfd \
-  --set="redis.auth.password=${$(openssl rand -base64 24)}" \
-  --set="mariadb.auth.rootPassword=${$(openssl rand -base64 24)}" \
-  --set="mariadb.auth.password=${$(openssl rand -base64 24)}" \
-  --set="mariadb.auth.replicationPassword=${$(openssl rand -base64 24)}" \
+  --set="redis.auth.password=$(openssl rand -base64 24)" \
+  --set="mariadb.auth.rootPassword=$(openssl rand -base64 24)" \
+  --set="mariadb.auth.password=$(openssl rand -base64 24)" \
+  --set="mariadb.auth.replicationPassword=$(openssl rand -base64 24)" \
   --set="env.open.SECRET_KEY=test" # this key isn't actually necessary in a setup with CTFd
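The password change in this hunk is a bash compatibility fix that ties in with the shell check added above: `${...}` in bash only accepts a parameter name, so wrapping a command substitution in it raises a "bad substitution" error, while plain `$(...)` works. A minimal illustration:

```bash
#!/usr/bin/env bash
# Shows why the old ${$(...)} form was dropped.
# password="${$(openssl rand -base64 24)}"   # bash: "bad substitution" (zsh happens to tolerate it)
password="$(openssl rand -base64 24)"        # portable command substitution
echo "generated a ${#password}-character password"
```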

aws/k8s-aws-alb-script.sh

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 # set -o nounset
 
 source ../scripts/check-available-commands.sh
-checkCommandsAvailable helm jq vault sed grep docker grep cat aws curl eksctl kubectl
+checkCommandsAvailable helm jq sed grep docker grep cat aws curl eksctl kubectl
 
 if test -n "${AWS_REGION-}"; then
   echo "AWS_REGION is set to <$AWS_REGION>"
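`checkCommandsAvailable` itself lives in `../scripts/check-available-commands.sh`, which is not part of this diff. A hypothetical sketch of such a helper, only to show what the changed line relies on; the real implementation may differ:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of a command-availability check.
checkCommandsAvailable() {
  for cmd in "$@"; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "Required command '$cmd' is not available on PATH." >&2
      exit 1
    fi
  done
}

checkCommandsAvailable helm jq sed grep docker cat aws curl eksctl kubectl
```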

aws/k8s/secret-challenge-vault-deployment.yml

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ spec:
         volumeAttributes:
           secretProviderClass: "wrongsecrets-aws-secretsmanager"
       containers:
-        - image: jeroenwillemsen/wrongsecrets:1.5.3-k8s-vault
+        - image: jeroenwillemsen/wrongsecrets:1.5.12-no-vault
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080

build-an-deploy-container.sh

Lines changed: 1 addition & 1 deletion
@@ -31,4 +31,4 @@ docker pull $WRONGSECRETS_BALANCER_IMAGE:$WRONGSECRETS_BALANCER_TAG &
 docker pull $WRONGSECRETS_CLEANER_IMAGE:$WRONGSECRETS_CLEANER_TAG
 wait
 
-helm upgrade --install mj ./helm/wrongsecrets-ctf-party --set="imagePullPolicy=Never"
+helm upgrade --install mj ./helm/wrongsecrets-ctf-party --set="imagePullPolicy=IfNotPresent"
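The switch from `Never` to `IfNotPresent` lets the kubelet pull images that are not already on the node instead of failing outright. To see how the chart consumes the override, rendering the templates locally is one option (a sketch, assuming the chart wires `imagePullPolicy` into its pod specs):

```bash
helm template mj ./helm/wrongsecrets-ctf-party --set="imagePullPolicy=IfNotPresent" | grep -n "imagePullPolicy"
```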

helm/wrongsecrets-ctf-party/values.yaml

Lines changed: 1 addition & 1 deletion
@@ -184,7 +184,7 @@ virtualdesktop:
   maxInstances: 500
   # -- Juice Shop Image to use
   image: jeroenwillemsen/wrongsecrets-desktop-k8s
-  tag: 1.5.12
+  tag: ctf-party1
   repository: commjoenie/wrongSecrets
   resources:
     request:

readme.md

Lines changed: 7 additions & 0 deletions
@@ -85,6 +85,13 @@ You need 2 things:
 - This infrastructure
 - A CTFD/Facebook-CTF host which is populated with the challenges based on your secondary hosted WrongSecrets application (this can be the helm chart included in the EKS installation script)
 
+To use the 2 domain setup with CTFD:
+
+1. Set up the CTFD and WrongSecrets instances using your preferred method and docs e.g. AWS and the docs [here](aws/README.md).
+2. Set up a team with spoilers available (On AWS this can be done by changing the deployment of a team you have created and setting ctf-mode=false).
+3. Use these spoilers to manually copy the answers from WrongSecrets to CTFD.
+4. Delete the team used to get these spoilers (On AWS you can delete the entire namespace of the team)
+
 ### General Helm usage
 
 This setup works best if you have Calico installed as your CNI, if you want to use the helm directly, without the AWS Challenges, do:
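A hypothetical sketch of steps 2 to 4 above on AWS; the namespace, deployment name and the exact knob that flips ctf-mode are placeholders that depend on your own team setup:

```bash
TEAM_NS=team-spoiler-example            # placeholder: namespace of the team you created
kubectl -n "$TEAM_NS" get deployments   # locate the team's WrongSecrets deployment
kubectl -n "$TEAM_NS" edit deployment wrongsecrets-example   # placeholder name; set ctf-mode=false here
# ...read the spoilers in that instance and copy the answers into CTFD...
kubectl delete namespace "$TEAM_NS"     # step 4: removing the namespace removes the spoiler team
```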
