feat: refactor part of bash script to be included in TF #154

Merged: 2 commits, Feb 10, 2023
37 changes: 22 additions & 15 deletions aws/README.md
@@ -19,11 +19,11 @@ Have the following tools installed:

Make sure you have an active account at AWS for which you have configured the credentials on the system where you will execute the steps below. In this example we stored the credentials under an aws profile as `awsuser`.

### Multi-user setup: shared state
## Installation

If you want to host a multi-user setup, you will probably want to share the state file so that everyone can try related challenges. We have provided a starter to easily do so using a Terraform S3 backend.
First, we want to create a shared state. We've provided the terraform code for this in the `shared-state` subfolder.

First, create an s3 bucket (optionally add `-var="region=YOUR_DESIRED_REGION"` to the apply to use a region other than the default eu-west-1):
To create an s3 bucket (optionally add `-var="region=YOUR_DESIRED_REGION"` to the apply to use a region other than the default eu-west-1):

```bash
cd shared-state
@@ -32,9 +32,7 @@ terraform apply
```
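
For example, to place the shared-state bucket in another region (the region value below is only an illustration):

```bash
# Illustrative region override; eu-west-1 remains the default if the variable is omitted
terraform apply -var="region=us-east-1"
```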

The bucket name should be in the output. Please use that to configure the Terraform backend in `main.tf`.
The bucket ARN will be printed, make a note of this as it will be used in the next steps.

## Installation
The bucket ARN will be printed, make a note of this as it will be used in the next steps. It should look something like `arn:aws:s3:::terraform-20230102231352749300000001`.
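
If you prefer to script this step, the ARN can also be captured from the Terraform output instead of copying it by hand. A minimal sketch, assuming the shared-state module exposes the ARN as an output (check `terraform output` for the exact name):

```bash
# The output name "state_bucket_arn" is an assumption; confirm it with `terraform output`
cd shared-state
BUCKET_ARN="$(terraform output -raw state_bucket_arn)"
echo "State bucket ARN: ${BUCKET_ARN}"   # e.g. arn:aws:s3:::terraform-20230102231352749300000001
cd ..
```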

The terraform code is loosely based on [this EKS managed Node Group TF example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks_managed_node_group).

@@ -43,18 +41,18 @@ The terraform code is loosely based on [this EKS managed Node Group TF example](
**Note-II**: The cluster you create has its access bound to the public IP of the creator. In other words: the cluster you create with this code has its access bound to your public IP-address if you apply it locally.

1. export your AWS credentials (`export AWS_PROFILE=awsuser`)
2. check whether you have the right profile by doing `aws sts get-caller-identity` and make sure you have enough rights with the caller its identity and that the actual accountnumber displayed is the account designated for you to apply this TF to.
3. Do `terraform init` (if required, use tfenv to select TF 0.13.1 or higher )
4. The bucket ARN will be asked for in the next 2 steps. Take the one provided to you and add `arn:aws:s3:::` to the start. e.g. ``arn:aws:s3:::terraform-20230102231352749300000001`
2. check whether you have the right profile by doing `aws sts get-caller-identity`. Make sure you have the right account and have the rights to do this.
3. Do `terraform init` (if required, use tfenv to select TF 0.14.0 or higher)
4. The bucket ARN will be asked for in the next 2 steps. Take the one provided to you in the output earlier (e.g., `arn:aws:s3:::terraform-20230102231352749300000001`).
5. Do `terraform plan`
6. Do `terraform apply`. Note: the apply will take 10 to 20 minutes depending on the speed of the AWS backplane.
7. When creation is done, do `aws eks update-kubeconfig --region eu-west-1 --name wrongsecrets-exercise-cluster --kubeconfig ~/.kube/wrongsecrets`
8. Do `export KUBECONFIG=~/.kube/wrongsecrets`
9. Run `./build-an-deploy-aws.sh` to install all the required materials (helm for calico, secrets management, autoscaling, etc.); the full sequence is condensed in the sketch below.
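
Taken together, the numbered steps boil down to the following sequence (a minimal sketch using the defaults mentioned above; adjust profile and region if yours differ):

```bash
# Condensed sketch of steps 1-9 with the default profile, region and cluster name
export AWS_PROFILE=awsuser
aws sts get-caller-identity      # confirm the account and your permissions
terraform init
terraform plan                   # supply the bucket ARN from the shared-state output when prompted
terraform apply                  # same ARN again; expect roughly 10 to 20 minutes
aws eks update-kubeconfig --region eu-west-1 --name wrongsecrets-exercise-cluster --kubeconfig ~/.kube/wrongsecrets
export KUBECONFIG=~/.kube/wrongsecrets
./build-an-deploy-aws.sh
```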

Your EKS cluster should be visible in [EU-West-1](https://eu-west-1.console.aws.amazon.com/eks/home?region=eu-west-1#/clusters) by default. Want a different region? You can modify `terraform.tfvars` or input it directly using the `region` variable in plan/apply.
Your EKS cluster should be visible in [eu-west-1](https://eu-west-1.console.aws.amazon.com/eks/home?region=eu-west-1#/clusters) by default. Want a different region? You can modify `terraform.tfvars` or input it directly using the `region` variable in plan/apply.

Are you done playing? Please run `terraform destroy` twice to clean up.
Are you done playing? Please run `terraform destroy` twice to clean up (first in the main `aws` folder, then the `shared-state` subfolder).
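
A minimal sketch of that cleanup order, assuming the default folder layout:

```bash
# Tear down the workload cluster first, then the shared-state bucket
terraform destroy                # from the aws/ folder
cd shared-state
terraform destroy
```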

### Test it

@@ -137,15 +135,18 @@ The documentation below is auto-generated to give insight on what's created via

| Name | Version |
|------|---------|
| <a name="provider_aws"></a> [aws](#provider\_aws) | 4.31.0 |
| <a name="provider_http"></a> [http](#provider\_http) | 3.1.0 |
| <a name="provider_aws"></a> [aws](#provider\_aws) | 4.48.0 |
| <a name="provider_http"></a> [http](#provider\_http) | 3.2.1 |
| <a name="provider_random"></a> [random](#provider\_random) | 3.4.3 |

## Modules

| Name | Source | Version |
|------|--------|---------|
| <a name="module_eks"></a> [eks](#module\_eks) | terraform-aws-modules/eks/aws | 18.30.2 |
| <a name="module_cluster_autoscaler_irsa_role"></a> [cluster\_autoscaler\_irsa\_role](#module\_cluster\_autoscaler\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.9.0 |
| <a name="module_ebs_csi_irsa_role"></a> [ebs\_csi\_irsa\_role](#module\_ebs\_csi\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.9.0 |
| <a name="module_eks"></a> [eks](#module\_eks) | terraform-aws-modules/eks/aws | 19.4.2 |
| <a name="module_load_balancer_controller_irsa_role"></a> [load\_balancer\_controller\_irsa\_role](#module\_load\_balancer\_controller\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.9.0 |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 3.18.1 |

## Resources
@@ -199,7 +200,13 @@ The documentation below is auto-generated to give insight on what's created via
| Name | Description |
|------|-------------|
| <a name="output_cluster_endpoint"></a> [cluster\_endpoint](#output\_cluster\_endpoint) | Endpoint for EKS control plane. |
| <a name="output_cluster_id"></a> [cluster\_id](#output\_cluster\_id) | The id of the cluster |
| <a name="output_cluster_name"></a> [cluster\_name](#output\_cluster\_name) | The EKS cluster name |
| <a name="output_cluster_security_group_id"></a> [cluster\_security\_group\_id](#output\_cluster\_security\_group\_id) | Security group ids attached to the cluster control plane. |
| <a name="output_irsa_role"></a> [irsa\_role](#output\_irsa\_role) | The role ARN used in the IRSA setup |
| <a name="output_ebs_role"></a> [ebs\_role](#output\_ebs\_role) | EBS CSI driver role |
| <a name="output_ebs_role_arn"></a> [ebs\_role\_arn](#output\_ebs\_role\_arn) | EBS CSI driver role |
| <a name="output_irsa_role"></a> [irsa\_role](#output\_irsa\_role) | The role name used in the IRSA setup |
| <a name="output_irsa_role_arn"></a> [irsa\_role\_arn](#output\_irsa\_role\_arn) | The role ARN used in the IRSA setup |
| <a name="output_secrets_manager_secret_name"></a> [secrets\_manager\_secret\_name](#output\_secrets\_manager\_secret\_name) | The name of the secrets manager secret |
| <a name="output_state_bucket_name"></a> [state\_bucket\_name](#output\_state\_bucket\_name) | Terraform s3 state bucket name |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
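
The refactor in this PR makes `build-an-deploy-aws.sh` read several of the outputs listed above instead of hardcoding values; after an apply you can inspect them yourself, e.g.:

```bash
# Output names taken from the Outputs table above; run from the folder you applied from
terraform output -raw cluster_name
terraform output -raw state_bucket_name
terraform output -raw irsa_role_arn
terraform output -raw ebs_role_arn
```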
133 changes: 76 additions & 57 deletions aws/build-an-deploy-aws.sh
@@ -5,9 +5,10 @@ echo "Make sure you have updated your AWS credentials and your kubeconfig prior
echo "For this to work the AWS kubernetes cluster must have access to the same local registry / image cache which 'docker build ...' writes its image to"
echo "For example docker-desktop with its included k8s cluster"

echo "NOTE: WE ARE WORKING HERE WITH A 5 LEGGED BALANCER on aWS which costs money by themselves!"
echo "NOTE: WE ARE WORKING HERE WITH A 5 LEGGED LOAD BALANCER on AWS which costs money by themselves!"

echo "NOTE2: please replace balancer.cookie.cookieParserSecret witha value you fanchy and ensure you have TLS on (see outdated guides)."
echo "NOTE 2: You can replace balancer.cookie.cookieParserSecret with a value you fancy."
echo "Note 3: Ensure you turn TLS on :)."

echo "Usage: ./build-an-deploy-aws.sh "

@@ -17,17 +18,10 @@ checkCommandsAvailable helm aws kubectl eksctl sed
if test -n "${AWS_REGION-}"; then
echo "AWS_REGION is set to <$AWS_REGION>"
else
AWS_REGION=eu-west-1
export AWS_REGION=eu-west-1
echo "AWS_REGION is not set or empty, defaulting to ${AWS_REGION}"
fi

if test -n "${CLUSTERNAME-}"; then
secho "CLUSTERNAME is set to <$CLUSTERNAME> which is different than the default. Please update the cluster-autoscaler-policy.json."
else
CLUSTERNAME=wrongsecrets-exercise-cluster
echo "CLUSTERNAME is not set or empty, defaulting to ${CLUSTERNAME}"
fi

echo "Checking for compatible shell"
case "$SHELL" in
*bash*)
@@ -45,12 +39,18 @@ esac
ACCOUNT_ID=$(aws sts get-caller-identity | jq '.Account' -r)
echo "ACCOUNT_ID=${ACCOUNT_ID}"

CLUSTERNAME="$(terraform output -raw cluster_name)"
STATE_BUCKET="$(terraform output -raw state_bucket_name)"
IRSA_ROLE_ARN="$(terraform output -raw irsa_role_arn)"
EBS_ROLE_ARN="$(terraform output -raw ebs_role_arn)"

version="$(uuidgen)"
echo "CLUSTERNAME=${CLUSTERNAME}"
echo "STATE_BUCKET=${STATE_BUCKET}"
echo "IRSA_ROLE_ARN=${IRSA_ROLE_ARN}"
echo "EBS_ROLE_ARN=${EBS_ROLE_ARN}"

AWS_REGION="eu-west-1"
version="$(uuidgen)"

echo "Install autoscaler first!"
echo "If the below output is different than expected: please hard stop this script (running aws sts get-caller-identity first)"

aws sts get-caller-identity
@@ -59,23 +59,23 @@ echo "Giving you 4 seconds before we add autoscaling"

sleep 4

echo "Installing policies and service accounts"
# echo "Installing policies and service accounts"

aws iam create-policy \
--policy-name AmazonEKSClusterAutoscalerPolicy \
--policy-document file://cluster-autoscaler-policy.json
# aws iam create-policy \
# --policy-name AmazonEKSClusterAutoscalerPolicy \
# --policy-document file://cluster-autoscaler-policy.json

echo "Installing iamserviceaccount"
# echo "Installing iamserviceaccount"

eksctl create iamserviceaccount \
--cluster=$CLUSTERNAME \
--region=$AWS_REGION \
--namespace=kube-system \
--name=cluster-autoscaler \
--role-name=AmazonEKSClusterAutoscalerRole \
--attach-policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/AmazonEKSClusterAutoscalerPolicy \
--override-existing-serviceaccounts \
--approve
# eksctl create iamserviceaccount \
# --cluster=$CLUSTERNAME \
# --region=$AWS_REGION \
# --namespace=kube-system \
# --name=cluster-autoscaler \
# --role-name=AmazonEKSClusterAutoscalerRole \
# --attach-policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/AmazonEKSClusterAutoscalerPolicy \
# --override-existing-serviceaccounts \
# --approve

echo "Deploying the k8s autoscaler for eks through kubectl"

@@ -87,7 +87,7 @@ kubectl apply -f cluster-autoscaler-autodiscover.yaml
echo "annotating service account for cluster-autoscaler"
kubectl annotate serviceaccount cluster-autoscaler \
-n kube-system \
eks.amazonaws.com/role-arn=arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKSClusterAutoscalerRole
eks.amazonaws.com/role-arn=${CLUSTER_AUTOSCALER}

kubectl patch deployment cluster-autoscaler \
-n kube-system \
@@ -123,43 +123,62 @@ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/late

wait

DEFAULT_PASSWORD=thankyou
#TODO: REWRITE ABOVE, REWRITE THE HARDCODED DEPLOYMENT VALS INTO VALUES AND OVERRIDE THEM HERE!
echo "default password is ${DEFAULT_PASSWORD}"
# if passed as arguments, use those
# otherwise, create new default values

if [[ -z $APP_PASSWORD ]]; then
echo "No app password passed, creating a new one"
APP_PASSWORD="$(uuidgen)"
else
echo "App password already set"
fi

if [[ -z $CREATE_TEAM_HMAC ]]; then
CREATE_TEAM_HMAC="$(openssl rand -base64 24)"
else
echo "Create team HMAC already set"
fi

if [[ -z $COOKIE_PARSER_SECRET ]]; then
COOKIE_PARSER_SECRET="$(openssl rand -base64 24)"
else
echo "Cookie parser secret already set"
fi

echo "App password is ${APP_PASSWORD}"
helm upgrade --install mj ../helm/wrongsecrets-ctf-party \
--set="imagePullPolicy=Always" \
--set="balancer.env.K8S_ENV=aws" \
--set="balancer.env.IRSA_ROLE=arn:aws:iam::${ACCOUNT_ID}:role/wrongsecrets-secret-manager" \
--set="balancer.env.REACT_APP_ACCESS_PASSWORD=${DEFAULT_PASSWORD}" \
--set="balancer.cookie.cookieParserSecret=thisisanewrandomvaluesowecanworkatit" \
--set="balancer.repository=jeroenwillemsen/wrongsecrets-balancer" \
--set="balancer.replicas=4" \
--set="wrongsecretsCleanup.repository=jeroenwillemsen/wrongsecrets-ctf-cleaner" \
--set="wrongsecrets.ctfKey=test" # this key isn't actually necessary in a setup with CTFd
--set="balancer.env.IRSA_ROLE=${IRSA_ROLE_ARN}" \
--set="balancer.env.REACT_APP_ACCESS_PASSWORD=${APP_PASSWORD}" \
--set="balancer.env.REACT_APP_S3_BUCKET_URL=s3://${STATE_BUCKET}" \
--set="balancer.env.REACT_APP_CREATE_TEAM_HMAC_KEY=${CREATE_TEAM_HMAC}" \
--set="balancer.cookie.cookieParserSecret=${COOKIE_PARSER_SECRET}"

# echo "Installing EBS CSI driver"
# eksctl create iamserviceaccount \
# --name ebs-csi-controller-sa \
# --namespace kube-system \
# --cluster $CLUSTERNAME \
# --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
# --approve \
# --role-only \
# --role-name AmazonEKS_EBS_CSI_DriverRole
# --region $AWS_REGION

# echo "managing EBS CSI Driver as a separate eks addon"
# eksctl create addon --name aws-ebs-csi-driver \
# --cluster $CLUSTERNAME \
# --service-account-role-arn arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole \
# --force \
# --region $AWS_REGION

# Install CTFd

echo "Installing EBS CSI driver"
eksctl create iamserviceaccount \
--name ebs-csi-controller-sa \
--namespace kube-system \
--cluster $CLUSTERNAME \
--attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--approve \
--role-only \
--role-name AmazonEKS_EBS_CSI_DriverRole
--region $AWS_REGION

echo "managing EBS CSI Driver as a separate eks addon"
eksctl create addon --name aws-ebs-csi-driver \
--cluster $CLUSTERNAME \
--service-account-role-arn arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole \
--force \
--region $AWS_REGION
echo "Installing CTFd"

export HELM_EXPERIMENTAL_OCI=1
kubectl create namespace ctfd
helm -n ctfd install ctfd oci://ghcr.io/bman46/ctfd/ctfd \
helm upgrade --install ctfd -n ctfd oci://ghcr.io/bman46/ctfd/ctfd \
--set="redis.auth.password=$(openssl rand -base64 24)" \
--set="mariadb.auth.rootPassword=$(openssl rand -base64 24)" \
--set="mariadb.auth.password=$(openssl rand -base64 24)" \
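
With this refactor the script derives cluster details from Terraform outputs and only generates secrets that you do not supply. A hedged sketch of invoking it with your own values (the variable names come from the diff above; the values are placeholders):

```bash
# Placeholder values; omit any of the three variables to let the script generate one
export AWS_PROFILE=awsuser
export KUBECONFIG=~/.kube/wrongsecrets
APP_PASSWORD="choose-a-password" \
CREATE_TEAM_HMAC="$(openssl rand -base64 24)" \
COOKIE_PARSER_SECRET="$(openssl rand -base64 24)" \
./build-an-deploy-aws.sh
```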
66 changes: 33 additions & 33 deletions aws/cluster-autoscaler-policy.json
@@ -1,36 +1,36 @@
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"ec2:DescribeImages",
"ec2:GetInstanceTypesFromInstanceRequirements",
"eks:DescribeNodegroup"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/k8s.io/cluster-autoscaler/wrongsecrets-exercise-cluster": "owned"
}
}
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeScalingActivities",
"autoscaling:DescribeTags",
"ec2:DescribeInstanceTypes",
"ec2:DescribeLaunchTemplateVersions"
],
"Resource": "*"
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"ec2:DescribeImages",
"ec2:GetInstanceTypesFromInstanceRequirements",
"eks:DescribeNodegroup"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/k8s.io/cluster-autoscaler/wrongsecrets-exercise-cluster": "owned"
}
]
}
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeScalingActivities",
"autoscaling:DescribeTags",
"ec2:DescribeInstanceTypes",
"ec2:DescribeLaunchTemplateVersions"
],
"Resource": "*"
}
]
}
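
Since the autoscaler policy and service-account wiring now come from Terraform (the manual `aws iam create-policy` and `eksctl create iamserviceaccount` calls are commented out above), a couple of quick checks after running the script can confirm the autoscaler is hooked up; a sketch using the defaults from this repo:

```bash
# Expect a running deployment and an IRSA role annotation on the service account
kubectl -n kube-system get deployment cluster-autoscaler
kubectl -n kube-system get serviceaccount cluster-autoscaler -o yaml | grep eks.amazonaws.com/role-arn
```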
2 changes: 1 addition & 1 deletion aws/k8s/ctfd_resources/index_fragment.html
@@ -11,4 +11,4 @@ <h4 class="text-center">
<a href="challenges">Click here</a> to start hacking!
</h4>
</div>
</div>
</div>