
Loadbalancing stickiness #30


Merged: 3 commits, Sep 21, 2022
2 changes: 1 addition & 1 deletion aws/README.md
@@ -37,7 +37,7 @@ The bucket name should be in the output. Please use that to configure the Terraf

The terraform code is loosely based on [this EKS managed Node Group TF example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks_managed_node_group).

**Note**: Applying the Terraform means you are creating cloud infrastructure which actually costs you money. The authors are not responsible for any cost coming from following the instructions below.
**Note**: Applying the Terraform means you are creating cloud infrastructure which actually costs you money. **_The current boundary is 50 t3a.xlarge nodes._** Please adapt the nodes you deploy to in `main.tf` in this folder to reduce possible costs. Note that this project can run on a single t3a.large instance, but that requires reducing the number of wrongsecrets-balancer replicas to 1 (`balancer.replicas=1`). **_The authors are not responsible for any cost coming from following the instructions below_**.

**Note-II**: The cluster you create has its access bound to the public IP of the creator. In other words: the cluster you create with this code has its access bound to your public IP-address if you apply it locally.

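For context, a minimal sketch of the reduced-footprint deployment the note above describes, assuming the same chart path and value names that `build-an-deploy-aws.sh` uses further down (the release name `mj` is taken from that script; running a single replica trades away the balancer redundancy this PR sets up):

```bash
# Hedged sketch: deploy with a single balancer replica to fit a smaller node,
# reusing the chart path and value names from build-an-deploy-aws.sh.
# In practice, combine this with the other --set flags used in that script.
helm upgrade --install mj ./helm/wrongsecrets-ctf-party \
  --set="imagePullPolicy=Always" \
  --set="balancer.env.K8S_ENV=aws" \
  --set="balancer.replicas=1"
```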
8 changes: 6 additions & 2 deletions build-an-deploy-aws.sh
@@ -5,7 +5,11 @@ echo "Make sure you have updated your AWS credentials and your kubeconfig prior
echo "For this to work the AWS kubernetes cluster must have access to the same local registry / image cache which 'docker build ...' writes its image to"
echo "For example docker-desktop with its included k8s cluster"

echo "Usage: ./build-and-deploy-aws.sh"
echo "NOTE: WE ARE WORKING HERE WITH A 5 LEGGED BALANCER on aWS which costs money by themselves!"

echo "NOTE2: please replace balancer.cookie.cookieParserSecret witha value you fanchy and ensure you have TLS on (see outdated guides)."

echo "Usage: ./build-an-deploy-aws.sh"

version="$(uuidgen)"
AWS_REGION="eu-west-1"
@@ -29,4 +29,4 @@ aws ssm put-parameter --name wrongsecretvalue --overwrite --type SecureString --
wait

#TODO: REWRITE ABOVE, REWRITE THE HARDCODED DEPLOYMENT VALS INTO VALUES AND OVERRIDE THEM HERE!
helm upgrade --install mj ./helm/wrongsecrets-ctf-party --set="imagePullPolicy=Always" --set="balancer.env.K8S_ENV=aws" --set="balancer.repository=jeroenwillemsen/wrongsecrets-balancer" --set="balancer.tag=0.76aws" --set="wrongsecretsCleanup.repository=jeroenwillemsen/wrongsecrets-ctf-cleaner" --set="wrongsecretsCleanup.tag=0.2"
helm upgrade --install mj ./helm/wrongsecrets-ctf-party --set="imagePullPolicy=Always" --set="balancer.env.K8S_ENV=aws" --set="balancer.cookie.cookieParserSecret=thisisanewrandomvaluesowecanworkatit" --set="balancer.repository=jeroenwillemsen/wrongsecrets-balancer" --set="balancer.tag=0.76aws" --set="balancer.replicas=5" --set="wrongsecretsCleanup.repository=jeroenwillemsen/wrongsecrets-ctf-cleaner" --set="wrongsecretsCleanup.tag=0.2"
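As a follow-up to NOTE2 above, a hedged sketch of supplying a generated secret instead of the hard-coded `thisisanewrandomvaluesowecanworkatit` placeholder; the value itself is arbitrary, what matters is that it is identical across all five balancer replicas so that a cookie signed by one replica is still accepted by another (this assumes `openssl` is available on the machine running the script):

```bash
# Hedged sketch: generate a random cookie-parser secret and pass it to the
# same helm release, keeping it identical for all balancer replicas.
# In practice, combine this with the other --set flags from the command above.
COOKIE_SECRET="$(openssl rand -hex 32)"
helm upgrade --install mj ./helm/wrongsecrets-ctf-party \
  --set="balancer.env.K8S_ENV=aws" \
  --set="balancer.cookie.cookieParserSecret=${COOKIE_SECRET}" \
  --set="balancer.replicas=5"
```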
2 changes: 1 addition & 1 deletion wrongsecrets-balancer/src/teams/teams.js
@@ -397,7 +397,7 @@ async function createAWSTeam(req, res) {
logger.error(`Error while creating network security policies for team ${team}: ${error}`);
res.status(500).send({ message: 'Failed to Create Instance' });
}

try {
loginCounter.inc({ type: 'registration', userType: 'user' }, 1);
