Bump node from 16-alpine to 18-alpine in /wrongsecrets-balancer #28

Merged

6 changes: 6 additions & 0 deletions .github/workflows/test.yml
@@ -6,6 +6,9 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
+      - uses: actions/setup-node@v3
+        with:
+          node-version: 18
       - name: Install Balancer
         run: |
           cd cleaner
@@ -23,6 +26,9 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
+      - uses: actions/setup-node@v3
+        with:
+          node-version: 18
       - name: "Install & Build BalancerUI"
         run: |
           cd wrongsecrets-balancer/ui
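The same Node 18 toolchain can be exercised locally before pushing; a minimal sketch, assuming nvm is available and that `npm ci` plus an `npm run build` script approximate what the "Install & Build BalancerUI" step runs:

```sh
# Switch to the Node major version the workflow now pins (assumes nvm is installed)
nvm install 18
nvm use 18

# Install and build the balancer UI roughly the way the workflow step does (assumed commands)
cd wrongsecrets-balancer/ui
npm ci
npm run build
```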
4 changes: 2 additions & 2 deletions aws/README.md
@@ -57,7 +57,7 @@ Are you done playing? Please run `terraform destroy` twice to clean up.
 ### Test it
 When you have completed the installation steps, you can do `kubectl port-forward service/wrongsecrets-balancer 3000:3000` and then go to [http://localhost:3000](http://localhost:3000).
 
-Want to know how well your cluster is holding up? Check with 
+Want to know how well your cluster is holding up? Check with
 
 ```sh
 kubectl top nodes
@@ -69,7 +69,7 @@ Want to know how well your cluster is holding up? Check with
 When you're done:
 
 1. Kill the port forward.
-2. Run the cleanup script: `cleanup-aws-loadbalancing-and-helm.sh`
+2. Run the cleanup script: `cleanup-aws-autoscaling-and-helm.sh`
 3. Run `terraform destroy` to clean up the infrastructure.
    1. If you've deployed the `shared-state` s3 bucket, also `cd shared-state` and `terraform destroy` there.
 4. Run `unset KUBECONFIG` to unset the KUBECONFIG env var.
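Strung together, the teardown these README steps describe looks roughly as follows; a minimal sketch, assuming the commands are run from the directory containing the cleanup script and that the port-forward is the shell's most recent background job:

```sh
# Stop the kubectl port-forward started during testing (assumes it is the most recent background job)
kill %% 2>/dev/null || true

# Remove the Helm releases and the autoscaling/load-balancing add-ons
./cleanup-aws-autoscaling-and-helm.sh

# Tear down the cluster infrastructure
terraform destroy

# Only needed if the optional shared-state s3 bucket was deployed
(cd shared-state && terraform destroy)

# Stop pointing kubectl at the now-deleted cluster
unset KUBECONFIG
```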
1 change: 1 addition & 0 deletions aws/build-an-deploy-aws.sh
@@ -58,6 +58,7 @@ eksctl create iamserviceaccount \
   --region=$AWS_REGION \
   --namespace=kube-system \
   --name=cluster-autoscaler \
+  --role-name=AmazonEKSClusterAutoscalerRole \
   --attach-policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/AmazonEKSClusterAutoscalerPolicy \
   --override-existing-serviceaccounts \
   --approve
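The new `--role-name` flag makes `eksctl` create the IAM role under a predictable name and bind it to the `cluster-autoscaler` service account through the usual IRSA annotation; one way to confirm the wiring after `build-an-deploy-aws.sh` has run (the annotation key is the standard EKS one, everything else follows from the flags above):

```sh
# Print the IAM role ARN annotated onto the cluster-autoscaler service account
kubectl -n kube-system get serviceaccount cluster-autoscaler \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```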
@@ -44,4 +44,4 @@ eksctl delete iamserviceaccount \
 sleep 5 # Prevents race condition - command below may error out because it's still 'attached'
 
 aws iam delete-policy \
-  --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/AmazonEKSClusterAutoscalerPolicy
+  --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/AmazonEKSClusterAutoscalerPolicy
6 changes: 3 additions & 3 deletions cleaner/Dockerfile
@@ -1,10 +1,10 @@
-FROM node:16-alpine as build
+FROM node:18-alpine as build
 RUN mkdir -p /home/app
 WORKDIR /home/app
 COPY package.json package-lock.json ./
-RUN npm ci --production
+RUN npm ci --omit=dev
 
-FROM node:16-alpine
+FROM node:18-alpine
 RUN addgroup --system --gid 1001 app && adduser app --system --uid 1001 --ingroup app
 WORKDIR /home/app/
 COPY --from=build --chown=app:app /home/app/node_modules/ ./node_modules/
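The bumped base image and the switch from the deprecated `npm ci --production` to its current `--omit=dev` spelling can be sanity-checked with a local build; a minimal sketch, with the image tag chosen purely for illustration:

```sh
# Build the cleaner image against the new Node 18 base (the tag is illustrative)
docker build -t wrongsecrets-cleaner ./cleaner

# Spot-check the Node version shipped by the base image used above
docker run --rm node:18-alpine node --version
```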