Add code for filtering target group & load balancers by VPC ID #2157
Conversation
Welcome @Shreya027!
Hi @Shreya027. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Codecov Report
@@ Coverage Diff @@
## main #2157 +/- ##
==========================================
+ Coverage 52.04% 52.34% +0.29%
==========================================
Files 132 132
Lines 7130 7189 +59
==========================================
+ Hits 3711 3763 +52
- Misses 3131 3137 +6
- Partials 288 289 +1
Continue to review full report at Codecov.
Could you describe the manual tests you ran to verify the changes? Also, explain more about why this change is needed in the PR description.
	},
	},
	{
		name: "1/3 targetGroups matches single tagFilter",
This test has the same name as the one from line 817. It is helpful to use more descriptive names.
	Tags: map[string]string{
		"keyA": "valueA3",
		"keyB": "valueB3",
	},
	},
	},
	},
	{
		name: "1/3 loadBalancers matches single tagFilter",
Same name as the test from line 443.
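For illustration, here is a hypothetical way to give each table-driven case a unique, descriptive name, as the reviewer suggests. The struct shape and case names below are illustrative only, not the repository's actual test code:

```go
package tagging_test

import "testing"

// Hypothetical sketch of unique, descriptive names for table-driven
// test cases; the struct shape is illustrative only.
func TestTagFilterMatching(t *testing.T) {
	tests := []struct {
		name string
	}{
		{name: "single tagFilter matches 1/3 targetGroups by exact tag values"},
		{name: "single tagFilter matches 1/3 loadBalancers by exact tag values"},
	}
	for _, tt := range tests {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			// test body elided; the point is that each case name is unique,
			// so a failure report pinpoints exactly which scenario broke
		})
	}
}
```

With unique names, `go test -run` can target a single case and failure output is unambiguous.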
/ok-to-test
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: kishorj, Shreya027. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/lgtm
Add code for filtering target group & load balancers by VPC ID (kubernetes-sigs#2157)
* Add code for filtering target group & LB by VPC ID; Add IT code
* Add detailed comments for tests
Issue
#1566
Description
Added filtering of target groups and load balancers by VPC ID.
Issue Description: The conditions that led to the error involved a dangling TargetGroup that was created for an old (deleted) cluster but was never cleaned up. In such a scenario, if a user created a new cluster (in a new VPC) and deployed ALB resources to it, there was a possibility that the ALB controller would reuse the old TargetGroup whenever the tag values of the previous TargetGroup matched. In the case of the customer issue, the cluster was deleted but the TargetGroup was not cleaned up by the user, so when the user created a new cluster (in a new VPC, with the same cluster name), the controller found the old TargetGroup and tried to reuse it.
Solution: The code fixes this behavior by filtering target groups and load balancers by VPC ID. However, customers remain responsible for deleting Ingresses before deleting the cluster (so that target groups and load balancers can be cleaned up properly).
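In essence, after tag-based matching, each candidate resource is also checked against the cluster's VPC ID. Below is a minimal sketch of such a filter, assuming the aws-sdk-go elbv2 types; the function names are illustrative, not the controller's actual identifiers:

```go
package tagging

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

// filterTargetGroupsByVpcID keeps only target groups that belong to the
// given VPC, so tag-matched resources left over from an old (deleted)
// cluster's VPC are never reused.
func filterTargetGroupsByVpcID(targetGroups []*elbv2.TargetGroup, vpcID string) []*elbv2.TargetGroup {
	var matched []*elbv2.TargetGroup
	for _, tg := range targetGroups {
		if aws.StringValue(tg.VpcId) == vpcID {
			matched = append(matched, tg)
		}
	}
	return matched
}

// filterLoadBalancersByVpcID applies the same VPC check to load balancers.
func filterLoadBalancersByVpcID(loadBalancers []*elbv2.LoadBalancer, vpcID string) []*elbv2.LoadBalancer {
	var matched []*elbv2.LoadBalancer
	for _, lb := range loadBalancers {
		if aws.StringValue(lb.VpcId) == vpcID {
			matched = append(matched, lb)
		}
	}
	return matched
}
```

Because the check is a strict equality on the resource's VpcId, a TargetGroup from a deleted cluster's VPC can never match even when all of its tags do.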
Log on issue reproduction (shows status: 404):
{"level":"info","ts":1627076759.10457,"logger":"controllers.ingress","msg":"successfully built model","model":"{\"id\":\"game-2048/ingress-2048\",\"resources\":{\"AWS::EC2::SecurityGroup\":{\"ManagedLBSecurityGroup\":{\"spec\":{\"groupName\":\"k8s-game2048-ingress2-f1f400dac5\",\"description\":\"[k8s] Managed SecurityGroup for LoadBalancer\",\"ingress\":[{\"ipProtocol\":\"tcp\",\"fromPort\":80,\"toPort\":80,\"ipRanges\":[{\"cidrIP\":\"0.0.0.0/0\"}]}]}}},\"AWS::ElasticLoadBalancingV2::Listener\":{\"80\":{\"spec\":{\"loadBalancerARN\":{\"$ref\":\"#/resources/AWS::ElasticLoadBalancingV2::LoadBalancer/LoadBalancer/status/loadBalancerARN\"},\"port\":80,\"protocol\":\"HTTP\",\"defaultActions\":[{\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"404\"
Manual Tests:
* Ran kubectl logs <pod_name> -n <ns_name>; it showed the model was built successfully:
{"level":"info","ts":1628208589.8923402,"logger":"controllers.ingress","msg":"successfully built model","model":"{\"id\":\"s-game-2048/s-ingress-2048\",\"resources\":{\"AWS::EC2::SecurityGroup\":{\"ManagedLBSecurityGroup\":{\"spec\":{\"groupName\":\"k8s-sgame204-singress-d845ce2c30\",\"description\":\"[k8s] Managed SecurityGroup for LoadBalancer\",\"ingress\":[{\"ipProtocol\":\"tcp\",\"fromPort\":80,\"toPort\":80,\"ipRanges\":[{\"cidrIP\":\"0.0.0.0/0\"}]}]}}},\"AWS::ElasticLoadBalancingV2::Listener\":{\"80\":{\"spec\":{\"loadBalancerARN\":{\"$ref\":\"#/resources/AWS::ElasticLoadBalancingV2::LoadBalancer/LoadBalancer/status/loadBalancerARN\"},\"port\":80,\"protocol\":\"HTTP\",\"defaultActions\":[{\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"404\"}}]}}},
* Ran kubectl set image deployment/aws-load-balancer-controller aws-load-balancer-controller=$IMAGE -n <ns_name>, where IMAGE was exported to be the same as the image pushed to ECR.
* Ran kubectl logs <pod_name> -n <ns_name> again. It did not have any error logs as above.
* Ran kubectl describe <alb_pod_name> -n <ns_name>. (A unit-test sketch of the scenario these steps verify follows the list.)
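To make the dangling-target-group scenario concrete, here is a hypothetical unit test against the filterTargetGroupsByVpcID sketch from the Description section above; it is not the PR's actual integration-test code:

```go
package tagging

import (
	"testing"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

// Hypothetical unit test for the dangling-target-group scenario: a
// target group left over from a deleted cluster (old VPC) must not be
// matched once results are filtered by the new cluster's VPC ID.
func TestDanglingTargetGroupIsFiltered(t *testing.T) {
	dangling := &elbv2.TargetGroup{VpcId: aws.String("vpc-old")}
	current := &elbv2.TargetGroup{VpcId: aws.String("vpc-new")}

	got := filterTargetGroupsByVpcID([]*elbv2.TargetGroup{dangling, current}, "vpc-new")

	if len(got) != 1 || aws.StringValue(got[0].VpcId) != "vpc-new" {
		t.Fatalf("expected only the vpc-new target group, got %d result(s)", len(got))
	}
}
```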