Expose metric endpoint on https #368

Merged
2 changes: 1 addition & 1 deletion cmd/machine-api-operator/start.go
@@ -145,7 +145,7 @@ func startMetricsCollectionAndServer(ctx *ControllerContext) {
 		metricsPort = v
 	}
 	glog.V(4).Info("Starting server to serve prometheus metrics")
-	go startHTTPMetricServer(fmt.Sprintf(":%d", metricsPort))
+	go startHTTPMetricServer(fmt.Sprintf("localhost:%d", metricsPort))
 }

 func startHTTPMetricServer(metricsPort string) {
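The only Go change is the bind address: the operator's plain-HTTP metrics endpoint now listens on localhost only, so the kube-rbac-proxy sidecar added below becomes the sole externally reachable path to the metrics. For context, a minimal sketch of what a `startHTTPMetricServer` like this could look like (the promhttp handler, mux, and timeouts are illustrative assumptions, not the repository's actual implementation):

```go
package main

import (
	"net/http"
	"time"

	"github.com/golang/glog"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// startHTTPMetricServer serves the default Prometheus registry over plain HTTP.
// Binding to "localhost:<port>" keeps the listener off the pod IP, so only
// containers in the same pod network namespace (here, the kube-rbac-proxy
// sidecar) can reach it directly.
func startHTTPMetricServer(metricsAddr string) {
	mux := http.NewServeMux()
	mux.Handle("/metrics", promhttp.Handler())

	server := &http.Server{
		Addr:         metricsAddr,
		Handler:      mux,
		ReadTimeout:  5 * time.Second,
		WriteTimeout: 10 * time.Second,
	}
	if err := server.ListenAndServe(); err != nil {
		glog.Errorf("metrics server exited: %v", err)
	}
}

func main() {
	// Mirrors the call site in the diff above: bind to localhost only.
	startHTTPMetricServer("localhost:8080")
}
```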
85 changes: 85 additions & 0 deletions config/machine-api-operator-deployment.yaml
@@ -0,0 +1,85 @@
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: machine-api-operator
  namespace: openshift-machine-api
Member:
can you elaborate why we need this file?

Contributor Author:
This file is required for the k8s-e2e CI job, which runs the tests against plain Kubernetes. The original deployment mounts a secret into the pod; based on the annotation on the MAO metrics Service object, that secret gets created automatically in an OpenShift cluster. In a minikube cluster the secret will not exist, so the deployment would fail. To work around this, the k8s-e2e tests use this deployment, which does not mount the secret.

  labels:
    k8s-app: machine-api-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: machine-api-operator
  template:
    metadata:
      labels:
        k8s-app: machine-api-operator
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: machine-api-operator
      containers:
      - name: kube-rbac-proxy
        image: quay.io/openshift/origin-kube-rbac-proxy:4.2.0
Reviewer:
this should be an image stream, no?

Reviewer:
never mind, it is

        args:
        - "--secure-listen-address=0.0.0.0:8443"
Reviewer:
this will use a self-signed certificate generated at startup; is that intended?

- "--upstream=http://localhost:8080/"
- "--config-file=/etc/kube-rbac-proxy/config-file.yaml"
- "--logtostderr=true"
- "--v=10"
ports:
- containerPort: 8443
name: https
volumeMounts:
- name: config
mountPath: /etc/kube-rbac-proxy
- name: machine-api-operator
image: docker.io/openshift/origin-machine-api-operator:v4.0.0
Contributor:
This image refers to v4.0 whereas the kube-rbac-proxy refers to 4.2.0. Is this intentional?

Contributor Author:
The MAO image version is the same as it is today in https://github.com/openshift/machine-api-operator/blob/master/install/0000_30_machine-api-operator_11_deployment.yaml#L22

The kube-rbac-proxy image 4.2.0 was picked simply because it is the latest. I am not sure whether the two need to be the same; I see both the 4.0.0 and 4.2.0 image streams in install/image-references, so I chose to follow the same pattern and use 4.2.0.

        command:
        - "/machine-api-operator"
        args:
        - "start"
        - "--images-json=/etc/machine-api-operator-config/images/images.json"
        - "--alsologtostderr"
        - "--v=3"
        env:
        - name: RELEASE_VERSION
          value: "0.0.1-snapshot"
        - name: COMPONENT_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: METRICS_PORT
          value: "8080"
        resources:
          requests:
            cpu: 10m
            memory: 50Mi
        volumeMounts:
        - name: images
          mountPath: /etc/machine-api-operator-config/images
      nodeSelector:
        node-role.kubernetes.io/master: ""
      restartPolicy: Always
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 120
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 120
      volumes:
      - name: config
        configMap:
          name: kube-rbac-proxy
      - name: images
        configMap:
          name: machine-api-operator-images
15 changes: 14 additions & 1 deletion install/0000_30_machine-api-operator_09_rbac.yaml
@@ -200,7 +200,14 @@ kind: ClusterRole
 metadata:
   name: machine-api-operator
 rules:
-
+- apiGroups: ["authentication.k8s.io"]
+  resources:
+  - tokenreviews
+  verbs: ["create"]
+- apiGroups: ["authorization.k8s.io"]
+  resources:
+  - subjectaccessreviews
+  verbs: ["create"]
 - apiGroups:
   - config.openshift.io
   resources:
@@ -319,6 +326,12 @@ metadata:
   name: prometheus-k8s-machine-api-operator
   namespace: openshift-machine-api
 rules:
+- apiGroups:
+  - ""
+  resources:
+  - namespace/metrics
+  verbs:
+  - get
 - apiGroups:
   - ""
   resources:
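These new rules let the pod's service account, which the kube-rbac-proxy sidecar runs under, create TokenReviews and SubjectAccessReviews: the proxy authenticates an incoming scraper by validating its bearer token, then authorizes it against the policy in its config file. A rough sketch of the TokenReview step, assuming a plain client-go in-cluster client (function and variable names are illustrative, not the proxy's actual code):

```go
package main

import (
	"context"
	"fmt"

	authnv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// reviewToken validates a scraper's bearer token the way kube-rbac-proxy does:
// it asks the API server whether the token is valid and who it belongs to.
// This is why the ClusterRole needs "create" on tokenreviews.
func reviewToken(ctx context.Context, client kubernetes.Interface, token string) (string, error) {
	tr := &authnv1.TokenReview{
		Spec: authnv1.TokenReviewSpec{Token: token},
	}
	result, err := client.AuthenticationV1().TokenReviews().Create(ctx, tr, metav1.CreateOptions{})
	if err != nil {
		return "", err
	}
	if !result.Status.Authenticated {
		return "", fmt.Errorf("token not authenticated")
	}
	return result.Status.User.Username, nil
}

func main() {
	cfg, err := rest.InClusterConfig() // assumes the code runs inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	user, err := reviewToken(context.Background(), client, "<bearer-token-from-request>")
	if err != nil {
		panic(err)
	}
	fmt.Println("authenticated as", user)
}
```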
14 changes: 14 additions & 0 deletions install/0000_30_machine-api-operator_10_kube-rbac-proxy-config.yaml
@@ -0,0 +1,14 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-rbac-proxy
  namespace: openshift-machine-api
data:
  config-file.yaml: |+
    authorization:
      resourceAttributes:
        apiVersion: v1
        resource: namespace
        subresource: metrics
        namespace: openshift-machine-api

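This config file maps every request to a single permission check: the caller must be able to `get` the `metrics` subresource of `namespace` in `openshift-machine-api`, which is what the `prometheus-k8s-machine-api-operator` Role above grants. A hedged sketch of the corresponding SubjectAccessReview (the client setup and the example service-account name are assumptions):

```go
package main

import (
	"context"
	"fmt"

	authzv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// canScrapeMetrics asks the API server whether `user` may perform the check
// described by the kube-rbac-proxy config above: get namespace/metrics in
// openshift-machine-api. This is why the operator ClusterRole needs "create"
// on subjectaccessreviews.
func canScrapeMetrics(ctx context.Context, client kubernetes.Interface, user string) (bool, error) {
	sar := &authzv1.SubjectAccessReview{
		Spec: authzv1.SubjectAccessReviewSpec{
			User: user,
			ResourceAttributes: &authzv1.ResourceAttributes{
				Namespace:   "openshift-machine-api",
				Verb:        "get",
				Version:     "v1",
				Resource:    "namespace",
				Subresource: "metrics",
			},
		},
	}
	result, err := client.AuthorizationV1().SubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return result.Status.Allowed, nil
}

func main() {
	cfg, err := rest.InClusterConfig() // assumes in-cluster execution
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	allowed, err := canScrapeMetrics(context.Background(), client,
		"system:serviceaccount:openshift-monitoring:prometheus-k8s")
	if err != nil {
		panic(err)
	}
	fmt.Println("prometheus-k8s allowed to scrape:", allowed)
}
```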
9 changes: 5 additions & 4 deletions install/0000_30_machine-api-operator_10_service.yaml
@@ -4,15 +4,16 @@ kind: Service
 metadata:
   name: machine-api-operator
   namespace: openshift-machine-api
+  annotations:
+    service.alpha.openshift.io/serving-cert-secret-name: machine-api-operator-tls
   labels:
     k8s-app: machine-api-operator
 spec:
   type: ClusterIP
   ports:
-  - name: metrics
-    port: 8080
-    targetPort: metrics
-    protocol: TCP
+  - name: https
+    port: 8443
+    targetPort: https
   selector:
     k8s-app: machine-api-operator
   sessionAffinity: None
27 changes: 24 additions & 3 deletions install/0000_30_machine-api-operator_11_deployment.yaml
@@ -18,6 +18,24 @@ spec:
       priorityClassName: system-node-critical
       serviceAccountName: machine-api-operator
       containers:
+      - name: kube-rbac-proxy
+        image: quay.io/openshift/origin-kube-rbac-proxy:4.2.0
+        args:
+        - "--secure-listen-address=0.0.0.0:8443"
+        - "--upstream=http://localhost:8080/"
+        - "--tls-cert-file=/etc/tls/private/tls.crt"
+        - "--tls-private-key-file=/etc/tls/private/tls.key"
+        - "--config-file=/etc/kube-rbac-proxy/config-file.yaml"
+        - "--logtostderr=true"
+        - "--v=10"
+        ports:
+        - containerPort: 8443
+          name: https
+        volumeMounts:
+        - name: config
+          mountPath: /etc/kube-rbac-proxy
+        - mountPath: /etc/tls/private
+          name: machine-api-operator-tls
       - name: machine-api-operator
         image: docker.io/openshift/origin-machine-api-operator:v4.0.0
         command:
@@ -36,9 +54,6 @@ spec:
               fieldPath: metadata.namespace
         - name: METRICS_PORT
           value: "8080"
-        ports:
-        - name: metrics
-          containerPort: 8080
         resources:
           requests:
             cpu: 10m
@@ -65,6 +80,12 @@ spec:
         effect: "NoExecute"
         tolerationSeconds: 120
       volumes:
+      - name: config
+        configMap:
+          name: kube-rbac-proxy
       - name: images
         configMap:
          name: machine-api-operator-images
+      - name: machine-api-operator-tls
+        secret:
+          secretName: machine-api-operator-tls
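The serving-cert annotation on the Service above asks the OpenShift service CA operator to write a signed key pair into the `machine-api-operator-tls` secret; this deployment mounts it at `/etc/tls/private`, where kube-rbac-proxy picks it up via `--tls-cert-file`/`--tls-private-key-file` and forwards authorized requests to the localhost-only upstream. A much-simplified stand-in for what the sidecar does with that key pair (the authentication and authorization steps are omitted; only the mounted paths, ports, and upstream URL come from the manifests above):

```go
package main

import (
	"crypto/tls"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Load the serving certificate the service CA operator generated into
	// the mounted secret.
	cert, err := tls.LoadX509KeyPair("/etc/tls/private/tls.crt", "/etc/tls/private/tls.key")
	if err != nil {
		panic(err)
	}

	// Reverse-proxy to the operator's localhost-only metrics endpoint,
	// mirroring --upstream=http://localhost:8080/.
	upstream, err := url.Parse("http://localhost:8080/")
	if err != nil {
		panic(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	server := &http.Server{
		Addr:      "0.0.0.0:8443",
		Handler:   proxy, // the real kube-rbac-proxy also authenticates/authorizes here
		TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
	}
	// Empty cert/key paths: the certificate comes from TLSConfig above.
	if err := server.ListenAndServeTLS("", ""); err != nil {
		panic(err)
	}
}
```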
7 changes: 5 additions & 2 deletions install/0000_90_machine-api-operator_03_servicemonitor.yaml
@@ -9,8 +9,11 @@ spec:
   endpoints:
   - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
     interval: 30s
-    port: metrics
-    scheme: http
+    port: https
+    scheme: https
+    tlsConfig:
+      caFile: /etc/prometheus/configmaps/serving-certs-ca-bundle/service-ca.crt
+      serverName: machine-api-operator.openshift-machine-api.svc
   namespaceSelector:
     matchNames:
     - openshift-machine-api
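With this change Prometheus scrapes the operator over HTTPS through the proxy: it presents its service-account token as a bearer token and verifies the serving certificate against the service CA bundle, using the Service DNS name as the expected server name. A rough sketch of an equivalent scrape request (the token and CA paths and the server name are taken from the manifest above; the port and `/metrics` path are assumptions, and the rest is illustrative):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Bearer token and CA bundle at the paths the ServiceMonitor uses.
	token, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/etc/prometheus/configmaps/serving-certs-ca-bundle/service-ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("failed to load service CA bundle")
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				RootCAs: pool,
				// Must match the serving cert's SAN, i.e. the Service DNS name.
				ServerName: "machine-api-operator.openshift-machine-api.svc",
			},
		},
	}

	req, err := http.NewRequest("GET",
		"https://machine-api-operator.openshift-machine-api.svc:8443/metrics", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+string(token))

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```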
4 changes: 4 additions & 0 deletions install/image-references
@@ -54,3 +54,7 @@ spec:
     from:
       kind: DockerImage
       name: quay.io/openshift/origin-ironic-static-ip-manager:v4.2.0
+  - name: kube-rbac-proxy
+    from:
+      kind: DockerImage
+      name: quay.io/openshift/origin-kube-rbac-proxy:4.2.0
3 changes: 2 additions & 1 deletion kustomization.yaml
@@ -26,8 +26,9 @@ resources:
 - install/0000_30_machine-api-operator_07_machinehealthcheck.crd.yaml
 - install/0000_30_machine-api-operator_08_machinedisruptionbudget.crd.yaml
 - install/0000_30_machine-api-operator_09_rbac.yaml
+- install/0000_30_machine-api-operator_10_kube-rbac-proxy-config.yaml
 - install/0000_30_machine-api-operator_10_service.yaml
 - install/0000_30_machine-api-operator_11_deployment.yaml
+- config/machine-api-operator-deployment.yaml
 - install/0000_30_machine-api-operator_12_clusteroperator.yaml

