WIP three node cluster without PetSet #9

Closed
wants to merge 26 commits into from
26 commits
5178026
Single pod zookeeper, now that PetSet is de-supported in GKE
solsson Oct 17, 2016
5136cbb
Three pods, and before we scale that up we probably want PetSet back
solsson Oct 17, 2016
ce64b00
Labels have changed a bit here in nopetset
solsson Oct 17, 2016
659ca06
Manages to get a single instance ready
solsson Oct 17, 2016
08cf9e8
No need to have zookeeper data persistent, when clustered
solsson Oct 18, 2016
ecc545b
Tries to configure more like the petset example, though there we have…
solsson Oct 18, 2016
3abef82
Using 0.0.0.0 for "my" server solves bind 3888 issue at start
solsson Oct 18, 2016
d604847
Zookeeper 3.5+ exposes an admin service, for dynamic configuration, w…
solsson Oct 18, 2016
fb83ca4
Back to zookeeper 3.4, built from https://github.com/solsson/zookeepe…
solsson Oct 18, 2016
05b086b
Can't have a readiness probe until tolerate-unready-endpoints goes b…
solsson Oct 18, 2016
90f1219
We probably don't need longer timeouts so the current official build …
solsson Oct 18, 2016
2b01d41
Keeps a namespace file in zookeeper too, because bootstrapping starts…
solsson Oct 25, 2016
8e7be7e
Without PetSet and 3.5+ dynamic config you definitely want to avoid s…
solsson Oct 25, 2016
01061ae
Merge branch 'zookeeper-5-nodes' into nopetset-zk
solsson Oct 25, 2016
59016b5
First shot at Kafka as noPetSet
solsson Oct 18, 2016
9de803b
Hostnames are random without PetSet
solsson Oct 18, 2016
d7de878
Adds two new instances that differ only in broker id
solsson Oct 18, 2016
74b3dbe
test/21consumer-test1.yml failed to connect to kafka because it got t…
solsson Oct 18, 2016
cfd2d3b
Advertises a service, so consumers can get the kafka endpoint through…
solsson Oct 18, 2016
86dea9b
Updates test commands for nopetset
solsson Oct 18, 2016
44d3eb6
Kafka needs bootstrapping because it has persistent volumes
solsson Oct 25, 2016
8b16c6c
Oops. Brokers must of course use the same image.
solsson Oct 25, 2016
41d2001
No need to wrap regular startup with args in bash -c. It might even a…
solsson Oct 25, 2016
1f7f53d
With the new preconfigured image we don't intend to mount conf
solsson Oct 25, 2016
034c2aa
Upgrades to the image with log.dirs inside /opt/kafka/data, current d…
solsson Nov 9, 2016
7a30e26
Switches to Deployment so we can use strategy type: Recreate, hopeful…
solsson Nov 9, 2016
36 changes: 36 additions & 0 deletions 20broker-service.yml
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: broker-0
  namespace: kafka
spec:
  selector:
    app: kafka
    petindex: "0"
  ports:
  - port: 9092
---
apiVersion: v1
kind: Service
metadata:
  name: broker-1
  namespace: kafka
spec:
  selector:
    app: kafka
    petindex: "1"
  ports:
  - port: 9092
---
apiVersion: v1
kind: Service
metadata:
  name: broker-2
  namespace: kafka
spec:
  selector:
    app: kafka
    petindex: "2"
  ports:
  - port: 9092
```
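Without PetSet there are no stable per-pod hostnames, so each broker gets its own Service selecting on the `petindex` pod label, giving every broker a stable DNS name. The three Service manifests differ only in the index, so they can be generated mechanically. A minimal sketch, assuming a POSIX shell; the loop and output filename are illustrative, not part of the PR:

```shell
#!/bin/sh
# Sketch: emit the three per-broker Services from one template.
# Only the Service name and the petindex selector vary per broker.
for i in 0 1 2; do
cat <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: broker-$i
  namespace: kafka
spec:
  selector:
    app: kafka
    petindex: "$i"
  ports:
  - port: 9092
EOF
done > 20broker-service.yml
```

The same pattern extends to more brokers by widening the loop range, at the cost of one Service object per broker.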
16 changes: 0 additions & 16 deletions 20dns.yml

This file was deleted.

50 changes: 24 additions & 26 deletions 50kafka.yml
```diff
@@ -1,40 +1,38 @@
-apiVersion: apps/v1alpha1
-kind: PetSet
+apiVersion: extensions/v1beta1
+kind: Deployment
 metadata:
-  name: kafka
+  name: kafka-0
   namespace: kafka
 spec:
-  serviceName: "broker"
-  replicas: 3
+  replicas: 1
+  strategy:
+    type: Recreate
   template:
     metadata:
       labels:
         app: kafka
-      annotations:
-        pod.alpha.kubernetes.io/initialized: "true"
-        pod.alpha.kubernetes.io/init-containers: '[
-        ]'
+        petindex: "0"
     spec:
       containers:
       - name: broker
-        image: solsson/kafka-persistent:0.10.1
+        image: solsson/kafka-persistent:0.10.1@sha256:110f9e866acd4fb9e059b45884c34a210b2f40d6e2f8afe98ded616f43b599f9
         #resources:
         #  requests:
         #    cpu: 100m
         #    memory: 100Mi
         ports:
         - containerPort: 9092
         command:
-        - sh
-        - -c
-        - "./bin/kafka-server-start.sh config/server.properties --override broker.id=$(hostname | awk -F'-' '{print $2}')"
+        - ./bin/kafka-server-start.sh
+        - config/server.properties
+        - --override
+        - broker.id=0
+        - --override
+        - advertised.listeners=PLAINTEXT://broker-0.kafka.svc.cluster.local:9092
         volumeMounts:
         - name: datadir
           mountPath: /opt/kafka/data
-  volumeClaimTemplates:
-  - metadata:
-      name: datadir
-      namespace: kafka
-      annotations:
-        volume.alpha.kubernetes.io/storage-class: anything
-    spec:
-      accessModes: [ "ReadWriteOnce" ]
-      resources:
-        requests:
-          storage: 100Mi
+      volumes:
+      - name: datadir
+        persistentVolumeClaim:
+          claimName: datadir-kafka-0
```
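The removed PetSet command derived `broker.id` from the pod hostname (`kafka-0`, `kafka-1`, …) with an awk pipeline. As the commit "Hostnames are random without PetSet" notes, Deployment pods get generated hostnames, so that extraction no longer yields a stable small id, and the id is instead pinned per manifest. A quick local illustration; the Deployment-style hostname below is hypothetical:

```shell
# Old PetSet approach: broker id = ordinal suffix of the pod hostname.
echo "kafka-0" | awk -F'-' '{print $2}'
# prints 0

# A Deployment pod hostname (hypothetical example) carries generated
# suffixes, so the second dash-separated field is not a usable broker id:
echo "kafka-3717591525-1abcd" | awk -F'-' '{print $2}'
# prints 3717591525
```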
38 changes: 38 additions & 0 deletions 51kafka.yml
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-1
  namespace: kafka
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: kafka
        petindex: "1"
    spec:
      containers:
      - name: broker
        image: solsson/kafka-persistent:0.10.1@sha256:110f9e866acd4fb9e059b45884c34a210b2f40d6e2f8afe98ded616f43b599f9
        #resources:
        #  requests:
        #    cpu: 100m
        #    memory: 100Mi
        ports:
        - containerPort: 9092
        command:
        - ./bin/kafka-server-start.sh
        - config/server.properties
        - --override
        - broker.id=1
        - --override
        - advertised.listeners=PLAINTEXT://broker-1.kafka.svc.cluster.local:9092
        volumeMounts:
        - name: datadir
          mountPath: /opt/kafka/data
      volumes:
      - name: datadir
        persistentVolumeClaim:
          claimName: datadir-kafka-1
```
38 changes: 38 additions & 0 deletions 52kafka.yml
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-2
  namespace: kafka
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: kafka
        petindex: "2"
    spec:
      containers:
      - name: broker
        image: solsson/kafka-persistent:0.10.1@sha256:110f9e866acd4fb9e059b45884c34a210b2f40d6e2f8afe98ded616f43b599f9
        #resources:
        #  requests:
        #    cpu: 100m
        #    memory: 100Mi
        ports:
        - containerPort: 9092
        command:
        - ./bin/kafka-server-start.sh
        - config/server.properties
        - --override
        - broker.id=2
        - --override
        - advertised.listeners=PLAINTEXT://broker-2.kafka.svc.cluster.local:9092
        volumeMounts:
        - name: datadir
          mountPath: /opt/kafka/data
      volumes:
      - name: datadir
        persistentVolumeClaim:
          claimName: datadir-kafka-2
```
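Per the commit "Adds two new instances that differ only in broker id", 51kafka.yml and 52kafka.yml are the kafka-0 manifest with the index rewritten everywhere it appears: Deployment name, `petindex` label, `broker.id`, `advertised.listeners` host, and PVC claim name. A sed sketch of that derivation; the abbreviated stand-in manifest below is illustrative (the real 50kafka.yml carries the full spec):

```shell
#!/bin/sh
# Sketch: derive the kafka-1 and kafka-2 manifests from kafka-0 by
# rewriting the broker index. Abbreviated stand-in for 50kafka.yml:
cat > 50kafka.yml <<'EOF'
metadata:
  name: kafka-0
        petindex: "0"
        - broker.id=0
        - advertised.listeners=PLAINTEXT://broker-0.kafka.svc.cluster.local:9092
          claimName: datadir-kafka-0
EOF
for i in 1 2; do
  sed -e "s/kafka-0/kafka-$i/g" \
      -e "s/broker-0/broker-$i/g" \
      -e "s/petindex: \"0\"/petindex: \"$i\"/" \
      -e "s/broker\.id=0/broker.id=$i/" \
      50kafka.yml > "5${i}kafka.yml"
done
```

This relies on the index appearing only in those five places, which holds for the manifests in this PR but would need checking if the spec grows.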
14 changes: 9 additions & 5 deletions README.md
````diff
@@ -30,17 +30,21 @@ The volume size in the example is very small. The numbers don't really matter as

 ## Set up Zookeeper

-This module contains a copy of `pets/zookeeper/` from https://github.com/kubernetes/contrib.
-
-See the `./zookeeper` folder and follow the README there.
+With [nopetset](https://github.com/Yolean/kubernetes-kafka/pull/9) we also discarded persistent storage, so there is no zookeeper bootstrap or [README](https://github.com/Yolean/kubernetes-kafka/blob/master/zookeeper/README.md). Just run:

-An additional service has been added here, create using:
 ```
-kubectl create -f ./zookeeper/service.yml
+kubectl create -f ./zookeeper/
 ```

 ## Start Kafka

+Set up persistent volumes:
 ```
 ./bootstrap/pv.sh
+kubectl create -f bootstrap/pvc.yml
 ```

+Set up kafka:
 ```
 kubectl create -f ./
 ```
````
13 changes: 8 additions & 5 deletions test/test.sh
```diff
@@ -10,10 +10,13 @@ kubectl exec -ti testclient -- ./bin/kafka-console-consumer.sh --zookeeper zooke

 # Go ahead and produce messages
 echo "Write a message followed by enter, exit using Ctrl+C"
-kubectl exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list kafka-0.broker.kafka.svc.cluster.local:9092 --topic test1
+kubectl exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list broker-0.kafka.svc.cluster.local:9092 --topic test1

-# Bootstrap even if two nodes are down (shorter name requires same namespace)
-kubectl exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list kafka-0.broker:9092,kafka-1.broker:9092,kafka-2.broker:9092 --topic test1
+# Bootstrap even if two nodes are down
+kubectl exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list broker-0.kafka.svc.cluster.local:9092,broker-1.kafka.svc.cluster.local:9092,broker-2.kafka.svc.cluster.local:9092 --topic test1
+
+# Get a broker through the service
+kubectl exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list kafka.kafka.svc.cluster.local:9092 --topic test1

 # The following commands run in the pod
 kubectl exec -ti testclient -- /bin/bash
@@ -22,13 +25,13 @@ kubectl exec -ti testclient -- /bin/bash
 ./bin/kafka-topics.sh --zookeeper zookeeper:2181 --describe --topic test2

 ./bin/kafka-verifiable-consumer.sh \
-  --broker-list=kafka-0.broker.kafka.svc.cluster.local:9092,kafka-1.broker.kafka.svc.cluster.local:9092 \
+  --broker-list=kafka.kafka.svc.cluster.local:9092 \
   --topic=test2 --group-id=A --verbose

 # If a topic isn't available this producer will tell you
 # WARN Error while fetching metadata with correlation id X : {topicname=LEADER_NOT_AVAILABLE}
 # ... but with current config Kafka will auto-create the topic
 ./bin/kafka-verifiable-producer.sh \
-  --broker-list=kafka-0.broker.kafka.svc.cluster.local:9092,kafka-1.broker.kafka.svc.cluster.local:9092 \
+  --broker-list=kafka.kafka.svc.cluster.local:9092 \
   --value-prefix=1 --topic=test2 \
   --acks=1 --throughput=1 --max-messages=10
```
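The multi-broker `--broker-list` in test.sh names all three per-broker Services so a producer can bootstrap even when two brokers are down. The list is mechanical given the Service naming scheme, so it can be built rather than typed; a sketch in POSIX shell (the helper itself is illustrative, not part of the PR):

```shell
#!/bin/sh
# Sketch: assemble the bootstrap --broker-list from broker indexes,
# matching the per-broker Service names broker-0..broker-2.
brokers=""
for i in 0 1 2; do
  brokers="${brokers:+$brokers,}broker-$i.kafka.svc.cluster.local:9092"
done
echo "$brokers"
# Usage (against the cluster):
# kubectl exec -ti testclient -- ./bin/kafka-console-producer.sh \
#   --broker-list "$brokers" --topic test1
```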
5 changes: 5 additions & 0 deletions zookeeper/00namespace.yml
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: kafka
```
85 changes: 85 additions & 0 deletions zookeeper/20zoo-service.yml
```yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: zoo-0
  namespace: kafka
spec:
  selector:
    app: zookeeper
    petindex: "0"
  ports:
  - port: 2888
    name: peer
  - port: 3888
    name: leader-election
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: zoo-1
  namespace: kafka
spec:
  selector:
    app: zookeeper
    petindex: "1"
  ports:
  - port: 2888
    name: peer
  - port: 3888
    name: leader-election
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: zoo-2
  namespace: kafka
spec:
  selector:
    app: zookeeper
    petindex: "2"
  ports:
  - port: 2888
    name: peer
  - port: 3888
    name: leader-election
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: zoo-3
  namespace: kafka
spec:
  selector:
    app: zookeeper
    petindex: "3"
  ports:
  - port: 2888
    name: peer
  - port: 3888
    name: leader-election
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: zoo-4
  namespace: kafka
spec:
  selector:
    app: zookeeper
    petindex: "4"
  ports:
  - port: 2888
    name: peer
  - port: 3888
    name: leader-election
```
2 changes: 1 addition & 1 deletion zookeeper/service.yml → zookeeper/30service.yml
```diff
@@ -9,4 +9,4 @@ spec:
   - port: 2181
     name: client
   selector:
-    app: zk
+    app: zookeeper
```
38 changes: 38 additions & 0 deletions zookeeper/50zoo.yml
```yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: zoo-0
  namespace: kafka
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
        petindex: "0"
    spec:
      containers:
      - name: zookeeper
        image: zookeeper@sha256:bb1a12a2168fc5e508ee019aea2d45bf846e99ea87d6bcaf2ede5c59fd439368
        env:
        - name: ZOO_MY_ID
          value: "1"
        - name: ZOO_SERVERS
          value: server.1=0.0.0.0:2888:3888:participant server.2=zoo-1.kafka.svc.cluster.local:2888:3888:participant server.3=zoo-2.kafka.svc.cluster.local:2888:3888:participant server.4=zoo-3.kafka.svc.cluster.local:2888:3888:participant server.5=zoo-4.kafka.svc.cluster.local:2888:3888:participant
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - name: datadir
          mountPath: /tmp/zookeeper
        - name: opt
          mountPath: /opt/
      volumes:
      - name: opt
        emptyDir: {}
      - name: datadir
        emptyDir: {}
```
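ZooKeeper's myid is 1-based while the `petindex` label is 0-based, which is why zoo-0 gets `ZOO_MY_ID: "1"`; and per the commit "Using 0.0.0.0 for 'my' server solves bind 3888 issue at start", the pod's own entry in `ZOO_SERVERS` uses 0.0.0.0 so the quorum and election ports can bind without depending on how the pod's own Service resolves. A sketch of deriving both values for any index; this generator is hypothetical, not part of the PR:

```shell
#!/bin/sh
# Sketch: compute ZOO_MY_ID and ZOO_SERVERS for a given petindex.
# myid is 1-based; the server's own entry is 0.0.0.0 instead of its
# Service name so ports 2888/3888 bind cleanly at startup.
petindex=0
myid=$((petindex + 1))
servers=""
for i in 0 1 2 3 4; do
  if [ "$i" = "$petindex" ]; then
    host="0.0.0.0"
  else
    host="zoo-$i.kafka.svc.cluster.local"
  fi
  servers="${servers:+$servers }server.$((i + 1))=$host:2888:3888:participant"
done
echo "ZOO_MY_ID=$myid"
echo "ZOO_SERVERS=$servers"
```

For `petindex=0` this reproduces exactly the env values in the zoo-0 manifest above; the four sibling ReplicaSets would use petindex 1 through 4.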