
Commit 5fa1078

Merge branch 'master' into test-producer-acks-config

2 parents e420fe2 + 198666d


50 files changed: +1243 −249 lines
2 files renamed without changes.

README.md

Lines changed: 48 additions & 57 deletions

@@ -1,78 +1,69 @@
-_Manifests here require Kubernetes 1.8 now.
-On earlier versions use [v2.1.0](https://github.com/Yolean/kubernetes-kafka/tree/v2.1.0)._
+# Kafka for Kubernetes

-# Kafka on Kubernetes
+This community seeks to provide:
+* Production-worthy Kafka setup for persistent (domain- and ops-) data at small scale.
+* Operational knowledge, biased towards resilience over throughput, as Kubernetes manifest.
+* A platform for event-driven (streaming!) microservices design using Kubernetes.

-Transparent Kafka setup that you can grow with.
-Good for both experiments and production.
+To quote [@arthurk](https://github.com/Yolean/kubernetes-kafka/issues/82#issuecomment-337532548):

-How to use:
-* Good to know: you'll likely want to fork this repo. It prioritizes clarity over configurability, using plain manifests and .propeties files; no client side logic.
-* Run a Kubernetes cluster, [minikube](https://github.com/kubernetes/minikube) or real.
-* Quickstart: use the `kubectl apply`s below.
-* Have a look at [addon](https://github.com/Yolean/kubernetes-kafka/labels/addon)s, or the official forks:
-  - [kubernetes-kafka-small](https://github.com/Reposoft/kubernetes-kafka-small) for single-node clusters like Minikube.
-  - [StreamingMicroservicesPlatform](https://github.com/StreamingMicroservicesPlatform/kubernetes-kafka) Like Confluent's [platform quickstart](https://docs.confluent.io/current/connect/quickstart.html) but for Kubernetes.
-* Join the discussion in issues and PRs.
+> thanks for creating and maintaining this Kubernetes files, they're up-to-date (unlike the kubernetes contrib files, don't require helm and work great!

-No readable readme can properly introduce both [Kafka](http://kafka.apache.org/) and [Kubernetes](https://kubernetes.io/),
-but we think the combination of the two is a great backbone for microservices.
-Back when we read [Newman](http://samnewman.io/books/building_microservices/) we were beginners with both.
-Now we've read [Kleppmann](http://dataintensive.net/), [Confluent](https://www.confluent.io/blog/) and [SRE](https://landing.google.com/sre/book.html) and enjoy this "Streaming Platform" lock-in :smile:.
+## Getting started

-We also think the plain-yaml approach of this project is easier to understand and evolve than [helm](https://github.com/kubernetes/helm) [chart](https://github.com/kubernetes/charts/tree/master/incubator/kafka)s.
+We suggest you `apply -f` manifests in the following order:
+* Your choice of storage classes from [./configure](./configure/)
+* [namespace](./00-namespace.yml)
+* [./rbac-namespace-default](./rbac-namespace-default/)
+* [./zookeeper](./zookeeper/)
+* [./kafka](./kafka/)

-## What you get
+That'll give you client "bootstrap" `bootstrap.kafka.svc.cluster.local:9092`.

-Keep an eye on `kubectl --namespace kafka get pods -w`.
+## Fork

-The goal is to provide [Bootstrap servers](http://kafka.apache.org/documentation/#producerconfigs): `kafka-0.broker.kafka.svc.cluster.local:9092,kafka-1.broker.kafka.svc.cluster.local:9092,kafka-2.broker.kafka.svc.cluster.local:9092`
+Our only dependency is `kubectl`. Not because we dislike Helm or Operators, but because we think plain manifests make it easier to collaborate.
+If you begin to rely on this kafka setup we recommend you fork, for example to edit [broker config](https://github.com/Yolean/kubernetes-kafka/blob/master/kafka/10broker-config.yml#L47).

-Zookeeper at `zookeeper.kafka.svc.cluster.local:2181`.
+## Version history

-## Prepare storage classes
+| tag  | k8s ≥ | highlights |
+| ---- | ----- | ---------- |
+| 4.x  | 1.9+  | Kafka 1.1 dynamic config |
+| v4.1 | 1.9+  | Kafka 1.0.1 new [default](#148) [config](#170) |
+| v3.2 | 1.9.4, 1.8.9, 1.7.14 | Required for read-only ConfigMaps [#162](https://github.com/Yolean/kubernetes-kafka/issues/162) [#163](https://github.com/Yolean/kubernetes-kafka/pull/163) [k8s #58720](https://github.com/kubernetes/kubernetes/pull/58720) |
+| v3.1 | 1.8   | The painstaking path to `min.insync.replicas`=2 |
+| v3.0 | 1.8   | [Outside access](#78), [modern manifests](#84), [bootstrap.kafka](#52) |
+| v2.1 | 1.5   | Kafka 1.0, the init script concept |
+| v2.0 | 1.5   | [addon](https://github.com/Yolean/kubernetes-kafka/labels/addon)s |
+| v1.0 | 1     | Stateful? In Kubernetes? In 2016? Yes. |

-For Minikube run `kubectl apply -f configure/minikube-storageclass-broker.yml; kubectl apply -f configure/minikube-storageclass-zookeeper.yml`.
+All available as [releases](https://github.com/Yolean/kubernetes-kafka/releases).

-There's a similar setup for GKE, `configure/gke-*`. You might want to tweak it before creating.
+## Monitoring

-## Start Zookeeper
+Have a look at:
+* [./prometheus](./prometheus/)
+* [./linkedin-burrow](./linkedin-burrow/)
+* [or plain JMX](https://github.com/Yolean/kubernetes-kafka/pull/96)
+* what's happening in the [monitoring](https://github.com/Yolean/kubernetes-kafka/labels/monitoring) label.
+* Note that this repo is intentionally light on [automation](https://github.com/Yolean/kubernetes-kafka/labels/automation). We think every SRE team must build the operational knowledge first.

-The [Kafka book](https://www.confluent.io/resources/kafka-definitive-guide-preview-edition/) recommends that Kafka has its own Zookeeper cluster with at least 5 instances.
+## Outside (out-of-cluster) access

-```
-kubectl apply -f ./zookeeper/
-```
+Available for:

-To support automatic migration in the face of availability zone unavailability we mix persistent and ephemeral storage.
+* [Brokers](./outside-services/)

-## Start Kafka
+## Fewer than three nodes?

-```
-kubectl apply -f ./kafka/
-```
+For [minikube](https://github.com/kubernetes/minikube/), [youkube](https://github.com/Yolean/youkube) etc:

-You might want to verify in logs that Kafka found its own DNS name(s) correctly. Look for records like:
-```
-kubectl -n kafka logs kafka-0 | grep "Registered broker"
-# INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(kafka-0.broker.kafka.svc.cluster.local,9092,PLAINTEXT)
-```
+* [Scale 1](https://github.com/Yolean/kubernetes-kafka/pull/44)
+* [Scale 2](https://github.com/Yolean/kubernetes-kafka/pull/118)

-That's it. Just add business value :wink:.
+## Stream...

-## RBAC
-
-For clusters that enfoce [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) there's a minimal set of policies in
-```
-kubectl apply -f rbac-namespace-default/
-```
-
-## Tests
-
-Tests are based on the [kube-test](https://github.com/Yolean/kube-test) concept.
-Like the rest of this repo they have `kubectl` as the only local dependency.
-
-Run self-tests or not. They do generate some load, but indicate if the platform is working or not.
-* To include tests, replace `apply -f` with `apply -R -f` in your `kubectl`s above.
-* Anything that isn't READY in `kubectl get pods -l test-type=readiness --namespace=test-kafka` is a failed test.
+* [Kubernetes events to Kafka](./events-kube/)
+* [Container logs to Kafka](https://github.com/Yolean/kubernetes-kafka/pull/131)
+* [Heapster metrics to Kafka](https://github.com/Yolean/kubernetes-kafka/pull/120)
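The new "Getting started" section's apply order can be scripted. A minimal sketch: it only prints the `kubectl` commands unless you pass `--run`, and it assumes the Minikube storage class manifests from the old README; pick your own choice from `./configure` for other clusters.

```shell
#!/bin/sh
# Print (or with --run, execute) the manifest apply order from the README.
set -eu
steps="configure/minikube-storageclass-broker.yml
configure/minikube-storageclass-zookeeper.yml
00-namespace.yml
rbac-namespace-default/
zookeeper/
kafka/"
for step in $steps; do
  cmd="kubectl apply -f $step"
  if [ "${1:-}" = "--run" ]; then
    $cmd          # requires kubectl and a reachable cluster
  else
    echo "$cmd"   # dry listing of what would be applied, in order
  fi
done
```

Running it without arguments is safe anywhere; `--run` requires a configured cluster.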

avro-tools/avro-tools-config.yml

Lines changed: 43 additions & 0 deletions

kind: ConfigMap
metadata:
  name: avro-tools-config
  namespace: kafka
apiVersion: v1
data:
  schema-registry.properties: |-
    port=80
    listeners=http://0.0.0.0:80
    kafkastore.bootstrap.servers=PLAINTEXT://bootstrap.kafka:9092
    kafkastore.topic=_schemas
    debug=false

    # https://github.com/Landoop/schema-registry-ui#prerequisites
    access.control.allow.methods=GET,POST,PUT,OPTIONS
    access.control.allow.origin=*

  kafka-rest.properties: |-
    #id=kafka-rest-test-server
    listeners=http://0.0.0.0:80
    bootstrap.servers=PLAINTEXT://bootstrap.kafka:9092
    schema.registry.url=http://avro-schemas.kafka:80

    # https://github.com/Landoop/kafka-topics-ui#common-issues
    access.control.allow.methods=GET,POST,PUT,DELETE,OPTIONS
    access.control.allow.origin=*

  log4j.properties: |-
    log4j.rootLogger=INFO, stdout

    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

    log4j.logger.kafka=WARN, stdout
    log4j.logger.org.apache.zookeeper=WARN, stdout
    log4j.logger.org.apache.kafka=WARN, stdout
    log4j.logger.org.I0Itec.zkclient=WARN, stdout
    log4j.additivity.kafka.server=false
    log4j.additivity.kafka.consumer.ZookeeperConsumerConnector=false

    log4j.logger.org.apache.kafka.clients.Metadata=DEBUG, stdout
    log4j.logger.org.apache.kafka.clients.consumer.internals.AbstractCoordinator=INFO, stdout
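Both services configured above listen on plain HTTP port 80 inside the cluster. A smoke-test sketch, assuming the `avro-schemas` and `avro-rest` Services added in this commit; `/subjects` and `/topics` are the standard Confluent Schema Registry and REST proxy listing endpoints. The script only echoes the commands; run them from a pod inside the cluster, since the names resolve via cluster DNS only.

```shell
#!/bin/sh
# Echo smoke-test commands for the schema registry and REST proxy.
# Base URLs follow the Service names and the port=80 listeners above.
set -eu
SCHEMAS=http://avro-schemas.kafka:80
REST=http://avro-rest.kafka:80
echo "curl $SCHEMAS/subjects"
echo "curl -H 'Accept: application/vnd.kafka.v2+json' $REST/topics"
```

An empty JSON array from either endpoint is a healthy response on a fresh cluster.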

avro-tools/rest-service.yml

Lines changed: 10 additions & 0 deletions

apiVersion: v1
kind: Service
metadata:
  name: avro-rest
  namespace: kafka
spec:
  ports:
  - port: 80
  selector:
    app: rest-proxy

avro-tools/rest.yml

Lines changed: 46 additions & 0 deletions

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: avro-rest
  namespace: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-proxy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: rest-proxy
    spec:
      containers:
      - name: cp
        image: solsson/kafka-cp@sha256:2797da107f477ede2e826c29b2589f99f22d9efa2ba6916b63e07c7045e15044
        env:
        - name: KAFKAREST_LOG4J_OPTS
          value: -Dlog4j.configuration=file:/etc/kafka-rest/log4j.properties
        command:
        - kafka-rest-start
        - /etc/kafka-rest/kafka-rest.properties
        readinessProbe:
          httpGet:
            path: /
            port: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config
          mountPath: /etc/kafka-rest
      volumes:
      - name: config
        configMap:
          name: avro-tools-config

avro-tools/schemas-service.yml

Lines changed: 10 additions & 0 deletions

apiVersion: v1
kind: Service
metadata:
  name: avro-schemas
  namespace: kafka
spec:
  ports:
  - port: 80
  selector:
    app: schema-registry

avro-tools/schemas.yml

Lines changed: 47 additions & 0 deletions

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: avro-schemas
  namespace: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: schema-registry
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: schema-registry
    spec:
      containers:
      - name: cp
        image: solsson/kafka-cp@sha256:2797da107f477ede2e826c29b2589f99f22d9efa2ba6916b63e07c7045e15044
        env:
        - name: SCHEMA_REGISTRY_LOG4J_OPTS
          value: -Dlog4j.configuration=file:/etc/schema-registry/log4j.properties
        command:
        - schema-registry-start
        - /etc/schema-registry/schema-registry.properties
        readinessProbe:
          httpGet:
            path: /
            port: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 60
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config
          mountPath: /etc/schema-registry
      volumes:
      - name: config
        configMap:
          name: avro-tools-config

avro-tools/test/70rest-test1.yml

Lines changed: 43 additions & 0 deletions

apiVersion: batch/v1
kind: Job
metadata:
  name: rest-test1
  namespace: kafka
spec:
  backoffLimit: 1
  template:
    metadata:
      name: rest-test1
    spec:
      containers:
      - name: curl
        image: solsson/curl@sha256:523319afd39573746e8f5a7c98d4a6cd4b8cbec18b41eb30c8baa13ede120ce3
        env:
        - name: REST
          # Must match the Service name in rest-service.yml (avro-rest)
          value: http://avro-rest.kafka.svc.cluster.local
        - name: TOPIC
          value: test1
        command:
        - /bin/bash
        - -ce
        - >
          curl --retry 10 --retry-delay 30 --retry-connrefused -I $REST;

          curl -H 'Accept: application/vnd.kafka.v2+json' $REST/topics;

          curl --retry 10 -H 'Accept: application/vnd.kafka.v2+json' $REST/topics/test1;
          curl -X POST -H "Content-Type: application/vnd.kafka.json.v2+json" -H "Accept: application/vnd.kafka.v2+json" --data "{\"records\":[{\"value\":\"Test from $HOSTNAME at $(date)\"}]}" $REST/topics/$TOPIC -v;
          curl --retry 10 -H 'Accept: application/vnd.kafka.v2+json' $REST/topics/test2;

          curl -X POST -H "Content-Type: application/vnd.kafka.json.v2+json" -H "Accept: application/vnd.kafka.v2+json" --data '{"records":[{"value":{"foo":"bar"}}]}' $REST/topics/$TOPIC -v;

          curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --data '{"name": "my_consumer_instance", "format": "json", "auto.offset.reset": "earliest"}' $REST/consumers/my_json_consumer -v;

          curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --data "{\"topics\":[\"$TOPIC\"]}" $REST/consumers/my_json_consumer/instances/my_consumer_instance/subscription -v;

          curl -X GET -H "Accept: application/vnd.kafka.json.v2+json" $REST/consumers/my_json_consumer/instances/my_consumer_instance/records -v;

          curl -X DELETE -H "Content-Type: application/vnd.kafka.v2+json" $REST/consumers/my_json_consumer/instances/my_consumer_instance -v;

          sleep 300
      restartPolicy: Never