Use Kafka config defaults as much as possible #148

Merged
merged 2 commits on Apr 8, 2018

30 changes: 15 additions & 15 deletions kafka/10broker-config.yml
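
The pattern throughout this diff: wherever the committed value merely repeated the broker's compiled-in default, the line is now commented out, so the default applies and future Kafka upgrades pick up new defaults automatically. A minimal sketch of the convention (the override value below is hypothetical):

    # value equals the broker default, so the key stays commented out:
    #num.io.threads=8
    # to deviate from the default, uncomment and set an explicit value:
    num.io.threads=16
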
@@ -64,7 +64,7 @@ data:

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in a RAID array.
-num.recovery.threads.per.data.dir=1
+#num.recovery.threads.per.data.dir=1

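With the setting commented out, the broker default of a single recovery thread per data dir applies. Following the comment above, a RAID-backed installation might raise it; the value here is an assumption:

    # hypothetical override for a data dir on a RAID array:
    num.recovery.threads.per.data.dir=4
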
############################# Server Basics #############################

@@ -75,7 +75,7 @@ data:

############################# Socket Server Settings #############################

-# The address the socket server listens on. It will get the value returned from
+# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
@@ -84,7 +84,7 @@ data:
#listeners=PLAINTEXT://:9092
listeners=OUTSIDE://:9094,PLAINTEXT://:9092

-# Hostname and port the broker will advertise to producers and consumers. If not set,
+# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
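
Since OUTSIDE is a custom listener name, the broker also needs it mapped to a security protocol, and external clients need an advertised address they can actually resolve. A sketch under assumed values (the protocol map may already be set elsewhere in this config, and kafka-0.example.com is a placeholder):

    # map the custom listener name to a wire protocol:
    listener.security.protocol.map=OUTSIDE:PLAINTEXT,PLAINTEXT:PLAINTEXT
    # advertise a per-broker, externally reachable address on the OUTSIDE listener:
    advertised.listeners=OUTSIDE://kafka-0.example.com:9094,PLAINTEXT://:9092
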
@@ -96,19 +96,19 @@ data:
inter.broker.listener.name=PLAINTEXT

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
-num.network.threads=3
+#num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
-num.io.threads=8
+#num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
-socket.send.buffer.bytes=102400
+#socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
-socket.receive.buffer.bytes=102400
+#socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
-socket.request.max.bytes=104857600
+#socket.request.max.bytes=104857600

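The five values commented out above are exactly the broker defaults, so behavior does not change; for reference, with the byte sizes spelled out:

    # defaults that now apply implicitly:
    # num.network.threads=3
    # num.io.threads=8
    # socket.send.buffer.bytes=102400      -> 100 KiB (100 * 1024)
    # socket.receive.buffer.bytes=102400   -> 100 KiB (100 * 1024)
    # socket.request.max.bytes=104857600   -> 100 MiB (100 * 1024 * 1024)
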
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
@@ -124,7 +124,7 @@ data:
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
-# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
+# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

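The settings the comment refers to are log.flush.interval.messages and log.flush.interval.ms; left unset, as in this file, the broker leaves flushing to the OS page cache and relies on replication for durability. A sketch of what explicit values would look like (both numbers hypothetical):

    # flush after every 10000 messages, or after 1000 ms, whichever comes first:
    #log.flush.interval.messages=10000
    #log.flush.interval.ms=1000

The per-topic equivalents are the flush.messages and flush.ms topic-level configs.
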
@@ -144,16 +144,16 @@ data:
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=-1

-# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
-# segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
+# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
+# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
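
With log.retention.hours=-1, age-based deletion is disabled entirely, so without a size cap a partition's log grows until the disk fills. A sketch pairing the two policies (note that log.retention.bytes applies per partition, not per topic):

    # never delete by age, but cap each partition at roughly 1 GiB:
    log.retention.hours=-1
    log.retention.bytes=1073741824   # 1 GiB = 1024^3 bytes, per partition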

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
-log.segment.bytes=1073741824
+#log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
-log.retention.check.interval.ms=300000
+#log.retention.check.interval.ms=300000

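For reference, both values commented out above are again the broker defaults, only written in raw units:

    # log.segment.bytes=1073741824           -> 1 GiB (1024^3); roll a new segment at this size
    # log.retention.check.interval.ms=300000 -> 5 minutes (5 * 60 * 1000 ms)
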
############################# Zookeeper #############################

@@ -165,7 +165,7 @@ data:
zookeeper.connect=zookeeper:2181

# Timeout in ms for connecting to zookeeper
-zookeeper.connection.timeout.ms=6000
+#zookeeper.connection.timeout.ms=6000

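zookeeper.connect accepts a comma-separated quorum, optionally followed by a chroot path that namespaces this cluster's znodes. A sketch with assumed host names:

    # hypothetical three-node quorum with a /kafka chroot:
    #zookeeper.connect=zk-0:2181,zk-1:2181,zk-2:2181/kafka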

############################# Group Coordinator Settings #############################
@@ -175,7 +175,7 @@ data:
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
-group.initial.rebalance.delay.ms=0
+#group.initial.rebalance.delay.ms=0
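
With the override commented out, the broker default of 3 seconds (3000 ms) now applies, which is the production-friendly choice the comment above recommends. To restore the fast-startup behavior for development, the line would simply be uncommented again:

    # broker default, per the comment above:
    #group.initial.rebalance.delay.ms=3000
    # previous dev/test-friendly override:
    #group.initial.rebalance.delay.ms=0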

log4j.properties: |-
# Unspecified loggers and loggers with additivity=true output to server.log and stdout
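
The excerpt is truncated here. As a sketch of the pattern the comment describes, modeled on the stock Kafka log4j.properties (appender names and layouts below are assumptions, not this file's actual content), the root logger fans out to a console appender and a rolling server.log appender:

    log4j.rootLogger=INFO, stdout, kafkaAppender
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
    log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
    log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
    log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

Loggers without an explicit logger entry inherit the root logger, hence "unspecified loggers ... output to server.log and stdout".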