
[WIP] prelim set-up of resources to test kafka-connect source/sink demo #69


Closed
wants to merge 1 commit into from

Conversation


@Analect Analect commented Oct 3, 2017

@solsson
In an attempt to take your addon-connect branch a step further, I branched to connect-test-debezium (this PR) in order to test the set-up in this blog, which uses a Debezium source connector for MySQL and a Confluent JDBC sink connector.

I created a basic helm chart outside this repo to test initially, based on this docker-compose file and essentially using the various Debezium images, which themselves are built on top of the Confluent ones, as far as I understand. This appeared to work well in terms of getting changes in MySQL flowing through to Postgres.
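For reference, the source/sink pairing from that blog boils down to two connector configurations posted to the Connect REST API. The specific values below (hostnames, credentials, topic and database names) are illustrative for a docker-compose-style setup, not copied from my chart:

```json
{
  "name": "mysql-source",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.whitelist": "inventory",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```

and for the Confluent JDBC sink, unwrapping the Debezium change-event envelope before writing rows:

```json
{
  "name": "jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "dbserver1.inventory.customers",
    "connection.url": "jdbc:postgresql://postgres:5432/inventory?user=postgresuser&password=postgrespw",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.UnwrapFromEnvelope",
    "auto.create": "true",
    "insert.mode": "upsert",
    "pk.mode": "record_value",
    "pk.fields": "id"
  }
}
```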

I then tried to get things working with the same resources in here (although not structured as a helm chart) using your pre-existing set-up for kafka, zookeeper, rest and schema-registry. However, the connect image is based on the Dockerfile in the PR, which itself is a replica of this one.

Initially I was getting Exception in thread "main" org.apache.kafka.common.config.ConfigException: Invalid value tcp://10.7.245.223:80 for configuration rest.port: Not a number of type INT (see below): somehow that tcp address was ending up in the environment variable KAFKA_PORT and being applied as rest.port. Adding a REST_PORT key/value in the connect-deployment.yaml seems to have overcome that problem.
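For context: Kubernetes injects Docker-link-style variables (e.g. KAFKA_PORT=tcp://&lt;cluster-ip&gt;:&lt;port&gt;) for every Service in the namespace, which is presumably where that value came from, and the image's env-to-properties translation then picked it up. The workaround looks roughly like the following sketch (container/image names illustrative, not the exact manifest):

```yaml
# Sketch of the REST_PORT workaround in connect-deployment.yaml.
# A Service named "kafka" causes Kubernetes to inject KAFKA_PORT=tcp://10.7.245.223:80,
# which the image's env-to-properties mapping presumably interprets as rest.port.
# Setting REST_PORT explicitly yields a valid integer for that setting.
spec:
  template:
    spec:
      containers:
      - name: connect
        image: connect-image-built-from-this-pr   # illustrative
        env:
        - name: REST_PORT
          value: "8083"
```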

2017-10-02 22:42:58,913 INFO   ||  Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter'   [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
2017-10-02 22:42:58,913 INFO   ||  Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey'   [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader]
Exception in thread "main" org.apache.kafka.common.config.ConfigException: Invalid value tcp://10.7.245.223:80 for configuration rest.port: Not a number of type INT
	at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:713)
	at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:460)
	at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:453)
	at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
	at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:75)
	at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:197)
	at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<init>(DistributedConfig.java:289)
	at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:65)

However, as per this gist, something is still causing the connector to crash, and I was wondering if you might have any insights. It could be down to the connector image (built on a Debezium image with a Confluent plugin/connector on board) somehow not sitting well with your solsson images. Would you have any thoughts on what might be going on here? Thanks.


solsson commented Oct 3, 2017

Haven't got time to test on my own right now, so I'll jump to the conclusion that this is a case in point (of the argument I made here) that using .properties files is more transparent than using env vars to override property values :)
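To make that concrete, a checked-in worker config pins these values visibly instead of leaving them to whatever the environment happens to contain (a sketch with illustrative values, not the repo's actual file):

```properties
# connect-distributed.properties (illustrative values)
bootstrap.servers=kafka:9092
group.id=connect
rest.port=8083
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
```

Had rest.port lived in a file like this, no injected KAFKA_PORT could have silently replaced it.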

Doesn't the log show the effective configuration, prior to the stuff in the gist? Both Kafka and Confluent's components tend to print it, and it is highly useful. For example, from the KSQL experiment I see:

[2017-10-03 07:58:55,059] INFO KsqlRestConfig values: 
	metric.reporters = []
	ksql.command.topic.suffix = commands
	ssl.client.auth = false
...

solsson added a commit that referenced this pull request Dec 1, 2017
rather than a couple of sentences. In response to #103.

Regarding Kafka Streams I no longer think we need examples,
because any dockerized streams application can run as a deployment.

An example of Kafka Connect would be useful, in particular
the combination of a custom image (or one with stock connectors)
and a Connect cluster manifest.
This is tracked in #69, but not the relation to
https://github.com/solsson/dockerfiles/tree/master/connect-*

KSQL (#68) is highly interesting.

solsson commented Nov 28, 2018

Please reopen if this is still relevant. I'm going through PRs for v5.0, i.e. Kafka 2.1 on JDK 11.

@solsson solsson closed this Nov 28, 2018