WorkerSinkTask not committing offsets (potential bug)

2017-09-07 Thread Shrijeet Paliwal
*Kafka Version: 0.10.2.1* Hi, I am running a custom connector (in distributed mode) and noticed that one of the partitions has its lag increasing consistently, although it is assigned to a Connect worker. Log messages in the connect log follow: [DEBUG] 2017-09-07 14:32:54,572 runtime.WorkerSinkTask onPa

StackOverflowError for Connect WorkerTask

2017-09-07 Thread Vladoiu Catalin
I am trying to use Confluent Platform 3.3.0 and the S3-Connector and I get a StackOverflowError error: java.lang.StackOverflowError at java.util.HashMap.hash(HashMap.java:338) at java.util.LinkedHashMap.get(LinkedHashMap.java:440) at org.apache.avro.JsonProperties.getJsonProp(JsonProperties.java:1

Kafka will not process SIGTERM if it's trying to connect to Zookeeper

2017-09-07 Thread Aragao, Andre Augusto de Oliveira (Andre)
Apparently, Kafka ignores the SIGTERM signal if it's trying to connect to Zookeeper and can't reach it. I can end it normally with kill -s TERM $PID under normal operation, but not when it's trying to reconnect. I know it's a small issue that will almost never happen, but it would be nice if it coul

SQL Server 2012 (or Later) Connectors for Kafka Connect

2017-09-07 Thread M. Manna
Hi All, We are running a PoC that consists of some DB-level synchronisation between two servers. The idea is that we will synchronise a set of tables either at a certain interval or just using initial settings from the Kafka Connect API. Assuming that Connect is the correct route for us, does anyone

Fwd: StackOverflowError for Connect WorkerTask

2017-09-07 Thread Vladoiu Catalin
Hi guys, I am trying to use Confluent Platform 3.3.0 and the S3-Connector and I get a StackOverflowError error: java.lang.StackOverflowError at java.util.HashMap.hash(HashMap.java:338) at java.util.LinkedHashMap.get(LinkedHashMap.java:440) at org.apache.avro.JsonProperties.getJsonProp(JsonPropert

Re: Reduce Kafka Client logging

2017-09-07 Thread Raghav
Hi Viktor, Can you please share the log4j config snippet that I should use? My Java code's current log4j looks like this. How should I add this new entry that you mentioned? Thanks. log4j.rootLogger=INFO, STDOUT log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender log4j.appender.STDOUT.layout=

Re: Reduce Kafka Client logging

2017-09-07 Thread Viktor Somogyi
Hi Raghav, I think it is enough to raise the logging level of org.apache.kafka.clients.producer.ProducerConfig to WARN in log4j. Also, I'd like to mention that if possible, you shouldn't recreate the Kafka producer each time. The protocol is designed for long-lived connections, and recreating the connectio
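
A minimal log4j.properties sketch of the suggestion above. The logger name matches the real org.apache.kafka.clients.producer.ProducerConfig class; the appender name and layout pattern are illustrative placeholders, not taken from Raghav's actual config:

```
# Existing root logger and console appender (illustrative names)
log4j.rootLogger=INFO, STDOUT
log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
log4j.appender.STDOUT.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n

# Suppress the INFO-level config dump logged each time a producer is created
log4j.logger.org.apache.kafka.clients.producer.ProducerConfig=WARN

# Or, more broadly, quiet all Kafka client logging:
# log4j.logger.org.apache.kafka=WARN
```

Because log4j loggers inherit levels hierarchically, the more specific ProducerConfig entry overrides the INFO root level only for that class, leaving the rest of the application's logging untouched.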

Re: [VOTE] 0.11.0.1 RC0

2017-09-07 Thread Magnus Edenhill
+1 (non-binding) Verified with the librdkafka regression test suite 2017-09-06 11:52 GMT+02:00 Damian Guy : > Resending as I wasn't part of the kafka-clients mailing list > > On Tue, 5 Sep 2017 at 21:34 Damian Guy wrote: > > > Hello Kafka users, developers and client-developers, > > > > This is the