Kafka Streams Transformer: context.forward() from different thread

2017-10-10 Thread Murad Mamedov
Hi, here is the question:

My Transformer's transform() implementation starts some processing
asynchronously, i.e. transform() returns null. Then, once the
asynchronous processing completes in another thread, is it correct to
call context.forward() from that thread?

Thanks in advance


Re: NPE on ConsumerRecords$ConcatenatedIterable$1.makeNext() while iterating records

2017-10-10 Thread Michael Keinan
Hi
So far I was unable to reproduce. I will try again.
The link you pasted gets me to the site but I can’t create an issue - how can I
do it?

Michael





On Oct 9, 2017, at 12:38 PM, Manikumar <manikumar.re...@gmail.com> wrote:

Hi,

Can you reproduce the error? Is it happening at the same offset every time?
Try to reproduce with the console-consumer tool.

You can raise a JIRA issue here:
https://issues.apache.org/jira/projects/KAFKA

On Mon, Oct 9, 2017 at 3:00 PM, Michael Keinan 
wrote:

Thank you for your response.
No consumer interceptor involved

Michael




On Oct 8, 2017, at 7:04 PM, Ted Yu <yuzhih...@gmail.com> wrote:

Was there any consumer interceptor involved ?

Cheers

On Sun, Oct 8, 2017 at 6:29 AM, Michael Keinan <micha...@capitolis.com>
wrote:

Hi
Using Kafka 0.10.2.0
I get an NPE while iterating over the records returned by the poll method.
- Any idea where it comes from?
- How can I open an issue with the Kafka team?

Stacktrace:
java.lang.NullPointerException
at org.apache.kafka.clients.consumer.ConsumerRecords$ConcatenatedIterable$1.makeNext(ConsumerRecords.java:112)
at org.apache.kafka.clients.consumer.ConsumerRecords$ConcatenatedIterable$1.makeNext(ConsumerRecords.java:101)
at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
at org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
at com.capitolis.messagespersistency.KafkaListener.listen(KafkaListener.java:104)

My code



while (true) {
  ConsumerRecords<String, String> records = consumer.poll(timeout);

  // NPE is thrown while iterating here
  for (ConsumerRecord<String, String> record : records) {
    String value = record.value();
  }
}






Michael Keinan












Re: NPE on ConsumerRecords$ConcatenatedIterable$1.makeNext() while iterating records

2017-10-10 Thread Manikumar
Sign up for a JIRA account and log in to create an issue.

On Tue, Oct 10, 2017 at 1:50 PM, Michael Keinan 
wrote:



Re: Consumer service that supports retry with exponential backoff

2017-10-10 Thread Michal Michalski
Hi John,

It doesn't seem like you care about ordering (since, if I understood you
correctly, you're using multiple "fallback" topics that are processed in
parallel), but the alternative would be to implement the backoff using the
same topic and consumer. We're using the "pausing" feature of the consumer
for that purpose (
https://kafka.apache.org/0102/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#pause(java.util.Collection)
) to make sure that we maintain ordering on the partitions with events
that cannot be processed immediately (we pause them while waiting, e.g., for
the external service to become available again, which is fine for us),
while other partitions can still make progress normally.
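
Roughly like this (a simplified sketch, not our production code; process(...) and
the flat backoffMs are placeholders, and a per-partition attempt counter would
give you the exponential part):

import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;

public class PausingRetryLoop {

    // Partitions currently backing off, mapped to the time at which to resume them.
    private final Map<TopicPartition, Long> pausedUntil = new HashMap<>();

    public void run(Consumer<String, String> consumer, long backoffMs) {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);

            for (TopicPartition tp : records.partitions()) {
                for (ConsumerRecord<String, String> record : records.records(tp)) {
                    try {
                        process(record); // placeholder for the real handler
                    } catch (RuntimeException transientError) {
                        // Rewind to the failed record and pause only this partition;
                        // other partitions keep making progress and ordering is kept.
                        consumer.seek(tp, record.offset());
                        consumer.pause(Collections.singleton(tp));
                        pausedUntil.put(tp, System.currentTimeMillis() + backoffMs);
                        break;
                    }
                }
            }

            // Resume partitions whose backoff has expired.
            Iterator<Map.Entry<TopicPartition, Long>> it = pausedUntil.entrySet().iterator();
            while (it.hasNext()) {
                Map.Entry<TopicPartition, Long> entry = it.next();
                if (System.currentTimeMillis() >= entry.getValue()) {
                    consumer.resume(Collections.singleton(entry.getKey()));
                    it.remove();
                }
            }
        }
    }

    private void process(ConsumerRecord<String, String> record) {
        // execute the command; throw on a transient error to trigger the backoff above
    }
}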

M.

On 9 October 2017 at 22:41, John Walker  wrote:

> I have a pair of services.  One dispatches commands to the other for
> processing.
>
> My consumer sometimes fails to execute commands as a result of transient
> errors.  To deal with this, commands are retried after an exponentially
> increasing delay up to a maximum of 4 times.  (Delays: 1hr, 2hr, 4hr, 8hr.)
>   What's the standard way to set up something like this using Kafka?
>
> The only solution I've found so far is to set up 5 topics (main_topic,
> delayed_1hr, delayed_2hr, delayed_4hr,delayed_8hr), and then have pollers
> that poll each of these topics, enforce delays, and escalate messages from
> one topic to another if errors occur.
>
> Thanks in advance for any pointers you guys can give me,
>
> -John
>


Re: Kafka Streams Transformer: context.forward() from different thread

2017-10-10 Thread Damian Guy
Hi,
No, context.forward() always needs to be called from the StreamThread. If
you call it from another thread the behaviour is undefined and in most
cases will be incorrect, likely resulting in an exception.
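
A common workaround is to let the async work park its results in a thread-safe
queue and forward them from punctuate(), which runs on the stream thread. A
minimal sketch against the 0.10.2/0.11 Transformer API (startAsyncWork(...) is
hypothetical, and note that punctuation in these versions is stream-time driven,
so it only fires while new records keep arriving):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

public class AsyncForwardingTransformer implements Transformer<String, String, KeyValue<String, String>> {

    // Results completed on other threads are parked here until the stream thread picks them up.
    private final Queue<KeyValue<String, String>> completed = new ConcurrentLinkedQueue<>();
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        context.schedule(1000); // request punctuate() roughly every second
    }

    @Override
    public KeyValue<String, String> transform(String key, String value) {
        startAsyncWork(key, value); // hypothetical: adds to 'completed' when done
        return null;                // nothing to forward yet
    }

    @Override
    public KeyValue<String, String> punctuate(long timestamp) {
        // Runs on the stream thread, so calling context.forward() here is safe.
        KeyValue<String, String> result;
        while ((result = completed.poll()) != null) {
            context.forward(result.key, result.value);
        }
        return null;
    }

    @Override
    public void close() { }

    private void startAsyncWork(String key, String value) {
        // submit to an executor; on completion: completed.add(KeyValue.pair(key, processedValue));
    }
}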

On Tue, 10 Oct 2017 at 09:04 Murad Mamedov  wrote:



Re: NPE on ConsumerRecords$ConcatenatedIterable$1.makeNext() while iterating records

2017-10-10 Thread Ismael Juma
Can you please try with a newer version? Either 0.10.2.1 or 0.11.0.1?

Ismael

On Sun, Oct 8, 2017 at 2:29 PM, Michael Keinan 
wrote:



Incorrect consumer offsets after broker restart 0.11.0.0

2017-10-10 Thread Phil Luckhurst
We have a Kafka broker we use for testing that we have recently updated from 
0.9.0.1 to 0.11.0.0 and our java consumer is built using the 0.11.0.0 client. 
The consumers manually commit offsets and are consuming messages as expected 
since the upgrade. If we restart the consumers they fetch the previously 
committed offset from the broker and restart processing new messages as 
expected. Kafka Manager reports the offsets we expect to see. However, if we 
restart the broker the consumer receives an old offset from the broker and we 
can end up re-processing several days' worth of messages.

We have identified the __consumer_offsets partition where the offsets are being
stored and if we use the console consumer to consume from that partition we see 
a new message appear each time our consumer commits its offsets. The commands 
we use are:

echo "exclude.internal.topics=false" > /tmp/consumer.config
/opt/kafka/bin/kafka-console-consumer.sh --consumer.config /tmp/consumer.config 
--formatter 
"kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter" 
--bootstrap-server localhost:9092 --topic __consumer_offsets 
--consumer-property group.id=test-offsets-consumer-group --partition 43

And the output shows our consumer group and topic partition for each commit the
consumer sends; the reported offset is correct.

[ta-eng-cob1-tstat-events,ta-eng-cob1-ayla,0]::[OffsetMetadata[1833602,NO_METADATA],CommitTime
 1507632714328,ExpirationTime 1507719114328]

We also used the following command to check that these commits also trigger a 
new record to be written to the latest __consumer_offset_43 partition log file 
and we see a new record added to the partition log file every time the consumer 
commits offsets.

/opt/kafka/bin/kafka-run-class.sh kafka.tools.DumpLogSegments --print-data-log 
--files /var/lib/kafka/__consumer_offsets-43/03780553.log

baseOffset: 4006387 lastOffset: 4006387 baseSequence: 0 lastSequence: 0 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 16 isTransactional: 
false position: 30359857 CreateTime: 1507632866696 isvalid: true size: 147 
magic: 2 compresscodec: NONE crc: 1175188994

Everything appears to be working as expected until we restart the broker which 
then returns an old offset to the consumer.

For example, in the consumer debug output we see the last commit before the 
broker restart is 1828033

2017-10-09 11:55:16,056 DEBUG [pool-7-thread-2] 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: Group 
ta-eng-cob1-tstat-events committed offset 1828033 for partition 
ta-eng-cob1-ayla-0

After we restart the broker we see the consumer receive an old offset of 
1791273 from the broker.

2017-10-09 11:57:22,735 DEBUG [pool-7-thread-2] 
org.apache.kafka.clients.consumer.internals.Fetcher: Resetting offset for 
partition ta-eng-cob1-ayla-0 to the committed offset 1791273

If we just restart the consumer the fetch returns the correct offset 
information from the broker.

2017-10-09 11:52:25,984 DEBUG [pool-7-thread-2] 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: Group 
ta-eng-cob1-tstat-events committed offset 1828015 for partition 
ta-eng-cob1-ayla-0
2017-10-09 11:53:21,658 DEBUG [pool-7-thread-2] 
org.apache.kafka.clients.consumer.internals.Fetcher: Resetting offset for 
partition ta-eng-cob1-ayla-0 to the committed offset 1828015

There don't appear to be any errors in the broker logs to indicate a problem, 
so the question is what is making the broker return the incorrect offset when 
it is restarted?
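
For reference, here is a small sketch (not our actual consumer code) that asks the
broker directly for the group's committed offset via consumer.committed(); the group
and topic names are the ones from the log lines above.

import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittedOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "ta-eng-cob1-tstat-events");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("ta-eng-cob1-ayla", 0);
            // Fetches the offset the group coordinator currently has stored for this group.
            OffsetAndMetadata committed = consumer.committed(tp);
            System.out.println("Committed offset for " + tp + ": "
                    + (committed == null ? "none" : committed.offset()));
        }
    }
}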

Thanks,
Phil Luckhurst



kafka-streams dying if can't create internal topics

2017-10-10 Thread Dmitriy Vsekhvalnov
Hi all,

still doing disaster testing with Kafka cluster, when crashing several
brokers at once sometimes we observe exception in kafka-stream app about
inability to create internal topics:

[WARN ] [org.apache.kafka.streams.processor.internals.InternalTopicManager]
[Could not create internal topics: Found only 2 brokers,  but replication
factor is 3. Decrease replication factor for internal topics via
StreamsConfig parameter "replication.factor" or add more brokers to your
cluster. Retry #2]
[WARN ] [org.apache.kafka.streams.processor.internals.InternalTopicManager]
[Could not create internal topics: Found only 2 brokers,  but replication
factor is 3. Decrease replication factor for internal topics via
StreamsConfig parameter "replication.factor" or add more brokers to your
cluster. Retry #3]
[WARN ] [org.apache.kafka.streams.processor.internals.InternalTopicManager]
[Could not create internal topics: Found only 2 brokers,  but replication
factor is 3. Decrease replication factor for internal topics via
StreamsConfig parameter "replication.factor" or add more brokers to your
cluster. Retry #4]
[INFO ] [org.apache.kafka.streams.processor.internals.StreamThread]
[stream-thread [Shutting down]

The problem is that the number of retries seems to be hardcoded to 5, via the
InternalTopicManager.MAX_TOPIC_READY_TRY constant.

Is there any way to make this configurable? It's really not nice that the app
shuts down instead of just retrying (potentially with exponential backoff)
until all brokers are available again.

Should we open a feature/issue request?

Thank you.
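
For reference, here is a minimal sketch of the only related knobs we could find
(application id, bootstrap servers, and the logging are placeholders): the
replication.factor used for the internal topics, and an uncaught-exception handler
that at least surfaces the stream-thread death so the app can alert or restart itself.

import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class StreamsStartup {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");  // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        // Internal (repartition/changelog) topics are created with this replication factor.
        props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);

        KStreamBuilder builder = new KStreamBuilder();
        // ... build the topology ...

        KafkaStreams streams = new KafkaStreams(builder, props);
        // Fires when a stream thread dies, e.g. after the internal-topic retries are exhausted,
        // so the application can log, alert, or schedule its own restart.
        streams.setUncaughtExceptionHandler((thread, throwable) ->
                System.err.println("Stream thread " + thread.getName() + " died: " + throwable));
        streams.start();
    }
}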


Re: Incorrect consumer offsets after broker restart 0.11.0.0

2017-10-10 Thread Elyahou Ittah
It is a known bug, fixed in 0.11.0.1

On Oct 10, 2017 15:20, "Phil Luckhurst"  wrote:



RE: Incorrect consumer offsets after broker restart 0.11.0.0

2017-10-10 Thread Phil Luckhurst
Thanks, we'll try upgrading to 0.11.0.1 and see if it fixes the problem.

Is this the bug you are referring to?
https://issues.apache.org/jira/browse/KAFKA-5600

-Original Message-
From: Elyahou Ittah [mailto:elyaho...@fiverr.com] 
Sent: 10 October 2017 13:41
To: users@kafka.apache.org
Subject: Re: Incorrect consumer offsets after broker restart 0.11.0.0

It is a known bug, fixed in 0.11.0.1

On Oct 10, 2017 15:20, "Phil Luckhurst"  wrote:



Getting started with stream processing

2017-10-10 Thread RedShift

Hi all

Complete noob with regards to stream processing, this is my first attempt. I'm 
going to try and explain my thought process, here's what I'm trying to do:

I would like to create a sum of "load" for every hour, for every device.

Incoming stream of data:

{"deviceId":"1234","data":{"tss":1507619473,"load":9}}
{"deviceId":"1234","data":{"tss":1507619511,"load":8}}
{"deviceId":"1234","data":{"tss":1507619549,"load":5}}
{"deviceId":"9876","data":{"tss":1507619587,"load":8}}
{"deviceId":"1234","data":{"tss":1507619625,"load":8}}
{"deviceId":"1234","data":{"tss":1507619678,"load":8}}
{"deviceId":"9876","data":{"tss":1507619716,"load":8}}
{"deviceId":"9876","data":{"tss":1507619752,"load":9}}
{"deviceId":"1234","data":{"tss":1507619789,"load":8}}
{"deviceId":"9876","data":{"tss":1507619825,"load":8}}
{"deviceId":"9876","data":{"tss":1507619864,"load":8}}

Where
deviceId: unique ID for every device, which also doubles as the key I use
tss: UNIX timestamp in seconds
load: load indication

Expected outcome something like this:
deviceId: 1234, time: 2017-10-01 18:00, load: 25
deviceId: 1234, time: 2017-10-01 19:00, load: 13
deviceId: 9876, time: 2017-10-01 18:00, load: 33
deviceId: 9876, time: 2017-10-01 19:00, load: 5
...


So I started:

  SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH"); // Important bit here, I use this to construct a grouping key
  KStreamBuilder builder = new KStreamBuilder();
  KStream<String, JsonObject> data = builder.stream("telemetry");

We need to group by device, so:

  KGroupedStream<String, JsonObject> grouped = data.groupBy((k, v) -> v.get("deviceId").asString());

But now I can't group the data again by date. So I made a combined grouping key 
like this:

  KGroupedStream<String, JsonObject> grouped = data.groupBy(
      (k, v) ->
      {
          Date dt = Date.from(Instant.ofEpochSecond(v.get("data").asObject().get("tss").asLong()));
          return v.get("deviceId").asString() + dateFormat.format(dt);
      }
  );

Now I need to reduce the groups to sum the load:

  grouped.reduce(new Reducer<JsonObject>()
  {
      @Override
      public JsonObject apply(JsonObject v1, JsonObject v2)
      {
          return null;
      }
  });

But that's a problem. I'm supposed to sum "load" here, but I also have to return a 
JsonObject. That doesn't seem right. So now I figure I have to extract the "load" before 
the reducer, but a KGroupedStream doesn't have a map() function.

Back to the drawing board. So I figure let's extract the "load" and grouping 
key first:

  KStream map = data.map(new KeyValueMapper<String, JsonObject, KeyValue>()
  {
      @Override
      public KeyValue apply(String s, JsonObject v)
      {
          Date dt = Date.from(Instant.ofEpochSecond(v.get("data").asObject().get("tss").asLong()));
          String key = v.get("deviceId").asString() + dateFormat.format(dt);

          return new KeyValue<>(
              key, v.get("data").asObject().get("load").asInt()
          );
      }
  });

But now I'm left with a raw KStream; I've lost my types. If I change it to
KStream<String, Integer>, the compiler has this to say:

Error:(35, 54) java: incompatible types: inference variable KR has incompatible 
bounds
equality constraints: java.lang.String
lower bounds: java.lang.Object

Makes sense, as there's no guarantee that a random given object is a string. But how do I preserve the types then?


I'm also unsure about the way I'm grouping things. It seems to me I have to group by
deviceId and then use windowing to get the "per hour" part, but I'm even
more clueless about how and where that fits in. For some reason I also think a KTable should be
the final result?

Thanks,

Best regards,


Custom converter with Kafka Connect ?

2017-10-10 Thread Jehan Bruggeman
Hello,

I'm trying to use a custom converter with Kafka Connect and I cannot seem
to get it right. I'm hoping someone has experience with this and could help
me figure it out!


Initial situation


- my custom converter's class path is 'custom.CustomStringConverter'.

- to avoid any mistakes, my custom converter is currently just a copy/paste
of the pre-existing StringConverter (of course, this will change when I'll
get it to work).
https://github.com/apache/kafka/blob/trunk/connect/api/src/main/java/org/apache/kafka/connect/storage/StringConverter.java

- I have a kafka connect cluster of 3 nodes, The nodes are running
confluent's official docker images ( confluentinc/cp-kafka-connect:3.3.0 ).

- Each node is configured to load a jar with my converter in it (using a
docker volume).



What happens?


When the connectors start, they correctly load the jars and find the custom
converter. Indeed, this is what I see in the logs :

[2017-10-10 13:06:46,274] INFO Registered loader:
PluginClassLoader{pluginLocation=file:/opt/custom-connectors/custom-converter-1.0-SNAPSHOT.jar}
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-10-10 13:06:46,274] INFO Added plugin 'custom.CustomStringConverter'
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[...]
[2017-10-10 13:07:43,454] INFO Added aliases 'CustomStringConverter' and
'CustomString' to plugin 'custom.CustomStringConverter'
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)

I then POST a JSON config to one of the connector nodes to create my
connector :

{
  "name": "hdfsSinkCustom",
  "config": {
"topics": "yellow",
"tasks.max": "1",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "custom.CustomStringConverter",
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"hdfs.url": "hdfs://hdfs-namenode:8020/hdfs-sink",
"topics.dir": "yellow_storage",
"flush.size": "1",
"rotate.interval.ms": "1000"
  }
}

And receive the following reply :

{
   "error_code": 400,
   "message": "Connector configuration is invalid and contains the
following 1 error(s):\nInvalid value custom.CustomStringConverter for
configuration value.converter: Class custom.CustomStringConverter could not
be found.\nYou can also find the above list of errors at the endpoint
`/{connectorType}/config/validate`"
}



If I try running Kafka Connect standalone, the error message is the same.

Has anybody faced this already? What am I missing?

Many thanks to anybody reading this !

Jehan


Add Kafka user list

2017-10-10 Thread 丁晓坤
Add Kafka user list


Re: Add Kafka user list

2017-10-10 Thread Matthias J. Sax
If you want to subscribe, follow the instructions here:
http://kafka.apache.org/contact

On 10/10/17 2:07 AM, shawnding(丁晓坤) wrote:
> Add Kafka user list
> 





Re: Serve interactive queries from standby replicas

2017-10-10 Thread Matthias J. Sax
Thanks!

On 10/9/17 1:27 PM, Stas Chizhov wrote:
> Hi,
> 
> I have created a ticket: https://issues.apache.org/jira/browse/KAFKA-6031
> 
> Best regards,
> Stas.
> 
> 2017-10-06 23:39 GMT+02:00 Guozhang Wang :
> 
>> Hi Stas,
>>
>> Would you mind creating a JIRA for this functionality request so that we
>> won't forget about it and it doesn't get dropped on the floor?
>>
>>
>> Guozhang
>>
>> On Fri, Oct 6, 2017 at 1:10 PM, Stas Chizhov  wrote:
>>
>>> Thank you!
>>>
>>> I guess eventually consistent reads might be a reasonable trade-off if you
>>> can get the ability to serve reads without downtime in some cases.
>>>
>>> By the way, are standby replicas just extra consumers/processors of the input
>>> topics? Or is there some custom protocol for syncing the state?
>>>
>>>
>>>
>>> On Fri, 6 Oct 2017 at 20:03, Matthias J. Sax wrote:
>>>
 No, that is not possible.

 Note: standby replicas might "lag" behind the active store, and thus,
 you would get different results if querying standby replicas would be
 supported.

 We might add this functionality at some point though -- but there are
>> no
 concrete plans atm. Contributions are always welcome of course :)


 -Matthias

 On 10/6/17 4:18 AM, Stas Chizhov wrote:
> Hi
>
> Is there a way to serve read read requests from standby replicas?
> StreamsMeatadata does not seem to provide standby end points as far
>> as
>>> I
> can see.
>
> Thank you,
> Stas
>


>>>
>>
>>
>>
>> --
>> -- Guozhang
>>
> 





Re: kafka-streams dying if can't create internal topics

2017-10-10 Thread Matthias J. Sax
Yes, please file a Jira. We need to fix this. Thanks a lot!

-Matthias

On 10/10/17 5:24 AM, Dmitriy Vsekhvalnov wrote:





Re: Getting started with stream processing

2017-10-10 Thread Matthias J. Sax
Hi,

If the aggregation returns a different type, you can use .aggregate(...)
instead of .reduce(...).

Also, for your time-based computation, did you consider using windowing?
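
Roughly along these lines (just a sketch against the 0.10.2/0.11 DSL; JsonObject and
its accessors are from your JSON library, and serde setup plus a TimestampExtractor
that reads the embedded tss field, so windows use event time, are omitted):

import java.util.concurrent.TimeUnit;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class HourlyLoad {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        // JsonObject is your JSON type (import and value serde omitted here).
        KStream<String, JsonObject> data = builder.stream("telemetry");

        // key = deviceId, value = load
        KStream<String, Integer> loads = data.map((key, v) -> new KeyValue<>(
                v.get("deviceId").asString(),
                v.get("data").asObject().get("load").asInt()));

        KTable<Windowed<String>, Integer> hourlyLoad = loads
                .groupByKey(Serdes.String(), Serdes.Integer())
                .aggregate(
                        () -> 0,                              // initializer
                        (deviceId, load, sum) -> sum + load,  // adder
                        TimeWindows.of(TimeUnit.HOURS.toMillis(1)),
                        Serdes.Integer(),
                        "hourly-load-store");

        // hourlyLoad's key is Windowed<String>: the deviceId plus the window start/end,
        // which gives you the "deviceId + hour" combination without concatenated string keys.
    }
}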


-Matthias

On 10/10/17 6:27 AM, RedShift wrote:




Re: kafka-streams dying if can't create internal topics

2017-10-10 Thread Dmitriy Vsekhvalnov
Hi Matthias,

Thanks. Would you mind pointing me to the correct Jira URL where I can file an
issue?

Thanks again.

On Tue, Oct 10, 2017 at 8:38 PM, Matthias J. Sax 
wrote:

> Yes, please file a Jira. We need to fix this. Thanks a lot!
>
> -Matthias
>
> On 10/10/17 5:24 AM, Dmitriy Vsekhvalnov wrote:
> > Hi all,
> >
> > still doing disaster testing with Kafka cluster, when crashing several
> > brokers at once sometimes we observe exception in kafka-stream app about
> > inability to create internal topics:
> >
> > [WARN ] [org.apache.kafka.streams.processor.internals.
> InternalTopicManager]
> > [Could not create internal topics: Found only 2 brokers,  but replication
> > factor is 3. Decrease replication factor for internal topics via
> > StreamsConfig parameter "replication.factor" or add more brokers to your
> > cluster. Retry #2]
> > [WARN ] [org.apache.kafka.streams.processor.internals.
> InternalTopicManager]
> > [Could not create internal topics: Found only 2 brokers,  but replication
> > factor is 3. Decrease replication factor for internal topics via
> > StreamsConfig parameter "replication.factor" or add more brokers to your
> > cluster. Retry #3]
> > [WARN ] [org.apache.kafka.streams.processor.internals.
> InternalTopicManager]
> > [Could not create internal topics: Found only 2 brokers,  but replication
> > factor is 3. Decrease replication factor for internal topics via
> > StreamsConfig parameter "replication.factor" or add more brokers to your
> > cluster. Retry #4]
> > [INFO ] [org.apache.kafka.streams.processor.internals.StreamThread]
> > [stream-thread [Shutting down]
> >
> > The problem is that number of retries seems to be hardcoded to 5.
> > From InternalTopicManager.MAX_TOPIC_READY_TRY constant.
> >
> > Any way to make to configurable? It's really not nice that app is
> shutting
> > down, instead of just re-trying (potentially with exponential backoff)
> > until all broker are available back.
> >
> > Should we open feature/issue request?
> >
> > Thank you.
> >
>
>


Re: kafka-streams dying if can't create internal topics

2017-10-10 Thread Matthias J. Sax
https://issues.apache.org/jira/browse/KAFKA-1?jql=project%20%3D%20KAFKA

Or just put "apache kafka jira" into your favorite search engine...


-Matthias


On 10/10/17 10:48 AM, Dmitriy Vsekhvalnov wrote:





Re: Kafka 0.10: Two Kafka Consumers, two different topics, same group ID - commit exception occurs

2017-10-10 Thread Matthias J. Sax
Why do you use the same groupId? That does not sound correct.

You would use a consumer group to share the load of a single topic based on
partitions, i.e., if a topic has multiple partitions, different partitions
are processed by different consumers within the same group.

But in your case, the second process reads the data the first process writes.
This sounds like a pattern for two different consumer groups.
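
For illustration only (bootstrap servers, group names, and topics below are
placeholders), giving each process its own group.id looks like:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SeparateGroups {
    static KafkaConsumer<String, String> consumerFor(String groupId, String topic) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", groupId);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList(topic));
        return consumer;
    }

    public static void main(String[] args) {
        // Each process gets its own group, so one process joining its group
        // never triggers a rebalance (and CommitFailedException) in the other.
        KafkaConsumer<String, String> process1 = consumerFor("process-1-group", "topicA");
        KafkaConsumer<String, String> process2 = consumerFor("process-2-group", "topicB");
    }
}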


-Matthias

On 10/1/17 9:02 AM, A Gardner wrote:
> Hi there
> 
> [ Running Kafka 0.10 and using Java api ]
> 
> Please could you confirm the subscription behaviour for the KafkaConsumer
> when using a group ID.
> 
> I have two separate KafkaConsumer processes both using same groupID on
> different topics A & B:
> 
> Process 1) Subscribes to Topic A with groupID = 123 and writes to topic B
> Process 2) Subscribes to Topic B with groupID = 123
> 
> If I start Process 1 (Process 2 not yet running) it functions correctly,
> consuming from Topic A and writing to topic B.
> 
> As soon as I start Process 2, and Process 1 subsequently attempts to commit
> the offsets after, say, consuming 100 messages from Topic A, it fails to
> commit the offsets with the CommitFailedException shown below.
> 
> It appears that as soon as process 2 subscribes with groupID = 123 on Topic
> B, it kicks out process 1's subscription (also on groupID = 123, but on Topic
> A); is the consumer coordinator responsible?
> 
> Incidentally, process 1 commits the offsets at regular intervals whether or
> not the offset has moved on; regardless of whether process 1 has consumed
> more messages, it still performs an asyncCommit and you get the exception
> regardless.
> 
> My understanding was that if two separate consumers both subscribing to a
> single but different topic used the same groupID, this would be ok?
> 
> Have I configured the Kafka / brokers incorrectly or is this correct
> behaviour?
> 
> Btw, if I change process 2 to use a different groupID = 789, it's fine and
> there is no rebalancing behaviour by the consumer coordinator.
> 
> Regards
> 
> 
> Alex Gardner
> 
> 
> Exception
> 
> 
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be
> completed since the group has already rebalanced and assigned the
> partitions to another member. This means that the time between subsequent
> calls to poll() was longer than the configured max.poll.interval.ms, which
> typically implies that the poll loop is spending too much time message
> processing. You can address this either by increasing the session timeout
> or by reducing the maximum size of batches returned in poll() with
> max.poll.records.
> 
> at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:770)
> at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:716)
> at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:784)
> at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:765)
> at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:186)
> at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:149)
> at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:116)
> at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:493)
> at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:322)
> at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:253)
> at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1047)
> at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
> 





Re: kafka-streams dying if can't create internal topics

2017-10-10 Thread Dmitriy Vsekhvalnov
Thanks Matthias,

here is issue: https://issues.apache.org/jira/browse/KAFKA-6047

On Tue, Oct 10, 2017 at 8:50 PM, Matthias J. Sax 
wrote:

> https://issues.apache.org/jira/browse/KAFKA-1?jql=project%20%3D%20KAFKA
>
> Or just put "apache kafka jira" into your favorite search engine...
>
>
> -Matthias
>
>
> On 10/10/17 10:48 AM, Dmitriy Vsekhvalnov wrote:
> > Hi Matthias,
> >
> > thanks. Would you mind point me to correct Jira URL where i can file an
> > issue?
> >
> > Thanks again.
> >
> > On Tue, Oct 10, 2017 at 8:38 PM, Matthias J. Sax 
> > wrote:
> >
> >> Yes, please file a Jira. We need to fix this. Thanks a lot!
> >>
> >> -Matthias
> >>
> >> On 10/10/17 5:24 AM, Dmitriy Vsekhvalnov wrote:
> >>> Hi all,
> >>>
> >>> still doing disaster testing with Kafka cluster, when crashing several
> >>> brokers at once sometimes we observe exception in kafka-stream app
> about
> >>> inability to create internal topics:
> >>>
> >>> [WARN ] [org.apache.kafka.streams.processor.internals.
> >> InternalTopicManager]
> >>> [Could not create internal topics: Found only 2 brokers,  but
> replication
> >>> factor is 3. Decrease replication factor for internal topics via
> >>> StreamsConfig parameter "replication.factor" or add more brokers to
> your
> >>> cluster. Retry #2]
> >>> [WARN ] [org.apache.kafka.streams.processor.internals.
> >> InternalTopicManager]
> >>> [Could not create internal topics: Found only 2 brokers,  but
> replication
> >>> factor is 3. Decrease replication factor for internal topics via
> >>> StreamsConfig parameter "replication.factor" or add more brokers to
> your
> >>> cluster. Retry #3]
> >>> [WARN ] [org.apache.kafka.streams.processor.internals.
> >> InternalTopicManager]
> >>> [Could not create internal topics: Found only 2 brokers,  but
> replication
> >>> factor is 3. Decrease replication factor for internal topics via
> >>> StreamsConfig parameter "replication.factor" or add more brokers to
> your
> >>> cluster. Retry #4]
> >>> [INFO ] [org.apache.kafka.streams.processor.internals.StreamThread]
> >>> [stream-thread [Shutting down]
> >>>
> >>> The problem is that number of retries seems to be hardcoded to 5.
> >>> From InternalTopicManager.MAX_TOPIC_READY_TRY constant.
> >>>
> >>> Any way to make to configurable? It's really not nice that app is
> >> shutting
> >>> down, instead of just re-trying (potentially with exponential backoff)
> >>> until all broker are available back.
> >>>
> >>> Should we open feature/issue request?
> >>>
> >>> Thank you.
> >>>
> >>
> >>
> >
>
>


Kafka cluster Error

2017-10-10 Thread Kannappan, Saravanan (Contractor)
Hello, can someone help me? The Kafka server is not starting after a reboot; the
error message is below:

[2017-10-10 22:25:25,365] INFO shutting down (kafka.server.KafkaServer)
[2017-10-10 22:25:25,374] INFO shut down completed (kafka.server.KafkaServer)
[2017-10-10 22:25:25,375] FATAL Fatal error during KafkaServerStartable 
startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server '172.24.235.155:2181,172.24.235.162:2181,172.24.235.180:2181,172.24.235.181:2181,172.24.235.185:2181' with timeout of 6000 ms
        at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1233)
        at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
        at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
        at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:79)
        at kafka.utils.ZkUtils$.apply(ZkUtils.scala:61)
        at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
        at kafka.Kafka$.main(Kafka.scala:67)
        at kafka.Kafka.main(Kafka.scala)

Thanks
Saravanan


Re: Kafka cluster Error

2017-10-10 Thread Ted Yu
Can you check the health of zookeeper quorum ?

Cheers

On Tue, Oct 10, 2017 at 3:35 PM, Kannappan, Saravanan (Contractor) <
saravanan_kannap...@comcast.com> wrote:



Re: Kafka cluster Error

2017-10-10 Thread Avinash Shahdadpuri
The error seems pretty obvious. Is your zookeeper address correct and is it
running?

On Tue, Oct 10, 2017 at 3:35 PM, Kannappan, Saravanan (Contractor) <
saravanan_kannap...@comcast.com> wrote:



Re: Kafka cluster Error

2017-10-10 Thread R Krishna
"Unable to connect"

Try pinging and running ZK cli commands on one of the ZKs from the Kafka
Broker that is failing to come up.
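
If the ZK CLI isn't installed on that host, a quick "ruok" probe over a plain socket
works too; a throwaway sketch (addresses taken from the error above), where a healthy
node answers "imok":

import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ZkHealthCheck {
    public static void main(String[] args) {
        String[] servers = {"172.24.235.155", "172.24.235.162", "172.24.235.180",
                            "172.24.235.181", "172.24.235.185"};
        for (String host : servers) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, 2181), 3000);
                OutputStream out = socket.getOutputStream();
                out.write("ruok".getBytes("UTF-8")); // ZooKeeper four-letter-word command
                out.flush();
                InputStream in = socket.getInputStream();
                byte[] buf = new byte[4];
                int read = in.read(buf);
                System.out.println(host + ": "
                        + (read > 0 ? new String(buf, 0, read, "UTF-8") : "no reply"));
            } catch (Exception e) {
                System.out.println(host + ": " + e);
            }
        }
    }
}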

On Tue, Oct 10, 2017 at 3:35 PM, Kannappan, Saravanan (Contractor) <
saravanan_kannap...@comcast.com> wrote:




-- 
Radha Krishna, Proddaturi
253-234-5657


[VOTE] 1.0.0 RC0

2017-10-10 Thread Guozhang Wang
Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 1.0.0.

It's worth noting that starting in this version we are using a different
version protocol with three digits: *major.minor.bug-fix*

Any and all testing is welcome, but the following areas are worth
highlighting:

1. Client developers should verify that their clients can produce/consume
to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
2. Performance and stress testing. Heroku and LinkedIn have helped with
this in the past (and issues have been found and fixed).
3. End users can verify that their apps work correctly with the new release.

This is a major version release of Apache Kafka. It includes 29 new KIPs.
See the release notes and release plan
(https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913)
for more details. A few feature highlights:

* Java 9 support with significantly faster TLS and CRC32C implementations
(KIP)
* JBOD improvements: disk failure only disables failed disk but not the
broker (KIP-112/KIP-113)
* Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
KIP-188, KIP-196)
* Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
and drop compatibility "Evolving" annotations

Release notes for the 1.0.0 release:
http://home.apache.org/~guozhang/kafka-1.0.0-rc0/RELEASE_NOTES.html



*** Please download, test and vote by Friday, October 13, 8pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~guozhang/kafka-1.0.0-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~guozhang/kafka-1.0.0-rc0/javadoc/

* Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc0 tag:

https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2f97bc6a9ee269bf90b019e50b4eeb43df2f1143

* Documentation:
Note the documentation can't be pushed live due to changes that will not go
live until the release. You can manually verify by downloading
http://home.apache.org/~guozhang/kafka-1.0.0-rc0/kafka_2.11-1.0.0-site-docs.tgz

* Successful Jenkins builds for the 1.0.0 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/20/


/**


Thanks,
-- Guozhang