Re: Kafka Streams backend for Apache Beam?

2016-06-26 Thread Eno Thereska
Hi Alex, There are no such plans currently. Thanks, Eno

> On 26 Jun 2016, at 17:13, Alex Glikson wrote:
>
> Hi all,
>
> I wonder whether there are plans to implement an Apache Beam backend based on Kafka Streams?
>
> Thanks,
> Alex

Re: Halting because log truncation is not allowed for topic __consumer_offsets

2016-06-26 Thread Peter Davis
Thanks James. Has anyone else seen this happen under normal operation? So far I have not figured out how to reliably recreate the issue under normal(ish) circumstances. I haven't even been able to establish the true nature of the network issues yet. The only evidence is that it happened 3 times last week in

Re: Halting because log truncation is not allowed for topic __consumer_offsets

2016-06-26 Thread James Cheng
Peter, can you add some of your observations to those JIRAs? You seem to have a good understanding of the problem. Maybe there is something that can be improved in the codebase to prevent this from happening, or reduce the impact of it. Wanny, you might want to add a "me too" to the JIRAs as we

Re: Messages delayed in jUnit test (version 0.9.0.0)

2016-06-26 Thread Rodrigo Ottero
Hello. I still could not make progress on this issue. As per Jay Kreps's recent email in the thread 'delay of producer and consumer in kafka 0.9 is too big to be accepted', I will run the *TestEndToEndLatency* test to measure my latency. But, besides that, am I doing something wrong in the code below? Thanks &

Re: Halting because log truncation is not allowed for topic __consumer_offsets

2016-06-26 Thread Peter Davis
We have seen this several times and it's quite frustrating. It seems to happen due to the fact that the leader for a partition writes to followers ahead of committing itself, especially for a topic like __consumer_offsets that is written with acks=all. If a brief network interruption occurs (a
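For context on the halt itself: a broker logs "Halting because log truncation is not allowed" when a follower discovers its log is ahead of the newly elected leader's and unclean leader election is disabled, so it refuses to truncate and shuts down instead. A hedged sketch of the broker settings that interact with this failure mode (property names as of Kafka 0.9/0.10; verify against your release):

```
# server.properties -- broker-side settings (sketch, not a recommendation)

# When false, an out-of-sync replica can never become leader; the flip side
# is that a follower whose log is ahead of the new leader halts rather than
# truncating, which is the message seen in this thread.
unclean.leader.election.enable=false

# With producers using acks=all, require this many in-sync replicas before
# a write is acknowledged.
min.insync.replicas=2

# Replication factor for the internal __consumer_offsets topic.
offsets.topic.replication.factor=3
```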

Kafka Streams backend for Apache Beam?

2016-06-26 Thread Alex Glikson
Hi all, I wonder whether there are plans to implement an Apache Beam backend based on Kafka Streams? Thanks, Alex

using Kafka Streams in connectors?

2016-06-26 Thread Alex Glikson
Hi all, Would it make sense to use Kafka Streams within Kafka Connect connectors? For example, if I need to aggregate messages in a sink connector. Or, alternatively, would it make sense to have the aggregation logic implemented using Kafka Connect, write the result into a new topic, and then h

Re: Kafka Streams reducebykey and tumbling window - need final windowed KTable values

2016-06-26 Thread Clive Cox
Following on from this thread: if I want to iterate over a KTable at the end of its hopping/tumbling time window, how can I do this myself at present? Is there a way to access these structures? If this is not possible, it would seem I need to duplicate and manage something similar to a list of win
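For readers hitting the same question: the 0.10.0 Streams API has no "window closed" callback, so the common workaround is to observe the stream of KTable updates and treat the latest update per (key, window) as the eventually final value, deduplicating downstream. A hedged sketch only; method names such as countByKey and TimeWindows.of are from the 0.10.0 API and may differ in your version, and the topic name and bootstrap address are made up:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class WindowedCountSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "windowed-count-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption

        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> events =
                builder.stream(Serdes.String(), Serdes.String(), "events"); // hypothetical topic

        // Tumbling 1-minute window; each incoming record emits an *updated*
        // count for its window -- there is no final-value hook.
        KTable<Windowed<String>, Long> counts =
                events.countByKey(TimeWindows.of("counts", 60_000L), Serdes.String());

        // Observe the update stream; a downstream consumer keeps only the
        // last value seen per (key, window) to approximate the final result.
        counts.toStream().foreach((windowedKey, count) ->
                System.out.println(windowedKey.key() + " @ "
                        + windowedKey.window().start() + " -> " + count));

        new KafkaStreams(builder, props).start();
    }
}
```

Note this never tells you a window is "done"; late records can still update a window until its retention expires, which is why deduplication has to live downstream.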

Re: Kafka producer metadata issue

2016-06-26 Thread Enrico Olivelli
Shekar, do you see any errors in the broker-side logs? IMHO it appears that you have some error on the broker, or that you are not connecting to a Kafka broker. Enrico

On Sun 26 Jun 2016 11:44 Shekar Tippur wrote:
> I added
>
> Future ret = producer.send(new ProducerRecord("test1",
> Integer.toSt

Re: Kafka producer metadata issue

2016-06-26 Thread Shekar Tippur
I added

Future ret = producer.send(new ProducerRecord("test1", Integer.toString(i), "xyz"));
ret.get();

This is the exception I see:

java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting me

Re: Kafka producer metadata issue

2016-06-26 Thread Shekar Tippur
Enrico, I didn't quite get it. Can you please elaborate? - Shekar

On Sun, Jun 26, 2016 at 12:06 AM, Enrico Olivelli wrote:
> Hi,
> I think you should call 'get' on the Future returned by 'send' or issue a
> producer.flush.
> Producer.send is async
>
> Enrico
>
> On Sun 26 Jun 2016 07:07 Shekar T

Re: Kafka producer metadata issue

2016-06-26 Thread Enrico Olivelli
Hi, I think you should call 'get' on the Future returned by 'send' or issue a producer.flush. Producer.send is async. Enrico

On Sun 26 Jun 2016 07:07 Shekar Tippur wrote:
> Another observation ..
>
> The below code produces. Can't understand this randomness :/
>
> 2 xyz
>
> 3 xyz
>
> 3 xyz
>
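Enrico's suggestion in this thread can be sketched as follows. This is a minimal example, not the poster's actual code: it assumes a broker at localhost:9092 and an existing topic "test1", and serializer class names are the standard string serializers from kafka-clients:

```java
import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SyncSendSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous: it only enqueues the record into a
            // batch and returns immediately.
            Future<RecordMetadata> future =
                    producer.send(new ProducerRecord<>("test1", "key", "xyz"));
            // Blocking on the Future makes the send effectively synchronous
            // and surfaces broker-side failures (like the TimeoutException
            // above) as an ExecutionException instead of silently dropping.
            RecordMetadata meta = future.get();
            System.out.println("offset=" + meta.offset());
        } // close() also flushes any records still buffered
    }
}
```

Calling future.get() per record trades throughput for certainty; a producer.flush() before shutdown is the lighter-weight option when per-record confirmation is not needed.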