Re: Kafka broker crash - broker id then changed

2016-09-09 Thread cs user
Coming back to this issue: it looks like it was a result of the CentOS 7 systemd cleanup task on /tmp: /usr/lib/tmpfiles.d/tmp.conf # This file is part of systemd. # # systemd is free software; you can redistribute it and/or modify it # under the terms of the GNU Lesser General Public License

Re: enhancing KStream DSL

2016-09-09 Thread Michael Noll
Ara, you have shared this code snippet:

> allRecords.branch(
>     (imsi, callRecord) -> "VOICE".equalsIgnoreCase(callRecord.getCallCommType()),
>     (imsi, callRecord) -> "DATA".equalsIgnoreCase(callRecord.getCallCommType()),
>     (imsi, callRecord) -> true
> );

T

Re: enhancing KStream DSL

2016-09-09 Thread Michael Noll
Oh, my bad. Updating the third predicate in `branch()` may not even be needed. You could simply do:

KStream[] branches = allRecords
    .branch(
        (imsi, callRecord) -> "VOICE".equalsIgnoreCase(callRecord.getCallCommType()),
        (imsi, callRecord) -> "DATA".equalsIgnoreCas
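The routing semantics of `branch()` can be shown without a running Streams topology. Below is a minimal plain-Java sketch (not the Kafka API itself): each record goes to the first bucket whose predicate matches, which is why the `-> true` catch-all works as a final branch. All names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class BranchSketch {
    // Route each value to the first bucket whose predicate matches,
    // mirroring how KStream#branch evaluates predicates in order.
    @SafeVarargs
    static List<List<String>> branch(List<String> records, Predicate<String>... predicates) {
        List<List<String>> buckets = new ArrayList<>();
        for (int i = 0; i < predicates.length; i++) buckets.add(new ArrayList<>());
        for (String r : records) {
            for (int i = 0; i < predicates.length; i++) {
                if (predicates[i].test(r)) {
                    buckets.get(i).add(r);
                    break; // first match wins; later predicates never see it
                }
            }
        }
        return buckets;
    }

    public static void main(String[] args) {
        List<List<String>> b = branch(
                List.of("VOICE", "DATA", "SMS"),
                s -> "VOICE".equalsIgnoreCase(s),
                s -> "DATA".equalsIgnoreCase(s),
                s -> true); // catch-all, as in the snippet above
        System.out.println(b);
    }
}
```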

Multiple producer instances choose same partition

2016-09-09 Thread devoss ind
Hi, I'm a newbie to Kafka and would like to use it. I have a question regarding same-partition selection from multiple producers. Assume that I did not specify any key while sending a message from the producer and let the producer choose a partition in a round-robin manner. I have multiple producer no
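For context, a round-robin strategy for keyless records is essentially a per-producer counter cycled over the partition count. The sketch below (plain Java, not the Kafka partitioner itself) illustrates one likely source of the observed behavior: each producer instance keeps its own counter, so independent producers can still land on the same partition at the same moment.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinSketch {
    // Each producer instance has its own counter -- there is no
    // cross-producer coordination, so two producers starting together
    // can pick the same partition sequence.
    private final AtomicInteger counter = new AtomicInteger(0);

    // For a keyless record, cycle the counter over the partitions.
    int partition(int numPartitions) {
        return Math.abs(counter.getAndIncrement() % numPartitions);
    }

    public static void main(String[] args) {
        RoundRobinSketch p = new RoundRobinSketch();
        // Cycles 0, 1, 2, 0, ... for a 3-partition topic.
        for (int i = 0; i < 4; i++) System.out.println(p.partition(3));
    }
}
```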

No error to kafka-producer on broker shutdown

2016-09-09 Thread Agostino Calamita
Hi, I'm writing a little test to check Kafka high availability, with 2 brokers, 1 topic with replication factor = 2 and min.insync.replicas=2. This is the test:

System.out.println("Building KafkaProducer...");
KafkaProducer m_kafkaProducer = new KafkaProducer(propsProducer);

RE: No error to kafka-producer on broker shutdown

2016-09-09 Thread Tauzell, Dave
The send() method returns a Future. You need to get the result at some point to see what happened. A simple way would be: m_kafkaProducer.send(prMessage).get(); -Dave -Original Message- From: Agostino Calamita [mailto:agostino.calam...@gmail.com] Sent: Friday, September 9, 2016 9:33 A
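Dave's point can be illustrated without a broker: an asynchronous send reports failures only through the returned future (or a callback), so fire-and-forget silently drops them. Below is a plain-Java sketch using CompletableFuture as a stand-in for the producer's Future; the names and the simulated error are hypothetical, not the Kafka API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class SendFutureSketch {
    // Simulates an async send that fails in the background, the way a
    // producer send can fail once the broker is down or under-replicated.
    static CompletableFuture<Long> send(boolean brokerUp) {
        return brokerUp
                ? CompletableFuture.completedFuture(42L) // pretend offset
                : CompletableFuture.failedFuture(new RuntimeException("broker unavailable"));
    }

    public static void main(String[] args) {
        send(false); // fire-and-forget: the failure is silently dropped

        try {
            send(false).get(); // blocking get(): the failure surfaces here
        } catch (ExecutionException e) {
            System.out.println("send failed: " + e.getCause().getMessage());
        } catch (InterruptedException ignored) {
            Thread.currentThread().interrupt();
        }
    }
}
```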

Re: Performance issue with KafkaStreams

2016-09-09 Thread Eno Thereska
Hi Caleb, Could you share your Kafka Streams configuration (i.e., StreamsConfig properties you might have set before the test)? Thanks Eno On Thu, Sep 8, 2016 at 12:46 AM, Caleb Welton wrote: > I have a question with respect to the KafkaStreams API. > > I noticed during my prototyping work tha

Asynchronous non-blocking Kafka producer without losing messages

2016-09-09 Thread Michal Turek
Hi there, We are preparing an update of our Kafka cluster and applications to Kafka 0.10.x, and we have some difficulties configuring the *Kafka producer to be asynchronous and reliably non-blocking*. As I understand KIP-19 (1), the main intention of the Kafka developers was to hard-limit how
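For reference, the knobs KIP-19 introduced can be combined roughly as below. This is a hedged sketch: the property names are from the 0.10 producer configuration, but the values are purely illustrative, not recommendations.

```properties
# Fail fast instead of blocking send() when metadata is missing or the
# in-memory buffer is full (0 = never block).
max.block.ms=0
# Bound the memory the async producer may queue before send() gives up
# with an exception rather than blocking the caller.
buffer.memory=33554432
# Durability setting often paired with async sending so that acked
# records are not lost on a single broker failure.
acks=all
```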

Sink Connector feature request: SinkTask.putAndReport()

2016-09-09 Thread Dean Arnold
I have a need for volume based commits in a few sink connectors, and the current interval-only based commit strategy creates some headaches. After skimming the code, it appears that an alternate put() method that returned a Map might be used to allow a sink connector to keep Kafka up to date wrt co

Re: How to decommission a broker so the controller doesn't return it in the list of known brokers?

2016-09-09 Thread Jeff Widman
It looks like this problem is caused by this bug in Kafka 0.8, which was fixed in Kafka 0.9: https://issues.apache.org/jira/browse/KAFKA-972 On Thu, Sep 8, 2016 at 3:55 PM, Jeff Widman wrote: > How do I permanently remove a broker from a Kafka cluster? > > Scenario: > > I have a stable cluster of

Re: Performance issue with KafkaStreams

2016-09-09 Thread Caleb Welton
Same in both cases:

client.id=Test-Prototype
application.id=test-prototype
group.id=test-consumer-group
bootstrap.servers=broker1:9092,broker2:9092
zookeeper.connect=zk1:2181
replication.factor=2
auto.offset.reset=earliest

On Friday, September 9, 2016 8:48 AM, Eno Thereska wrote: Hi Caleb,

kafkaproducer send blocks until broker is available

2016-09-09 Thread Peter Sinoros Szabo
Hi, I'd like to use the Java Kafka producer in a non-blocking async mode. My assumption was that as long as new messages fit into the producer's memory, it would queue them up and send them out once the broker is available. I tested a simple case when I am sending messages using KafkaPro
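One likely explanation (hedged, based on how the 0.9/0.10 producer documents this setting): send() itself can block while the producer fetches metadata for a topic it has not seen yet, before any record is buffered. The relevant knob, with an illustrative value only:

```properties
# send() may block up to this long waiting for topic metadata (and for
# buffer space); after that it throws a TimeoutException rather than
# queueing the record for later.
max.block.ms=1000
```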

Re: enhancing KStream DSL

2016-09-09 Thread Ara Ebrahimi
Ah, works! Thanks! I was under the impression that these are sequentially chained using the DSL. Didn’t realize I can still use allRecords in parallel with the branches. Ara. > On Sep 9, 2016, at 5:27 AM, Michael Noll wrote: > > Oh, my bad. > > Updating the third predicate in `branch()` may not even

Flickering Kafka Topic

2016-09-09 Thread Lawrence Weikum
Hello everyone! We seem to be experiencing some odd behavior in Kafka and were wondering if anyone has come across the same issue and if you’ve been able to fix it. Here’s the setup: 8 brokers in the cluster. Kafka 0.10.0.0. One topic, and only one topic on this cluster, is having issues whe

Custom Offset Management

2016-09-09 Thread Daniel Fagnan
I’m currently wondering if it’s possible to use the internal `__consumer_offsets` topic to manage offsets outside the consumer group APIs. I’m using the low-level API to manage the consumers but I’d still like to store offsets in Kafka. If it’s not possible to publish and fetch offsets from the
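Whatever storage ends up holding the offsets, the bookkeeping a low-level consumer must do itself is small: track the latest committed offset per (topic, partition) and never move it backwards. A plain-Java sketch of that bookkeeping (this is NOT the Kafka API, and it says nothing about whether __consumer_offsets can be written to directly):

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetStoreSketch {
    // In-memory stand-in for an external offset store: the latest
    // committed offset per "topic-partition" key, with stale (smaller)
    // commits ignored so replays cannot move the cursor backwards.
    private final Map<String, Long> committed = new HashMap<>();

    void commit(String topic, int partition, long offset) {
        committed.merge(topic + "-" + partition, offset, Math::max);
    }

    // -1 signals "no commit yet", i.e. start from the configured reset policy.
    long fetch(String topic, int partition) {
        return committed.getOrDefault(topic + "-" + partition, -1L);
    }
}
```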

Time of derived records in Kafka Streams

2016-09-09 Thread Elias Levy
The Kafka Streams documentation discusses how to assign timestamps to records received from a source topic via TimestampExtractor. But neither the Kafka nor the Confluent documentation on Kafka Streams explains what timestamp is associated with a record that has been transformed. What timestamp is a

Re: Problem consuming message using custom serializer

2016-09-09 Thread Shamik Bandopadhyay
Anyone ? On Tue, Sep 6, 2016 at 4:21 PM, Shamik Bandopadhyay wrote: > Hi, > > I'm trying to send java object using kryo object serializer . The > producer is able to send the payload to the queue, but I'm having issues > reading the data in consumer. I'm using consumer group using KafkaStream.