It should just continue consuming using the existing offsets. It'll have to
refresh metadata to pick up the leadership change, but once it does it can
pick up where consumption from the previous leader stopped -- all the
ISRs should have the same data, so the new leader will have all the same
messages.
> On Jul 21, 2015, at 9:15 AM, Ewen Cheslack-Postava wrote:
>
> On Tue, Jul 21, 2015 at 2:38 AM, Stevo Slavić wrote:
>
>> Hello Apache Kafka community,
>>
>> I find the new consumer poll/seek javadoc a bit confusing. Just by reading
>> the docs I'm not sure what the outcome will be or what is expected
Hi,
The documentation for zookeeper.connect in Broker Configs says that
"Note that you must create this path yourself prior to starting the broker",
but it seems the broker creates the path automatically on startup
(maybe related issue: https://issues.apache.org/jira/browse/KAFKA-404 ).
So the documentation may need to be corrected.
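For context, the path in question is the optional chroot suffix that can be
appended to the connection string; a minimal sketch (hostnames and the /kafka
chroot are assumptions):

    # server.properties -- the optional chroot path goes at the end of the list
    zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka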
Since you mentioned consumer groups, I'm assuming you're using the high
level consumer? Do you have auto.commit.enable set to true?
It sounds like when you start up you are always getting the
auto.offset.reset behavior, which indicates you don't have any offsets
committed. By default, that behavior is to reset to the latest offset
(auto.offset.reset defaults to "largest").
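For reference, these are the old high-level consumer settings in play; a
minimal sketch (the group name is an assumption):

    # 0.8.x high-level consumer properties
    group.id=my-group
    # periodically commit consumed offsets to ZooKeeper (default: true)
    auto.commit.enable=true
    # where to start when no committed offset exists; the default is
    # "largest", i.e. start from the tail and see only new messages
    auto.offset.reset=smallest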
If you are using the new producer API from Kafka 0.8.2, there is no pluggable
partitioner in it; for that you need to use the latest trunk. But in 0.8.2, if
you are using the old producer code, you can implement a pluggable partitioner:
https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka
Sriharsha, thanks for your response. I'm using version 0.8.2, and I am
implementing kafka.producer.Partitioner.
I noticed that in the latest trunk the line I specified above is replaced
with:
this.partitioner = config.getConfiguredInstance(ProducerConfig.
PARTITIONER_CLASS_CONFIG, Partitioner.class);
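For reference, the old-producer partitioner mentioned above is a small
interface; a minimal sketch of an implementation (the class name is a
placeholder):

    import kafka.producer.Partitioner;
    import kafka.utils.VerifiableProperties;

    public class SimplePartitioner implements Partitioner {
        // The old producer instantiates the partitioner reflectively and
        // passes its config, so this constructor must exist.
        public SimplePartitioner(VerifiableProperties props) {
        }

        @Override
        public int partition(Object key, int numPartitions) {
            // Mask the sign bit so the result is always a valid partition.
            return (key.hashCode() & 0x7fffffff) % numPartitions;
        }
    }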
Hi,
Are you using the latest trunk for the producer API? Did you implement the
interface here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-+22+-+Expose+a+Partitioner+interface+in+the+new+producer
--
Harsha
On July 21, 2015 at 2:27:05 PM, JIEFU GONG (jg...@berkeley.edu) wrote:
Hi all,
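The KIP-22 interface referenced above, as it later landed in
org.apache.kafka.clients.producer, can be implemented roughly as below; the
hashing here is an illustration, not the shipped default:

    import java.util.Map;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;
    import org.apache.kafka.common.utils.Utils;

    public class MyPartitioner implements Partitioner {
        @Override
        public void configure(Map<String, ?> configs) {
            // Called once with the producer's configs; nothing needed here.
        }

        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            int numPartitions = cluster.partitionsForTopic(topic).size();
            if (keyBytes == null)
                return 0; // illustration only; the built-in default round-robins
            return (Utils.murmur2(keyBytes) & 0x7fffffff) % numPartitions;
        }

        @Override
        public void close() {
            // Called when the producer closes; nothing to release here.
        }
    }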
Hi all,
If I wanted to write my own partitioner, all I would need to do is write a
class that extends Partitioner and override the partition function,
correct? I am currently doing so, at least, with a class in the package
'services', yet when I use:
properties.put("partitioner.class", "services.
Hi,
We've been using Kafka for a couple of months, and now we're trying to
write a simple application using the ConsumerGroup to fully understand
Kafka.
The producer writes data continually, and our consumer occasionally
needs to be restarted. However, once the program is brought back up,
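For reference, the kind of consumer-group skeleton being described, assuming
the 0.8.x high-level consumer (topic, group, and ZooKeeper address are
assumptions):

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    public class GroupConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // assumption
            props.put("group.id", "my-group");                // assumption
            props.put("auto.commit.enable", "true");
            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
            // Blocks, handing back messages as they arrive.
            for (MessageAndMetadata<byte[], byte[]> msg : streams.get("my-topic").get(0))
                System.out.println(new String(msg.message()));
        }
    }

With auto-commit on, a restarted instance should resume from the last offsets
committed to ZooKeeper rather than from the auto.offset.reset position.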
Hey Stevo,
I think ConsumerRecords only contains the partitions which had messages.
Would you mind creating a jira for the feature request? You're welcome to
submit a patch as well.
-Jason
On Tue, Jul 21, 2015 at 2:27 AM, Stevo Slavić wrote:
> Hello Apache Kafka community,
>
> New HLC poll returns ConsumerRecords.
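Per Jason's point, per-partition handling with the new consumer would look
roughly like this (method names as they later shipped; a subscribed
KafkaConsumer<String, String> consumer is assumed):

    ConsumerRecords<String, String> records = consumer.poll(1000);
    // partitions() covers only the partitions that actually returned data
    for (TopicPartition tp : records.partitions())
        for (ConsumerRecord<String, String> record : records.records(tp))
            System.out.printf("%s-%d @ %d: %s%n",
                              tp.topic(), tp.partition(), record.offset(), record.value());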
This is a known issue. There are a few relevant JIRAs and a KIP:
https://issues.apache.org/jira/browse/KAFKA-1788
https://issues.apache.org/jira/browse/KAFKA-2120
https://cwiki.apache.org/confluence/display/KAFKA/KIP-19+-+Add+a+request+timeout+to+NetworkClient
-Ewen
On Tue, Jul 21, 2015 at 7:05
On Tue, Jul 21, 2015 at 2:38 AM, Stevo Slavić wrote:
> Hello Apache Kafka community,
>
> I find the new consumer poll/seek javadoc a bit confusing. Just by reading
> the docs I'm not sure what the outcome will be or what is expected in the
> following scenario:
>
> - kafkaConsumer is instantiated with auto-commit off
Hello Apache Kafka community,
It seems the new high-level consumer coming in 0.8.3 will support offset
storage only in a Kafka topic.
Can somebody please confirm/comment?
Kind regards,
Stevo Slavic.
Thank you, Nicolas!
On Tue, Jul 21, 2015 at 10:46 AM, Nicolas Phung
wrote:
> Yes indeed.
>
> # A comma separated list of directories under which to store log files
> log.dirs=/var/lib/kafka
>
> You can use several disks/partitions too.
>
> Regards,
>
> On Tue, Jul 21, 2015 at 4:37 PM, Yuheng Du
Yes indeed.
# A comma separated list of directories under which to store log files
log.dirs=/var/lib/kafka
You can use several disks/partitions too.
Regards,
On Tue, Jul 21, 2015 at 4:37 PM, Yuheng Du wrote:
> Just wanna make sure, in server.properties, the configuration
> log.dirs=/tmp/kafka-logs
Just wanna make sure, in server.properties, the configuration
log.dirs=/tmp/kafka-logs
specifies the directory of where the log (data) stores, right?
If I want the data to be saved elsewhere, this is the configuration I need
to change, right?
Thanks for answering.
best,
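As Nicolas notes above, log.dirs also accepts a comma-separated list to spread
partitions over several disks; a sketch (paths are assumptions):

    # server.properties
    log.dirs=/data/disk1/kafka-logs,/data/disk2/kafka-logs

The broker places each new partition in whichever listed directory currently
holds the fewest partitions.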
Hello Apache Kafka community,
Just noticed that:
- a message is successfully published using the new 0.8.2.1 producer
- and then Kafka is stopped
The next attempt to publish a message using the same instance of the new
producer hangs forever, and the following stack trace gets logged repeatedly:
[WARN ] [o.a.kafka.comm
Hello Apache Kafka community,
I find the new consumer poll/seek javadoc a bit confusing. Just by reading
the docs I'm not sure what the outcome will be or what is expected in the
following scenario:
- kafkaConsumer is instantiated with auto-commit off
- kafkaConsumer.subscribe(someTopic)
- kafkaConsumer.position(...)
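A sketch of that scenario as far as it is quoted, against the new consumer API
as it later stabilized (exact signatures were still in flux on trunk; addresses
and names are assumptions):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PollSeekScenario {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption
            props.put("group.id", "test");                    // assumption
            props.put("enable.auto.commit", "false");         // auto-commit off
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            KafkaConsumer<byte[], byte[]> kafkaConsumer = new KafkaConsumer<>(props);
            kafkaConsumer.subscribe(Collections.singletonList("someTopic"));
            // position() reports the offset of the next record to fetch, but only
            // for partitions the consumer has actually been assigned, i.e. after
            // poll() has run -- which is exactly the ambiguity being asked about.
            kafkaConsumer.poll(100);
            long next = kafkaConsumer.position(new TopicPartition("someTopic", 0));
            System.out.println("next offset: " + next);
        }
    }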
Hello Apache Kafka community,
New HLC poll returns ConsumerRecords.
Do ConsumerRecords contain records for every partition that the HLC is
actively subscribed to on every poll request, or only records for the
partitions which had messages and which were retrieved in the poll request?
If the latter,
Thanks all for fast feedback!
It's great news if that aspect is improved as well in the new HLC. I will
test and get back with any related findings.
Kind regards,
Stevo Slavic.
On Mon, Jul 20, 2015 at 9:57 PM, Guozhang Wang wrote:
> Hi Stevo,
>
> I am still not very clear on your point yet, I guess
Hi Nicolas,
From my experience there are only two ways out:
1) wait for the retention time to pass, so the data gets deleted (this is
usually unacceptable)
2) trace the offset of the corrupt message on all affected subscriptions and
skip the message by overwriting the stored offset (offset+1)
The problem is that when encounteri
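With the 0.8.x consumers, option 2 means editing the offset stored under
/consumers/<group>/offsets/<topic>/<partition> in ZooKeeper. With a consumer
that exposes seek(), as the new consumer later did, it would look roughly like
this (topic, partition, and offset are placeholders):

    TopicPartition tp = new TopicPartition("my-topic", corruptPartition);
    consumer.seek(tp, corruptOffset + 1); // resume one past the corrupt record
    consumer.commitSync();                // persist the skip across restarts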
Hello,
I'm using Confluent Kafka (0.8.2.0-cp). When I try to process messages from
my Kafka topic with Spark Streaming, I get the following error:
kafka.message.InvalidMessageException: Message is corrupt (stored crc =
3561357254, computed crc = 171652633)
at kafka.message.Message