Ok I'll create a JIRA issue on this.
Thanks!
2016-01-23 21:47 GMT+01:00 Bruno Rassaerts :
> +1 here
>
> As a workaround we seek to the current offset which resets the current
> clients internal states and everything continues.
>
> Regards,
> Bruno Rassaerts | Freelance Java Developer
>
> Novazon
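For illustration, a minimal sketch of the workaround Bruno describes, assuming the seek happens in a ConsumerRebalanceListener (which the thread has not confirmed); the topic name, group id, and broker address are placeholders, not taken from this thread:

    import java.util.Collection;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class SeekOnRebalanceWorkaround {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("group.id", "my-group");                  // placeholder
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            final KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("my-topic"),  // placeholder
                    new ConsumerRebalanceListener() {
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // nothing to do for this workaround
                }

                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // Seek to the position the consumer already has for each newly
                    // assigned partition, forcing the internal fetch state to reset.
                    for (TopicPartition tp : partitions) {
                        consumer.seek(tp, consumer.position(tp));
                    }
                }
            });

            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    // process record...
                }
            }
        }
    }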
Thank you for your replies, Gwen and Jason.
Let's continue the discussion in the JIRA.
On Fri, Jan 22, 2016 at 8:37 PM, Jason Gustafson wrote:
> The offset API is one of the known gaps in the new consumer. There is a
> JIRA (KAFKA-1332), but we might need a KIP to make that change now that the
>
Hi,
I am facing the following issue:
When I start my consumer, I see that the offset for one of the partitions is
going to be reset to the last committed offset:
14:55:28.788 [pool-1-thread-1] DEBUG o.a.k.c.consumer.internals.Fetcher -
Resetting offset for partition t1-4 to the committed offset 20
Hi Bruno,
Can you tell me a little bit more about that? A seek() in
`onPartitionsAssigned`?
Thanks.
2016-01-25 10:51 GMT+01:00 Han JU :
> Ok I'll create a JIRA issue on this.
>
> Thanks!
>
> 2016-01-23 21:47 GMT+01:00 Bruno Rassaerts :
>
>> +1 here
>>
>> As a workaround we seek to the curren
Hi
I have kafka version 0.8.2 installed on my cluster.
I am able to create topics and write messages to them. But when I fetched the
latest offsets from the brokers using
kafka-run-class kafka.tools.GetOffsetShell --broker-list brokerAddr:9092
--topic topicname --time -1
I got offsets of a few partitions and t
Issue created: https://issues.apache.org/jira/browse/KAFKA-3146
2016-01-25 16:07 GMT+01:00 Han JU :
> Hi Bruno,
>
> Can you tell me a little bit more about that? A seek() in
> `onPartitionsAssigned`?
>
> Thanks.
>
> 2016-01-25 10:51 GMT+01:00 Han JU :
>
>> Ok I'll create a JIRA issue on this.
Thanks!
Ismael
On Mon, Jan 25, 2016 at 4:03 PM, Han JU wrote:
> Issue created: https://issues.apache.org/jira/browse/KAFKA-3146
>
> 2016-01-25 16:07 GMT+01:00 Han JU :
>
> > Hi Bruno,
> >
> > Can you tell me a little bit more about that? A seek() in
> > `onPartitionsAssigned`?
> >
> > Thanks
Hi Kafka-Gurus,
Using Kafka in one of my projects, the question arose of how records are
provided by KafkaConsumer.poll(long). Is the entire map of records copied
into the client's memory, or does poll(..) work on an iterator-based model?
I am asking this, as I face the following scenario: The
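For context, this is the kind of loop in question. Whether poll() materializes everything or streams it is exactly what is being asked here, but the amount fetched per partition per request can at least be bounded with the standard max.partition.fetch.bytes setting; all names and values below are illustrative:

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PollLoopSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");     // placeholder
            props.put("group.id", "my-group");                     // placeholder
            props.put("max.partition.fetch.bytes", "1048576");     // 1 MB per partition per fetch
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("my-topic"));  // placeholder
            while (true) {
                // Everything returned by this call is held in the ConsumerRecords
                // object while the loop below iterates over it.
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    // process record...
                }
            }
        }
    }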
Apologies for the late arrival to this thread. There was a bug in the
0.9.0.0 release of Kafka which could cause the consumer to stop fetching
from a partition after a rebalance. If you're seeing this, please check out
the 0.9.0 branch of Kafka and see if you can reproduce this problem. If you
can,
Hi Jason,
Was this a server bug or a client bug?
Thanks,
Rajiv
On Mon, Jan 25, 2016 at 11:23 AM, Jason Gustafson
wrote:
> Apologies for the late arrival to this thread. There was a bug in the
> 0.9.0.0 release of Kafka which could cause the consumer to stop fetching
> from a partition after a
Han,
From your logs it seems the thread which cannot fetch more data is
rebalance-1, which is assigned partitions [balance-11, balance-10,
balance-9].
From your consumer-group command, the partitions that are lagging are [balance-6,
balance-7, balance-8], which are not assigned to this process
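A small helper along the lines Guozhang suggests could dump the assignment of a running instance, so lag on partitions owned by another process can be told apart from lag on this one. This is only a sketch using the standard new-consumer methods assignment() and position():

    import java.util.Set;

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    final class AssignmentDebug {
        // Print which partitions this consumer instance currently owns and where
        // it is positioned on each; lag on partitions not listed here belongs to
        // some other instance in the group.
        static void dumpAssignment(KafkaConsumer<?, ?> consumer) {
            Set<TopicPartition> assigned = consumer.assignment();
            System.out.println("Assigned partitions: " + assigned);
            for (TopicPartition tp : assigned) {
                System.out.println(tp + " -> position " + consumer.position(tp));
            }
        }
    }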
Hey Rajiv, the bug was on the client. Here's a link to the JIRA:
https://issues.apache.org/jira/browse/KAFKA-2978.
-Jason
On Mon, Jan 25, 2016 at 11:42 AM, Rajiv Kurian wrote:
> Hi Jason,
>
> Was this a server bug or a client bug?
>
> Thanks,
> Rajiv
>
> On Mon, Jan 25, 2016 at 11:23 AM, Jason
Thanks Jason. We are using an affected client I guess.
Is there a 0.9.0 client available on maven? My search at
http://mvnrepository.com/artifact/org.apache.kafka/kafka_2.10 only shows
the 0.9.0.0 client which seems to have this issue.
Thanks,
Rajiv
On Mon, Jan 25, 2016 at 11:56 AM, Jason Gusta
I have created a JIRA, https://issues.apache.org/jira/browse/KAFKA-3147
Thanks,
Meghana
On Tue, Jan 19, 2016 at 5:07 AM, Ismael Juma wrote:
> Can you please file an issue in JIRA for this?
>
> Ismael
>
> On Tue, Jan 12, 2016 at 2:40 PM, Meghana Narasimhan <
> mnarasim...@bandwidth.com> wrote:
>
Hi Guozhang,
Sorry about that example. The logs do not come from the same run; I just pasted
them to illustrate the problem.
I'll try out what Jason suggests tomorrow and also retry the 0.9.0 branch.
2016-01-25 21:03 GMT+01:00 Rajiv Kurian :
> Thanks Jason. We are using an affected client I guess.
>
>
Hi,
I'm looking for any recommendations on how to size the Java heap for brokers.
I have found details on GC configuration, but 6g looks very high to me.
-Xms6g -Xmx6g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M
-XX:MinMe
Thanks.
Ismael
On Mon, Jan 25, 2016 at 8:28 PM, Meghana Narasimhan <
mnarasim...@bandwidth.com> wrote:
> I have created a JIRA, https://issues.apache.org/jira/browse/KAFKA-3147
>
> Thanks,
> Meghana
>
> On Tue, Jan 19, 2016 at 5:07 AM, Ismael Juma wrote:
>
> > Can you please file an issue in JI
When doing poll() when there is no current position on the consumer, the
position returned is the one of the last offset then? (I thought that it
would return that position + 1 because it was already committed.)
2016-01-25 15:07 GMT+01:00 Franco Giacosa :
> Hi,
>
> I am facing the following is
Ok, I've reproduced this again and made sure to grab the broker logs before
the instances are terminated. I posted a writeup with what seemed like the
relevant bits of the logs here:
https://gist.github.com/lukesteensen/793a467a058af51a7047
To summarize, it looks like Gwen was correct and the broke
>
> This is not expected behavior. I think you may need to check the
> replica fetcher threads to see why partitions are not getting replicated.
>
I've been looking at the server.out logs; is there a different place I should
be looking for the replica fetcher threads?
> Can you confirm that IS
There does appear to be a recurring issue with the node logging closed
connections from the other 2 nodes:
https://gist.github.com/williamsjj/0481600566f10e5593c4
The other nodes, though, show no errors.
-J
On Mon, Jan 25, 2016 at 2:35 PM, Jason J. W. Williams <
jasonjwwilli...@gmail.com> wrote:
It might be a little unintuitive, but the committed position should be the
offset of the next message to consume.
-Jason
On Mon, Jan 25, 2016 at 1:26 PM, Franco Giacosa wrote:
> When doing poll() when there is no current position on the consumer, the
> position returned is the one of the last o
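As a sketch of what that means in code: committing after processing a record means committing record.offset() + 1, which is the position poll() resumes from after a restart. The consumer calls below are the standard new-consumer API; the wrapper class is only for illustration:

    import java.util.Collections;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    final class CommitExample {
        // The committed value is the offset of the *next* record to consume,
        // i.e. lastProcessedOffset + 1.
        static void commitAfterProcessing(KafkaConsumer<?, ?> consumer,
                                          ConsumerRecord<?, ?> record) {
            TopicPartition tp = new TopicPartition(record.topic(), record.partition());
            Map<TopicPartition, OffsetAndMetadata> offsets =
                    Collections.singletonMap(tp, new OffsetAndMetadata(record.offset() + 1));
            consumer.commitSync(offsets);
        }
    }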
Hi All,
For Kafka 0.8.2, I was using kafka-consumer-offset-checker.sh to get a
consumer group's offsets. Now I am testing against Kafka 0.9, but the same
tool always gave me
Exiting due to: org.apache.zookeeper.KeeperException$NoNodeException:
KeeperErrorCode = NoNode for /consumers/
I saw "C
Yifan,
Offset checker has been deprecated in 0.9.0 (search for
"kafka-consumer-groups.sh"):
http://kafka.apache.org/documentation.html#upgrade
If you are using the new consumer, then its metadata is not registered in
ZK, so you should use the --bootstrap-server option instead of --zookeeper:
./
Thanks for the reply, Guozhang. I got 'Missing required argument "[zookeeper]"'
after running that.
Yifan
On Mon, Jan 25, 2016 at 6:05 PM, Guozhang Wang wrote:
> Yifan,
>
> Offset checker has been deprecated in 0.9.0 (search for
> "kafka-consumer-groups.sh"):
>
> http://kafka.apache.org/documentat
I forgot to mention you need to add "--new-consumer" as well.
Guozhang
On Mon, Jan 25, 2016 at 7:11 PM, Yifan Ying wrote:
> Thanks for reply, Guozhang. I got 'Missing required argument "[zookeeper]"'
> after running that.
>
>
> Yifan
>
> On Mon, Jan 25, 2016 at 6:05 PM, Guozhang Wang wrote:
>
We are using the new kafka consumer with the following config (as logged by
kafka)
metric.reporters = []
metadata.max.age.ms = 30
value.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
group.id = myGroup.id
partition.assignmen
I wanted to add that we are not using auto commit, since we use custom
partition assignments. In fact, we never call consumer.commitAsync() or
consumer.commitSync(). My assumption is that since we store our own
offsets, these calls are not necessary. Hopefully this is not responsible
for the poor
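For reference, a minimal sketch of that pattern, assuming offsets live in some external store (the store calls are stubs, and the topic and broker address are placeholders): auto commit is off, partitions are assigned explicitly with assign(), and positions are restored with seek(), so commitSync()/commitAsync() are indeed not required for the consumer itself.

    import java.util.Arrays;
    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class ManualAssignmentSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("enable.auto.commit", "false");           // offsets kept externally
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);

            // Explicit assignment instead of subscribe(): no group management.
            List<TopicPartition> partitions = Arrays.asList(
                    new TopicPartition("my-topic", 0),           // placeholder topic
                    new TopicPartition("my-topic", 1));
            consumer.assign(partitions);

            // Restore previously saved positions from the external store (stubbed).
            for (TopicPartition tp : partitions) {
                consumer.seek(tp, loadOffsetFromOwnStore(tp));
            }

            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    // process record, then persist the next offset externally
                    saveOffsetToOwnStore(
                            new TopicPartition(record.topic(), record.partition()),
                            record.offset() + 1);
                }
            }
        }

        // Stubs standing in for whatever external offset store is used.
        static long loadOffsetFromOwnStore(TopicPartition tp) { return 0L; }
        static void saveOffsetToOwnStore(TopicPartition tp, long offset) { }
    }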
The exception seems to be thrown here
https://github.com/apache/kafka/blob/0.9.0/clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java#L236
Is this not expected to hit often?
On Mon, Jan 25, 2016 at 9:22 PM, Rajiv Kurian wrote:
> Wanted to add that we are not using auto commit
That works, thanks.
Yifan
On Mon, Jan 25, 2016 at 7:27 PM, Guozhang Wang wrote:
> I forgot to mention you need to add "--new-consumer" as well.
>
> Guozhang
>
> On Mon, Jan 25, 2016 at 7:11 PM, Yifan Ying wrote:
>
> > Thanks for reply, Guozhang. I got 'Missing required argument
> "[zookeeper]"