- LogOffsetTest.testEmptyLogsGetOffsets
- LogOffsetTest.testGetOffsetsBeforeEarliestTime
- LogOffsetTest.testGetOffsetsBeforeLatestTime
- LogOffsetTest.testGetOffsetsBeforeNow
- ProducerSendTest.testSendToPartition
failed.
Can I trust trunk? :)
It should be api compatible. Not sure how stable ZK 3.4.5 is though.
Thanks,
Jun
On Fri, Feb 14, 2014 at 4:32 PM, Clark Breyman wrote:
> Is anyone running 0.8 (or pre-0.8.1) with the latest Zookeeper? Any known
> compatibility issues? I didn't see any in JIRA but thought I'd give a
> shout.
>
Guozhang,
All the patches for KAFKA-992 look like they were committed in August,
which was before 0.8 was shipped. Should we really be seeing this on 0.8?
Thanks, Clark
Hello Libo,
When ZK resumes from a soft failure, like a long GC pause, it will mark the
ephemeral nodes' sessions as timed out, and the brokers will try to re-register
upon receiving the session timeout. You can reproduce this issue by pausing the
ZK process with a signal (e.g. SIGSTOP).
Guozhang
On Fri, Feb 14, 2014 at 12:15 PM,
Is anyone running 0.8 (or pre-0.8.1) with the latest Zookeeper? Any known
compatibility issues? I didn't see any in JIRA but thought I'd give a
shout.
Hey, thanks so much for pointing this out. I think that this is likely
what is happening for us. I will attempt this fix.
Cheers,
Carl
On Thu, Feb 13, 2014 at 8:01 PM, zhong dong wrote:
> We encountered with this problem, too.
>
> And our problem is that we set the message.max.bytes larger than
Hi team,
We have three brokers in our production cluster. I noticed two of them somehow
went offline, then re-registered with ZooKeeper and came back online. It seems
the issue was caused by some ZooKeeper problem. So I want to know what the
possible causes of the issue may be. If I want to reprod
Yeah that is a bug. We should be giving an error here rather than retrying.
-Jay
On Fri, Feb 14, 2014 at 7:54 AM, Jun Rao wrote:
> Hi, Zhong,
>
> Thanks for sharing this. We probably should add a sanity check in the
> broker to make sure that replica.fetch.max.bytes >= message.max.bytes.
> Cou
Oh, interesting. So I am assuming the following implementation:
1. We have an in-memory fetch position which controls the next fetch
offset.
2. Changing this has no effect until you poll again at which point your
fetch request will be from the newly specified offset
3. We then have an in-memory but
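The two steps above can be sketched as a toy model: the consumer holds an in-memory position, and a seek only takes effect on the next poll. All names here are invented for illustration; this is not the actual Kafka consumer API.

```java
// Toy model of the assumed semantics: seek() changes the in-memory
// position, but only the *next* poll() fetches from the new offset.
public class FetchPositionModel {
    private long position; // next offset to fetch

    public FetchPositionModel(long start) {
        this.position = start;
    }

    public void seek(long offset) {
        // Step 2: no effect on fetches already issued, only on the next one.
        this.position = offset;
    }

    public long poll(int maxMessages) {
        long fetchFrom = position; // fetch request uses the current position
        position += maxMessages;   // advance past the messages we "received"
        return fetchFrom;          // offset this fetch started from
    }

    public static void main(String[] args) {
        FetchPositionModel c = new FetchPositionModel(0);
        System.out.println(c.poll(10)); // fetches from 0
        c.seek(100);
        System.out.println(c.poll(10)); // next fetch starts at 100
    }
}
```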
Hi, Zhong,
Thanks for sharing this. We probably should add a sanity check in the
broker to make sure that replica.fetch.max.bytes >= message.max.bytes.
Could you file a jira for that?
Jun
On Thu, Feb 13, 2014 at 8:01 PM, zhong dong wrote:
> We encountered with this problem, too.
>
> And our p
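A minimal sketch of the sanity check Jun suggests: reject a broker config where a message the broker accepts could never be fetched by a follower. The two property names are real Kafka 0.8 configs; the validation helper, defaults, and error wording here are invented for illustration.

```java
import java.util.Properties;

public class ConfigSanityCheck {
    // Hypothetical startup-time check: replica.fetch.max.bytes must be
    // at least message.max.bytes, or the largest accepted message can
    // never be replicated and the follower will retry forever.
    public static void validate(Properties props) {
        int messageMaxBytes = Integer.parseInt(
            props.getProperty("message.max.bytes", "1000000"));
        int replicaFetchMaxBytes = Integer.parseInt(
            props.getProperty("replica.fetch.max.bytes", "1048576"));
        if (replicaFetchMaxBytes < messageMaxBytes) {
            throw new IllegalArgumentException(
                "replica.fetch.max.bytes (" + replicaFetchMaxBytes
                + ") must be >= message.max.bytes (" + messageMaxBytes + ")");
        }
    }

    public static void main(String[] args) {
        Properties bad = new Properties();
        bad.setProperty("message.max.bytes", "2000000");
        bad.setProperty("replica.fetch.max.bytes", "1048576");
        try {
            validate(bad);
            System.out.println("accepted");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Failing fast at startup is preferable to the silent retry loop described in the thread, since the misconfiguration is otherwise only visible once an oversized message arrives.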
I don't see the log in your email. Perhaps you can send out a link to
things like pastebin?
Thanks,
Jun
On Thu, Feb 13, 2014 at 8:06 AM, Arjun Kota wrote:
> Yes, I have set it to trace as it will help me debug things.
> Have you found any issue in it?
> On Feb 13, 2014 9:12 PM, "Jun Rao
Hello,
I've been studying different options to consume messages from Kafka into
Hadoop (HDFS) and found three:
Linkedin Camus - https://github.com/linkedin/camus
kafka-hadoop-loader - https://github.com/michal-harish/kafka-hadoop-loader
hadoop-consumer -
https://github.com/apache/kafka/tree/0.8
I don't think there is any direct high-level API equivalent to this. Every
time you read messages using the high-level API, your offset gets synced to
ZooKeeper. auto.offset.reset is for cases where the last read offset has, for
example, been purged, and rather than getting an exception you want to just
fall back to eit
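The fallback behavior described above can be sketched as a small decision function: when the requested offset has been purged from the log, the reset policy decides whether to jump to the smallest (earliest) or largest (latest) available offset instead of raising an out-of-range error. The method and variable names are invented; the "smallest"/"largest" values match the 0.8-era high-level consumer setting.

```java
public class OffsetResetPolicy {
    // Resolve the offset to actually fetch from, given the range of
    // offsets still present in the log and the auto.offset.reset policy.
    public static long resolve(long requested, long logStart, long logEnd,
                               String policy) {
        if (requested >= logStart && requested <= logEnd) {
            return requested; // still available, no reset needed
        }
        switch (policy) {
            case "smallest": return logStart; // fall back to earliest
            case "largest":  return logEnd;   // fall back to latest
            default:
                throw new IllegalStateException(
                    "offset out of range: " + requested);
        }
    }

    public static void main(String[] args) {
        // Log currently holds offsets 500..900; offset 120 was purged.
        System.out.println(resolve(120, 500, 900, "smallest")); // 500
        System.out.println(resolve(120, 500, 900, "largest"));  // 900
        System.out.println(resolve(600, 500, 900, "smallest")); // 600
    }
}
```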
Good Morning,
I am testing the Kafka High Level Consumer using the ConsumerGroupExample code
from the Kafka site. I would like to retrieve all the existing messages on the
topic called "test" that I have in the Kafka server config. Looking at other
blogs, auto.offset.reset should be set to "sma
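For reference, a minimal high-level consumer configuration along the lines the poster describes, assuming the 0.8-era ConsumerGroupExample. The host, port, and group id are placeholders. One caveat worth knowing: auto.offset.reset only applies when the group has no committed offset in ZooKeeper, so an existing group id will resume from its last commit regardless of this setting; use a fresh group id to re-read a topic from the beginning.

```java
import java.util.Properties;

public class ConsumerGroupConfig {
    // Builds a 0.8-style high-level consumer config. Property names are
    // real 0.8 consumer configs; the connection values are placeholders.
    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");    // placeholder
        props.put("group.id", "test-group");                 // placeholder
        props.put("auto.offset.reset", "smallest");          // start from earliest
        props.put("zookeeper.session.timeout.ms", "4000");
        props.put("auto.commit.interval.ms", "1000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(
            buildConfig().getProperty("auto.offset.reset"));
    }
}
```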