Hi,
After migrating from 0.72 to 0.8, I still use SimpleConsumer to construct
my own consumer. Using FetchRequestBuilder, I add all partitions belonging
to the same broker to a single request and get one FetchResponse for all of
these partitions. However, I find the error code in the FetchResponse is a littl
That was the problem. Thanks Jun!
-Robert
> On Oct 21, 2013, at 9:25 PM, Jun Rao wrote:
>
> Did you set the socket timeout to be larger than the maxWait time in the
> fetch request?
>
> Thanks,
>
> Jun
>
>
>> On Mon, Oct 21, 2013 at 9:02 PM, Robert W wrote:
>>
>> I notice that when usi
Did you set the socket timeout to be larger than the maxWait time in the
fetch request?
Thanks,
Jun
On Mon, Oct 21, 2013 at 9:02 PM, Robert W wrote:
> I notice that when using the SimpleConsumer javaapi and trying to consume
> from an existing topic and partition that has never been written t
I notice that when using the SimpleConsumer javaapi and trying to consume
from an existing topic and partition that has never been written to before,
I get a SocketTimeoutException. Is there a way around this? I'm using
kafka-0.8.0 beta1.
Thanks,
-Robert
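To make Jun's fix above concrete, here is a minimal sketch of a fetch with the 0.8 SimpleConsumer javaapi where the socket timeout is set larger than the request's maxWait, so a fetch against a never-written partition blocks up to maxWait and returns an empty message set instead of throwing SocketTimeoutException. The broker host, topic name, and client id are placeholders, and this needs a running 0.8 broker, so treat it as an illustration rather than a runnable test.

```java
import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class FetchSketch {
    public static void main(String[] args) {
        int maxWaitMs = 1000;
        // The socket timeout must exceed maxWait, otherwise the socket
        // times out while the broker is still holding the fetch request.
        int soTimeoutMs = 5000;

        SimpleConsumer consumer = new SimpleConsumer(
                "broker-host", 9092, soTimeoutMs, 64 * 1024, "my-client");

        FetchRequest req = new FetchRequestBuilder()
                .clientId("my-client")
                .addFetch("my-topic", 0, 0L, 100 * 1024) // topic, partition, offset, fetchSize
                .maxWait(maxWaitMs)
                .minBytes(1)
                .build();

        FetchResponse resp = consumer.fetch(req);
        if (resp.hasError()) {
            System.out.println("error code: " + resp.errorCode("my-topic", 0));
        }
        consumer.close();
    }
}
```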
Hi, Everyone,
At this moment, we have only one remaining jira (KAFKA-1097) that we plan
to fix in 0.8. After that, we can cut the final 0.8 release.
Thanks,
Jun
On Mon, Oct 7, 2013 at 5:33 PM, Jun Rao wrote:
> Hi, Everyone,
>
> I made another pass of the remaining jiras that we plan to fix i
If the data is compressed, the broker has to recompress the messages in
order to assign offsets. So, there is some CPU overhead. However, it
shouldn't be too high. How high of the CPU load did you observe?
Thanks,
Jun
On Mon, Oct 21, 2013 at 12:01 PM, Lu Xuechao wrote:
> Hi,
>
> We observed b
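Jun's point about recompression overhead can be felt with a small self-contained experiment. This is just `java.util.zip` on a synthetic payload, nothing Kafka-specific: it times the gzip step, which is the kind of work the broker repeats when it re-compresses a message set after assigning offsets.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipCostDemo {
    // Gzip a byte array in memory and return the compressed bytes.
    static byte[] gzip(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(input);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Repetitive payload, loosely resembling log messages.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append("sample log line ").append(i).append('\n');
        }
        byte[] raw = sb.toString().getBytes("UTF-8");

        long start = System.nanoTime();
        byte[] compressed = gzip(raw);
        long micros = (System.nanoTime() - start) / 1000;

        System.out.println("raw=" + raw.length + " bytes, gzip="
                + compressed.length + " bytes, took " + micros + " us");
    }
}
```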
Updated the description in
http://kafka.apache.org/documentation.html#monitoring. Does that help make
things clearer?
Thanks,
Jun
On Mon, Oct 21, 2013 at 9:43 AM, Monika Garg wrote:
> Thanks for replying, Neha. It helped me a lot in making things clearer.
> I just have some doubts for
Yes, I am using 0.8.
Network IO means bytes transferred.
Thanks for the reply.
On Mon, Oct 21, 2013 at 2:24 PM, Neha Narkhede wrote:
> Are these for Kafka 08?
>
> For #2 above, when you say high network I/O, do you mean number of packets
> transferred or size in bytes transferred?
>
> Thanks,
> Neh
Are these for Kafka 08?
For #2 above, when you say high network I/O, do you mean number of packets
transferred or size in bytes transferred?
Thanks,
Neha
On Mon, Oct 21, 2013 at 12:01 PM, Lu Xuechao wrote:
> Hi,
>
> We observed below correlation between kafka configuration and performance
> d
Hello Xuechao,
We do not currently support decoupling producer-side compression from
broker-side storage, but this feature is part of the client-rewrite project
that is being worked on right now:
https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite#ClientRewrite-ProposedProducerAPI
Guozhang
On Mo
Hi,
We observed the following correlation between Kafka configuration and
performance data:
1. the producer insertion rate drops when compression is enabled, especially gzip;
2. when the producer batch size is below 200, we see high CPU/network IO on
the brokers and high network IO on the producers/consumers;
What's th
Hi,
I wonder whether it is supported to enable compression for storage on the
brokers while the producer sends messages uncompressed and the consumer
receives them uncompressed?
Thanks.
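Following up on Guozhang's answer: since broker-only compression is not supported in 0.8, compression has to be enabled at the producer, after which consumers decompress transparently. A sketch of the relevant producer properties (the property names are taken from the 0.8 producer configuration, so verify them against your version's docs):

```properties
# Compress message sets at the producer; valid codecs in 0.8 are
# none, gzip, and snappy.
compression.codec=gzip
# Optionally compress only specific topics (comma-separated list);
# topic names here are placeholders.
compressed.topics=topic1,topic2
```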
Agreed. Tim, it would be very helpful if you could provide a patch.
Otherwise, I may be willing to create one.
On Thu, Oct 17, 2013 at 8:15 PM, Jun Rao wrote:
> Tim,
>
> This seems like a reasonable requirement. Would you be interested in
> providing a patch to the jira?
>
> Thanks,
>
> Jun
>
Thanks for replying, Neha. It helped me a lot in making things clearer.
I just have some doubts about the values of the MBeans below; please have a look:
(1) ISR expansion rate: "kafka.server":name="ISRShrinksPerSec",type="ReplicaManager", non-zero only during broker startup.
Does the property (1) mea
Max lag corresponds to the partition that lags the most. So it could stay
high until all partitions are caught up.
The second issue is weird. Lags across consumer groups should be more or
less independent. Could this be a producer side issue? Do you see a sudden
jump in the incoming byte rate?
Th
Hi Kojie,
As Jun's FAQ indicates, today the only way to do this is to set/reset the
offset directly in ZooKeeper. In addition, we currently do not maintain a
correlation between offsets and timestamps, meaning that given a timestamp
you cannot tell which message's offset is produced at around that t
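Guozhang's suggestion of setting the offset directly in ZooKeeper can be sketched as below. The paths follow the 0.8 high-level consumer's ZooKeeper layout, and the group, topic, partition, and offset values are placeholders; stop all consumers in the group before editing, since running consumers will overwrite the stored offset when they commit.

```shell
# 0.8 high-level consumer offsets live in ZooKeeper at:
#   /consumers/<group>/offsets/<topic>/<partition>
# Inspect the current offset, then overwrite it:
bin/zkCli.sh -server localhost:2181 get /consumers/my-group/offsets/my-topic/0
bin/zkCli.sh -server localhost:2181 set /consumers/my-group/offsets/my-topic/0 12345
```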
You may find this useful -
http://kafka.apache.org/documentation.html#monitoring
Let us know how we can improve the documentation further.
Thanks,
Neha
On Mon, Oct 21, 2013 at 5:42 AM, Monika Garg wrote:
> Hi,
>
> I am getting so many MBeans from my JMX, like kafka.server, kafka.network,
> etc.
Just added an FAQ. Does that answer your question?
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-HowcanIrewindtheoffsetintheconsumer%3F
Thanks,
Jun
On Sun, Oct 20, 2013 at 11:15 PM, kojie.fu wrote:
> hi guozhang,
> I mean, in a certain topic, can I use the offset to compute the
Hi,
I am getting so many MBeans from my JMX, like kafka.server, kafka.network, etc.
They are of different types and have a lot of attributes.
I am trying to find out when the attribute value for any MBean changes.
But I am not finding any helpful docs for it on Google.
Even the terms used like
(I've changed the subject of this thread (was "Preparing for the 0.8 final
release"))
So, I'm not sure that my issue is exactly the same as that mentioned in the
FAQ.
Anyway, in looking at the MaxLag values for several consumers (not all
consuming the same topics), it looks like there was a stran
What I mean is this. We have the following scenario:
System A writes messages to a Kafka queue, and system B consumes messages from Kafka.
For some reason, system B may become unable to read messages from Kafka while system A
is still writing to the queue. So after we bring system B back up (say it took 2 days
to fix), we don't want B to resume reading from its original offset; we want it to
start reading from a certain point in time instead.
Does Kafka have an interface that maps a timestamp to an offset like that?
From: Guozhang Wang
Date: 2013-10-21 12:49
To: users@kafka.apache.org; kojie.fu
Subject: Re: h