Guozhang,
Thanks for your prompt reply. I got two 300GB SAS disks for each broker.
At peak time, the produce speed for each broker is about 70MB/s. Apparently,
this speed is already restricted by the network. Meanwhile, the consume speed
is lower because some topics are consumed by more than one group. Under
One possible approach is to change the retention policy on the broker.
How much message data can accumulate on the brokers at peak time?
Guozhang
On Wed, Dec 11, 2013 at 9:09 PM, xingcan wrote:
> Hi,
>
> In my application, the produce speed could be very high at some specific
> time in a day while
Hi,
In my application, the produce speed can be very high at certain times of the
day and returns to a low speed for the rest of the time. Frequently, my data
logs are flushed away before they are consumed by clients, due to a lack of
disk space during the busy periods. Increasing consume speed
I tried the sample code and it works. I can also delete the old index file
manually.
Thanks,
Liang Cui
2013/12/12 Jay Kreps
> Is the path d:\kafka-logs\test001-0\00507600.index correct?
>
> The tricky thing here is we don't have access to windows for testing so we
> will need a bit
When using ZK to keep track of last offsets, metrics, etc., how do you know
when you are pushing your ZK cluster to its limit?
Or can ZK handle thousands of writes/reads per second with no problem since it
is all in-memory? But even so, you need some idea of its upper limits and
how close you are to that
Is the path d:\kafka-logs\test001-0\00507600.index correct?
The tricky thing here is we don't have access to windows for testing so we
will need a bit more help for debugging. If you write a simple Java program
that does
System.out.println(new File("d:\kafka-logs\test001-0\0
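A minimal sketch of the kind of check Jay seems to be suggesting, assuming the
goal is to see whether the JVM can find and delete the index file; the exact
method calls and the full file name are filled in from context elsewhere in
this thread, not from his message:

import java.io.File;

public class CheckIndexFile {
    public static void main(String[] args) {
        // Path taken from the broker log quoted later in this thread;
        // backslashes must be escaped in a Java string literal.
        File f = new File("d:\\kafka-logs\\test001-0\\00507600.index");
        System.out.println("exists:   " + f.exists());   // can the JVM see the file?
        System.out.println("writable: " + f.canWrite()); // does the JVM think it is writable?
        System.out.println("deleted:  " + f.delete());   // can the JVM delete it?
    }
}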
Actually, I think I isolated where the error may be. We have a library that
was recently updated to fix an issue. Other code using the same part of the
library is working properly, but for some reason in this case it isn't.
Apologies for wasting people's time, but I just never even thought to
Do you have compression turned on in the broker?
Guozhang
On Wed, Dec 11, 2013 at 8:43 AM, Sybrandy, Casey <
casey.sybra...@six3systems.com> wrote:
> First, I saw the partial message looking at raw network traffic via
> Wireshark, not the output of the iterator as the iterator never seems to
>
First, I saw the partial message by looking at the raw network traffic via
Wireshark, not in the output of the iterator, as the iterator never seems to
provide me any data. That's where the code is hanging.
Second, here's the output from the ConsumerOffsetChecker:
grp1,tdf_topic,0-0 (Group,Topic,BrokerId
On 11/12/2013 17:09, Jun Rao wrote:
Yes, this seems to be a bug in javaapi, could you file a jira?
Normally, a consumer will create a stream once and keep iterating on the
stream. The connection to ZK happens when the consumer connector is
created. The connection to the brokers happens after
Casey,
Just to confirm, you saw a partial message output from the iterator.next()
call, not from the consumer's fetch response, correct?
Guozhang
On Wed, Dec 11, 2013 at 8:14 AM, Jun Rao wrote:
> Have you looked at
>
> https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemst
Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemstohavestopped%2Cwhy%3F?
Thanks,
Jun
On Wed, Dec 11, 2013 at 3:59 AM, shahab wrote:
> Hi,
>
> I have a problem in fetching messages from Kafka. I am using simple
> consumer API in Java to fetch message
Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemstohavestopped%2Cwhy%3F?
If that doesn't help, could you file a jira and attach your log there? The
Apache mailing list doesn't support attachments.
Thanks,
Jun
On Wed, Dec 11, 2013 at 6:15 AM, Sybrandy, Casey <
ca
Yes, this seems to be a bug in javaapi, could you file a jira?
Normally, a consumer will create a stream once and keep iterating on the
stream. The connection to ZK happens when the consumer connector is
created. The connection to the brokers happens after the creation of the
stream.
Thanks,
Jun
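For reference, a minimal sketch of the pattern Jun describes, using the 0.8
high-level consumer javaapi: the connector and stream are created once and the
iterator is then consumed in a loop. The ZooKeeper address, group id, and
topic name below are placeholders, not values from this thread.

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class HighLevelConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "my-group");                // placeholder

        // The connection to ZK happens here, when the connector is created.
        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // The connection to the brokers happens after the streams are created.
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
        ConsumerIterator<byte[], byte[]> it = streams.get("my-topic").get(0).iterator();

        // Create the stream once and keep iterating on it; hasNext() blocks
        // until the next message arrives (or consumer.timeout.ms expires).
        while (it.hasNext()) {
            byte[] message = it.next().message();
            System.out.println("received " + message.length + " bytes");
        }
    }
}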
These numbers are a bit misleading. In Kafka, a topic partition is the
smallest unit by which we distribute messages among consumers in the same
consumer group. So, if the number of consumers in a group is larger than the
total number of partitions in the Kafka cluster, some consumers will never get
any data.
In y
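As a hypothetical illustration of that point (the figures are made up, not
taken from this thread): with 10 partitions shared by 12 consumers in one
group, at least two consumers are assigned nothing.

public class PartitionAssignmentSketch {
    public static void main(String[] args) {
        int partitions = 10; // total partitions of the topic
        int consumers = 12;  // consumers in the same group
        for (int c = 0; c < consumers; c++) {
            // Roughly how a range-style assignment divides partitions: each
            // consumer gets partitions/consumers, and the first
            // (partitions % consumers) consumers get one extra.
            int assigned = partitions / consumers + (c < partitions % consumers ? 1 : 0);
            System.out.println("consumer-" + c + " owns " + assigned + " partition(s)");
        }
        // Output: consumers 0-9 own 1 partition each; consumers 10 and 11 own none.
    }
}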
Hi,
I have a problem fetching messages from Kafka. I am using the simple
consumer API in Java to fetch messages from Kafka (the same one shown in the
Kafka introduction example). The problem is that after a while (it could be
30 minutes or a couple of hours), the consumer does not receive any messages
Hello,
No, the entire log file isn't bigger than that buffer size, and this is
occurring while trying to retrieve the first message on the topic, not the last.
I attached a log. Line 408 (Iterating.) is where we get an iterator
and start iterating over the data. There should be subsequent
On 11/12/2013 10:34, Vincent Rischmann wrote:
Hello,
I am writing a simple program in Java using the Kafka 0.8.0 jar
compiled with Scala 2.10.
I have designed my program with a singleton class which holds a map
of (consumer group, ConsumerConnector) and a map of (topic, Producer).
This si
Hello,
I am writing a simple program in Java using the Kafka 0.8.0 jar compiled
with Scala 2.10.
I have designed my program with a singleton class which holds a map of
(consumer group, ConsumerConnector) and a map of (topic, Producer).
This singleton class provides two methods, send(topic, ob
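A minimal sketch of the producer half of the design Vincent describes might
look like the following; the class name, the lazy creation, and the
broker/serializer settings are illustrative assumptions, not details from his
program:

import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

// Hypothetical sketch: one Producer per topic, created lazily and reused on
// every send() call. The (consumer group, ConsumerConnector) map would be
// handled the same way for the consumer side.
public final class KafkaSingleton {
    private static final KafkaSingleton INSTANCE = new KafkaSingleton();
    private final Map<String, Producer<String, String>> producers =
            new ConcurrentHashMap<String, Producer<String, String>>();

    private KafkaSingleton() {}

    public static KafkaSingleton getInstance() {
        return INSTANCE;
    }

    public void send(String topic, String message) {
        Producer<String, String> producer = producers.get(topic);
        if (producer == null) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "localhost:9092");            // placeholder broker list
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            producer = new Producer<String, String>(new ProducerConfig(props));
            producers.put(topic, producer);
        }
        producer.send(new KeyedMessage<String, String>(topic, message));
    }
}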
Hi,
I am trying my hand at Kafka 0.8. I have 3 Kafka servers and 3
ZooKeeper nodes running. With the number of partitions set to 10 and a
replication factor of 2, 4 producers were pushing data into Kafka, each with
its own topic. There are 4 consumers which are getting the data from Kafka.
The problem
An update on this issue.
I updated the log config to log.dirs=\\kafka-logs. The log file is deleted,
but the broker still can't delete the index file. I got the error message below.
[2013-12-11 00:07:59,671] INFO Deleting index
d:\kafka-logs\test001-0\00507600.index (kafka.log.OffsetIndex)
[2013-12-11 00:07: