Your understanding is correct. There should be no message loss, unless the
number of correlated failures is larger than the replication factor.
Thanks,
Jun
On Mon, May 13, 2013 at 8:46 AM, Yu, Libo wrote:
> Thanks for answering my questions. Now I know why the offset is saved in
> ZooKeeper.
> If ...
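For the no-loss guarantee to hold end to end, the producer also has to wait
for the in-sync replicas to acknowledge each write. A minimal 0.8-style
producer sketch (the broker list and topic name are made-up examples):

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class DurableProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Made-up broker addresses; replace with your own.
            props.put("metadata.broker.list", "broker1:9092,broker2:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            // -1 = wait for all in-sync replicas, so an acked message
            // survives anything short of losing every replica at once.
            props.put("request.required.acks", "-1");

            Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
            producer.send(
                new KeyedMessage<String, String>("my-topic", "key", "value"));
            producer.close();
        }
    }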
Got it and thanks for your explanation~~
Best Regards,
Li Ming
On Mon, May 13, 2013 at 9:10 PM, Chris Curtin wrote:
> Yes. However, be aware that starting and stopping processes will cause a
> rebalance of the consumers, so your code may find itself receiving events
> from a different partition ...
The lost+found directory is part of the Linux extN filesystem semantics,
and yes, it would be a terrible idea to try to remove it: it is
automatically there at the top level of a disk mount point.
Because its presence there will mess up Kafka, it is a good idea to create a
subdirectory under the mount point and point Kafka's log directory at that
subdirectory instead of at the mount point itself.
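For example (the paths are illustrative, and the property name is from the
0.8 broker config), with the disk mounted at /kafka you would create
/kafka/data and point the broker there, so lost+found never sits inside the
log directory:

    # server.properties sketch: /kafka is the mount point (and holds
    # lost+found); /kafka/data is a plain subdirectory owned by the broker
    log.dirs=/kafka/data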
So a lost+found dir is created in every dir? Which environment is this?
Thanks,
Jun
On Mon, May 13, 2013 at 8:53 AM, Yu, Libo wrote:
> That is exactly the case. I am told by the admin that lost+found cannot be
> removed from /kafka.
>
> Regards,
>
> Libo
>
>
Note that the key is provided by the application. Kafka itself makes no
effort to make sure messages are in key order.
Thanks,
Jun
On Mon, May 13, 2013 at 8:34 AM, Yu, Libo wrote:
> Thanks to both of you. The key is actually very useful. If the key
> increases monotonically, it can be used ...
That is exactly the case. I am told by the admin that lost+found cannot be
removed from /kafka.
Regards,
Libo
Thanks for the example. My version is not the latest 0.8.
Regards,
Libo
Thanks for answering my questions. Now I know why the offset is saved in
ZooKeeper.
If a consumer group has only one consumer, when it fails and restarts, I
assume it starts consuming from the offset saved in ZooKeeper. Is that right?
If that is the case, then the consumer client does not need to ...
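A minimal sketch of that flow against the 0.8 high-level consumer (the
ZooKeeper address, group, and topic names are assumptions): the group resumes
from the last committed offset after a restart, and auto-commit is turned off
here so the offset is committed only after a message is processed.

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class RestartableConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // assumption
            props.put("group.id", "my-group");                // assumption
            props.put("auto.commit.enable", "false");  // commit explicitly

            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(
                    Collections.singletonMap("my-topic", 1));

            ConsumerIterator<byte[], byte[]> it =
                streams.get("my-topic").get(0).iterator();
            while (it.hasNext()) {
                byte[] payload = it.next().message();
                // ... process the message ...
                // Record progress in ZooKeeper; after a crash and restart
                // the same group resumes from the last committed offset.
                connector.commitOffsets();
            }
        }
    }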
Thanks to both of you. The key is actually very useful. If the key increases
monotonically, it can be used as the version of the message. Say version 1
is sent first, followed by version 2. It is possible that version 2 is
received first by the consumer. When version 1 is received, it will be
ignored.
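A sketch of that de-duplication idea on the consumer side (assuming, for
illustration, that the producer packs a monotonically increasing long into
the message key):

    import java.nio.ByteBuffer;

    public class VersionFilter {
        // Highest version processed so far; anything older is stale.
        private long highestSeen = Long.MIN_VALUE;

        /** Returns true if the message is new, false if it arrived late. */
        public boolean accept(byte[] key) {
            long version = ByteBuffer.wrap(key).getLong();
            if (version <= highestSeen) {
                return false; // an older version arrived late; ignore it
            }
            highestSeen = version;
            return true;
        }
    }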
The only limit is #4 in http://kafka.apache.org/faq.html
Thanks,
Jun
On Mon, May 13, 2013 at 3:15 AM, Ming Li wrote:
> Hi,
>
> Does Kafka have a limitation on the simultaneous connections (created with
> Consumer.createJavaConsumerConnector) for the same topic within the same
> group?
>
> My ...
I'm not sure if this relates directly to your problem, but you are using
non-standard topic names because of the @ character. Topic names should
only include alphanumeric characters plus hyphen and underscore. Have you
checked your logs for any errors caused by processing these topic names?
Regards,
Dennis
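A quick way to check candidate names up front, using the character set
Dennis describes (the exact legal set may vary between Kafka versions):

    import java.util.regex.Pattern;

    public class TopicNameCheck {
        // Alphanumeric plus hyphen and underscore, per the rule above.
        private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9_-]+");

        public static boolean isLegal(String topic) {
            return LEGAL.matcher(topic).matches();
        }

        public static void main(String[] args) {
            System.out.println(isLegal("payments-2013_05")); // true
            System.out.println(isLegal("user@example"));     // false: '@'
        }
    }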
Yes. However, be aware that starting and stopping processes will cause a
rebalance of the consumers, so your code may find itself receiving events
from a different partition suddenly (so don't assume the partition you are
reading isn't going to change!). Also, as things are starting up you may
find a ...
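Given that, it is safer to re-read the partition from each message's
metadata than to remember it. A sketch against the 0.8 high-level consumer
(older releases do not expose the partition on the message metadata):

    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.message.MessageAndMetadata;

    public class PartitionAwareLoop {
        // Consume one stream, taking the partition from every message's
        // metadata, since a rebalance can silently reassign partitions.
        public static void consume(KafkaStream<byte[], byte[]> stream) {
            ConsumerIterator<byte[], byte[]> it = stream.iterator();
            while (it.hasNext()) {
                MessageAndMetadata<byte[], byte[]> msg = it.next();
                System.out.println("partition=" + msg.partition()
                    + " offset=" + msg.offset());
            }
        }
    }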
Hi Andrea,
Thanks for your reply~~ You mean there is no difference between
having N threads share the same ConsumerConnector created by
Consumer.createJavaConsumerConnector,
and
having N consumer processes, each with its own ConsumerConnector?
Best Regards,
Li Ming
On Mon, May ...
It shouldn't.
Creating several listener / consumer processes belonging to the same
group means you are working with a point-to-point message channel, so
incoming messages will be delivered only to one consumer.
Maybe I'm wrong, but I believe in that scenario there's no difference
(from the broker's perspective) ...
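A sketch of the in-process variant for comparison (the ZooKeeper address,
group and topic names, and thread count are illustrative): N streams from
one connector handed to N threads, which the broker side treats much like N
separate consumer processes.

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class ThreadedGroupConsumer {
        public static void main(String[] args) {
            int numThreads = 4; // illustrative
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // assumption
            props.put("group.id", "my-group");                // assumption

            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            // Ask for numThreads streams over the topic; each stream gets a
            // disjoint subset of partitions, like separate processes would.
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(
                    Collections.singletonMap("my-topic", numThreads));

            ExecutorService pool = Executors.newFixedThreadPool(numThreads);
            for (final KafkaStream<byte[], byte[]> stream
                    : streams.get("my-topic")) {
                pool.submit(new Runnable() {
                    public void run() {
                        ConsumerIterator<byte[], byte[]> it = stream.iterator();
                        while (it.hasNext()) {
                            byte[] payload = it.next().message();
                            // ... handle the message ...
                        }
                    }
                });
            }
        }
    }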
Hi,
Does Kafka have a limitation on the simultaneous connections (created with
Consumer.createJavaConsumerConnector) for the same topic within the same
group?
My scenario is that I need to consume a topic from different processes (not
threads), so I need to create lots of high-level consumers.
Best Regards,
Li Ming