Hello,
I was wondering if there is any documented way to recover from a zookeeper
error while retaining Kafka data?
I am developing right now and do not have a redundant zookeeper node. I seem to
regularly get CRC errors that prevent the zookeeper from starting. The
troubleshooting section of t
If auto.offset.reset is set to smallest, it does not mean the consumer
will always consume from the smallest. It means that if no previous offset
commit is found for this consumer group, then it will consume from the
smallest. So for mirror maker, you probably want to always use the same
consumer group.
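A minimal consumer config sketch illustrating this (the file name and group
name are hypothetical; auto.offset.reset only kicks in when no committed
offset exists for the group):

# mirror-consumer.properties (hypothetical)
zookeeper.connect=localhost:2181
group.id=mirror-maker-group     # keep this stable across restarts
auto.offset.reset=smallest      # used only when no prior offset commit exists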
Hmm, that sounds like a bug. Can you paste the log of the leader rebalance
here?
Some other things to check are:
1. The actual property name is auto.leader.rebalance.enable, not
auto.leader.rebalance. You probably already know this, just to double confirm.
2. In the zookeeper path, can you verify /admin/prefer
Qin
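For reference, a broker config sketch with the correctly spelled property; the
two related knobs are only consulted when it is enabled, and the values shown
are, to the best of my knowledge, the stock defaults:

# server.properties
auto.leader.rebalance.enable=false
# only used when the above is true:
# leader.imbalance.check.interval.seconds=300
# leader.imbalance.per.broker.percentage=10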
The partition problem is solved by passing the "--new.producer true" option on
the command line, but after adding the auto.offset.reset=smallest config, every
time I restart the mirror tool it copies from the beginning and ends up with a
lot of duplicate messages in the destination cluster.
Could you please tell me how do I conf
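A sketch of a MirrorMaker invocation that pins a stable consumer group, so
restarts resume from committed offsets instead of re-copying (the property
file names and topic are hypothetical):

bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config mirror-consumer.properties \
  --producer.config mirror-producer.properties \
  --whitelist 'my-topic' \
  --new.producer true

With group.id fixed in mirror-consumer.properties, auto.offset.reset=smallest
applies only on the very first run; later restarts pick up from the last
committed offset.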
I started with a clean cluster and started to push data. It still does the
rebalance at random intervals even though auto.leader.rebalance.enable is set
to false.
Thanks
Zakee
> On Mar 6, 2015, at 3:51 PM, Jiangjie Qin wrote:
>
> Yes, the rebalance should not happen in that case. That is a li
When will KAFKA-1997 be available?
Thanks
Connie
On Sat, Mar 7, 2015 at 12:48 AM, Jiangjie Qin
wrote:
> Hi Tao,
>
> Thanks a lot for finding the bug. We are actually rewriting the mirror
> maker in KAFKA-1997 with a much simplified solution using the newly added
> flush() call in new java producer.
Hi,
Using kafka 0.8.2.0 (fresh download).
Started zookeeper with:
peter_v@trusty64:~/data/projects/kafka/kafka_current_version$
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
The process is running:
peter_v@trusty64:~/data/projects/kafka/kafka_current_version$ ps ax | grep
Hey Alex,
Conceptually you aren't sending a single message to all partitions; for it to
be available in all partitions you'd have to send a message for each
partition.
You can fan out in the client like this though:
https://gist.github.com/anonymous/92bb8b788742e95ee2e8
Best,
Mike
On Sat, Mar 7, 2015 a
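A rough sketch of that client-side fan-out with the 0.8.2 KafkaProducer; the
topic name, broker address, and payload are assumptions:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.PartitionInfo;

public class FanOut {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed address
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
        byte[] payload = "control-message".getBytes();     // assumed payload
        // one send per partition, pinning the partition on the record itself
        for (PartitionInfo p : producer.partitionsFor("my-topic")) {
            producer.send(new ProducerRecord<byte[], byte[]>(
                "my-topic", p.partition(), null, payload));
        }
        producer.close();
    }
}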
Hello,
Note I am using the new 0.8.2 version of Kafka and so I'm using the new
KafkaProducer class.
I have a special type of message data that I need to push to every
partition in a topic. Can that be done with a custom partitioner that
implements Partitioner, given that Partitioner expects you to return a single
partition?
This is one of the major issues that we have noted with using JBOD disk
layouts: there is no tool, like partition reassignment, to move partitions
between disks.
Another is that the partition balance algorithm would need to be improved,
allowing for better selection of a mount point than round-robin.
For data not showing up, you need to make sure the mirror maker consumer's
auto.offset.reset is set to smallest; otherwise, when you run mirror maker
for the first time, all the pre-existing messages won't be consumed.
For partition sticking, can you verify if your messages are keyed messages
or not? If t
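Background on the partition sticking, as a sketch of the 0.8.x behavior being
hinted at: the old (Scala) producer sends null-keyed messages to one randomly
chosen partition and only re-picks after a metadata refresh, so a short run
can land everything on a single partition. The relevant old-producer setting
(600000 ms is, I believe, the default):

# old producer config
topic.metadata.refresh.interval.ms=600000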
I don't think we can specify partition-to-disk mapping now. All the
partitions will reside in the same directory.
Here is a wild idea, but I haven't tried this:
1. Create the topic and make sure all the log files are created.
2. Move each partition log directory to the disk that you want them to
reside on.
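A shell sketch of that wild idea; the paths are assumptions, and the symlink
step is a guess at how the broker would still find the moved directory,
untested as the author says:

# stop the broker before touching the log dirs
bin/kafka-server-stop.sh
# move one partition's directory to another disk, then symlink it back
mv /data/kafka-logs/my-topic-0 /disk2/kafka-logs/my-topic-0
ln -s /disk2/kafka-logs/my-topic-0 /data/kafka-logs/my-topic-0
bin/kafka-server-start.sh -daemon config/server.properties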
Hi,
(sorry if duplicate, my first try was before I was subscribed to the list).
Using kafka 0.8.2.0 (fresh download). Started zookeeper with:
$ bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
The process is running:
$ ps ax | grep -i 'zookeeper' | grep -v grep | awk '{
Xiao,
FileChannel.force is fsync on unix.
To force fsync on every message:
log.flush.interval.messages=1
You are looking at the time-based fsync, which, naturally, as you say, is
time-based.
-Jay
On Fri, Mar 6, 2015 at 11:35 PM, Xiao wrote:
> Hi, Jay,
>
> Thank you for your answer.
>
> Sorry
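For reference, the two flush knobs side by side in broker config (the ms value
is illustrative):

# server.properties
log.flush.interval.messages=1   # fsync after every message (durable, slow)
log.flush.interval.ms=1000      # the time-based fsync discussed above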
And I also observed that all the data is moving to one partition in the
destination cluster, though I have multiple partitions for that topic in both
the source and destination clusters.
SunilKalva
On Sat, Mar 7, 2015 at 9:54 PM, sunil kalva wrote:
> I ran the kafka mirroring tool after producing data in the source cluster
I ran the kafka mirroring tool after producing data in the source cluster, and
this data is not copied to the destination cluster. If I produce data after
running the tool, that data is copied to the destination cluster. Am I missing
something?
--
SunilKalva
Please advise on this.
On Fri, Mar 6, 2015 at 2:02 AM, sunil kalva wrote:
> Hi
>
> Can I map a specific partition to a different disk in a broker? And what
> are the general recommendations for disk-to-partition mapping, both for
> partitions for which that broker is the leader and for the replicas that broker h
Yes +1 on that. Thanks for doing these polls. Quite useful.
Thanks
Jeff
On Thu, Mar 5, 2015 at 12:00 AM, Neha Narkhede wrote:
> Thanks for running the poll and sharing the results!
>
> On Wed, Mar 4, 2015 at 8:34 PM, Otis Gospodnetic <
> otis.gospodne...@gmail.com
> > wrote:
>
> > Hi,
> >
> >
Created https://issues.apache.org/jira/browse/KAFKA-2008
On Sat, Mar 7, 2015 at 1:17 AM, Jiangjie Qin
wrote:
> Hi Tao,
>
> Yes, your understanding is correct. We probably should update the document
> to make it more clear. Could you open a ticket for it?
>
> Jiangjie (Becket) Qin
>
> On 3/6/15,
Actually I was going to report another bug that was caused exactly by the
UncheckedOffsets.removeOffset
issue (offsets being removed before they are added).
As the current project I am working on relies heavily on the
functionality MM offers, it would be good if you could put the fix on trunk
or give me some advice.
Hi Tao,
Thanks a lot for finding the bug. We are actually rewriting the mirror
maker in KAFKA-1997 with a much simplified solution using the newly added
flush() call in new java producer.
Mirror maker in current trunk is also missing one necessary
synchronization - the UncheckedOffsets.removeOffset
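A rough Java sketch of the simplification that flush() enables: mirror a batch
with the new producer, flush, then commit consumer offsets. The topic, group,
and addresses are assumptions, and it needs the trunk producer with the newly
added flush(); the real rewrite lives in KAFKA-1997:

import java.util.Collections;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleMirror {
    public static void main(String[] args) {
        Properties c = new Properties();
        c.put("zookeeper.connect", "source-zk:2181");   // assumed address
        c.put("group.id", "mirror");
        c.put("auto.commit.enable", "false");           // commit manually after flush
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(c));

        Properties p = new Properties();
        p.put("bootstrap.servers", "target:9092");      // assumed address
        p.put("key.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        p.put("value.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(p);

        KafkaStream<byte[], byte[]> stream = connector
            .createMessageStreams(Collections.singletonMap("my-topic", 1))
            .get("my-topic").get(0);
        int n = 0;
        for (MessageAndMetadata<byte[], byte[]> m : stream) {
            producer.send(new ProducerRecord<byte[], byte[]>(
                m.topic(), m.key(), m.message()));
            if (++n % 1000 == 0) {         // every 1000 messages:
                producer.flush();          // wait for in-flight sends to complete
                connector.commitOffsets(); // then it is safe to commit offsets
            }
        }
    }
}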
Thanks for the good references, Aditya, Daniel and Max.
After studying the materials, I have a general architecture question for
advice: it seems Flume already implements messaging and a tiered
messaging mechanism, which seems to overlap with the function of Kafka (from a
messaging system perspective)
Yet another option is Camus, which we are using at LinkedIn:
https://github.com/linkedin/camus
Jiangjie (Becket) Qin
On 3/6/15, 10:01 PM, "max square" wrote:
>This presentation from a recent Kafka meetup in NYC describes different
>approaches.
>http://www.slideshare.net/gwenshap/kafka-hadoop-fo