/presentation/pillai
Thanks,
Xiao Li
Hi, all,
In my previous note, the two checkpoints per partition have to be stored in
different files. Otherwise, the files could be corrupted.
Thanks,
Xiao Li
On Mar 2, 2015, at 10:25 PM, Xiao wrote:
> Hi, all,
>
> I just started reading the source code of Kafka. Th
been implemented in IBM Q
Replication since 2001.
Thanks,
Xiao Li
On Mar 3, 2015, at 3:36 PM, Jay Kreps wrote:
> Hey Josh,
>
> As you say, ordering is per partition. Technically it is generally possible
> to publish all changes to a database to a single partition--generally
. Unfortunately, based on my
understanding, Kafka is unable to do this because it does not fsync regularly,
in order to achieve better throughput.
Best wishes,
Xiao Li
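For illustration, a minimal sketch of single-partition publishing with the
0.8.2 Java producer; the topic name is a placeholder and the producer is
assumed to be configured elsewhere. Pinning every record to partition 0 gives
consumers one total order, at the cost of parallelism:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SinglePartitionPublish {
    // Every change record goes to partition 0 of one topic, so consumers
    // see a single total order across all changes.
    static void publish(KafkaProducer<byte[], byte[]> producer,
                        byte[] key, byte[] value) {
        producer.send(new ProducerRecord<byte[], byte[]>("db-changes", 0, key, value));
    }
}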
On Mar 3, 2015, at 3:45 PM, Xiao wrote:
> Hey Josh,
>
> Transactions can be applied in parallel on the consumer side
ood what I explained above.
Best wishes,
Xiao Li
On Mar 3, 2015, at 4:23 PM, Xiao wrote:
> Hey Josh,
>
> If you put different tables into different partitions or topics, it might
> break transactional ACID properties on the target side. This is risky for some use cases.
fully understand the Kafka source code before
using it.
Best wishes,
Xiao Li
On Mar 4, 2015, at 5:18 AM, Josh Rader wrote:
> Thanks everyone for your responses! These are great. It seems our case
> matches closest to Jay's recommendations.
>
> The one part that sounds a littl
mainframe?
Thanks,
Xiao Li
On Mar 4, 2015, at 8:01 AM, Jay Kreps wrote:
> Hey Xiao,
>
> 1. Nothing prevents applying transactions transactionally on the
> destination side, though that is obviously more work. But I think the key
> point here is that much of the time the replicat
, you
can have multiple producers publish messages at the same time. This could
improve your throughput, and your consumers can easily identify if any message
is lost for any reason.
Best wishes,
Xiao Li
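A sketch of that scheme with a hypothetical SequencedProducer wrapper and
placeholder names; a consumer that sees a gap in the sequence for a given
producer ID knows a message was lost:

import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SequencedProducer {
    private final AtomicLong sequence = new AtomicLong(0);
    private final long producerId;
    private final KafkaProducer<byte[], byte[]> producer;

    SequencedProducer(long producerId, KafkaProducer<byte[], byte[]> producer) {
        this.producerId = producerId;
        this.producer = producer;
    }

    // Prefix each payload with (producerId, sequence); gaps in the sequence
    // for a producer ID indicate lost messages.
    void send(String topic, byte[] body) {
        byte[] payload = ByteBuffer.allocate(16 + body.length)
                                   .putLong(producerId)
                                   .putLong(sequence.incrementAndGet())
                                   .put(body)
                                   .array();
        producer.send(new ProducerRecord<byte[], byte[]>(topic, payload));
    }
}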
On Mar 4, 2015, at 4:59 PM, James Cheng wrote:
> Another thing to think about
others too?
Night,
Xiao Li
On Mar 4, 2015, at 9:00 AM, Jay Kreps wrote:
> Hey Xiao,
>
> Yeah I agree that without fsync you will not get durability in the case of
> a power outage or other correlated failure, and likewise without
> replication you won't get durability in the
we always choose the earliest of these three points:
— The recovery points (offsets) in Kafka recovery-point file,
— The offsets and IDs of the last message in the partitions.
— Your local last published message IDs.
Best wishes,
Xiao Li
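As a sketch, with illustrative variable names for the three checkpoints listed
above:

// Resume from the earliest of the three checkpoints so nothing is skipped;
// duplicates on replay can then be filtered out by message ID.
long restartPoint = Math.min(kafkaRecoveryPointOffset,
        Math.min(lastMessageOffsetInPartition, lastLocallyPublishedId));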
On Mar 5, 2015, at 11:07 AM, James Cheng wrote:
>
memory resources.
Best wishes,
Xiao Li
On Mar 5, 2015, at 11:07 AM, James Cheng wrote:
>
> On Mar 5, 2015, at 12:59 AM, Xiao wrote:
>
>> Hi, James,
>>
>> This design regarding the restart point has a few potential issues, I think.
>>
>> - The rest
scheduler.schedule("kafka-recovery-point-checkpoint",
                   checkpointRecoveryPointOffsets,
                   delay = InitialTaskDelayMs,
                   period = flushCheckpointMs,
                   TimeUnit.MILLISECONDS)
This task is time-driven only; it does not check the number of messages.
Thank you,
Xiao Li
On Mar 5, 2015, at 11:59 AM, Jay Kreps wrote:
> Hey Xiao,
>
> That's n
re in Oracle and DB2
z/OS.
I believe transactional messaging is a critical feature. The design document is
not very clear. Do you have more materials or links about it?
Thanks,
Xiao Li
On Mar 7, 2015, at 9:33 AM, Jay Kreps wrote:
> Xiao,
>
> FileChannel.force is fsync on unix.
design proposal of “transactional messaging” misses a design change in
the log recovery? Recovery checkpoints might fall in the middle of multiple
in-flight transactions.
Thank you very much!
Xiao Li
On Mar 10, 2015, at 1:01 PM, Jiangjie Qin wrote:
> Hi Xiao,
>
> For z/OS, do you
Hi, Pete,
Thank you for sharing your experience with me!
sendfile and mmap are common system calls, but it sounds like we still need to
consider at least file-system differences when deploying Kafka.
Cross-platform support is a headache. :)
Best wishes,
Xiao Li
On Mar 10, 2015
Kafka can provide a lightweight protocol when the transaction
is only against a single partition?
We do have monster transactions, which are normally caused by uncommitted batch
jobs. However, that should be very rare. Maybe monthly, quarterly or yearly.
Thank you very much!
Xiao Li
On
I think this is a usability issue. It might need an extra admin tool to verify
if all configuration settings are correct, even if the broker can return an
error message to the consumers.
Thanks,
Xiao Li
On Mar 17, 2015, at 5:18 PM, Jiangjie Qin wrote:
> The problem is the consumers
people share it
with us? I believe it can help us a lot.
Thanks,
Xiao Li
On Mar 17, 2015, at 12:26 PM, James Cheng wrote:
> This is a great set of projects!
>
> We should put this list of projects on a site somewhere so people can more
> easily see and refer to it. These
=AGlRjlrNDYk
Event publishing is different from database replication. Kafka is used for
change publishing, or maybe also for shipping changes (recorded in files).
Thanks,
Xiao Li
On Mar 17, 2015, at 7:26 PM, Arya Ketan wrote:
> AFAIK , linkedin uses databus to do the same. Aesop is built
LinkedIn's Gobblin compaction tool uses Hive to perform the compaction. Does
this mean Lumos has been replaced?
Confused…
On Mar 17, 2015, at 10:00 PM, Xiao wrote:
> Hi, all,
>
> Do you know whether Linkedin plans to open source Lumos in the near future?
>
> I found the answer
Hi, James,
Thank you for sharing it!
The links for the videos and the slides are the same. Could you check the
link to the slides?
Xiao Li
On Mar 20, 2015, at 11:30 AM, James Cheng wrote:
> For those who missed it:
>
> The Kafka Audit tool was also presented at the 1/27 Kafka meetu
Hi all,
I have two mirror maker processes running on two different machines
fetching messages from the same topic from one data center to another data
center. These two processes are assigned to the same consumer group. If I
want no data loss or data duplication even when one of the mirror maker
proce
Hi,
I discovered that the new mirror maker implementation in trunk now only
accepts one consumer.config property instead of a list of them, which means
we can only supply one source per mirror maker process. Is there a reason for
it? If I have multiple source Kafka clusters, do I need to set up multiple
Hi team,
I got an NPE when running the latest mirror maker that is in trunk
[2015-01-23 18:55:20,229] INFO
[kafkatopic-1_LM-SHC-00950667-1422010513674-cb0bb562], exception during
rebalance (kafka.consumer.ZookeeperConsumerConnector)
java.lang.NullPointerException
at
kafka.tools.MirrorMaker$Intern
Hi,
I have checked out the trunk code and tried to use Mirror Maker.
When I enabled the csv reporter in Mirror Maker consumer config
(--consumer.config=c1.properties)
kafka.metrics.polling.interval.secs=5
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
kafka.csv.metrics.dir=/var/
Hi team,
I was running the mirror maker off the trunk code and got an IOException when
configuring the mirror maker to use KafkaCSVMetricsReporter as the metrics
reporter.
Here is the exception I got
java.io.IOException: Unable to create /tmp/csv1/BytesPerSec.csv
at
com.yammer.metrics.reporting.CsvR
Hi,
To get it to work, you can turn off the CSV reporter.
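Concretely, in the consumer config (a sketch; the enable-flag name is an
assumption based on kafka.metrics.KafkaCSVMetricsReporter):

# Assumed flag read by KafkaCSVMetricsReporter; leaving it false, or removing
# the reporter from kafka.metrics.reporters, skips the CSV code path.
kafka.csv.metrics.reporter.enabled=false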
On Thu, Feb 5, 2015 at 1:06 PM, Xinyi Su wrote:
> Hi,
>
> Today I updated Kafka cluster from 0.8.2-beta to 0.8.2.0 and run kafka
> producer performance test.
>
> The test cannot continue because of some exceptions thrown which does not
Alex,
I got a similar error before due to incorrect network binding of my laptop's
wireless interface. You can try setting advertised.host.name to the Kafka
server's hostname in server.properties and run it again.
On Sun, Feb 8, 2015 at 8:38 AM, Alex Melville wrote:
> Howdy all,
>
> I recently
Hi team,
If I set offsets.storage=kafka, can I still use auto.commit.enable to turn
off auto commit and auto.commit.interval.ms to control the commit interval? The
documentation mentions that the above two properties are used to control
offset commits to ZooKeeper.
--
Regards,
Tao
Hi team,
I got a java.nio.channels.ClosedByInterruptException when
closing a ConsumerConnector using Kafka 0.8.2.
Here is the exception
2015-02-09 19:04:19 INFO kafka.utils.Logging$class:68 -
[test12345_localhost], ZKConsumerConnector shutting down
2015-02-09 19:04:19 INFO kafka.utils.Logging$clas
It happens every time I shut down the connector. It doesn't block the
shutdown process, though.
On Tue, Feb 10, 2015 at 1:09 AM, Guozhang Wang wrote:
> Is this exception transient or consistent and blocking the shutdown
> process?
>
> On Mon, Feb 9, 2015 at 3:07 AM, tao xiao wro
When I executed the delete command, the returned information was as below:
it indicates that kafka-topics.sh does not support the delete parameter.
I compiled the package myself.
Hi team,
I am comparing the differences between
ConsumerConnector.createMessageStreams
and ConsumerConnector.createMessageStreamsByFilter. My understanding is
that createMessageStreams creates x number of threads (x is the number of
threads passed in to the method) dedicated to the specified topic
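For reference, a sketch contrasting the two calls against the 0.8 high-level
consumer API, with placeholder topic names and an existing connector:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import kafka.consumer.KafkaStream;
import kafka.consumer.Whitelist;
import kafka.javaapi.consumer.ConsumerConnector;

public class StreamCreationSketch {
    static void sketch(ConsumerConnector connector) {
        // createMessageStreams: three streams dedicated to one named topic.
        Map<String, Integer> topicCount = new HashMap<String, Integer>();
        topicCount.put("my-topic", 3); // placeholder topic name
        Map<String, List<KafkaStream<byte[], byte[]>>> dedicated =
            connector.createMessageStreams(topicCount);

        // createMessageStreamsByFilter: three streams shared across every
        // topic matching the regex, including topics created later.
        List<KafkaStream<byte[], byte[]>> shared =
            connector.createMessageStreamsByFilter(new Whitelist("my-.*"), 3);
    }
}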
not really for throughput / latency,
> but rather consumption semantics.
>
> Guozhang
>
>
> On Tue, Feb 10, 2015 at 3:02 AM, tao xiao wrote:
>
> > Hi team,
> >
> > I am comparing the differences between
> > ConsumerConnector.createMessageStrea
=> 2) a total of 5 threads will
> be created, and consuming AC-1,AC-2,AC-3,BC-1/2/3,BC-4/5/6 respectively;
>
> With createMessageStreamsByFilter("*C" => 3) a total of 3 threads will be
> created, and consuming AC-1/BC-1/BC-2, AC-2/BC-3/BC-4, AC-3/BC-5/BC-6
> r
; Guozhang
>
> On Tue, Feb 10, 2015 at 6:24 PM, tao xiao wrote:
>
> > Thank you Guozhang for your detailed explanation. In your example
> > createMessageStreamsByFilter("*C" => 3) since threads are shared among
> > topics there may be situation where all 3
Hi team,
I was trying to migrate my consumer offset from kafka to zookeeper.
Here are the original settings of my consumer
props.put("offsets.storage", "kafka");
props.put("dual.commit.enabled", "false");
Here are the steps
1. set dual.commit.enabled=true
2. restart my consumer and monitor offse
k those up and see what they report?
>
> On Thu, Feb 12, 2015 at 07:32:39PM +0800, tao xiao wrote:
> > Hi team,
> >
> > I was trying to migrate my consumer offset from kafka to zookeeper.
> >
> > Here is the original settings of my consumer
> >
> > props.
he MaxLag mbean in the consumer
> > to verify that the maxlag is zero?
> >
> > On Thu, Feb 12, 2015 at 11:24:42PM +0800, tao xiao wrote:
> > > Hi Joel,
> > >
> > > When I set dual.commit.enabled=true the count value of both metrics got
ird. Are you by any chance running an older version of the
> offset checker? Is this straightforward to reproduce?
>
> On Fri, Feb 13, 2015 at 09:57:31AM +0800, tao xiao wrote:
> > Joel,
> >
> > No, the metric was not increasing. It was 0 all the time.
> >
> &
t checker always use that offset.
>
> On 2/12/15, 7:30 PM, "tao xiao" wrote:
>
> >I used the one shipped with 0.8.2. It is pretty straightforward to
> >reproduce the issue.
> >
> >Here are the steps to reproduce:
> >1. I have a consumer using high leve
Hi team,
Is there a metric that shows the consumer lag of a particular consumer
group, similar to what the offset checker provides?
--
Regards,
Tao
o longer committing to offset topic on broker, while
> > offset checker always use that offset.
> >
> > On 2/12/15, 7:30 PM, "tao xiao" wrote:
> >
> > >I used the one shipped with 0.8.2. It is pretty straightforward to
> > >reproduce the issue.
>
o just monitor MaxLag as that reports the maximum
> of all the lag metrics.
>
> On Fri, Feb 13, 2015 at 05:03:28PM +0800, tao xiao wrote:
> > Hi team,
> >
> > Is there a metric that shows the consumer lag of a particular consumer
> > group? similar to what offset
Alex,
Are you sure you have data continually being sent to the topic in the source
cluster after you bring up MM? By default auto.offset.reset=largest in the MM
consumer config, which means MM only fetches from the largest offset if the
consumer group has no initial offset in ZooKeeper.
You can have MM print m
You can get the partition number and offset of the message by
MessageAndMetadata.partition() and MessageAndMetadata.offset().
For your scenario, you can turn off auto commit (auto.commit.enable=false) and
then commit yourself after finishing message consumption.
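A sketch of that flow with the 0.8 high-level consumer; the ZooKeeper address,
group id, and topic are placeholders:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class ManualCommitExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder address
        props.put("group.id", "my-group");                // placeholder group
        props.put("auto.commit.enable", "false");         // commit manually instead
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
        for (MessageAndMetadata<byte[], byte[]> mam : streams.get("my-topic").get(0)) {
            // partition() and offset() identify the message's position.
            System.out.printf("partition=%d offset=%d%n", mam.partition(), mam.offset());
            connector.commitOffsets(); // commit only after the message is processed
        }
    }
}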
On Mon, Feb 16, 2015 at 1:40 PM, Ar
> committed consumer offsets. This allows us to catch a broken consumer, as
> well as an active consumer that is just falling behind.
>
> -Todd
>
>
> On Fri, Feb 13, 2015 at 9:34 PM, tao xiao wrote:
>
> > Thanks Joel. But I discover that both MaxLag and FetcherLagMetr
ka, rather than a select set.
>
> -Todd
>
>
> On Mon, Feb 16, 2015 at 12:27 AM, tao xiao wrote:
>
> > Thank you Todd for your detailed explanation. Currently I export all
> > metrics to graphite using the reporter configuration. is there a way I
> can
> > do si
Bsxx15A
>
> When I run the mirrormaker and then spin up a console consumer to read from
> the source cluster, I get 0 messages consumed.
>
>
> Alex
>
> On Sun, Feb 15, 2015 at 3:00 AM, tao xiao > wrote:
>
> > Alex,
> >
> > Are you sure you have da
Gaurav,
You can get the partition number the message belongs to via
MessageAndMetadata.partition()
On Fri, Feb 27, 2015 at 5:16 AM, Jun Rao wrote:
> The partition api is exposed to the consumer in 0.8.2.
>
> Thanks,
>
> Jun
>
> On Thu, Feb 26, 2015 at 10:53 AM, Gaurav Agarwal
> wrote:
>
> > Af
een.
> > >
> > >
> > > How can I query a broker or zookeeper for the number of partitions in a
> > > given topic? I'm trying to write a custom partitioner that sends a
> > message
> > > to every partition within a topic, and so I
Hi team,
I had a replica node that was shut down improperly because it ran out of disk
space. I managed to clean up the disk and restarted the replica, but the
replica has never caught up with the leader since then, as shown below:
Topic:test PartitionCount:1 ReplicationFactor:3 Configs:
Topic: test Partition: 0 Leade
4:27 PM, Harsha wrote:
> you can increase num.replica.fetchers by default its 1 and also try
> increasing replica.fetch.max.bytes
> -Harsha
>
> On Fri, Feb 27, 2015, at 11:15 PM, tao xiao wrote:
> > Hi team,
> >
> > I had a replica node that was shutdown impro
Hi team,
I have 2 brokers (0 and 1) serving a topic mm-benchmark-test. I did some
tests on the two brokers to verify how the leader gets elected. Here are the
steps:
1. started 2 brokers
2. created a topic with partition=1 and replication-factor=2. Now broker 1
was elected as leader
3. sent 1000 mess
3. sent 1000 mess
r groups when you restarted your
> >broker?
> >
> >
> >
> >
> >On Mon, Mar 2, 2015 at 3:15 AM, tao xiao wrote:
> >> Hi team,
> >>
> >> I have 2 brokers (0 and 1) serving a topic mm-benchmark-test. I did some
> >> tests on the tw
ned earlier will not happen.
>
> Jiangjie (Becket) Qin
>
> On 3/2/15, 7:16 PM, "tao xiao" wrote:
>
> >Since I reused the same consumer group to consume the messages after step
> >6
> >data there was no data loss occurred. But if I create a new consumer grou
You can set the consumer config auto.offset.reset=largest
Ref: http://kafka.apache.org/documentation.html#consumerconfigs
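For example, in the consumer Properties (a fragment; the other required
settings are elided):

// Resume at the latest offset when the group has no committed offset yet;
// use "smallest" instead to replay from the beginning.
props.put("auto.offset.reset", "largest");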
On Tue, Mar 3, 2015 at 8:30 PM, Achanta Vamsi Subhash <
achanta.va...@flipkart.com> wrote:
> Hi,
>
> We are using HighLevelConsumer and when a new subscription is added to the
Thanks guys. With unclean.leader.election.enable set to false, the issue is
fixed.
On Tue, Mar 3, 2015 at 2:50 PM, Gwen Shapira wrote:
> of course :)
> unclean.leader.election.enable
>
> On Mon, Mar 2, 2015 at 9:10 PM, tao xiao wrote:
> > How do I achieve point 3? is there a con
Hi team,
Is there a built-in metric that can measure the end-to-end latency in MM?
--
Regards,
Tao
estamp in the message header, and let a separate
> consumer to fetch the message on both ends to measure the latency.
>
> Guozhang
>
> On Wed, Mar 4, 2015 at 11:07 PM, tao xiao wrote:
>
> Hi team,
>
> Is there a built-in metric that can measure the end to end latency in MM?
>
> --
> Regards,
> Tao
>
>
>
>
> --
> -- Guozhang
>
>
>
--
Regards,
Tao
The reason you need to use "a".getBytes is that the default serializer.class
is kafka.serializer.DefaultEncoder, which takes byte[] as input. An array's
hash code is not based on the equality of its elements, and a new byte array
is created on every call, which is the case in your sample
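A quick plain-Java illustration of the array-identity point:

byte[] a1 = "a".getBytes();
byte[] a2 = "a".getBytes();
System.out.println(a1 == a2);                        // false: two distinct arrays
System.out.println(a1.hashCode() == a2.hashCode());  // false in practice: identity-based hash
System.out.println(java.util.Arrays.equals(a1, a2)); // true: same contents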
Hi team,
After reading the source code of AbstractFetcherManager I found out that
the usage of num.consumer.fetchers may not match what is described in the
Kafka doc. My interpretation of the Kafka doc is that the number of
fetcher threads is controlled by the value of
property num.consumer.fetc
Hi team,
I am getting a java.util.IllegalFormatConversionException when running
MirrorMaker with the log level set to trace. The code is off the latest trunk at
commit 8f0003f9b694b4da5fbd2f86db872d77a43eb63f
The way I bring it up is:
bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config
~/Downloa
A bit more context: I turned on async in producer.properties
On Sat, Mar 7, 2015 at 2:09 AM, tao xiao wrote:
> Hi team,
>
> I am having java.util.IllegalFormatConversionException when running
> MirrorMaker with log level set to trace. The code is off latest trunk w
I think I worked out the root cause
Line 593 in MirrorMaker.scala
trace("Updating offset for %s to %d".format(topicPartition, offset)) should
be
trace("Updating offset for %s to %d".format(topicPartition, offset.element))
On Sat, Mar 7, 2015 at 2:12 AM, tao xiao wrote:
cket) Qin
>
> On 3/6/15, 10:15 PM, "tao xiao" wrote:
>
> >I think I worked out the root cause
> >
> >Line 593 in MirrorMaker.scala
> >
> >trace("Updating offset for %s to %d".format(topicPartition, offset))
> >should
> >be
> &g
Qin
>
> On 3/6/15, 1:23 AM, "tao xiao" wrote:
>
> >Hi team,
> >
> >After reading the source code of AbstractFetcherManager I found out that
> >the usage of num.consumer.fetchers may not match what is described in the
> >Kafka doc. My interpret
Ctrl+C is a clean shutdown; kill -9 is not.
On Mon, Mar 9, 2015 at 2:32 AM, Alex Melville wrote:
> What does a "clean shutdown" of the MM entail? So far I've just been using
> Ctrl + C to send an interrupt to kill it.
>
>
> Alex
>
> On Sat, Mar 7, 2015 at 10:59 PM, Jiangjie Qin
> wrote:
>
> > If a
pic name in destination cluster, i mean can i
> have different topic names for source and destination cluster for
> mirroring. If yes how can i map source topic with destination topic name ?
>
> SunilKalva
>
> On Mon, Mar 9, 2015 at 6:41 AM, tao xiao wrote:
>
> > Ctrl+
Hi,
I created a message stream in my consumer using
connector.createMessageStreamsByFilter(new Whitelist("mm-benchmark-test\\w*"), 5); I
have 5 topics in my cluster and each of the topics has only one partition.
My understanding of wildcard streams is that multiple streams are shared
between selec
I ended up running kafka-reassign-partitions.sh to reassign partitions to
different nodes
On Tue, Mar 10, 2015 at 11:31 AM, sy.pan wrote:
> Hi, tao xiao and Jiangjie Qin
>
> I encounter with the same issue, my node had recovered from high load
> problem (caused by other application)
The default partitioner of the old producer API is a sticky partitioner that
keeps sending messages to the same partition for 10 secs (I don't remember
the exact duration) before switching to another partition if no key is
specified in the message. You can easily override this by setting
partitioner.class.
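A sketch of such an override for the old producer API; the class name is
hypothetical, and the VerifiableProperties constructor is required by that API:

import java.util.concurrent.atomic.AtomicInteger;
import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

// Enable with props.put("partitioner.class", "com.example.RoundRobinPartitioner").
public class RoundRobinPartitioner implements Partitioner {
    private final AtomicInteger counter = new AtomicInteger(0);

    // The old producer API constructs partitioners with this signature.
    public RoundRobinPartitioner(VerifiableProperties props) {}

    @Override
    public int partition(Object key, int numPartitions) {
        // Ignore the key and spread messages evenly across partitions.
        return Math.abs(counter.getAndIncrement() % numPartitions);
    }
}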
Hi,
I have a use case where I need to consume a list of topics with names that
match the pattern topic.*, except for one, topic.10. Is there a way
to combine the use of a whitelist and a blacklist so that I can achieve
something like: accept all topics matching regex topic.* but exclude topic.10?
I actually meant whether we can achieve this in mirror maker.
On Tue, Mar 10, 2015 at 10:52 PM, tao xiao wrote:
> Hi,
>
> I have an user case where I need to consume a list topics with name that
> matches pattern topic.* except for one that is topic.10. Is there a way
> that I can com
org.apache.kafka.clients.producer.Producer is the new API producer.
On Tue, Mar 10, 2015 at 11:22 PM, Corey Nolet wrote:
> Thanks Jiangie! So what version is considered the "new api"? Is that the
> javaapi in version 0.8.2?.
>
> On Mon, Mar 9, 2015 at 2:29 PM, Jiangjie Qin
> wrote:
>
> > The sti
g
>
> On Tue, Mar 10, 2015 at 7:58 AM, tao xiao > wrote:
>
> > I actually mean if we can achieve this in mirror maker.
> >
> > On Tue, Mar 10, 2015 at 10:52 PM, tao xiao > wrote:
> >
> > > Hi,
> > >
> > > I have an user case where I
not receive messages
> from the other paritions?
>
> Thanks,
> -James
>
>
> On Feb 11, 2015, at 8:13 AM, Guozhang Wang wrote:
>
> > The new consumer will be released in 0.9, which is targeted for end of
> this
> > quarter.
> >
> > On Tue, Feb 10, 20
r
> >> messages from getting through?
> >>
> >> What about createMessageStreams("AC" => 1)? That creates a single stream
> >> that contains messages from multiple partitions, which might be on
> >> different brokers. Does that also
gt;
> In MM people can pass in consumer configs, in which people can specify
> consumption topics, either in regular topic list format or whitelist /
> blacklist. So I think it already does what you need?
>
> Guozhang
>
> On Tue, Mar 10, 2015 at 10:09 PM, tao xiao wrote:
>
Did you stop mirror maker?
On Thu, Mar 12, 2015 at 8:27 AM, Saladi Naidu wrote:
> We have 3 DC's and created 5 node Kafka cluster in each DC, connected
> these 3 DC's using Mirror Maker for replication. We were conducting
> performance testing using Kafka Producer Performance tool to load 100
>
le some other topics should get Y streams".
>
> Guozhang
>
> On Wed, Mar 11, 2015 at 11:59 PM, tao xiao wrote:
>
> > The topic list is not specified in consumer.properties and I don't think
> > there is any property in consumer config that allows us to specify
ith
> "--whitelist" you could already specify regex to do filtering.
>
> On Thu, Mar 12, 2015 at 5:56 AM, tao xiao wrote:
>
> > Hi Guozhang,
> >
> > I was meant to be topicfilter not topic-count. sorry for the confusion.
> > What I want to achieve is to pass my o
blacklist and whitelist I can easily achieve this by having something like
--whitelist topic.* --blacklist topic.1
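Alternatively, since the whitelist is a Java regex, a negative lookahead can
express the exclusion in the whitelist alone (a sketch; check it against your
actual topic names):

import java.util.regex.Pattern;

// Matches topic.1, topic.2, ..., topic.100, but not topic.10 itself.
Pattern whitelist = Pattern.compile("topic\\.(?!10$).*");
// whitelist.matcher("topic.1").matches()   -> true
// whitelist.matcher("topic.10").matches()  -> false
// whitelist.matcher("topic.100").matches() -> true

So passing --whitelist 'topic\.(?!10$).*' should behave like the
whitelist-plus-blacklist combination above.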
On Thu, Mar 12, 2015 at 9:10 PM, tao xiao wrote:
> something like dynamic filtering that can be updated at runtime or deny
> all but allow a certain set of topics that cannot be spe
e crazily long.
>
> Guozhang
>
> On Thu, Mar 12, 2015 at 6:10 AM, tao xiao wrote:
>
> > something like dynamic filtering that can be updated at runtime or deny
> all
> > but allow a certain set of topics that cannot be specified easily by
> regex
> >
> > On
not achieve your goal, since it is still static.
>
> Guozhang
>
> On Thu, Mar 12, 2015 at 6:30 AM, tao xiao wrote:
>
> > Thank you Guozhang for your advice. A dynamic topic filter is what I need
> > so that I can stop a topic consumption when I need to at runtime.
after you stop consuming from it?
>
> Jiangjie (Becket) Qin
>
> On 3/12/15, 8:05 AM, "tao xiao" wrote:
>
> >Yes, you are right. a dynamic topicfilter is more appropriate where I can
> >filter topics at runtime via some kind of interface e.g. JMX
> >
> >
" since the offsets will be
> committed. If you change the filtering dynamically back to whilelist these
> topics, you will lose the data that gets consumed during the period of the
> blacklist.
>
> Guozhang
>
> On Thu, Mar 12, 2015 at 10:01 PM, tao xiao wrote:
>
> >
> per broker.
> >
> > Thanks Tao & Guozhang!
> > -James
> >
> >
> > On Mar 11, 2015, at 5:00 PM, tao xiao xiaotao...@gmail.com>> wrote:
> >
> >> Fetcher thread is per broker basis, it ensures that at lease one fetcher
> >> thread
Hi,
I wanted to know how I can shut down mirror maker safely (Ctrl+C) when
there are no messages coming in to consume. I am using mirror maker off the
trunk code.
--
Regards,
Tao
; Thanks for clarifying,
> -Zakee
>
>
>
> > On Mar 12, 2015, at 11:11 PM, tao xiao > wrote:
> >
> > The number of fetchers is configurable via num.replica.fetchers. The
> > description of num.replica.fetchers in Kafka documentation is not quite
> >
afely
shutdown MM
On Saturday, March 14, 2015, Jiangjie Qin wrote:
> ctrl+c should work. Did you see any issue for that?
>
> On 3/12/15, 11:49 PM, "tao xiao" >
> wrote:
>
> >Hi,
> >
> >I wanted to know that how I can shutdown mirror maker safely (ctrl+c)
Hi team,
I have two consumer instances with the same group id connecting to two
different topics with 1 partition created for each. One consumer uses
partition.assignment.strategy=roundrobin and the other one uses the default
assignment strategy. Both consumers have 1 thread spawned internally and
con
-localhost-1426605370072-904d6fba-0
On Tue, Mar 17, 2015 at 11:30 PM, tao xiao wrote:
> Hi team,
>
> I have two consumer instances with the same group id connecting to two
> different topics with 1 partition created for each. One consumer uses
> partition.assignment.strategy=roundrobi
out a warning to
the consumer when doing rebalancing? But at least I'd suggest we document
somewhere a warning not to use different assignment strategies for the
same consumer group.
On Wed, Mar 18, 2015 at 8:28 AM, Xiao wrote:
> I think this is a usability issue. It might need an extra ad
You can set the producer property retries to a value other than 0. Details
can be found here:
http://kafka.apache.org/documentation.html#newproducerconfigs
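A sketch of the relevant settings for the new (0.8.2) Java producer; the
broker address is a placeholder:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
props.put("retries", "3"); // retry transient send failures up to 3 times
props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
KafkaProducer<byte[], byte[]> producer = new KafkaProducer<byte[], byte[]>(props);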
On Fri, Mar 20, 2015 at 3:01 PM, Samuel Chase wrote:
> Hello Everyone,
>
> In the the new Java Producer API, the Callback code in
> KafkaProducer.send is
icate other errors like insufficient buffer memory.
On Fri, Mar 20, 2015 at 5:12 PM, Samuel Chase wrote:
> @Tao,
>
> On Fri, Mar 20, 2015 at 12:39 PM, tao xiao wrote:
> > You can set producer property retries not equal to 0. Details can be
> found
> > here
> > http://
Here is the slide:
http://www.slideshare.net/JonBringhurst/kafka-audit-kafka-meetup-january-27th-2015
On Sat, Mar 21, 2015 at 2:36 AM, Xiao wrote:
> Hi, James,
>
> Thank you for sharing it!
>
> The links of videos and slides are the same. Could you check the link of
> slides?
Hi,
I was running a mirror maker and got a
java.lang.IllegalMonitorStateException that caused the underlying fetcher
thread to stop completely. Here is the log from the mirror maker:
[2015-03-21 02:11:53,069] INFO Reconnect due to socket error:
java.io.EOFException: Received -1 when reading from chann
LinkedIn has an excellent tool that monitors lag, data loss, data duplication,
etc. Here is the reference:
http://www.slideshare.net/JonBringhurst/kafka-audit-kafka-meetup-january-27th-2015
It is not open sourced, though.
On Mon, Mar 23, 2015 at 3:26 PM, sunil kalva wrote:
> Hi
> What is best p
, TimeUnit.MILLISECONDS)
}
On Mon, Mar 23, 2015 at 1:50 PM, tao xiao wrote:
> Hi,
>
> I was running a mirror maker and got
> java.lang.IllegalMonitorStateException that caused the underlying fetcher
> thread completely stopped. Here is the log from mirror maker.
>
> [2015-03-