Hi,
I was running a mirror maker and got a
java.lang.IllegalMonitorStateException that caused the underlying fetcher
thread to stop completely. Here is the log from the mirror maker.
[2015-03-21 02:11:53,069] INFO Reconnect due to socket error:
java.io.EOFException: Received -1 when reading from chann
I thought ordering was guaranteed within a partition, or does the mirror
maker not preserve partitions?
On Fri, Mar 20, 2015 at 4:44 PM, Guozhang Wang wrote:
> I think 1) will work, but not sure about 2), since messages replicated
> to the two clusters may be out of order as well, hence you may
Hi Emmanuel,
You can first run a Kafka producer perf test (bin/kafka-producer-perf-test.sh)
with your Storm consumers and a Kafka consumer perf test
(bin/kafka-consumer-perf-test.sh) with your own producers, respectively, to
see if the bottleneck is really in Kafka.
Thanks,
Manu Zhang
On Mon, Mar
Hi Emmanuel,
Can you post your Kafka server.properties? And in your producer, are you
distributing your messages across all Kafka topic partitions?
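For illustration only, here is a minimal sketch (not from this thread) of the
difference with the new Java producer; the topic name is a placeholder, the
producer is assumed to be configured elsewhere, and the spread over partitions
for null keys is the default partitioner's behaviour as I recall it:

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PartitionSpreadSketch {
    static void sendSamples(Producer<String, String> producer) {
        // Null key: the default partitioner spreads these records over all
        // available partitions of the topic.
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<String, String>("test-topic", null, "msg-" + i));
        }
        // One fixed key: every record hashes to the same partition, so the
        // remaining partitions (and their brokers) sit idle.
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<String, String>("test-topic", "fixed-key", "msg-" + i));
        }
    }
}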
--
Harsha
On March 20, 2015 at 12:33:02 PM, Emmanuel (ele...@msn.com) wrote:
Kafka on test cluster:
2 Kafka nodes, 2GB, 2CPUs
3 Zookeeper no
Hi Guozhang,
Thanks for the note. So if we do not deserialize until the last moment, as
Jay suggested, we would not need extra buffers for deserialization. Unless
we need random access to messages, it seems like we can deserialize right at
the time of iteration and allocate objects only if the Consu
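Roughly, a hypothetical sketch of that lazy approach (the types below are
illustrative, not the actual consumer internals): keep the raw bytes from the
fetch response and only build an object when the record is iterated.

import java.util.Iterator;
import java.util.List;

// Illustrative decoder abstraction, not a Kafka API.
interface Decoder<T> {
    T fromBytes(byte[] bytes);
}

// Holds raw payloads and deserializes lazily, one record per next() call.
class LazyRecords<T> implements Iterable<T> {
    private final List<byte[]> rawPayloads; // bytes as they came off the wire
    private final Decoder<T> decoder;

    LazyRecords(List<byte[]> rawPayloads, Decoder<T> decoder) {
        this.rawPayloads = rawPayloads;
        this.decoder = decoder;
    }

    public Iterator<T> iterator() {
        final Iterator<byte[]> raw = rawPayloads.iterator();
        return new Iterator<T>() {
            public boolean hasNext() { return raw.hasNext(); }
            // Allocation happens only here, when the caller actually consumes
            // the record, so no extra deserialization buffer is needed up front.
            public T next() { return decoder.fromBytes(raw.next()); }
            public void remove() { throw new UnsupportedOperationException(); }
        };
    }
}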
Thanks for the insight, Jay. That seems like a good plan. I'll take a look
at it ASAP.
I have no idea how much things would improve in a general application with
this. Like you said, CRC and decompression could still be the dominant
factor. In my experience, cutting down allocation to 0 helps with 9
Hi Joe,
I think this is a bug in the javadoc; it should be pointing to the new
producer configs, and the Java producer only accepts these configs.
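For reference, a minimal sketch of the new-style configs the Java producer
reads (broker addresses are placeholders; the old Scala-producer keys such as
metadata.broker.list and serializer.class are, as far as I know, simply
ignored by this client):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;

public class NewProducerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder brokers
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        // ... send records ...
        producer.close();
    }
}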
Guozhang
On Sat, Mar 21, 2015 at 5:29 PM, Joseph Lawson wrote:
> Hi everyone,
>
>
> I was reviewing the javadocs for the 082 and 083 Kafka Producer (
Rajiv,
A side note for re-using ByteBuffer: in the new consumer we do plan to add
a memory management module so that it will try to reuse allocated buffers
for fetch responses. But as Jay mentioned, for now de-serialization and
de-compression are done inside the poll() call, which requires to al
Jiangjie,
Yeah, I welcome the round-robin strategy, as the 'range' strategy ('til now
the only one available) is not always good at balancing partitions, as you
observed above.
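For anyone following along, the strategy is picked with a consumer config;
here is a minimal sketch with the high-level consumer, assuming the 0.8.2
config key and with placeholder group/ZooKeeper values:

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class RoundRobinAssignmentSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181");      // placeholder
        props.put("group.id", "my-consumer-group");      // placeholder
        // 'range' is the default; 'roundrobin' balances partitions more evenly
        // across consumer threads, subject to the restriction discussed below.
        props.put("partition.assignment.strategy", "roundrobin");

        ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create message streams and consume as usual ...
        consumer.shutdown();
    }
}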
The main thing I'm bringing up in this thread, though, is the question of
why there needs to be a restriction to having a
Dear All,
Based on the URL http://kafka.apache.org/07/configuration.html, there are
some important configuration properties that can be tuned for better
performance.
It would be great if you could advise how we can change or override those
settings, such as the broker configuration. Thanks.
Reg
Zijing, the new consumer will be in the next release. We don't have a hard
date for this yet.
Rajiv, I'm game if we can show a >= 20% performance improvement. It
certainly could be an improvement, but it might also be that the CRC
validation and compression dominate.
The first step would be
htt