Thanks,
Jun
On Fri, Apr 12, 2013 at 1:08 PM, Marc Labbe wrote:
> I updated the Developer setup page. Let me know if it's not clear enough or
> if I need to change anything.
>
> On another note, since the idea plugin is already there, would it be
> possible to add the sbteclipse plugin permanently as well?
Hi all,
I posted an update on the post (
https://blog.liveramp.com/2013/04/08/kafka-0-8-producer-performance-2/) to
test the effect of disabling ack messages from brokers. It appears this
only makes a big difference (~2x improvement) when using synthetic log
messages, but only a modest 12% improv
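A minimal sketch of the ack knob under test, assuming the 0.8 Java producer API (the broker hosts and encoder below are assumptions, not from the post):

    import java.util.Properties
    import kafka.producer.ProducerConfig

    // Sketch: the 0.8 producer ack modes compared in the post.
    // 0 = no broker ack (fire-and-forget), 1 = leader ack, -1 = all in-sync replicas.
    object AckConfigSketch extends App {
      val props = new Properties()
      props.put("request.required.acks", "0") // acks disabled, as in the test
      props.put("metadata.broker.list", "broker1:9092,broker2:9092") // assumed hosts
      props.put("serializer.class", "kafka.serializer.StringEncoder")
      val config = new ProducerConfig(props)
    }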
In the producer config, we use the zk connect string:
zk001,zk002,zk003/kafka.
Both brokers have registered themselves with zookeeper. Because only the
first broker has ever received any writes, only the first broker is
registered for the topic in question.
--Tom
On Fri, Apr 12, 2013 at 3:32 PM
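A sketch of the setup Tom describes, assuming the 0.7 Java producer API; the chroot path is from the thread, while the ports and encoder are assumptions:

    import java.util.Properties
    import kafka.javaapi.producer.Producer
    import kafka.producer.ProducerConfig

    // Sketch: 0.7 producer discovering brokers via zookeeper, using the
    // /kafka chroot from the thread instead of a static "broker.list".
    object ZkProducerSketch extends App {
      val props = new Properties()
      props.put("zk.connect", "zk001:2181,zk002:2181,zk003:2181/kafka")
      props.put("serializer.class", "kafka.serializer.StringEncoder")
      val producer = new Producer[String, String](new ProducerConfig(props))
      producer.close()
    }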
Do you use a VIP or zookeeper for producer-side load balancing? In
other words, what are the values you override for "broker.list" and
"zk.connect" in the producer config?
Thanks,
Neha
On Fri, Apr 12, 2013 at 12:16 PM, Tom Brown wrote:
> We have recently set up a new Kafka (0.7.1) cluster with
That is not available for performance reasons. The broker uses zero-copy
to transfer data from disk to the network on the consumer side. If we
post-process data already written to disk before sending it to the
consumer, we will lose the performance advantage that we have due to
zero-copy.
Thanks,
Neha
On
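The zero-copy path referred to here is the sendfile mechanism, exposed in Java as FileChannel.transferTo; a bare sketch, with the file name and destination invented for illustration:

    import java.io.FileInputStream
    import java.net.InetSocketAddress
    import java.nio.channels.SocketChannel

    // Sketch: bytes flow from the page cache straight to the socket. The JVM
    // never sees them in user space, which is why the broker cannot transform
    // (e.g. re-compress) data on the way out without losing this fast path.
    object ZeroCopySketch extends App {
      val log    = new FileInputStream("segment.log").getChannel // hypothetical log segment
      val socket = SocketChannel.open(new InetSocketAddress("consumer-host", 9092))
      val sent   = log.transferTo(0, log.size(), socket)
      println("transferred " + sent + " bytes without a user-space copy")
      socket.close(); log.close()
    }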
I updated the Developer setup page. Let me know if it's not clear enough or
if I need to change anything.
On another note, since the idea plugin is already there, would it be
possible to add the sbteclipse plugin permanently as well?
On Fri, Apr 12, 2013 at 10:52 AM, Jun Rao wrote:
> MIS, Marc
Thanks for the reply, Neha, but that is end-to-end and I am looking
for broker-to-consumer compression.
So:
Producer -> uncompressed -> broker -> compressed -> consumer
Regards
Pablo
2013/4/12 Neha Narkhede:
> Kafka already supports end-to-end compression which means data
> transfer between brokers and consumers is compressed.
Kafka already supports end-to-end compression which means data
transfer between brokers and consumers is compressed. There are two
supported compression codecs - GZIP and Snappy. The latter is lighter
on CPU consumption. See this blog post for comparison -
http://geekmantra.wordpress.com/2013/03/28
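A sketch of enabling the compression described above, using 0.8-style producer properties (whether your release accepts "snappy" depends on the version):

    import java.util.Properties
    import kafka.producer.ProducerConfig

    // Sketch: compression happens in the producer; the broker stores and
    // serves the compressed message sets as-is, so consumers receive them
    // compressed with no extra CPU spent on the broker.
    object CompressionSketch extends App {
      val props = new Properties()
      props.put("metadata.broker.list", "broker1:9092") // assumed host
      props.put("compression.codec", "snappy")          // or "gzip"
      val config = new ProducerConfig(props)
    }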
We have recently set up a new Kafka (0.7.1) cluster with two brokers. Each
topic has 2 partitions per server. We have two processes that write
to the cluster using the class kafka.javaapi.producer.Producer.
The problem is that the first process only writes to the first broker. The
Hi
Is it possible to enable compression between the broker and the consumer?
We are thinking of developing this feature in Kafka 0.7, but first I would
like to check if there is something out there.
Our scenario is like this:
- the producer is a CPU-bound machine, so we want to keep the CPU
cons
>>But it shouldn't almost never happen.
Obviously I mean it should almost never happen. Not shouldn't.
Philip
Correct, I should've been more specific. "key.serializer.class"
defaults to whatever "serializer.class" is set to.
Thanks,
Neha
On Fri, Apr 12, 2013 at 9:12 AM, Soby Chacko wrote:
> Hi Neha,
>
> I could be misunderstanding it. I am looking at
> https://issues.apache.org/jira/browse/KAFKA-544
Hi Neha,
I could be misunderstanding it. I am looking at
https://issues.apache.org/jira/browse/KAFKA-544 and see the following
comment.
This patch does the following:
1. Change Encoder and Decoder to map between object and byte[] rather than
between Message and object.
2. Require two encoders
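Under the contract the patch describes (object to byte[]), a custom encoder would look roughly like this; the VerifiableProperties constructor argument follows how 0.8 instantiates encoders, and the class name is invented:

    import kafka.serializer.Encoder
    import kafka.utils.VerifiableProperties

    // Sketch: maps an object to Array[Byte], per the KAFKA-544 description above.
    class Utf8Encoder(props: VerifiableProperties = null) extends Encoder[String] {
      def toBytes(s: String): Array[Byte] = s.getBytes("UTF-8")
    }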
On Fri, Apr 12, 2013 at 8:27 AM, S Ahmed wrote:
> Interesting topic.
>
> How would buffering in RAM help in reality, though (just trying to work
> through the scenario in my head):
>
> The producer tries to connect to a broker; it fails, so it appends the
> message to an in-memory store. If the broker
It defaults both key and value serializer to DefaultEncoder, but you
can customize both independently through "key.serializer.class" and
"serializer.class" config options.
Thanks,
Neha
On Fri, Apr 12, 2013 at 8:33 AM, Soby Chacko wrote:
> Thanks for the reply. But, when I did some more research,
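A sketch of the fallback behavior Neha describes: the value serializer is set explicitly, and the key serializer inherits it unless overridden (the encoder class names are real Kafka classes; the rest is illustrative):

    import java.util.Properties

    // Sketch: "key.serializer.class" falls back to "serializer.class" when unset.
    object SerializerDefaultsSketch extends App {
      val props = new Properties()
      props.put("serializer.class", "kafka.serializer.StringEncoder")         // values (and keys, by default)
      // props.put("key.serializer.class", "kafka.serializer.DefaultEncoder") // override keys independently
    }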
Thanks for the reply. But, when I did some more research, it seems like it's
using the same encoder for both. For example, if I provide serializer.class
explicitly, this serializer is used for both key and value. However, if I
don't specify any serializer, then it appears that Kafka defaults to
DefaultEncoder.
Interesting topic.
How would buffering in RAM help in reality, though (just trying to work
through the scenario in my head):
The producer tries to connect to a broker; it fails, so it appends the
message to an in-memory store. If the broker is down for, say, 20 minutes and then
comes back online, won't
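A minimal sketch of the buffering idea being debated: a bounded in-memory spool, so a long outage drops messages rather than exhausting the heap. Everything here is hypothetical, not a Kafka API:

    import java.util.concurrent.ArrayBlockingQueue

    // Hypothetical spool: bounded, so a 20-minute outage cannot grow it without
    // limit; replay() drains it once the broker is reachable again.
    class MemorySpool(capacity: Int) {
      private val queue = new ArrayBlockingQueue[String](capacity)
      def append(msg: String): Boolean = queue.offer(msg) // false = spool full, message dropped
      def replay(send: String => Unit) {
        var msg = queue.poll()
        while (msg != null) { send(msg); msg = queue.poll() }
      }
    }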
This is just my opinion of course (who else's could it be? :-)) but I think
from an engineering point of view, one must spend one's time making the
Producer-Kafka connection solid, if it is mission-critical.
Kafka is all about getting messages to disk, and assuming your disks are
solid (and 0.8 ha
Hi Itai,
It will be easier to explain things if we know your use case. I'll
take a stab at your questions -
1. At most one consumer per topic.
2. That this single consumer would consume all partitions.
If you have one consumer in a group, you can achieve this. However, I
still wonder w
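A sketch of the single-consumer case: a group with exactly one member is assigned every partition of the topics it subscribes to. Property names follow the 0.7 high-level consumer ("group.id" in 0.8); the hostname is an assumption:

    import java.util.Properties
    import kafka.consumer.{Consumer, ConsumerConfig}

    // Sketch: one consumer in its own group ends up consuming all partitions.
    object SingleConsumerSketch extends App {
      val props = new Properties()
      props.put("zk.connect", "zk001:2181/kafka")   // assumed
      props.put("groupid", "single-consumer-group") // "group.id" in 0.8
      val connector = Consumer.createJavaConsumerConnector(new ConsumerConfig(props))
    }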
Another way to handle this is to provision enough client and broker servers
so that the peak load can be handled without spooling.
Thanks,
Jun
On Thu, Apr 11, 2013 at 5:45 PM, Piotr Kozikowski wrote:
> Jun,
>
> When talking about "catastrophic consequences" I was actually only
> referring to t
MIS, Marc,
Thanks for the update. Could you put those notes to that wiki?
Jun
On Thu, Apr 11, 2013 at 10:11 PM, MIS wrote:
> here is a brief note on setting up Kafka in Eclipse 3.6.2 with the Scala
> IDE installed as a plugin. The Scala version used is 2.9
>
> 1) follow instructions as described here
I don't know if anyone else has done that or if there is any indication
against doing it, but I found adding the sbteclipse plugin to
project/plugins.sbt to be particularly easy, and it worked for me. I am
only using it to look at/edit the code, but I am not running anything from
Eclipse though.
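The change Marc describes is a one-liner in project/plugins.sbt; the plugin coordinates are sbteclipse's published ones, though the version shown is an assumption for that era:

    // project/plugins.sbt
    addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.1.2")

Running "sbt eclipse" afterwards generates the .project/.classpath files that Eclipse can import.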
If a F500 company wants commercial support for Kafka, who would they turn
to?
There seems to be a natural fit with real-time processing frameworks
such as Storm and Trident.
I am sure that someone in the community must have come across this issue.
Thanks
Milind