> Putting the consumer and producer in their own packages might hopefully
> alleviate some of this.
>
I like this idea. Moving forward, the biggest dependencies that the broker
will have and the producer/consumer clients won't are the
zookeeper/zkclient jars. It might be worth looking into this. Ple
It's worth noting that we currently run Kafka at LinkedIn with a 5G heap
(not 3G, still using the CMS GC though - should update that), and the
info on that wiki is aimed at 0.7.
We are actively working on things for 0.8 - we don't have a 'this works
for us', much less a 'recommendation', there.
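The setup described above (5G heap, CMS collector) corresponds roughly to JVM options like the following. This is an illustrative sketch based on the email, not an official recommendation; the CMS tuning flags beyond the heap size and collector choice are assumptions:

```shell
# Illustrative flags for a ~5G broker heap with the CMS collector
# (a sketch, not a recommended production configuration)
java -Xms5g -Xmx5g \
     -XX:+UseConcMarkSweepGC \
     -XX:+CMSParallelRemarkEnabled \
     -XX:CMSInitiatingOccupancyFraction=70 \
     kafka.Kafka server.properties
```

Setting -Xms equal to -Xmx avoids heap resizing pauses, and a lower CMS initiating occupancy gives the concurrent collector headroom to finish before the heap fills.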
We put a lot of info here:
https://cwiki.apache.org/confluence/display/KAFKA/Operations
Does that help?
-Jay
On Tue, Jan 22, 2013 at 7:14 PM, S Ahmed wrote:
> In the wild, what sort of memory usage patterns have you guys seen with
> kafka?
>
> I'm not that well versed with java and its memory
Hey Guys,
One other potentially large benefit is to decouple broker dependencies
from consumer/producer dependencies. This makes upgrading the
consumer/producer and managing jar conflicts a lot less of a hassle.
Putting the consumer and producer in their own packages might hopefully
alleviate some
Hi Jay,
Actually, it's mostly the ability to easily cross-build; also the ease of
understanding the code (less code to grok) and of implementing alternatives
(I guess all of those fall under cleanliness).
thanks,
Evan
On Tue, Jan 22, 2013 at 12:47 PM, Jay Kreps wrote:
> Hi Evan,
>
> Makes sen
Hi Evan,
Makes sense. Is your goal in separating the client to shrink the jar size,
or just general cleanliness?
-Jay
On Tue, Jan 22, 2013 at 10:53 AM, Evan Chan wrote:
> Jay,
>
> Comments inlined.
>
> On Tue, Jan 22, 2013 at 10:15 AM, Jay Kreps wrote:
>
> > Hey Evan,
> >
> > Great points, s
Hi Jason,
This is included with Kafka 0.8 - kafka.tools.KafkaMigrationTool. It runs a
0.7 consumer and 0.8 producer to copy the data between your 0.7 Kafka
cluster and 0.8 Kafka cluster.
Thanks,
Neha
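The migration tool described above pairs a 0.7 consumer with a 0.8 producer and copies messages between clusters. A minimal sketch of that consume-then-republish pattern is below; the `Source` and `Target` interfaces are hypothetical stand-ins for illustration, not Kafka APIs:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of the bridge pattern used for 0.7 -> 0.8 migration: drain messages
// from a source (the old cluster's consumer) and re-publish each one to a
// target (the new cluster's producer). Source/Target are hypothetical.
public class MigrationBridge {
    interface Source { Iterator<byte[]> stream(); } // stand-in for a 0.7 consumer
    interface Target { void send(byte[] message); } // stand-in for a 0.8 producer

    static int copy(Source source, Target target) {
        int copied = 0;
        for (Iterator<byte[]> it = source.stream(); it.hasNext(); ) {
            target.send(it.next()); // one-for-one re-publish into the new cluster
            copied++;
        }
        return copied;
    }

    public static void main(String[] args) {
        // Simulate the two clusters with in-memory lists.
        List<byte[]> oldCluster = List.of("a".getBytes(), "b".getBytes());
        List<byte[]> newCluster = new ArrayList<>();
        int n = copy(oldCluster::iterator, newCluster::add);
        System.out.println(n + " messages copied"); // 2 messages copied
    }
}
```

The real tool adds consumer-group offset management and parallelism, but the core data path is this one loop.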
On Tue, Jan 22, 2013 at 12:21 PM, Jason Rosenberg wrote:
> Hi Neha,
>
> Can you describe the
Hi Joel
Thanks for the hints. Apparently it was a configuration error at operating
system level.
We are using Debian Linux. Kafka uses a setsockopt call with SO_SNDBUF to
set the buffer size (socket.send.buffer). The operating system then sets
the real buffer size to min(socket.send.buffer, net.co
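The clamping behavior described above can be observed from the JVM: `setSendBufferSize` is only a hint backed by `setsockopt(SO_SNDBUF)`, and on Linux the kernel caps the effective size (via the `net.core.wmem_max` sysctl), so the granted value should be read back. A minimal check:

```java
import java.net.Socket;

// Request a large send buffer and read back what the OS actually granted.
// If the granted size is smaller than requested, the kernel limit is the
// cap (on Linux, raise it with: sysctl -w net.core.wmem_max=<bytes>).
public class SendBufferCheck {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {       // unconnected socket suffices
            socket.setSendBufferSize(4 * 1024 * 1024); // request 4 MB
            int granted = socket.getSendBufferSize();  // what the OS allowed
            System.out.println("granted send buffer: " + granted + " bytes");
        }
    }
}
```

Note that on Linux the kernel doubles the requested SO_SNDBUF value for bookkeeping overhead, so the read-back number may differ from the request even when no clamping occurred.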
Hi Neha,
Can you describe the migration tool you mention below, for copying data
from 0.7 to 0.8? Is this something provided with 0.8? Or do apps need to
write custom migration tools?
Thanks,
Jason
On Tue, Jan 15, 2013 at 11:06 AM, Neha Narkhede wrote:
> Broadly, the strategy stays the same,
I like that API too!
On Tue, Jan 22, 2013 at 10:53 AM, Evan Chan wrote:
> Jay,
>
> Comments inlined.
>
> On Tue, Jan 22, 2013 at 10:15 AM, Jay Kreps wrote:
>
> > Hey Evan,
> >
> > Great points, some comments:
> > - Not sure if I understand what you mean by separating consumer and main
> > logi
Jay,
Comments inlined.
On Tue, Jan 22, 2013 at 10:15 AM, Jay Kreps wrote:
> Hey Evan,
>
> Great points, some comments:
> - Not sure if I understand what you mean by separating consumer and main
> logic.
>
I just meant having a separate Scala/Java client jar, so it's more
lightweight and easier
Hey Evan,
Great points, some comments:
- Not sure if I understand what you mean by separating consumer and main
logic.
- Yes, cross-building, I think this is in progress now for kafka as a whole
so it should be in either 0.8 or 0.8.1
- Yes, forgot to mention offset initialization, but that is defi
Jay,
For the consumer:
- Separation of the consumer logic from the main logic
- Making it easier to build the consumer for different versions of Scala
(say 2.10)
- Make it easier to read from any offset you want, while being able to keep
partition management features
- Better support for Akka and