Hi Jun,
Great presentation, great feature.
On 04/09/2013 07:48 AM, Jun Rao wrote:
Piotr,
Thanks for sharing this. Very interesting and useful study. A few comments:
1. For existing 0.7 users, we have a migration tool that mirrors data from
an 0.7 cluster to an 0.8 cluster. Applications can upgrade to 0.8 by
upgrading consumers first, followed by producers (invocation sketch below).
2. Have you looked
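Since this comes up a lot: here is roughly what running that migration tool looks like. The class name and flags are as I recall them from the 0.8 migration notes, and the jar names, config files, and producer count are placeholders, so treat this as a sketch rather than a verified command:

  ./bin/kafka-run-class.sh kafka.tools.KafkaMigrationTool \
    --kafka.07.jar kafka-0.7.2.jar \
    --zkclient.01.jar zkclient-0.1.jar \
    --num.producers 4 \
    --consumer.config 07-source-consumer.properties \
    --producer.config 08-target-producer.properties \
    --whitelist=".*"

The tool consumes from the 0.7 cluster and re-produces into the 0.8 cluster, which is why consumers can be pointed at 0.8 before the producers move over.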
Yes, the Kafka broker writes data to disk. There are time-based and size-based
retention policies that determine how long the data is kept.
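Concretely, both policies are plain broker settings in server.properties. A minimal sketch, with property names per the 0.8 broker config and purely illustrative values:

  # delete log segments older than 7 days...
  log.retention.hours=168
  # ...or once a partition's log exceeds ~1 GB, whichever limit is hit first
  log.retention.bytes=1073741824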
Thanks,
Jun
On Mon, Apr 8, 2013 at 3:23 AM, Oleg Ruchovets wrote:
> Yes, I resolved this by changing a configuration path in the ZooKeeper
> properties: dataDir=/tmp/zookeeper.
Hi,
At LiveRamp we are considering replacing Scribe with Kafka, and as a first
step we ran some tests to evaluate producer performance. You can find our
preliminary results here:
https://blog.liveramp.com/2013/04/08/kafka-0-8-producer-performance-2/. We
hope this will be useful for some folks, and
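For readers who want to try a test like this themselves, a minimal send loop against the 0.8 Java producer API might look like the sketch below. The broker address, topic name, and message count are placeholders, and this is not the harness LiveRamp actually used:

  import java.util.Properties;
  import kafka.javaapi.producer.Producer;
  import kafka.producer.KeyedMessage;
  import kafka.producer.ProducerConfig;

  public class ProducerSketch {
      public static void main(String[] args) {
          Properties props = new Properties();
          // placeholder broker list; replace with real host:port pairs
          props.put("metadata.broker.list", "broker1:9092");
          props.put("serializer.class", "kafka.serializer.StringEncoder");
          // acks=1: the leader acknowledges each request
          props.put("request.required.acks", "1");

          Producer<String, String> producer =
              new Producer<String, String>(new ProducerConfig(props));

          long start = System.currentTimeMillis();
          int n = 100000; // arbitrary message count
          for (int i = 0; i < n; i++) {
              producer.send(new KeyedMessage<String, String>("test-topic", "msg-" + i));
          }
          producer.close();
          System.out.println(n + " messages in "
              + (System.currentTimeMillis() - start) + " ms");
      }
  }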
Sorry for the late reply. I totally understand your point about developing
quality software. But this scenario is a deployment situation where a server
going down is inevitable (it is not so much that the software is unreliable in
this case). What I am really looking for is a Kafka producer library (and consumer
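For context, how the 0.8 producer behaves when a broker goes down is governed by a few client-side settings. A hedged sketch, with names per the 0.8 producer config and illustrative values:

  # number of times to retry a failed send before giving up
  message.send.max.retries=3
  # wait between retries, giving the cluster time to elect a new leader
  retry.backoff.ms=100
  # require an ack from the leader so failures surface to the client
  request.required.acks=1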
Yes, I resolved this by changing a configuration path in the ZooKeeper
properties: dataDir=/tmp/zookeeper. I debugged the Scala code and found that I
had a couple of topics left over from previous executions. One of those topics
caused the exception.
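For anyone hitting the same exception: the setting in question lives in the ZooKeeper properties file that ships with Kafka. A minimal sketch, assuming the quickstart layout and an example persistent path of your choosing:

  # config/zookeeper.properties
  # /tmp can be cleared on reboot, which leaves broker metadata and on-disk
  # topic state out of sync; an example persistent location instead:
  dataDir=/var/lib/zookeeper
  clientPort=2181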
By the way: do I understand correctly that Kafka serializes the data on disk?