Are you sure about that? Our latest tests show that losing a drive in a
JBOD setup makes the broker fail (unfortunately).
On Apr 18, 2014 9:01 PM, "Bello, Bob" wrote:
> Yes you would lose the topic/partitions on the drive. I'm not quite sure
> if Kafka can determine what topics/partitions are
don't make it to the proper aggregators.
>
> I'm looking into either forking or rewriting this library using Codahale
> Metrics v 3.0.1, and supporting multicast more explicitly. Is this
> something you could do better/faster than me, or should I proceed? :)
>
> -Andrew
I think it is currently a Java (signed) integer, or maybe that was ZooKeeper?
We are generating the id from the IP address for now, but this is not ideal
(and can cause integer overflow with Java signed ints).
On Oct 1, 2013 12:52 PM, "Aniket Bhatnagar"
wrote:
> I would like to revive an older thread ar
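The overflow mentioned above comes from packing an IPv4 address into a 32-bit value: any address at or above 128.0.0.0 sets the sign bit, so the resulting Java int is negative. A minimal sketch of the workaround (masking the sign bit; note this is a hypothetical helper, not Kafka's own id assignment, and the mask means two addresses 2^31 apart would collide):

```java
public class BrokerIdFromIp {
    // Derive a non-negative broker id from a dotted-quad IPv4 address.
    static int ipToBrokerId(String ip) {
        String[] octets = ip.split("\\.");
        int id = 0;
        for (String o : octets) {
            id = (id << 8) | Integer.parseInt(o);
        }
        // Addresses >= 128.0.0.0 set the sign bit; mask it off so the id
        // stays a valid non-negative Java signed int.
        return id & 0x7FFFFFFF;
    }

    public static void main(String[] args) {
        System.out.println(ipToBrokerId("10.0.0.1"));    // 167772161
        System.out.println(ipToBrokerId("192.168.1.1")); // 1084752129, not negative
    }
}
```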
We would love to see Kerberos authentication plus some Unix-like permission
system for topics (where one topic is a file and users/groups have read
and/or write access).
I guess this is not high priority, but it enables some sort of
kafka-as-a-service possibility with multi-tenancy. You could integrat
Hi all,
Since I couldn't find any other way to publish Kafka metrics to Ganglia
from Kafka 0.8 (beta), I just published on GitHub a super-simple Ganglia
metrics reporter for Kafka. It is configurable through the Kafka config
file, and you can use it on the broker side and on your consumers/producer
We sort of have the same situation, where our analytics DC is also one of
the main producer DCs. If you use Kafka only for analytics, it is fine to
produce directly to the analytics cluster from that DC and mirror the rest.
However we also want to be able to run things locally that will consume
local
By the way, having an official contrib package with Graphite, Ganglia and
other well-known reporters would be awesome, so that not everyone has to
write their own.
On Jul 1, 2013 10:27 PM, "Joel Koshy" wrote:
> Also, there are several key metrics on the broker and client side - we
> should compile
Have you thought about integrating Kafka into a distributed resource
management framework like Hadoop YARN (which would probably leverage HDFS)
or Mesos?
On May 23, 2013 11:31 PM, "Neha Narkhede" wrote:
> This paper talks about how to do that -
> http://www.ssrc.ucsc.edu/Papers/weil-sc06.pdf
> It
Thanks for your response, here is the JIRA:
https://issues.apache.org/jira/browse/KAFKA-732
On Thu, Jan 24, 2013 at 1:16 AM, Neha Narkhede wrote:
> We haven't tried to run this. Can you please file a JIRA?
>
>
> On Wed, Jan 23, 2013 at 9:30 AM, Maxime Brugidou
> wrot
always turn off shallow iteration
> since we have to decompress data in 0.7 format before sending it to a 0.8
> broker.
>
> Thanks,
> Neha
>
>
> On Wed, Jan 23, 2013 at 6:39 AM, Maxime Brugidou
> wrote:
>
> > Hi all,
> >
> > I am working with MirrorMake
I'm not sure which design doc you are looking at (v1 probably? v3 is here:
https://cwiki.apache.org/KAFKA/kafka-detailed-replication-design-v3.html),
but if I understand correctly, consistent hashing for partitioning is more
about remapping as few keys as possible when adding/deleting partitions,
w
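A minimal sketch of the consistent-hashing idea discussed above (this is NOT Kafka's partitioner; class and method names are invented). With plain `hash(key) % numPartitions`, changing the partition count remaps almost every key; on a hash ring, only the keys falling between a newly added point and its predecessor move, and every key that moves lands on the new partition:

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Toy consistent-hash ring: partitions and keys are placed on a ring of
// int positions; a key belongs to the first partition point clockwise.
public class ConsistentRing {
    private final SortedMap<Integer, String> ring = new TreeMap<>();

    // Cheap integer mixer so nearby names ("p0", "p1", ...) spread out
    // instead of clustering the way raw String.hashCode() would.
    private static int mix(String s) {
        int h = s.hashCode();
        h ^= h >>> 16;
        h *= 0x45d9f3b;
        h ^= h >>> 16;
        return h & 0x7FFFFFFF;
    }

    public void addPartition(String partition) {
        ring.put(mix(partition), partition);
    }

    // First point at or after the key's hash, wrapping around to the
    // smallest point on the ring if none is found.
    public String partitionFor(String key) {
        SortedMap<Integer, String> tail = ring.tailMap(mix(key));
        int point = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(point);
    }
}
```

The invariant worth noting: after `addPartition("p3")`, any key whose owner changed must now map to `"p3"`; all other keys keep their old partition.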
Thanks for your response. I think the workaround is not really acceptable
for me, since it will consume 3x the resources (because a replication factor
of 3 is the minimum acceptable), and it will still make the cluster less
available anyway (unless I have only 3 brokers).
The thing is that 0.7 was making th
Hello, I am currently testing the 0.8 branch (and it works quite well). We
plan not to use the replication feature for now since we don't really need
it; we can afford to lose data in case of an unrecoverable failure of a
broker.
However, we really don't want to have producers/consumers fail if a b