+1
On Friday, February 20, 2015, Guozhang Wang wrote:
> +1 binding.
>
> Checked the md5 and the quickstart.
>
> Some minor comments:
>
> 1. It would be better if the quickstart section included the build step after
> the download and before starting the server.
>
> 2. There seems to be a bug in Gradle 1.1x with
The Apache project doesn't have a web console for kafka.
Have you taken a look at https://github.com/yahoo/kafka-manager yet?
I haven't myself; I'm hoping to get some time tonight/this weekend to do so.
~ Joe Stein
- - - - - - - - - - - - - - - - -
http://www.stealth.ly
- - - - - - - - - - - - - - - - -
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whypartitionleadersmigratethemselvessometimes
?
~ Joe Stein
- - - - - - - - - - - - - - - - -
http://www.stealth.ly
- - - - - - - - - - - - - - - - -
On Fri, Feb 20, 2015 at 7:19 PM, Sa Li wrote:
> Hi, All
>
> My dev cluster has three
Hi, All
My dev cluster has three nodes (1, 2, 3), but I've seen quite often that
node 1 just doesn't work as a leader. I have run preferred-replica-election many
times; every time I run the replica election, I see node 1 become the leader for
some partitions, but it just stops being the leader after a while, and th
Hi, All
I'd like to use the kafka web console to monitor offsets/topics; it
is easy to use, but it freezes/stops or dies too frequently.
I don't think it's a problem at the OS level.
It seems to be a problem at the application level.
I've already set the open file handle limit to 98000
Found the problem - it is a bug in the partition selection of the kafka client. Can you
guys confirm and patch it in kafka-clients?
for (int i = 0; i < numPartitions; i++) {
    int partition = Utils.abs(counter.getAndIncrement()) % numPartitions;
    if (partitions.get(partition).leader() != null) {
        return partition;
    }
}
Update:
I am using kafka.clients 0.8.2-beta. Below are the test steps:
1. set up a local kafka cluster with 2 brokers, 0 and 1
2. create topic X with replication factor 1 and 4 partitions
3. verify that each broker has two of the partitions
4. shut down broker 1
5. start a producer sending dat
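For step 3, one way to check which broker leads which partitions is to ask the producer for the topic metadata. A minimal sketch (assuming the released 0.8.2 kafka-clients API, a broker on localhost:9092, and topic X as above; the names are illustrative, not from the original mail):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.PartitionInfo;

public class PartitionLeaderCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<byte[], byte[]>(props);
        // Print the leader of each partition of topic X; after broker 1 is shut down
        // (step 4), its two partitions should report no leader.
        for (PartitionInfo info : producer.partitionsFor("X")) {
            System.out.println("partition " + info.partition() + " -> leader "
                    + (info.leader() == null ? "none" : info.leader().id()));
        }
        producer.close();
    }
}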
Hello,
I am experimenting with sending data to kafka using KafkaProducer and found that
when a partition is completely offline, e.g. a topic with replication
factor = 1 and its broker down, KafkaProducer seems to hang
forever. It does not even exit with the timeout setting. Can you take a look?
I ch
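A minimal sketch of the kind of sender being described (assuming the released 0.8.2 kafka-clients API, a broker on localhost:9092, and topic X from the test steps above with partition 2 led by the broker that is down; all names are illustrative). Bounding the wait with Future.get(timeout) on the caller side is only a workaround; calling get() with no timeout is where the reported hang shows up:

import java.util.Properties;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class OfflinePartitionSend {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<byte[], byte[]>(props);
        // Send to partition 2, assumed to be one of the partitions whose leader is down.
        Future<RecordMetadata> future = producer.send(
                new ProducerRecord<byte[], byte[]>("X", 2, null, "hello".getBytes()));
        try {
            RecordMetadata md = future.get(10, TimeUnit.SECONDS);  // bound the wait explicitly
            System.out.println("acked at offset " + md.offset());
        } catch (TimeoutException e) {
            System.out.println("no ack within 10s -- partition still offline?");
        }
        // Note: close() may also block while the record cannot be delivered.
        producer.close();
    }
}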
We store offsets in INT64, so you can go as high as:
9,223,372,036,854,775,807
messages per topic-partition before looping around :)
Gwen
On Fri, Feb 20, 2015 at 12:21 AM, Clement Dussieux | AT Internet
<clement.dussi...@atinternet.com> wrote:
> Hi,
>
>
> I am using Kafka_2.9.2-0.8.2 and play a
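(For anyone wondering where Gwen's number comes from, it is simply the largest signed 64-bit value; a throwaway check in Java:)

public class MaxOffset {
    public static void main(String[] args) {
        // Offsets are signed 64-bit longs, so the ceiling is 2^63 - 1.
        System.out.println(Long.MAX_VALUE);  // prints 9223372036854775807
    }
}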
Hi Daniel,
I think you can still use the same logic you had in the custom partitioner
in the old producer. You just move it to the client that creates the
records.
The reason you don't cache the result of partitionsFor is that the producer
should handle the caching for you, so it's not necessarily
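For illustration, a minimal sketch of the client-side approach described above (assuming the released 0.8.2 kafka-clients API; the broker address, topic, key and hash are placeholders, and choosePartition stands in for whatever logic lived in the old custom partitioner):

import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.PartitionInfo;

public class ClientSidePartitioning {
    // Stand-in for the logic that used to live in the old custom partitioner.
    static int choosePartition(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        String topic = "my-topic";  // hypothetical topic name
        String key = "user-42";

        // Ask the producer for the current partition list each time instead of caching it.
        List<PartitionInfo> partitions = producer.partitionsFor(topic);
        int partition = choosePartition(key, partitions.size());

        // Passing the partition explicitly bypasses the built-in partitioner.
        producer.send(new ProducerRecord<String, String>(topic, partition, key, "payload"));
        producer.close();
    }
}

Usage-wise, the only change from a plain send is the extra partitionsFor call and the explicit partition argument on ProducerRecord.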
Hello Kafka-users!
I am facing a migration from a somewhat self-built kafka 0.8.1
producer to the new kafka-clients API. I just noticed that the new
KafkaProducer initializes its own Partitioner that cannot be changed (final
field, no ctor-param, no
Class.forName(config.getPartit
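For comparison, the hook being migrated away from looked roughly like this in the old 0.8.1 producer (a sketch; the class name and hash are mine, and the class was wired in via the partitioner.class producer property):

import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

public class MyOldPartitioner implements Partitioner {

    // The old producer instantiates the class reflectively and expects a
    // constructor taking VerifiableProperties.
    public MyOldPartitioner(VerifiableProperties props) {
    }

    public int partition(Object key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}

With the 0.8.2 KafkaProducer there is no equivalent plug-in point, hence the suggestion above to compute the partition in the calling code and pass it on the ProducerRecord.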
Hi,
I am using Kafka_2.9.2-0.8.2 and am playing a bit with offsets in my code.
I would like to know how the offset system for message posting is implemented.
The main question here is: does every message posted get an offset greater
than the previous one, meaning that message1 gets offset x and m
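(The short answer is in Gwen's reply further up: offsets are INT64 counters per topic-partition. A quick way to watch the offsets being assigned is to look at the RecordMetadata returned by the new producer; a sketch, assuming the released 0.8.2 kafka-clients API, a broker on localhost:9092, and a topic named test:)

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class OffsetCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        // Send two messages to the same partition and print the broker-assigned offsets;
        // with nothing else writing to that partition, the second is exactly one higher.
        for (int i = 1; i <= 2; i++) {
            RecordMetadata md = producer.send(
                    new ProducerRecord<String, String>("test", 0, null, "message" + i)).get();
            System.out.println("message" + i + " -> offset " + md.offset());
        }
        producer.close();
    }
}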
Great. Thanks for sharing!
On Thu, Feb 19, 2015 at 8:51 PM, Jim Hoagland wrote:
> Hi Folks,
>
> At the recent Kafka Meetup in Mountain View there was interest expressed
> in the encryption-through-Kafka proof of concept that Symantec did a
> few months ago, so I have created a blog post with
Neha and Guozhang,
This thread is several months old now but I'd like to follow up on it as I
have a couple more questions related to it.
1. Guozhang, you suggested 3 steps I should take to ensure each piece of data
remains on the same partition from source to target cluster. In particular
you suggest