Without fixing KAFKA-1017, the issue is that the producer will maintain a
socket connection per min(#partitions, #brokers). If you have lots of
producers, the open file handles on the broker could become an issue.
So, what KAFKA-1017 fixes is to pick a random partition and stick to it for
a configurable amount of time before picking a new one at random.
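A minimal sketch of that sticky-random idea (illustrative only, not the
actual KAFKA-1017 patch; the class and names here are hypothetical):

import scala.util.Random

// Sketch of "sticky random" partitioning: pick one partition at random
// and keep returning it until the refresh interval elapses, so each
// producer holds a connection to only one broker at a time.
class StickyRandomPartitioner(refreshIntervalMs: Long) {
  private var current = -1
  private var lastPickMs = 0L

  def partition(numPartitions: Int): Int = synchronized {
    val now = System.currentTimeMillis()
    if (current < 0 || now - lastPickMs >= refreshIntervalMs) {
      current = Random.nextInt(numPartitions) // re-pick at most once per interval
      lastPickMs = now
    }
    current
  }
}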
Hi Guozhang, Joe, Drew
In our case we have been running for the past 3 weeks, and it has been
consistently writing only to the first partition. The rest of the
partitions have empty index files.
Not sure if I am hitting any issue here.
I am using offset checker as my barometer. Also introspec
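For reference, one way to check per-partition high-water marks in 0.8 is
GetOffsetShell (a sketch; the broker address and topic name are placeholders):

bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 \
  --topic mytopic \
  --time -1

If only partition 0 shows a non-zero latest offset, the producer really is
writing to a single partition.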
Ah ok. Thanks for sharing that.
On Fri, Sep 13, 2013 at 2:50 PM, Rajasekar Elango wrote:
> We have 3 zookeeper nodes in the cluster with a hardware load balancer. In
> one of the zookeepers, we did not configure the ensemble correctly (server.n
> property in zoo.cfg). So it ended up as 2 nodes
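For reference, a correct 3-node ensemble lists all three servers identically
in every node's zoo.cfg, and each node's myid file must match its server.N
line (hostnames below are placeholders):

server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888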
Hello Joe,
The reasons we make the producers produce to a fixed partition for each
metadata-refresh interval are the following:
https://issues.apache.org/jira/browse/KAFKA-1017
https://issues.apache.org/jira/browse/KAFKA-959
So in a word, the randomness is still preserved, but within one
metadata refresh interval the producer sticks to a single partition.
We have 3 zookeeper nodes in the cluster with a hardware load balancer. In
one of the zookeepers, we did not configure the ensemble correctly (server.n
property in zoo.cfg). So it ended up as 2 nodes in one cluster, one
node in the other cluster. The load balancer is randomly hitting one of the 2
zookeeper clusters.
I am using Kafka 0.8 ...
On Thu, Sep 12, 2013 at 8:44 PM, Jun Rao wrote:
> Which revision of 0.8 are you using? In a recent change, a producer will
> stick to a partition for topic.metadata.refresh.interval.ms (defaults to
> 10 mins) before picking another partition at random.
> Thanks,
> Jun
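For what it's worth, that interval is an ordinary producer property; a minimal
sketch against the 0.8 producer API (the broker address and topic name are
placeholders) that shortens it:

import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

val props = new Properties()
props.put("metadata.broker.list", "localhost:9092")
props.put("serializer.class", "kafka.serializer.StringEncoder")
// Re-pick the (random) partition every minute instead of every 10 minutes.
props.put("topic.metadata.refresh.interval.ms", "60000")

val producer = new Producer[String, String](new ProducerConfig(props))
// No key supplied, so the producer picks the partition itself.
producer.send(new KeyedMessage[String, String]("mytopic", "hello"))
producer.close()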
Hi,
We're considering the following network architecture:
Each Kafka server will have 2 network cards:
one will be used to send/receive messages to/from consumers/producers
(external cluster traffic)
the other one will be used to send and receive replication messages between
the brokers (internal cluster traffic)
Q: Do you also use Avro?
Regards,
Henrik
On 13 sep 2013, at 08:33, Grégoire Seux wrote:
> Hi Richard,
>
> we are currently writing a C# driver, as the existing C# drivers are not very
> active. It started as a simple translation of the Scala driver (producer only)
> and targets the 0.8 protocol.
>
>
We are currently trying out Avro, although we're not being very fancy with it.
We're actually trying to build on top of this client code right now:
https://github.com/ExactTargetDev/kafka.git
Out of all the repos I looked at, it seems to show the most promise and has had
some work within this year. We
Isn't this a bug?
I don't see why we would want users to have to code and generate random
partition keys to randomly distribute the data to partitions; that is
Kafka's job, isn't it?
Or, if supplying a null value is not supported, tell the user (throw an
exception) in KeyedMessage like we do for
I ran into this problem as well, Prashant. The handling of the default (null)
partition key was recently changed:
https://github.com/apache/kafka/commit/b71e6dc352770f22daec0c9a3682138666f032be
It no longer assigns a random partition to data with a null partition key.
I had to change my code to generate random partition keys.
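If it helps, another workaround is to plug in a custom partitioner instead of
generating keys. A sketch against the 0.8 producer API (the exact Partitioner
signature varies slightly across 0.8 releases, and the class name here is
hypothetical):

import kafka.producer.Partitioner
import kafka.utils.VerifiableProperties
import scala.util.Random

// Ignores the key entirely and spreads messages across partitions at random.
// Registered via the producer property:
//   partitioner.class=example.RandomPartitioner
class RandomPartitioner[T](props: VerifiableProperties = null) extends Partitioner[T] {
  def partition(key: T, numPartitions: Int): Int =
    Random.nextInt(numPartitions)
}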
Thanks Chris for the patches and Neha for reviewing and committing them!!!
It is great we now have support for Scala 2.10 in Kafka trunk and also 0.8
branch and without losing any existing support for anything else.
/***
Joe Stein
Founder, Principal Consu
Just curious to know, what was the misconfiguration?
On Fri, Sep 13, 2013 at 10:02 AM, Rajasekar Elango
wrote:
> Thanks Neha and Jun. It turned out to be a misconfiguration in our
> zookeeper cluster. After correcting it, everything looks good.
>
> Thanks,
> Raja.
>
>
> On Fri, Sep 13, 2013 at 10
Thanks Neha and Jun. It turned out to be a misconfiguration in our
zookeeper cluster. After correcting it, everything looks good.
Thanks,
Raja.
On Fri, Sep 13, 2013 at 10:13 AM, Jun Rao wrote:
> Any error in the controller and the state-change log? Are brokers 2,3,4
> alive?
>
> Thanks,
>
> Jun
Thanks Neha
I will try applying this property and circle back.
Also, I have been attempting to execute kafka-producer-perf-test.sh and I
receive the following error:
Error: Could not find or load main class
kafka.perf.ProducerPerformance
I am running against 0.8.0-beta1
Seems like perf i
Do you see any errors in the controller.log or the state change log?
Thanks,
Neha
On Sep 12, 2013 10:47 PM, "Rajasekar Elango" wrote:
> We are seeing a problem where, when we try to send messages to a new topic, it
> fails with kafka.common.LeaderNotAvailableException. But usually this problem
> will be transient
This means broker 1 is the controller. It uses a generic zookeeper-based
leader election module, which is where this log4j message is coming from.
Thanks,
Neha
On Sep 12, 2013 10:52 PM, "Lu Xuechao" wrote:
> Thanks Rao. I found both log.dir and log.dirs worked.
>
> When I start up all my brokers,
Yes, I was trying to find out how we can scale out with our Kafka cluster later
if we wanted to add more topics. But as you say, it might be simpler just to
use another Kafka node at some point.
Thanks for your response, it was very helpful!
Xuyen
-Original Message-
From: Neha Narkhede
I am using Kafka 0.8.0-beta1.
Seems like messages are being delivered only to one partition (since
installation).
Should I upgrade or apply a patch to mitigate this issue?
Please advise.
On Thu, Sep 12, 2013 at 8:44 PM, Jun Rao wrote:
> Which revision of 0.8 are you using? In a recent change
As Jun suggested, one reason could be that the
topic.metadata.refresh.interval.ms is too high. Did you observe whether the
distribution improves after topic.metadata.refresh.interval.ms has passed?
Thanks
Neha
On Fri, Sep 13, 2013 at 4:47 AM, prashant amar wrote:
> I am using Kafka 0.8 ...
Any error in the controller and the state-change log? Are brokers 2,3,4
alive?
Thanks,
Jun
On Thu, Sep 12, 2013 at 4:56 PM, Rajasekar Elango wrote:
> We are seeing a problem where, when we try to send messages to a new topic, it
> fails with kafka.common.LeaderNotAvailableException. But usually this problem
Hi, Everyone,
We have been stabilizing the 0.8 branch since the beta1 release. I think we
are getting close to a 0.8 final release. I made an initial list of the
remaining JIRAs that should be fixed in 0.8.
https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20fixVersion%20%3D
This indicates the controller of the cluster (see the end of
http://kafka.apache.org/documentation.html#replication).
Thanks,
Jun
On Thu, Sep 12, 2013 at 5:29 PM, Lu Xuechao wrote:
> Thanks Rao. I found both log.dir and log.dirs worked.
>
> When I start up all my brokers, I see below log mess
Hi Richard,
we are currently writing a C# driver, as the existing C# drivers are not very
active. It started as a simple translation of the Scala driver (producer only)
and targets the 0.8 protocol.
--
Grégoire
Which revision of 0.8 are you using? In a recent change, a producer will
stick to a partition for topic.metadata.refresh.interval.ms (defaults to 10
mins) before picking another partition at random.
Thanks,
Jun
On Thu, Sep 12, 2013 at 1:56 PM, prashant amar wrote:
> I created a topic with
Are you using Kafka 0.7 or 0.8?
On Thu, Sep 12, 2013 at 1:56 PM, prashant amar wrote:
> I created a topic with 4 partitions and for some reason the producer is
> pushing only to one partition.
>
> This is consistently happening across all topics that I created ...
>
> Is there a specific config
>> So my question is: if we go with a hardware load balancer, do all the
broker nodes have to be treated equally? I.e., all broker nodes will have the
same topics and number of partitions for each topic?
All the brokers behind the same hardware load balancer or virtual IP will
be treated equally.
>> Or