Did you create the topic without specifying a number of partitions and then try
to delete/recreate it? I've had that happen to me before. Try shutting down
everything (including zookeeper) and restarting.
On Tue, Apr 23, 2013 at 9:08 PM, Jun Rao wrote:
> Does this happen on every message that you type in producer
Just got logging cranked up. Will let you know when I see it again.
Thanks,
Karl
On Apr 23, 2013, at 8:11 PM, Jun Rao
wrote:
> This means that the broker closed the socket connection for some reason.
> The broker log around the same time should show the reason. Could you dig
> that out?
>
> T
Hi,
I am running the performance script to do some benchmarking tests.
I notice the script frequently drops connections and always reconnects
after that. After reconnection, will it start sending the messages from
the beginning or resume from where it stopped previously? Thanks.
Regards,
Libo
Thanks for the information. I will keep providing feedback regarding 0.8.
Regards,
Libo
Thanks a lot.
Regards,
Libo
Chris,
The following are some comments on the SimpleConsumer wiki.
1. PartitionMetadata.leader() can return null if the new leader is not
elected yet. We need to handle that.
2. When using FetchRequestBuilder, it's important NOT to set replicaId
since this is only meant to be used by fetchers in
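For reference, a minimal sketch of the two points above against the 0.8
SimpleConsumer API (the host, port, topic, and client id are placeholders, not
values from this thread):

    import java.util.Collections;
    import kafka.api.FetchRequest;
    import kafka.api.FetchRequestBuilder;
    import kafka.javaapi.FetchResponse;
    import kafka.javaapi.PartitionMetadata;
    import kafka.javaapi.TopicMetadata;
    import kafka.javaapi.TopicMetadataRequest;
    import kafka.javaapi.TopicMetadataResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class SimpleConsumerSketch {
        public static void main(String[] args) {
            SimpleConsumer consumer =
                new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "myClient");
            TopicMetadataResponse meta =
                consumer.send(new TopicMetadataRequest(Collections.singletonList("topic1")));
            for (TopicMetadata tm : meta.topicsMetadata()) {
                for (PartitionMetadata pm : tm.partitionsMetadata()) {
                    // #1: leader() is null while a new leader is still being elected;
                    // back off and refresh metadata instead of fetching.
                    if (pm.leader() == null) {
                        continue;
                    }
                    // #2: do not set replicaId on the builder; it is reserved for the
                    // brokers' own replica fetchers.
                    FetchRequest req = new FetchRequestBuilder()
                        .clientId("myClient")
                        .addFetch("topic1", pm.partitionId(), 0L, 100000)
                        .build();
                    FetchResponse resp = consumer.fetch(req);
                    // iterate resp.messageSet("topic1", pm.partitionId()) here
                }
            }
            consumer.close();
        }
    }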
Thanks Andrew,
I'm not seeing the event queue exception, but I'm running my cluster on a set
of virtual machines which share the same physical hardware (I know, exactly
what I'm not supposed to do) and I'm getting some slow fsync zookeeper warnings
in my logs. I imagine that my broker writes a
Hi Everyone,
I just wanted to follow up on a previous thread concerning our effort to
identify a stable Node-Kafka client. To date we have tested the following:
1. Franz-Kafka (https://github.com/dannycoates/franz-kafka)
2. Node-Kafka (v2.1, https://github.com/radekg/node-kafka)
3. Nod
Hi,
I am running kafka-producer-perf-test.sh for a performance test.
I notice this line from the log:
INFO Property request.required.acks is overridden to -1
So what does -1 mean in this case? Is acknowledgement enabled?
In producer.properties, I set request.required.acks to 1 and started
the job.
Hi Jun,
This exception also gave me a hard time. In my case, I didn't create the topic
before using it for producing.
Regards,
Libo
According to what I tried, kafka 0.8 works with zookeeper 3.4.3.
Regards,
Libo
Hi Everyone,
I have been experimenting with the libraries listed below and experienced the
same problems.
I have not found any other node clients. I am interested in finding a
node solution as well.
Happy to contribute to a common solution.
Christian Carollo
On Apr 24, 2013, at 10:
Hi,
I'm aware of KAFKA-188, which implements multiple log directories in kafka
0.8. Is there a way to backport this functionality to 0.7.2?
Thanks,
Anand
Hi Jun,
I've made some of the changes:
#1 - was doing this in the leader identification, but not on startup. I've
cleaned that up
#2 - thoughts on how to word this comment? I'm not sure how to point out
not to do something we didn't do :)
#3 Fixed
#4 I'll need to spend a bunch more time refact
So I'm seeing CancelledKeyExceptions cropping up about the time that the
connections get reset.
Is this a zookeeper error that I'm hitting?
Karl
On Apr 24, 2013, at 9:55 AM, Karl Kirch
wrote:
> Just got logging cranked up. Will let you know when I see it again.
>
> Thanks,
> Karl
>
> On Apr
So I switched to the sync producer to see what would happen.
I still get the connection reset by peer error randomly (I say randomly, but it
seems to be connected to some zookeeper CancelledKeyExceptions), but
unfortunately it throws an error on the message after the one that didn't get
sent.
Is that t
Any advice on using a static broker list vs using zookeeper? I keep having fits
with keeping things stable with zookeeper involved (i.e. dropped connections).
If I use a static broker list do I still get failover if a broker goes down?
(i.e. 1 broker goes down, will my producers still try to sen
Is there a best practice on how to handle producer objects for long-running
apps?
Right now I have an app that is long-running and will sit for large stretches
of time (days/weeks) with next to no load and then get slammed. In that case
I'd like to cache the producer so I don't incur a hit i
I got this error while running the producer performance test.
This is from the server:
[2013-04-24 15:23:19,082] ERROR Error while fetching metadata for partition
[test5,2] (kafka.admin.AdminUtils$)
kafka.common.ReplicaNotAvailableException
at kafka.admin.AdminUtils$$anonfun$3.apply(AdminUtils.sca
I figured out the scenario. I have three machines, one server on each of them.
I created a topic with three partitions and replication factor 2. After using
the topic for some time, I shut down one server. When the producer sent data to
the same topic, the error occurred. I still don't know what is th
I'm pretty sure a replication factor of 2 means leader and 2 slaves.
Shutting down one means the 2 slaves requirement isn't met.
On Wed, Apr 24, 2013 at 3:42 PM, Yu, Libo wrote:
> I figured out the scenario. I have three machines, one server on each of
> them.
> I created a topic with three par
This implementation is what I worked on while at Tagged, which was forked
from Marcus' version, but I don't think it ever merged back to Marcus':
https://github.com/tagged/node-kafka
It was in production for about a year when I left Tagged about 6 months
ago. I know that there were some internal
We examined the tagged branch, but the version was 0.1.7 and it's been static
for over a year now. I will say that the Node-Kafka (v2.3) producer
has been stable however. A previous thread concerning Node-Kafka client
development revealed that a C library will be out for 0.8, supporti
This exception happens only once, but there is another error for each producer
request:
[2013-04-24 14:47:39,077] WARN [KafkaApi-0] Produce request: Leader not local
for partition [topic1,0] on broker 0 (kafka.server.KafkaApis)
Here is the information I get from kafka-list-topic, which indicat
Which performance tests are you running, producer or consumer? Is it your
own script?
Thanks,
Jun
On Wed, Apr 24, 2013 at 9:06 AM, Yu, Libo wrote:
> Hi,
>
> I am running the performance script to do some benchmarking tests.
> I notice the script frequently drops connections and always reconne
Anand,
We try not to do too much development on 0.7 except for critical bug fixes.
Thanks,
Jun
On Wed, Apr 24, 2013 at 10:43 AM, anand nalya wrote:
> Hi,
>
> I'm aware of KAFKA-188 which implements multiple log directories in kafka
> 0.8. Is there a way to backport this functionality in 0.7.
Typically, if you use broker list, you will set up a VIP in a load balancer
in front of all brokers.
Thanks,
Jun
On Wed, Apr 24, 2013 at 11:41 AM, Karl Kirch wrote:
> Any advice on using a static broker list vs using zookeeper? I keep having
> fits with keeping things stable with zookeeper in
You don't want to create a new producer for every batch of messages.
Keeping the producer connection open should be fine.
Thanks,
Jun
On Wed, Apr 24, 2013 at 11:45 AM, Karl Kirch wrote:
> Is there a best practice on how to handle producer objects for long
> running apps?
>
> Right now I have
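A sketch of that pattern with the 0.8 Java producer API (the broker list and
serializer below are placeholders): build one Producer at startup, reuse it for
every send, and close it only when the app shuts down.

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class LongLivedProducer {
        // One producer instance for the life of the app, not one per batch.
        private final Producer<String, String> producer;

        public LongLivedProducer() {
            Properties props = new Properties();
            props.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            producer = new Producer<String, String>(new ProducerConfig(props));
        }

        public void send(String topic, String message) {
            producer.send(new KeyedMessage<String, String>(topic, message));
        }

        public void shutdown() {
            producer.close(); // only on application shutdown
        }
    }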
Actually, replication factor 2 means a total of 2 replicas (a leader and a
follower). If the leader is down, another replica should automatically take
over as the leader. There could be some transient errors in the producer,
but they shouldn't last long.
Thanks,
Jun
On Wed, Apr 24, 2013 at 12:4
What output do you get if you add the --unavailable-partitions option in
list topic?
Thanks,
Jun
On Wed, Apr 24, 2013 at 5:20 PM, Yin Yin wrote:
> This exception happens only once. But there is another error for each
> producer request
> [2013-04-24 14:47:39,077] WARN [KafkaApi-0] Produce r
request.required.acks=-1 is in fact the strongest durability guarantee on
the producer. It means the producer waits for all replicas to write the
data before receiving an ack.
Thanks,
Neha
On Wednesday, April 24, 2013, Yu, Libo wrote:
> Hi,
>
> I am running kafka-producer-perf-test.sh for perfor
It is highly recommended that Kafka and Zookeeper be deployed on different
boxes. Also make sure they get dedicated disks, separate from log4j and the
OS.
Thanks,
Neha
On Wednesday, April 24, 2013, Karl Kirch wrote:
> So switched to sync producer to see what would happen.
> I still get the conne
It typically means wait until all replicas have received the message. For
details, see
http://www.slideshare.net/junrao/kafka-replication-apachecon2013 (-1 ==
wait until message is committed).
Thanks,
Jun
On Wed, Apr 24, 2013 at 10:36 AM, Yu, Libo wrote:
> Hi,
>
> I am running kafka-producer-
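To make the three settings concrete, a sketch against the 0.8 producer config
(the broker address is a placeholder):

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.ProducerConfig;

    public class AcksExample {
        public static Producer<String, String> create() {
            Properties props = new Properties();
            props.put("metadata.broker.list", "broker1:9092"); // placeholder broker
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            // request.required.acks:
            //  0  -> producer does not wait for any acknowledgement
            //  1  -> producer waits for the leader to write the message to its log
            // -1  -> producer waits until the message is committed to all in-sync replicas
            props.put("request.required.acks", "-1");
            return new Producer<String, String>(new ProducerConfig(props));
        }
    }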
For #5, if you always start from an offset returned by getOffsetBefore, it
won't happen since getOffsetBefore will always return offset at the
compressed messageSet boundary. However, if you start consuming from an
arbitrary offset, you may see this.
Thanks,
Jun
On Wed, Apr 24, 2013 at 11:12 AM
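For anyone using the SimpleConsumer, a sketch of the offset lookup described
above (getOffsetsBefore in the 0.8 javaapi; the topic, partition, and client id
are placeholders). Offsets returned this way sit on message-set boundaries, so
a fetch starting from one of them will not begin inside a compressed message
set:

    import java.util.HashMap;
    import java.util.Map;
    import kafka.api.PartitionOffsetRequestInfo;
    import kafka.common.TopicAndPartition;
    import kafka.javaapi.OffsetRequest;
    import kafka.javaapi.OffsetResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class OffsetLookup {
        public static long earliestOffset(SimpleConsumer consumer, String topic,
                                          int partition, String clientName) {
            TopicAndPartition tp = new TopicAndPartition(topic, partition);
            Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
                new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
            // EarliestTime() asks for the first available offset; LatestTime()
            // would ask for the current log end.
            requestInfo.put(tp,
                new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.EarliestTime(), 1));
            OffsetRequest request = new OffsetRequest(requestInfo,
                kafka.api.OffsetRequest.CurrentVersion(), clientName);
            OffsetResponse response = consumer.getOffsetsBefore(request);
            return response.offsets(topic, partition)[0];
        }
    }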
With the option --unavailable-partitions --topic topic1, kafka-list-topic
doesn't show anything related to topic1.
> Date: Wed, 24 Apr 2013 20:58:09 -0700
> Subject: Re: LeaderNotAvailable Exception
> From: jun...@gmail.com
> To: users@kafka.apache.org
>
> What output do you get if you add the --un