Did you set the host.name property as described at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-OnEC2%2Cwhycan%27tmyhighlevelconsumersconnecttothebrokers%3F
?
When accessing brokers from outside AWS, host.name should be set to the
public domain name/IP. This also means that all brokers would need
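For illustration, a hedged sketch of the broker setting being discussed (the hostname value is a placeholder, not from the original post):

```
# server.properties (broker config)
# Advertise a name reachable from outside AWS; substitute your
# instance's actual public DNS name.
host.name=ec2-198-51-100-1.compute-1.amazonaws.com
```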
Because Kafka was detecting localhost.localdomain as the hostname, I commented
out the line "127.0.0.1 localhost.localdomain localhost" and added
"127.0.0.1 ip-10-0-1-20.localdomain" in /etc/hosts. When I restarted Kafka
(issued kill -15 pid), writes to existing topics started failing and I see
several
Hi Mark,
Sorry for the delay. We're not using a load balancer, if that's what you mean
by LB.
After applying the change I mentioned last time (the netfilter thing), I
couldn't see any improvement. We even restarted Kafka, but since the restart
I've seen the connection count slowly climbing.
I am using Kafka as a buffer for data streaming in from various sources.
Since it's time series data, I generate the key for each message by
combining the source ID and the minute in the timestamp. This means I can at
most have 60 partitions per topic (as each source has its own topic). I have
set num.partit
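A minimal sketch of the keying scheme described above (the class and method names are illustrative, not from the original post):

```java
// Sketch: build a message key from a source ID plus the minute of the hour,
// so each source's topic sees at most 60 distinct keys (hence at most 60
// usable partitions with key-based partitioning). Pure-JDK illustration;
// the real code would pass this key to a Kafka producer.
public class MinuteKey {
    static String keyFor(String sourceId, long epochMillis) {
        long minuteOfHour = (epochMillis / 60_000L) % 60;
        return sourceId + "-" + minuteOfHour;
    }

    public static void main(String[] args) {
        // 90 minutes after the epoch -> minute 30 of the hour
        System.out.println(keyFor("sensor-7", 90L * 60_000L)); // sensor-7-30
    }
}
```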
Hi Nicolas,
we did run into a similar issue here (lots of ESTABLISHED connections on
the brokers, but none on the consumers/producers). In our case, it was a
firewall issue where connections that were inactive for more than a
certain time were silently dropped by the firewall (but no TCP RST was
sent).
When a broker starts up, it receives a LeaderAndIsrRequest from the
controller broker telling it which partitions it should host and
whether to lead or follow each of them. If clients send requests to the
broker before it has processed this request, the broker throws the error
you see. Did you restart the broker very quickly?
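Clients typically ride out this transient warm-up window by retrying with backoff. A generic sketch, not Kafka client code (the 0.8 producer has its own knobs such as message.send.max.retries):

```java
// Generic retry-with-backoff sketch for the transient window described above
// (broker up, but LeaderAndIsrRequest not yet processed). Illustrative only.
import java.util.concurrent.Callable;

public class RetryDemo {
    static <T> T withRetries(Callable<T> op, int attempts, long backoffMs)
            throws Exception {
        for (int i = 1; ; i++) {
            try {
                return op.call();
            } catch (Exception e) {
                if (i >= attempts) throw e;       // give up after N attempts
                Thread.sleep(backoffMs * i);      // linear backoff between tries
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice, then succeeds -- mimics the broker's warm-up window.
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("not leader yet");
            return "ok after " + calls[0] + " attempts";
        }, 5, 10);
        System.out.println(result);
    }
}
```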
You probably want to think of this in terms of the number of partitions on a
single broker, rather than per topic, since I/O is the limiting factor in
this case. Another factor to consider is the total number of partitions in
the cluster, as ZooKeeper becomes a limiting factor there. 30 partitions is
not too
This will greatly improve efficiency on the client side, and multiple threads
won't have to synchronize before committing offsets. Thanks, Jason.
Regards,
Libo
-----Original Message-----
From: Jason Rosenberg [mailto:j...@squareup.com]
Sent: Thursday, October 03, 2013 4:13 PM
To: users@kafka.
Hi team,
Is it possible to use a single producer with more than one thread? I am not
sure
if its send() is thread safe.
Regards,
Libo
The send() is thread safe, so the short answer would be yes.
On Fri, Oct 4, 2013 at 9:14 AM, Yu, Libo wrote:
> Hi team,
>
> Is it possible to use a single producer with more than one thread? I am
> not sure
> if its send() is thread safe.
>
> Regards,
>
> Libo
>
>
--
-- Guozhang
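Since send() is thread safe, a single producer instance can be shared across threads. A self-contained sketch of the pattern using a stub in place of the real producer (so it runs with the JDK alone):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: several threads sharing ONE producer-like object. The real Kafka
// producer's send() is thread safe, so one instance can be shared the same
// way; this stub just counts sends to keep the example self-contained.
public class SharedProducerDemo {
    static class StubProducer {
        final AtomicInteger sent = new AtomicInteger();
        void send(String topic, String message) { sent.incrementAndGet(); }
    }

    public static void main(String[] args) throws Exception {
        StubProducer producer = new StubProducer();   // one shared instance
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int t = 0; t < 4; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 1000; i++) producer.send("demo", "msg");
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(producer.sent.get()); // 4000
    }
}
```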
I did restart the broker very quickly. I saw similar errors for about 5
minutes, and that's when I decided to shut down all Kafka brokers and start
them one by one. That seems to have enabled writes in Kafka instantly after
the brokers were back up.
How do I do a controlled shutdown? The Kafka shutdown script
Controlled shutdown is described here -
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-1.ControlledShutdown
On Fri, Oct 4, 2013 at 10:18 AM, Aniket Bhatnagar <
aniket.bhatna...@gmail.com> wrote:
> I did restart broker very quickly. I saw similar errors for a
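For reference, a hedged sketch of how the wiki page describes triggering a controlled shutdown in 0.8 (the broker id and ZooKeeper connect string are placeholders; check the linked page for your version):

```
bin/kafka-run-class.sh kafka.admin.ShutdownBroker \
    --zookeeper zk1:2181 --broker 0
```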
All,
I'm having an issue with an integration test I've setup. This is using
0.8-beta1.
The test is to verify that no messages are dropped (or the sender gets an
exception thrown back if failure), while doing a rolling restart of a
cluster of 2 brokers.
The producer is configured to use 'request
Great. Thanks.
Regards,
Libo
-----Original Message-----
From: Guozhang Wang [mailto:wangg...@gmail.com]
Sent: Friday, October 04, 2013 12:27 PM
To: users@kafka.apache.org
Subject: Re: producer API thread safety
The send() is thread safe, so the short answer would be yes.
On Fri, Oct 4, 2013
The occasional single-message loss could happen since
request.required.acks=1 and the leader is shut down before the follower
gets a chance to copy the message. Can you try your test with the number of
acks set to -1?
Thanks,
Neha
On Oct 4, 2013 1:21 PM, "Jason Rosenberg" wrote:
> All,
>
> I'm having an
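The setting Neha refers to, as it would appear in a 0.8 producer config (property name per the 0.8 docs):

```
# Producer config: -1 waits for the message to be committed to all
# in-sync replicas before the send is acknowledged.
request.required.acks=-1
```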
It is very weird. I have a Kafka cluster in EC2. There is no problem
producing messages from one of the nodes with the same producer. But when I
move the producer to my local machine at home, it gives me the error below:
Failed to send messages after 3 tries.
Can anyone tell me how to fix this issue?
Neha,
I'm not sure I understand. I would have thought that if the leader
acknowledges receipt of a message and is then shut down cleanly (with
controlled shutdown enabled), it would be able to reliably persist any
in-memory buffered messages (and replicate them) before shutting down.
Shou