Neha, thanks for the answer. I want to understand this case:
>>Also, bringing down 1 node out of a 3 node zookeeper cluster is risky,
since any subsequent leader election might not reach a quorum
I thought zookeeper guarantees a quorum if only 1 node out of 3 fails?
Thanks.
On Tue
My understanding is that "bringing down 1 node out of a 3 node zookeeper
cluster is risky,
since any subsequent leader election *might* not reach a quorum" and "It is
less likely but still risky to some
extent" - *"it might not reach a quorum"* because you need both of the
remaining nodes to be up.
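For example, a quick way to sanity-check the remaining ensemble before (and
after) taking a node down is ZooKeeper's four-letter commands; zk1/zk2/zk3
below are placeholder hostnames:

  # each healthy server answers "imok"
  for host in zk1 zk2 zk3; do
    echo -n "$host: "; echo ruok | nc "$host" 2181; echo
  done

  # "stat" reports whether a node is currently the leader or a follower
  echo stat | nc zk2 2181 | grep Mode

With 3 nodes the quorum size is 2, so the ensemble survives one failure but
not two.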
Zach, that error will occur if different brokers are advertising themselves
in such a way that they resolve to the same IP address. The
advertised hostname is the hostname that will be given out to producers,
consumers, and other brokers to connect to, based on the metadata request.
Chec
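As a rough way to check this, you can look at what each broker has actually
registered in ZooKeeper (zk1:2181 and the broker ids are placeholders; if
one-shot commands don't work in your zookeeper-shell version, run it
interactively and type the same commands):

  bin/zookeeper-shell.sh zk1:2181 ls /brokers/ids
  bin/zookeeper-shell.sh zk1:2181 get /brokers/ids/0
  bin/zookeeper-shell.sh zk1:2181 get /brokers/ids/1

Each znode holds JSON with the advertised "host" and "port"; if two brokers
end up resolving to the same IP address there, that is the likely cause of
the error.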
Hello All,
I am seeing this issue very frequently when running a high volume of
messages through Kafka. It starts off well, and it can go on for minutes
that way, but eventually it reaches a point where the connection to Kafka
dies, then it reconnects and carries on. This repeats more frequently w
Kafka should not reset the offset to zero by itself. Do you see any
exceptions in the Zookeeper logs? There are some known bugs in ZK that can
cause the broker registration node to be deleted, but I am not sure if any
of them can cause an offset reset.
Guozhang
On Tue, Jun 24, 2014 at 8:44 AM, Luke Forehand <
luke
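As a rough check (assuming the 0.8 high-level consumer, which commits its
offsets to ZooKeeper), you can watch the committed offset directly to see
whether it really jumps back to zero; the group, topic and partition below
are placeholders:

  bin/zookeeper-shell.sh zk1:2181 get /consumers/my-group/offsets/my-topic/0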
Hello Ahmed,
Did you see any exceptions in the broker logs?
Guozhang
On Wed, Jun 25, 2014 at 7:47 AM, Ahmed H. wrote:
> Hello All,
>
> I am seeing this issue very frequently when running a high volume of
> messages through Kafka. It starts off well, and it can go on for minutes
> that way, bu
Are you referring to the zookeeper logs? If so, I am seeing a lot of those:
2014-06-25 11:15:02 NIOServerCnxn [WARN] caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid
0x146958701700371, likely client has closed socket
at org.apache.zookeeper
This worked. Thank you.
> On Jun 25, 2014, at 7:04 AM, Joe Stein wrote:
>
> Zach, that error will occur if different brokers are advertising themselves
> in such a way where they are resolving to the same IP address. The
> advertised hostname is the hostname that will be given out to producers
Michael, as I understand it, it's "risky" in the sense that if another (2nd)
node were to fail, your zookeeper ensemble would no longer be operational,
right? But if there are no other issues it shouldn't affect operations.
On Wed, Jun 25, 2014 at 2:08 AM, Michal Michalski
wrote:
> My understanding is that "bringing down 1 node our
Hi Prakash,
How many open files do you expect a broker to be able to handle? It seems
like this broker is crashing at around 4100 or so open files.
Thanks,
Paul Lung
On 6/24/14, 11:08 PM, "Lung, Paul" wrote:
>Ok. What I just saw was that when the controller machine reaches around
>4100+ files,
Without knowing the intricacies of Kafka, I think the default open file
descriptor limit is 1024 on Unix. This can be changed by setting a higher
ulimit value (typically 8192, but sometimes even higher).
Before modifying the ulimit I would recommend you check the number of
sockets stuck in TIME_WAIT mo
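For what it's worth, a rough way to see where the broker stands before
touching the ulimit (the pgrep pattern assumes a single broker process on
the box):

  BROKER_PID=$(pgrep -f kafka.Kafka)
  grep "open files" /proc/$BROKER_PID/limits   # the limit the broker actually got
  ls /proc/$BROKER_PID/fd | wc -l              # descriptors currently in use
  netstat -an | grep -c TIME_WAIT              # sockets waiting to close

To raise the limit persistently on Linux you would typically add an "nofile"
entry for the kafka user in /etc/security/limits.conf and restart the broker
from a fresh login; the right value depends on your partition and connection
counts.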
We monitor producers, or for that matter any process/service, using JMX
metrics. Every server and service at LinkedIn sends metrics in a Kafka
message to a metrics Kafka cluster. We have subscribers that connect to the
metrics cluster to index that data in RRDs.
Our aim is to expose all important me
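As a small example of pulling those JMX metrics without writing a client,
Kafka ships a JmxTool class you can point at any process that has remote JMX
enabled (the host and port 9999 below are assumptions; the port is set via
the JMX_PORT environment variable when starting the process):

  bin/kafka-run-class.sh kafka.tools.JmxTool \
    --jmx-url service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi \
    --reporting-interval 10000

Without --object-name it dumps all MBeans as CSV; once you know the MBean
you care about (e.g. from jconsole) you can pass it with --object-name to
narrow the output.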
One possible issue: the brokers need to talk directly to each other,
broker-to-broker, right? And they will try to talk to each other via the
VIP endpoints (vip1a, vip2a)?
The brokers communicate with each other using the
advertised.host.name as well. So you will need to ensure that is
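Since whatever ends up in advertised.host.name is what both clients and
other brokers will use, a quick check on each broker can help (hostnames
below are placeholders; the usual setup is for each broker to advertise its
own resolvable name rather than a shared VIP):

  grep -E "^(broker.id|advertised.host.name|advertised.port)" config/server.properties

  # expected on broker 1, for example:
  #   broker.id=1
  #   advertised.host.name=kafka1.internal.example.com
  #   advertised.port=9092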
Do you see something like "begin rebalancing consumer" in your consumer
logs? Could you send around the full log4j output of the consumer?
On Wed, Jun 25, 2014 at 8:19 AM, Ahmed H. wrote:
> Are you referring to the zookeeper logs? If so, I am seeing a lot of those:
>
> 2014-06-25 11:15:02 NIOServerCnx
Hi Joe,
Thanks for the info. I am aware of the reassignment thingy. I was
trying to understand why the distribution was uneven in the first place.
Regards,
Virendra
On 6/24/14, 8:41 PM, "Joe Stein" wrote:
>Take a look at
>
>bin/kafka-reassign-partitions.sh
>
>Option
Unfortunately I do not have the logs on hand anymore, they were cleared
already.
With that said, I do recall seeing some rebalancing. It attempts to
rebalance a few times and eventually succeeds. In the past, I have had
cases where it tries rebalancing 4 times and gives up because it reached
its
Hi,
I get the following error from my producer when sending a message:
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages
after 3 tries.
at
kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.P
Hi Neha,
You are correct. I checked the controller.log and found that even
though I had assumed that the producers were started after the whole kafka
cluster, that was not true.
And the topic1 and topic3 creation requests came in when only brokers 1 and
2 were alive. And then in a split second all the oth
If rebalance succeeded, then those error messages are harmless. Though I
agree we shouldn't log those in the first place.
On Wed, Jun 25, 2014 at 2:12 PM, Ahmed H. wrote:
> Unfortunately I do not have the logs on hand anymore, they were cleared
> already.
>
> With that said, I do recall seeing
Cool. Thanks for circling back with the verification.
On Wed, Jun 25, 2014 at 2:49 PM, Virendra Pratap Singh <
vpsi...@yahoo-inc.com.invalid> wrote:
> Hi Neha,
>
> You are correct. I checked the controller.log and found that even
> though I had assumed that the producers were started after w
Thanks for the info Joe - yes, I do think this will be very useful. Will look
out for this, eh?!
On June 24, 2014 at 10:32:08 AM, Joe Stein (joe.st...@stealth.ly) wrote:
You could then chunk the data (wrapped in an outer message so you have meta
data like file name, total size, current chunk si
Could you provide information on why each retry failed? Look for an error
message that says "Failed to send producer request".
On Wed, Jun 25, 2014 at 2:18 PM, England, Michael
wrote:
> Hi,
>
> I get the following error from my producer when sending a message:
> Caused by: kafka.common.FailedTo
Neha,
I don’t see that error message in the logs. The error that I included in my
original email is the only error that I see from Kafka.
Do I need to change log levels to get the info that you need?
Mike
-Original Message-
From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
Sent: Wedn
It should be at WARN.
On Wed, Jun 25, 2014 at 3:42 PM, England, Michael
wrote:
> Neha,
>
> I don’t see that error message in the logs. The error that I included in
> my original email is the only error that I see from Kafka.
>
> Do I need to change log levels get the info that you need?
>
> Mik
Ok, at WARN level I see the following:
2014-06-25 16:46:16 WARN kafka-consumer-sp_lead.index.processor1
kafka.producer.BrokerPartitionInfo - Error while fetching metadata
[{TopicMetadata for topic lead.indexer ->
No partition metadata for topic lead.indexer due to
kafka.common.LeaderNotAvaila
By the way, this is what I get when I describe the topic:
Topic:lead.indexer  PartitionCount:53  ReplicationFactor:1  Configs:
    Topic: lead.indexer  Partition: 0  Leader: 2  Replicas: 2  Isr: 2
    Topic: lead.indexer  Partition: 1  Leader: 1  Replicas: 1  Isr: 1
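For reference, that listing comes from the topics tool, which is also a
quick way to re-check leadership when the producer complains (zk1:2181 is a
placeholder):

  bin/kafka-topics.sh --describe --zookeeper zk1:2181 --topic lead.indexer

A partition showing "Leader: -1" or an empty Isr would explain
LeaderNotAvailableException; here every partition reports a live leader, so
the error looks more like stale metadata on the producer side.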
Hi,
I am currently running a Kafka 0.8.1.1 cluster with 8 servers. I would like
to add a new broker to the cluster. Each kafka instance has 400 GB of data
and we are using a replication factor of 3 with 50 partitions for each
topic.
I have checked the documentation, and especially the section d
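A new broker does not take over any existing partitions by itself, so the
usual approach is the reassignment tool mentioned elsewhere in this thread.
A rough sketch (the topic name, zookeeper string and broker ids below are
placeholders; double-check the options against your 0.8.1.1 install):

  # topics-to-move.json contains: {"topics": [{"topic": "my-topic"}], "version": 1}

  # 1. have the tool propose an assignment spread over the target brokers,
  #    including the new broker id (8 here)
  bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
    --topics-to-move-json-file topics-to-move.json \
    --broker-list "0,1,2,3,4,5,6,7,8" --generate

  # 2. save the proposed assignment it prints to a file and execute it
  bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
    --reassignment-json-file proposed-assignment.json --execute

  # 3. re-run with --verify until every partition reports completed
  bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
    --reassignment-json-file proposed-assignment.json --verify

With 400 GB per broker the data movement will take a while, so it is worth
reassigning a few topics at a time.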
Hi,
Currently we use the path (/brokers/ids/0) to get an individual broker's
registration information.
How do I get all the registered brokers from zookeeper?
Thanks
Bala
I'm not sure I understood your question. If you want to know all registered
brokers, could you list the broker ids with "ls /brokers/ids" and then read
each of the returned child nodes?
On Wed, Jun 25, 2014 at 7:29 PM, Balasubramanian Jayaraman <
balasubramanian.jayara...@autodesk.com> wrote:
> H
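For example, with the zookeeper shell that ships with Kafka (zk1:2181 and
the ids are placeholders; run the shell interactively if your version does
not accept one-shot commands):

  bin/zookeeper-shell.sh zk1:2181 ls /brokers/ids
  # e.g. prints: [0, 1, 2]
  bin/zookeeper-shell.sh zk1:2181 get /brokers/ids/0

Each child of /brokers/ids is one registered broker, and its data is the
same JSON registration you already read for broker 0.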
The output from the list topic tool suggests that a leader is available for
all partitions. Is the LeaderNotAvailableException repeatable? Are you
running Kafka in the cloud?
On Wed, Jun 25, 2014 at 4:03 PM, England, Michael
wrote:
> By the way, this is what I get when I describe the topic:
>
>
I am aware of the lack of a programmatic way to delete topics in kafka
0.8.0, so I am using the sledgehammer approach.
This is what I am doing (see the sketch after this list):
1. Bring the whole kafka cluster down.
2. Delete the topic's content on all the kafka brokers, as pointed to by the
log.dirs setting.
3. Delete the topic metadata from zookeeper.
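A rough sketch of those steps for 0.8.0 (the topic name, log directory and
zookeeper string below are placeholders; this destroys the topic's data and
assumes every broker really is stopped first):

  # 1. stop every broker
  # 2. on each broker, remove the topic's partition directories from log.dirs
  rm -rf /var/kafka-logs/my-topic-*
  # 3. remove the topic's metadata from zookeeper (rmr deletes recursively)
  bin/zookeeper-shell.sh zk1:2181 rmr /brokers/topics/my-topic
  # 4. restart the brokers

If old consumer groups should also forget the topic, their committed offsets
under /consumers/<group>/offsets/<topic> would need the same treatment.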