Hi,
What is Kafka's behavior if the partition it is running on runs out of disk
space?
Do producers get an error? Does Kafka stop running entirely? etc.
Appreciate your help.
Thanks.
Thanks Jason.
We did run out of disk space and noticed IOExceptions too. No, the broker
did not shut itself down. Is there some configuration that would enable
this for one or all brokers? That would be a better scenario to be in.
Right now, we have set up some alerts for when disk usage goes beyond a
certain threshold.
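For reference, a minimal sketch of such a disk-space check on the broker
host (the log directory path and the 80% threshold are assumptions, not
values from this thread):

    import java.io.File;

    public class KafkaDiskCheck {
        public static void main(String[] args) {
            // Volume that holds the broker's log.dirs (path is an assumption).
            File logDir = new File("/var/kafka-logs");
            long total = logDir.getTotalSpace();
            long usable = logDir.getUsableSpace();
            double usedPct = 100.0 * (total - usable) / total;
            // Alert well before the volume is full, so the broker never hits
            // an IOException while appending to a log segment.
            if (usedPct > 80.0) {
                System.err.printf("WARNING: Kafka log volume is %.1f%% full%n", usedPct);
            }
        }
    }

Something like this can be run from cron or a monitoring agent against each
broker's log volume.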
Hi Jananee,
Do you know for sure that you ran out of disk space completely? Did you see
any IOExceptions failing to write? Normally, when that happens, the broker is
supposed to immediately shut itself down. Did the one broker shut itself
down?
The NotLeaderForPartitionExceptions are normal.
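To illustrate the producer-side view, here is a minimal sketch using the Java
producer client (broker addresses and topic name are placeholders; older
0.8.x producers expose equivalent retry settings under different names):
transient NotLeaderForPartition errors are retried by the client, and only
errors that persist reach the send callback.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ProducerRetryExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("acks", "all");
            // Retries let the client ride out leadership changes after a broker dies.
            props.put("retries", "5");
            props.put("retry.backoff.ms", "500");

            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("test-topic", "key", "value"),
                              (metadata, exception) -> {
                    if (exception != null) {
                        // Non-retriable errors, or exhausted retries, end up here.
                        System.err.println("Send failed: " + exception);
                    }
                });
            }
        }
    }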
Hi,
We have the following setup:
Number of brokers: 3
Number of zookeepers: 3
Default replication factor: 3
Offsets Storage: kafka
When one of our brokers ran out of disk space, we started seeing a lot of
errors in the broker logs at an alarming rate. This caused the other
brokers also to run
We used a performance tool to load 100 million rows into 7 topics. We
expected that data would be loaded evenly across the 7 topics, but 4 topics
got loaded with ~2 million messages and the remaining 3 topics were loaded
with 90 million messages. The nodes that were leaders of those 3 topics ran
out of disk space and the nodes went down.
We tried to bring back these 2 nodes by doing the following:
1. Stopped the Kafka service
2. Deleted
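As a rough sanity check (illustrative arithmetic only, since the thread does
not give the message size): 90 million messages at ~1 KB each is on the order
of 90 GB of log data for those 3 topics before replication; with a
replication factor of N the cluster has to hold roughly N × 90 GB of it,
concentrated on whichever brokers host those partitions' replicas. If each
broker's log volume is smaller than its share, those brokers will fill up no
matter how evenly the other 4 topics are loaded.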
capacity of the disks used by the cluster nodes.
On Fri, Nov 21, 2014 at 9:58 AM, Nilesh Chhapru <nilesh.chha...@ugamsolutions.com> wrote:
Hi All,
Can anyone give some input on retention policy? I am trying to keep larger
amounts of data in the topics and am hence running out of disk space.
Regards,
Nilesh Chhapru.
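For reference, size-based retention in Kafka is applied per partition, not
per disk; a broker-level sketch might look like the following (values are
illustrative, not recommendations, and topic-level overrides such as
retention.bytes can also be set per topic):

    # server.properties (broker defaults; each limit applies per partition)
    log.retention.hours=24            # delete segments older than 24 hours
    log.retention.bytes=10737418240   # ~10 GB cap per partition
    log.segment.bytes=1073741824      # 1 GB segments; deletion happens per segment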
Thanks Guozhang for the pointer to the mapped NIO. The issue in my case was
related to the disk still being out of space (I thought I did free up some,
but I actually didn't). Curiously, I ran out of space on two occasions. In
one case the error message was clear "No space left on device", and in
a
This is interesting, as I have not seen it before. I searched a bit on the
web and this seems promising:
http://stackoverflow.com/questions/2949371/java-map-nio-nfs-issue-causing-a-vm-fault-a-fault-occurred-in-a-recent-uns
Guozhang
On Fri, Nov 14, 2014 at 5:38 AM, Yury Ruchin wrote:
Hello,
I've run into an issue with a Kafka 0.8.1.1 broker. The broker stopped
working after the disk it was writing to ran out of space. I freed up some
space and tried to restart the broker. It started some recovery procedure,
but after a short time I started seeing the following strange error in the
logs
as in our case), that number is determined
> by the most productive consumer. I would prefer a limit
> for the machine.
>
> Regards,
>
> Libo
log.retention.bytes is per partition. Do you just have a single
topic/partition in the cluster?
Thanks,
Jun
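To make the per-partition point concrete (illustrative arithmetic, not from
the thread): with log.retention.bytes=64424509440 (~60 GB), each partition's
log is allowed to reach roughly 60 GB, so two partitions sharing the same
100 GB volume can legitimately grow to ~120 GB before retention trims
anything. Deletion also happens one whole segment at a time and only when the
retention check runs, so a partition can temporarily overshoot the configured
limit by about one segment.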
Forgot to mention that log.cleanup.interval.mins has been set to 1 in my case.
Regards,
Libo
Hi,
The volume I used for Kafka has about 100G of space.
I set log.retention.bytes to about 60G. But at some
point, the disk was full and the processes crashed.
I remember other people reported the same issue.
Has this been fixed?
Regards,
Libo