@Amar/Mohit, do you see any issues with the Posix reserve feature?
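
In the meantime, a quick way to rule the reserve in or out (a minimal sketch; `gvol0` is taken from the volume info below, and the 1% default is from memory, so worth verifying for 5.11):

```
# Check the current posix reserve on the volume (defaults to 1% if unset)
gluster volume get gvol0 storage.reserve
# The reserve can be tuned if it is larger than expected, e.g.:
gluster volume set gvol0 storage.reserve 1
```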

> On 06-Mar-2020, at 9:11 AM, David Cunningham <[email protected]> wrote:
> 
> Hi Aravinda,
> 
> That's what was reporting 54% used at the same time that GlusterFS was 
> giving "No space left on device" errors. It's a bit worrying that they're 
> not reporting the same thing.
> 
> Thank you.
> 
> 
> On Fri, 6 Mar 2020 at 16:33, Aravinda VK <[email protected]> wrote:
> Hi David,
> 
> What does `df` report for the brick itself?
> 
> ```
> df /nodirectwritedata/gluster/gvol0
> ```
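> 
> It may also help to compare the brick filesystem with the GlusterFS mount (a sketch; the mount point below is hypothetical, adjust to yours):
> 
> ```
> df -h /nodirectwritedata/gluster/gvol0   # brick filesystem
> df -h /mnt/gvol0                         # FUSE mount (hypothetical path)
> ```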
> 
> —
> regards
> Aravinda Vishwanathapura
> https://kadalu.io
> 
>> On 06-Mar-2020, at 2:52 AM, David Cunningham <[email protected]> wrote:
>> 
>> Hello,
>> 
>> A major concern we have is that "df" was reporting only 54% used and yet 
>> GlusterFS was giving "No space left on device" errors. We rely on "df" to 
>> report the correct result to monitor the system and ensure stability. Does 
>> anyone know what might have been going on here?
>> 
>> Thanks in advance.
>> 
>> 
>> On Thu, 5 Mar 2020 at 21:35, David Cunningham <[email protected]> wrote:
>> Hi Aravinda,
>> 
>> Thanks for the reply. This test server is indeed the master server for 
>> geo-replication to a slave.
>> 
>> I'm really surprised that geo-replication simply keeps writing changelogs 
>> until all space is consumed, without cleaning them up itself. I didn't see 
>> any warning about this in the geo-replication install documentation, which 
>> is unfortunate. We'll come up with a solution to delete changelog files 
>> older than the LAST_SYNCED time in the geo-replication status (see the 
>> sketch below). Is anyone aware of any other potential gotchas like this?
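>> 
>> Something along these lines, perhaps (a rough sketch only; it assumes the 
>> changelogs live under the brick's .glusterfs/changelogs directory, and it 
>> uses a simple age cut-off rather than the actual LAST_SYNCED time):
>> 
>> ```
>> # List changelog files older than 7 days under the brick (dry run first)
>> find /nodirectwritedata/gluster/gvol0/.glusterfs/changelogs \
>>     -name 'CHANGELOG.*' -type f -mtime +7 -print
>> # Once verified, replace -print with -delete (or archive them instead)
>> ```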
>> 
>> Does anyone have an idea why, in my previous note, some space in the 2GB 
>> GlusterFS partition apparently went missing? We had 0.47GB of data and 1GB 
>> reported used by .glusterfs, which even if they were entirely separate 
>> files would only add up to 1.47GB used, meaning 0.53GB should have been 
>> left in the partition. If less space is actually being used because of the 
>> hard links, then it's even harder to understand where the other 1.53GB 
>> went. So why would GlusterFS report "No space left on device"?
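>> 
>> One thing we plan to rule out (a sketch; ENOSPC can also come from inode 
>> exhaustion even when `df` shows free blocks):
>> 
>> ```
>> df -h /nodirectwritedata/gluster/gvol0   # block usage on the brick
>> df -i /nodirectwritedata/gluster/gvol0   # inode usage; 100% here also gives ENOSPC
>> ```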
>> 
>> Thanks again for any assistance.
>> 
>> 
>> On Thu, 5 Mar 2020 at 17:31, Aravinda VK <[email protected]> wrote:
>> Hi David,
>> 
>> Does this volume use geo-replication? The geo-replication feature enables 
>> the changelog to record the latest changes happening in the GlusterFS volume.
>> 
>> The .glusterfs directory also contains hard links to the actual data, so 
>> the size shown for .glusterfs includes that data. Please refer to the 
>> comment by Xavi: 
>> https://github.com/gluster/glusterfs/issues/833#issuecomment-594436009
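>> 
>> You can see this with `du` (a sketch; `du` counts each hard-linked inode 
>> only once within a single invocation, so the whole-brick total is the real 
>> usage, while summing the two runs below would double-count the data):
>> 
>> ```
>> du -sh /nodirectwritedata/gluster/gvol0              # whole brick, each inode counted once
>> du -sh /nodirectwritedata/gluster/gvol0/.glusterfs   # counts the hard-linked data again
>> ```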
>> 
>> If the changelog files are causing the issue, you can use this archival 
>> tool to remove processed changelogs: 
>> https://github.com/aravindavk/archive_gluster_changelogs
>> 
>> —
>> regards
>> Aravinda Vishwanathapura
>> https://kadalu.io
>> 
>> 
>>> On 05-Mar-2020, at 9:02 AM, David Cunningham <[email protected]> wrote:
>>> 
>>> Hello,
>>> 
>>> We are looking for some advice on disk use. This is on a single node 
>>> GlusterFS test server.
>>> 
>>> There's a 2GB partition for GlusterFS. Of that, 470MB is used for actual 
>>> data, and 1GB is used by the .glusterfs directory. The .glusterfs directory 
>>> is mostly used by the two-character directories and the "changelogs" 
>>> directory. Why is so much used by .glusterfs, and can we reduce that 
>>> overhead?
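>>> 
>>> A quick way to see where the space inside .glusterfs goes (a sketch of 
>>> how we got the breakdown above):
>>> 
>>> ```
>>> du -sh /nodirectwritedata/gluster/gvol0/.glusterfs/* | sort -rh | head
>>> ```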
>>> 
>>> We also have a problem with this test system where GlusterFS is giving "No 
>>> space left on device" errors. That's despite "df" reporting only 54% used, 
>>> and even if we add the 470MB to 1GB used above, that still comes out to 
>>> less than the 2GB available, so there should be some spare.
>>> 
>>> Would anyone be able to advise on these please? Thank you in advance.
>>> 
>>> The GlusterFS version is 5.11 and here is the volume information:
>>> 
>>> Volume Name: gvol0
>>> Type: Distribute
>>> Volume ID: 33ed309b-0e63-4f9a-8132-ab1b0fdcbc36
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: myhost:/nodirectwritedata/gluster/gvol0
>>> Options Reconfigured:
>>> transport.address-family: inet
>>> nfs.disable: on
>>> geo-replication.indexing: on
>>> geo-replication.ignore-pid-check: on
>>> changelog.changelog: on
>>> 
>>> -- 
>>> David Cunningham, Voisonics Limited
>>> http://voisonics.com/
>>> USA: +1 213 221 1092
>>> New Zealand: +64 (0)28 2558 3782
>> 
>> -- 
>> David Cunningham, Voisonics Limited
>> http://voisonics.com/
>> USA: +1 213 221 1092
>> New Zealand: +64 (0)28 2558 3782
> 
> -- 
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782

Aravinda Vishwanathapura
https://kadalu.io



________



Community Meeting Calendar:

Schedule -
Every Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
