Hi,

I found the solution.

The problem is a wrong detection of bricks sharing a filesystem. In my case, some bricks on some hosts were detected as residing on a filesystem shared with another brick. If every brick has its own filesystem, you should see shared-brick-count as 0 or 1 (1 for bricks on the host you are checking):

grep shared-brick-count /var/lib/glusterd/vols/*/* | grep -v rpmsave | grep -Ev "shared-brick-count (0|1)"

If you see a count > 1, check the brick-fsid of the bricks:

grep brick-fsid /var/lib/glusterd/vols/<volume>/bricks/*

If you see a duplicate fsid (for a brick on the host you are checking), you can edit that brick's configuration file and delete the line with brick-fsid.
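
A minimal sketch of that edit; <volume> and <brick-file> are placeholders for the file that showed the duplicate brick-fsid in the grep above, and it is a good idea to back the file up first:

# <volume> and <brick-file> are placeholders; use the file that showed the duplicate brick-fsid above
cp /var/lib/glusterd/vols/<volume>/bricks/<brick-file> /root/<brick-file>.bak
sed -i '/^brick-fsid/d' /var/lib/glusterd/vols/<volume>/bricks/<brick-file>

Then restart the gluster daemon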

systemctl restart glusterd

and make any small change on the volume to repopulate shared-brick-count, for example:

gluster volume set <volume> min-free-disk 10%
gluster volume reset <volume> min-free-disk

Do this on all hosts that show shared-brick-count > 1.
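
If it helps, here is a quick sketch for running the same check on every node (the addresses are the ones from my setup; it assumes ssh access between the hosts):

for h in 10.0.4.11 10.0.4.12 10.0.4.13; do
  echo "== $h =="
  ssh "$h" 'grep shared-brick-count /var/lib/glusterd/vols/*/* | grep -v rpmsave | grep -Ev "shared-brick-count (0|1)"'
done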

All of this is described in

https://github.com/gluster/glusterfs/issues/3642#issuecomment-1188788173

Cheers,

Jiri




On 11/8/24 11:27, Jiří Sléžka via Users wrote:
Hello,

I have a 3-node HCI cluster (Rocky Linux 8, 4.5.7-0.master.20240415165511.git7238a3766d.el8). I had 2 SSDs in each node, each as a separate brick. Recently I added a third SSD to each node and expanded the volume to a 3 x 3 topology. Despite this, the free space on the volume did not change.
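
For context, the expansion itself was done roughly like this (a sketch, not the exact command I ran; the new bricks match the volume info below):

gluster volume add-brick vms replica 3 10.0.4.11:/gluster_bricks/vms3/vms3 10.0.4.12:/gluster_bricks/vms3/vms3 10.0.4.13:/gluster_bricks/vms3/vms3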

gluster volume info vms

Volume Name: vms
Type: Distributed-Replicate
Volume ID: 52032ec6-99d4-4210-8fb8-ffbd7a1e0bf7
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.0.4.11:/gluster_bricks/vms/vms
Brick2: 10.0.4.13:/gluster_bricks/vms/vms
Brick3: 10.0.4.12:/gluster_bricks/vms/vms
Brick4: 10.0.4.11:/gluster_bricks/vms2/vms2
Brick5: 10.0.4.13:/gluster_bricks/vms2/vms2
Brick6: 10.0.4.12:/gluster_bricks/vms2/vms2
Brick7: 10.0.4.11:/gluster_bricks/vms3/vms3
Brick8: 10.0.4.12:/gluster_bricks/vms3/vms3
Brick9: 10.0.4.13:/gluster_bricks/vms3/vms3
Options Reconfigured:
cluster.shd-max-threads: 1
performance.client-io-threads: off
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
performance.low-prio-threads: 32
performance.strict-o-direct: on
network.remote-dio: off
network.ping-timeout: 30
user.cifs: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable
performance.stat-prefetch: off
cluster.granular-entry-heal: enable
storage.health-check-interval: 0

df -h on all nodes looks the same

...
10.0.4.11:/engine                             100G   23G   78G  23% /rhev/data-center/mnt/glusterSD/10.0.4.11:_engine
10.0.4.11:/vms                                1.7T  773G  952G  45% /rhev/data-center/mnt/glusterSD/10.0.4.11:_vms
...
/dev/mapper/gluster_vg_sdb-gluster_lv_engine  100G   22G   79G  22% /gluster_bricks/engine
/dev/mapper/gluster_vg_sdb-gluster_lv_vms     794G  476G  319G  60% /gluster_bricks/vms
/dev/mapper/gluster_vg_sdd-gluster_lv_vms2    930G  553G  378G  60% /gluster_bricks/vms2
/dev/mapper/gluster_vg_vms3-gluster_lv_vms3   932G  6.6G  925G   1% /gluster_bricks/vms3
...

The size of the mounted vms volume is reported as 1.7T, which is the old value (the sum of two bricks, 794G + 930G). The correct size should be the sum of all three bricks, around 2.6T.

What step am I missing?

Cheers,

Jiri




_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3YHOOLOXTAYJA5756BY6D4EA72LTBT6A/
