On January 30, 2020 12:20:06 PM GMT+02:00, "Goorkate, B.J." 
<[email protected]> wrote:
>Hi,
>
>Thanks for the info! 
>
>I tried the full heal and the stat, but the unsynced entries still
>remain. 
>
>Just to be sure: the find/stat command needs to be done on files in the
>fuse-mount, right?
>Or on the brick-mount itself?
>
>And other than 'gluster volume heal vmstore1 statistics', I cannot find
>a way to verify that the full heal really started, let alone whether it
>finished correctly...
>
>Regards,
>
>Bertjan
>
>On Mon, Jan 27, 2020 at 08:11:14PM +0200, Strahil Nikolov wrote:
>> On January 27, 2020 4:17:26 PM GMT+02:00, "Goorkate, B.J."
>> <[email protected]> wrote:
>> >Hi all,
>> >
>> >I'm in the process of upgrading oVirt-nodes from 4.2 to 4.3. 
>> >
>> >After upgrading the first of 3 oVirt/gluster nodes, there are between
>> >600-1200 unsynced entries for a week now on 1 upgraded node and one
>> >not-yet-upgraded node. The third node (also not-yet-upgraded) says
>> >it's OK (no unsynced entries).
>> >
>> >The cluster doesn't seem to be very busy, but somehow self-heal
>> >doesn't complete.
>> >
>> >Is this because of different gluster versions across the nodes, and
>> >will it resolve as soon as I have upgraded all nodes? Since it's our
>> >production cluster, I don't want to take any risks...
>> >
>> >Does anybody recognise this problem? Of course I can provide more
>> >information if necessary.
>> >
>> >Any hints on troubleshooting the unsynced entries are more than
>> >welcome!
>> >
>> >Thanks in advance!
>> >
>> >Regards,
>> >
>> >Bertjan
>> >
>>
>>------------------------------------------------------------------------------
>> >
>> >This message may contain confidential information and is intended
>> >exclusively for the addressee. If you receive this message
>> >unintentionally, please do not use the contents but notify the sender
>> >immediately by return e-mail. University Medical Center Utrecht is a
>> >legal person by public law and is registered at the Chamber of
>> >Commerce for Midden-Nederland under no. 30244197.
>> >
>> >Please consider the environment before printing this e-mail.
>> >_______________________________________________
>> >Users mailing list -- [email protected]
>> >To unsubscribe send an email to [email protected]
>> >Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> >oVirt Code of Conduct:
>> >https://www.ovirt.org/community/about/community-guidelines/
>> >List Archives:
>> >https://lists.ovirt.org/archives/list/[email protected]/message/OSF5DPTRS4WS3GG6JA6GOEQP6CGPOC5Y/
>> 
>> I don't want to scare you, but I don't think it's related to the
>> different versions.
>> 
>> Have you tried the following:
>> 1. Run 'gluster volume heal <VOLNAME> full'
>> 2. Run a stat to force an update from the client side (wait for the
>> full heal to finish):
>> find /rhev/data-center/mnt/glusterSD -iname '*' -exec stat {} \;
>> 
>> Best Regards,
>> Strahil Nikolov

Yes, the stat is run against the FUSE mount, not the bricks.
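
As a minimal sketch, the client-side stat crawl could look like this. The mount path is the standard oVirt gluster storage-domain location mentioned earlier in the thread; adjust it if your setup differs:

```shell
#!/bin/sh
# Client-side crawl: stat every entry through the FUSE mount so the
# gluster client looks up each file and triggers self-heal checks.
MNT=/rhev/data-center/mnt/glusterSD
if [ -d "$MNT" ]; then
    find "$MNT" -exec stat {} \; >/dev/null
else
    # Guard so the sketch is safe to run anywhere
    echo "$MNT not found; run this on a host with the volume mounted"
fi
```

Running it against a brick path instead would bypass the client translators, which is why it must go through the FUSE mount.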

What is the output of 'gluster volume heal <volname> info'?
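
For reference, a minimal sketch (assuming the volume name vmstore1 from earlier in the thread) of the commands that show whether entries are still pending and whether a heal crawl actually completed:

```shell
#!/bin/sh
# Heal-status sketch for a volume named vmstore1 (adjust to your volume).
# Guarded so it degrades gracefully on a machine without the gluster CLI.
if command -v gluster >/dev/null 2>&1; then
    # Per-brick list of entries that still need healing
    gluster volume heal vmstore1 info

    # Quicker overview: just the count of pending entries per brick
    gluster volume heal vmstore1 statistics heal-count

    # Crawl statistics: each crawl is listed with its type and start/end
    # time, so a finished full heal shows up as a completed FULL crawl
    gluster volume heal vmstore1 statistics
else
    echo "gluster CLI not found; run these commands on a gluster node"
fi
```

Comparing the pending count before and after the stat crawl is a simple way to see whether healing is making progress at all.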

Best Regards,
Strahil Nikolov
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/H46CF4UV5OHEHKUQCMPCIOOTVEN75B66/
