Hello Karthik,

Thank you very much. That was exactly the problem.
Running

    cat <mount-path>/.meta/graphs/active/<vol-name>-client-*/private | egrep -i 'connected'

on the clients revealed that a few of them were not connected to all bricks.
After restarting them, everything went back to normal.
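
For anyone finding this thread later, here is a minimal sketch of that check; the mount path /mnt/<vol-name> is an assumption, substitute your own mount point:

    # Each <vol-name>-client-N/private file corresponds to one brick and
    # contains a line like "connected = 1" (exact formatting may vary
    # slightly between versions). grep -c prints a per-file count, so any
    # file showing 1 below is a brick connection this client has lost:
    grep -ic 'connected = 0' \
        /mnt/<vol-name>/.meta/graphs/active/<vol-name>-client-*/private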

Regards,
Ulrich

On 06.02.20 at 12:51, Karthik Subrahmanya wrote:
Hi Ulrich,

From the problem statement, it seems like the client(s) have lost their connection to the bricks. Can you give the following information?
- How many clients are there for this volume, and which version are they on?
- The outputs of gluster volume info <vol-name> and gluster volume status <vol-name>
- Check whether all the clients are connected to all the bricks:
  - If you are using fuse clients, give the output of the following from all the clients:
    cat <mount-path>/.meta/graphs/active/<vol-name>-client-*/private | egrep -i 'connected'
  - If you are using non-fuse clients, generate statedumps (https://docs.gluster.org/en/latest/Troubleshooting/statedump/) on each client (a sketch follows below) and give the output of:
    grep -A 2 "xlator.protocol.client" /var/run/gluster/<dump-file>
    (If you have changed the statedump path, replace the path in the above command.)
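
For the non-fuse case, a minimal sketch of generating and inspecting a client statedump, per the statedump docs; /var/run/gluster is the default dump location, and <dump-file> is a placeholder for the generated file name:

    # Gluster processes write a statedump on SIGUSR1; run this on the
    # client machine (glusterfs is the client process name):
    kill -USR1 $(pgrep glusterfs)

    # Then check the connection status of each client xlator:
    grep -A 2 "xlator.protocol.client" /var/run/gluster/<dump-file>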

Regards,
Karthik

On Thu, Feb 6, 2020 at 5:06 PM Ulrich Pötter <[email protected]> wrote:

    Dear Gluster Users,

    we are running the following Gluster setup:
    Replica 3 on 3 servers. Two run CentOS 7.6 with Gluster 6.5, and one
    was upgraded to CentOS 7.7 with Gluster 6.7.
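
    For reference, a quick way to confirm the installed version on every
    node; the host names server1 server2 server3 are placeholders:

        # Print the glusterfs version line on each node:
        for h in server1 server2 server3; do
            ssh "$h" 'gluster --version | head -1'
        done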

    Since the upgrade to Gluster 6.7 on one of the servers, we have
    encountered the following issue: new healing entries appear and get
    healed, but soon afterwards new ones appear again. The problem
    started right after we upgraded that server, and the healing entries
    do not appear only on the upgraded server, but on all three.

    This does not seem to be a split-brain issue, as the output of the
    command "gluster volume heal <vol> info split-brain" is "Number of
    entries in split-brain: 0".
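
    A minimal sketch of watching whether the heal queue really keeps
    refilling (the 60-second interval is an arbitrary choice):

        # Print the per-brick pending-heal counts once a minute:
        watch -n 60 "gluster volume heal <vol> info | grep 'Number of entries'"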

    Has anyone else observed such behavior with different Gluster
    versions in one replica setup?

    We hesitate to update the other nodes, as we do not know whether
    this is standard Gluster behaviour or if there is more to this
    problem.

    Can you help us?

    Thanks in advance,
    Ulrich

________

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
