Dear developers/users,

Please suggest a solution for this.

Regards,
Naranderan R

On Fri, 22 May 2020 at 21:46, Naranderan Ramakrishnan <[email protected]>
wrote:

> Dear team,
> We are using Gluster (v7.0) as our primary data-storage system and recently
> faced an issue. Please find the details below.
>
> *Simple background:*
> A 2x3 (Distribute x Replica) volume is mounted on a few main clients via
> FUSE. From these main clients, many sub-clients consume the required subset
> of data (a folder) via rsync. The sub-clients also produce data, written to
> the main clients via rsync, which is then propagated to Gluster. In a
> simplified form:
> Gluster (Brick1, Brick2 .. Brick6) --> Main clients (FUSE mount of Gluster)
> --> Sub-clients (rsync from/to a main client)
>
> *Issue:*
> Due to a network issue, 2 bricks belonging to the same replica sub-volume
> (say, replica1) became unreachable from one main client. This triggered
> 'client quorum is not met' (the client-quorum policy is 'auto', under which
> the quorum-count for a 3-way replica is 2), so replica1 became unavailable
> to that main client.
> As a result, the dirs & files on replica1 were no longer listed at the main
> client's mount point, while the dirs & files on replica2 still were. The
> sub-clients, unaware of this background issue, read the listed files (from
> replica2 only), which resulted in undesired and unintentional behaviour.
>
> *Expectation:*
> It is totally unexpected that only a subset of the dirs & files is visible
> at a mount point. A main client should list either all the dirs & files or
> nothing; this is critical to the nature of our application, which prefers
> consistency and atomicity over high availability.
> It would be much better if there were an option to enforce such an
> all-or-nothing (atomic) view even during these kinds of unexpected
> failures. Please let us know how we can achieve this.
>
> Thanks in advance.
>
> Regards,
> Naranderan R
>
>
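For context, the client-quorum behaviour described in the quoted mail is governed by the `cluster.quorum-type` and `cluster.quorum-count` volume options. Below is a minimal sketch of inspecting and adjusting them; `myvol` is a placeholder volume name, and the values shown are illustrative assumptions, not a recommendation:

```shell
# Inspect the current client-quorum settings ('myvol' is a placeholder).
gluster volume get myvol cluster.quorum-type
gluster volume get myvol cluster.quorum-count

# With quorum-type 'auto' on a replica-3 volume, a client needs a majority
# (2 of 3) of bricks reachable; losing 2 bricks makes that replica
# sub-volume unavailable, which produced the partial listing above.
# A stricter setting requires all 3 bricks, trading availability for safety:
gluster volume set myvol cluster.quorum-type fixed
gluster volume set myvol cluster.quorum-count 3
```

Note that this makes each replica set unavailable on any single brick failure, and it does not by itself guarantee an all-or-nothing view across the two distribute sub-volumes.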
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users