+1, I agree.

Thanks
Swami

On Tue, Jul 19, 2016 at 4:57 PM, Lionel Bouton <lionel+c...@bouton.name> wrote:
> Hi,
>
> On 19/07/2016 13:06, Wido den Hollander wrote:
>>> On 19 July 2016 at 12:37, M Ranga Swami Reddy <swamire...@gmail.com> wrote:
>>>
>>>
>>> Thanks for the correction... so even if one OSD reaches 95% full, the
>>> entire Ceph cluster's IO (R/W) will be blocked... Ideally, read IO
>>> should still work...
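
For reference, the thresholds in question are the mon ratio options. A
minimal ceph.conf sketch with the default values (assuming a pre-Luminous
release, where these are still plain config options):

    [mon]
    # go HEALTH_WARN once any OSD crosses 85% used
    mon osd nearfull ratio = 0.85
    # set the cluster 'full' flag and block client IO once any OSD crosses 95%
    mon osd full ratio = 0.95
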
>> That should be a config option, since allowing reads while writes are
>> blocked is also a danger. Multiple clients could read the same object,
>> modify it in memory, and then have their writes block.
>>
>> Now, which client will 'win' after the full flag has been removed?
>>
>> That could lead to data corruption.
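
To make the race concrete, a sketch of that read-modify-write pattern
using the python-rados bindings (pool and object names here are
hypothetical):

    import rados

    # Each "client" below stands for a separate process on a separate host.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('testpool')

    # Client A and client B both read the same object...
    data_a = ioctx.read('shared-object')
    data_b = ioctx.read('shared-object')

    # ...each modifies its in-memory copy independently...
    data_a += b' change from A'
    data_b += b' change from B'

    # ...and both write back. If the cluster is full, both writes block;
    # whichever one is applied last after the full flag clears overwrites
    # the other, and the losing client's change is silently lost.
    ioctx.write_full('shared-object', data_a)
    ioctx.write_full('shared-object', data_b)

    ioctx.close()
    cluster.shutdown()
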
>
> If it did, the clients would already be broken: normal usage (without
> writes being blocked) doesn't prevent multiple clients from reading the
> same data and trying to write it at the same time. So if multiple writes
> (presumably to the same data blocks) can be waiting, the order in which
> they are applied *must not* matter in your system. The alternative is to
> prevent simultaneous write access from multiple clients (this is how
> non-cluster filesystems must be configured on top of Ceph/RBD; they must
> even be prevented from mounting an already mounted fs read-only).
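
For librados clients, advisory locks are one way to get that exclusion.
A sketch using the lock_exclusive()/unlock() calls from the python-rados
bindings (lock and object names are hypothetical, and this only helps if
every writer cooperates, since the locks are advisory):

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('testpool')

    # Hold an exclusive advisory lock across the read-modify-write cycle;
    # a second client attempting the same lock fails until we release it.
    ioctx.lock_exclusive('shared-object', 'write-lock', 'client-a-cookie',
                         desc='serialize read-modify-write')
    try:
        data = ioctx.read('shared-object')
        ioctx.write_full('shared-object', data + b' change from A')
    finally:
        ioctx.unlock('shared-object', 'write-lock', 'client-a-cookie')

    ioctx.close()
    cluster.shutdown()
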
>
>>
>> Just make sure you have proper monitoring on your Ceph cluster. At
>> nearfull it goes into HEALTH_WARN, and you should act on that.
>
>
> +1: monitoring is not optional.
>
> Lionel
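
On the monitoring point: a sketch of a trivial health check that shells
out to the ceph CLI (the JSON layout of 'ceph health' differs between
releases, so the keys below are an assumption based on Jewel-era output):

    import json
    import subprocess

    # Ask the monitors for cluster health as JSON.
    out = subprocess.check_output(['ceph', 'health', '--format', 'json'])
    health = json.loads(out.decode('utf-8'))

    status = health.get('overall_status', 'UNKNOWN')
    if status != 'HEALTH_OK':
        # At HEALTH_WARN with "near full osd(s)" there is still time to
        # free space or rebalance before writes block at the full ratio.
        print('ceph health is %s, check for nearfull/full OSDs' % status)
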
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
