Hello,
We use CephFS exclusively in our Ceph cluster (version 14.2.7). The CephFS
data pool is an EC pool (k=4, m=2) on HDD OSDs using BlueStore. The
default file layout (i.e. 4 MB object size) is used.
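(For reference, the effective layout of a file or directory in CephFS can be
checked with getfattr; the path and pool name below are only placeholders:)
---
getfattr -n ceph.file.layout /mnt/cephfs/some_file
# ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data"
---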
We see the following output of ceph df:
---
RAW STORAGE:
CLASS    SIZE     AVAIL    USED
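(For reference, with k=4, m=2 the expected raw-to-logical ratio for this pool
is (k+m)/k, ignoring allocation overhead:)
---
raw_used ~= stored * (k+m)/k = stored * 6/4 = 1.5 * stored
---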
Marco,
Could you please share what was done to make your cluster stable again?
On Fri, May 1, 2020 at 4:47 PM Marco Pizzolo wrote:
>
> Thanks Everyone,
>
> I was able to address the issue at least temporarily. The filesystem and
> MDSes are staying online for the time being, and the PGs are being remapped
Do I misunderstand this script, or does it not _quite_ do what’s desired here?
I fully get the scenario of applying a full-cluster map to allow incremental
topology changes.
To be clear, if this is run to effectively freeze backfill during / following a
traumatic event, it will freeze that adap
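(For context, the usual way such scripts freeze backfill is by pinning each
remapped PG back onto its current acting set with pg-upmap-items; this is only
an illustration of the mechanism, not necessarily what this particular script
does, and the pg/OSD ids are made up:)
---
# CRUSH wants to move a shard of pg 2.7 from osd.5 to osd.3; mapping 3 back
# to 5 keeps the PG where its data already is, so no backfill starts.
ceph osd pg-upmap-items 2.7 3 5
---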
Hi Frank,
Reviving this old thread to ask whether the performance of these raw NL-SAS
drives is adequate. I was wondering whether this is a deep archive with almost
no retrieval, and how many drives are used. In my experience with large
parallel writes, WAL/DB with bluestore, or journal drives on SSD w
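(For anyone following along, the BlueStore equivalent of the old SSD journal
setup is to place the DB/WAL on a faster device when the OSD is created; the
device names below are only placeholders:)
---
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
---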
Hello All,
One of the use cases (e.g. machine learning workloads) for RBD volumes in
our production environment is that users mount an RBD volume in RW
mode in a container, write some data to it, and later mount the same volume in
RO mode in a number of containers in parallel to consume the
I'm pretty sure that, to XFS, "read-only" is not quite "read-only." My
understanding is that XFS replays the journal on mount unless it is also
mounted with norecovery.
--
Adam
On Sun, May 3, 2020, 22:14 Void Star Nill wrote:
> Hello All,
>
> One of the use cases (e.g. machine learning workloads) fo
Are you mounting the RO with noatime?
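(Putting the two suggestions above together, a fully hands-off read-only mount
of an XFS-on-RBD volume would look something like this; the device and mount
point are only placeholders:)
---
mount -t xfs -o ro,norecovery,noatime,nouuid /dev/rbd0 /mnt/volume
---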
Hello Brad, Adam,
Thanks for the quick responses.
I am not passing any arguments other than "ro,nouuid" on mount.
One thing I forgot to mention is that there could be more than one mount
of the same volume on a host - I don't know how this plays out for XFS.
Appreciate your inputs.
Regards,
Shridhar
Hello,
I wanted to know whether rbd will flush any writes in the page cache when a
volume is unmapped on the host, or if we need to flush explicitly using
"sync" before unmap.
Thanks,
Shridhar
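(A conservative sequence, assuming the volume is mounted at /mnt/volume and
mapped as /dev/rbd0, is to unmount first, which flushes that filesystem's
dirty pages, and only then unmap:)
---
umount /mnt/volume   # flushes the filesystem's dirty pages
sync                 # optional extra flush of the page cache
rbd unmap /dev/rbd0
---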