[ceph-users] large difference between "STORED" and "USED" size of ceph df

2020-05-03 Thread Lee, H. (Hurng-Chun)
Hello, We use purely cephfs in our ceph cluster (version 14.2.7). The cephfs data pool is an EC pool (k=4, m=2) with hdd OSDs using bluestore. The default file layout (i.e. 4MB object size) is used. We see the following output of ceph df: --- RAW STORAGE: CLASS SIZE AVAIL USED
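For an EC pool with k=4 and m=2 the nominal raw overhead is (k+m)/k = 1.5, so USED should be roughly 1.5x STORED; a much larger gap on HDD bluestore OSDs is commonly down to per-chunk allocation rounding when the files are small. A minimal Python sketch of that arithmetic (the 64 KiB min_alloc_size default and the per-file accounting are illustrative assumptions, not figures taken from this cluster):

```python
import math

def expected_raw_usage(file_size, k=4, m=2, object_size=4 * 2**20,
                       min_alloc=64 * 2**10):
    """Rough raw (USED) bytes for one file on an EC k+m bluestore pool.

    Each RADOS object is split into k data chunks plus m coding chunks,
    and every chunk is rounded up to the OSD's min_alloc_size.
    """
    raw, remaining = 0, file_size
    while remaining > 0:
        obj = min(remaining, object_size)
        chunk = math.ceil(obj / k)                             # one data chunk
        chunk_alloc = math.ceil(chunk / min_alloc) * min_alloc
        raw += chunk_alloc * (k + m)                           # k data + m coding chunks
        remaining -= obj
    return raw

print(expected_raw_usage(4 * 2**20) / (4 * 2**20))   # 4 MiB file -> 1.5 (nominal overhead)
print(expected_raw_usage(4 * 2**10) / (4 * 2**10))   # 4 KiB file -> 96.0 (allocation-dominated)
```

If the filesystem holds many small files, this rounding alone can push USED far beyond 1.5x STORED.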

[ceph-users] Re: 14.2.9 MDS Failing

2020-05-03 Thread Sasha Litvak
Marco, Could you please share what was done to make your cluster stable again? On Fri, May 1, 2020 at 4:47 PM Marco Pizzolo wrote: > > Thanks Everyone, > > I was able to address the issue at least temporarily. The filesystem and > MDSes are for the time being staying online and the pgs are being rema

[ceph-users] Re: upmap balancer and consequences of osds briefly marked out

2020-05-03 Thread Anthony D'Atri
Do I misunderstand this script, or does it not _quite_ do what’s desired here? I fully get the scenario of applying a full-cluster map to allow incremental topology changes. To be clear, if this is run to effectively freeze backfill during / following a traumatic event, it will freeze that adap
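If the script in question works the way the usual upmap-remapped approach does, the idea is: for every PG whose up set differs from its acting set, install a pg-upmap-items entry that maps it back onto its current acting OSDs, so nothing is misplaced and backfill stops; removing those entries later lets the balancer move the data gradually. A rough Python sketch of that idea (the ceph subcommands are the stock CLI; the JSON field names and the naive positional pairing are assumptions, and the real scripts handle existing upmap entries and replicated pools more carefully):

```python
import json
import subprocess

def ceph_json(*args):
    """Run a ceph CLI command and parse its JSON output."""
    return json.loads(subprocess.check_output(["ceph", *args, "--format", "json"]))

def freeze_remapped_pgs(dry_run=True):
    """Upmap every remapped PG back onto its acting set so no backfill is pending."""
    # Nautilus wraps the listing in "pg_stats"; adjust for your release.
    for pg in ceph_json("pg", "ls", "remapped")["pg_stats"]:
        up, acting = pg["up"], pg["acting"]
        if up == acting:
            continue
        # Build <from> <to> pairs that turn the CRUSH-computed 'up' set into 'acting'.
        pairs = []
        for src, dst in zip(up, acting):
            if src != dst:
                pairs += [str(src), str(dst)]
        cmd = ["ceph", "osd", "pg-upmap-items", pg["pgid"], *pairs]
        print(" ".join(cmd))
        if not dry_run:
            subprocess.check_call(cmd)

freeze_remapped_pgs(dry_run=True)  # print the commands first, then rerun with dry_run=False
```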

[ceph-users] Re: What's the best practice for Erasure Coding

2020-05-03 Thread Alex Gorbachev
Hi Frank, Reviving this old thread to ask whether the performance on these raw NL-SAS drives is adequate. I was wondering whether this is a deep archive with almost no retrieval, and how many drives are used? In my experience with large parallel writes, WAL/DB with bluestore, or journal drives on SSD w

[ceph-users] mount issues with rbd running xfs - Structure needs cleaning

2020-05-03 Thread Void Star Nill
Hello All, One of the use cases (e.g. machine learning workloads) for RBD volumes in our production environment is that users could mount an RBD volume in RW mode in a container, write some data to it, and later use the same volume in RO mode in a number of containers in parallel to consume the

[ceph-users] Re: mount issues with rbd running xfs - Structure needs cleaning

2020-05-03 Thread Adam Tygart
I'm pretty sure to XFS, "read-only" is not quite "read-only." My understanding is that XFS replays the journal on mount, unless it is also mounted with norecovery. -- Adam On Sun, May 3, 2020, 22:14 Void Star Nill wrote: > Hello All, > > One of the use cases (e.g. machine learning workloads) fo
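If the goal is for the parallel readers to never touch the device at all, one combination that should avoid the log replay is to map the image read-only and mount with norecovery. A sketch, under the assumption that the writer unmounted the filesystem cleanly (pool, image, and mountpoint names are made up):

```python
import subprocess

def mount_rbd_readonly(pool, image, mountpoint):
    """Map an RBD image read-only and mount its XFS filesystem without log replay."""
    dev = subprocess.check_output(
        ["rbd", "device", "map", f"{pool}/{image}", "--read-only"],
        text=True).strip()
    # norecovery: skip XFS journal replay (needed on a truly read-only device);
    # nouuid: allow several mounts of clones/snapshots that share one XFS UUID.
    subprocess.check_call(
        ["mount", "-t", "xfs", "-o", "ro,norecovery,nouuid", dev, mountpoint])
    return dev

# dev = mount_rbd_readonly("rbd", "dataset-v1", "/mnt/dataset")  # hypothetical names
```

Note that norecovery only skips the replay; if the writer crashed without unmounting, the readers may still see an inconsistent filesystem.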

[ceph-users] Re: mount issues with rbd running xfs - Structure needs cleaning

2020-05-03 Thread brad . swanson
Are you mounting the RO with noatime?

[ceph-users] Re: mount issues with rbd running xfs - Structure needs cleaning

2020-05-03 Thread Void Star Nill
Hello Brad, Adam, Thanks for the quick responses. I am not passing any arguments other than "ro,nouuid" on mount. One thing I forgot to mention is that there could be more than one mount of the same volume on a host - I don't know how this plays out for xfs. Appreciate your inputs. Regards, Sh

[ceph-users] page cache flush before unmap?

2020-05-03 Thread Void Star Nill
Hello, I wanted to know if rbd will flush any writes in the page cache when a volume is "unmap"ed on the host, or if we need to flush explicitly using "sync" before unmap? Thanks, Shridhar
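The conservative pattern, regardless of what krbd itself guarantees on unmap, is to unmount (which writes back the filesystem's dirty pages) and flush before unmapping. A defensive sketch (paths are hypothetical, and the blockdev --flushbufs step is belt-and-braces rather than a documented requirement):

```python
import os
import subprocess

def safe_unmap(device, mountpoint=None):
    """Flush everything we can before unmapping an RBD device."""
    if mountpoint:
        subprocess.check_call(["umount", mountpoint])            # writeback of dirty pages
    os.sync()                                                    # flush remaining page cache
    subprocess.check_call(["blockdev", "--flushbufs", device])   # drop block-layer buffers
    subprocess.check_call(["rbd", "device", "unmap", device])

# safe_unmap("/dev/rbd0", "/mnt/scratch")  # hypothetical paths
```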