[ceph-users] Re: [Ceph-announce] v18.2.4 Reef released

2024-07-26 Thread Adam Tygart
Are you saying that the el9 rpms work on el8 hosts? The complaint so far is that a minor Ceph release appears to have silently and unexpectedly removed support for an OS version that is still widely deployed. If this was intentional, there is a question as to why it didn't even warrant a note

[ceph-users] Re: Accidentally created systemd units for OSDs

2024-08-16 Thread Adam Tygart
I would expect it to be: systemctl disable ceph-osd@${instance} If you want to disable them all, I believe you can even use wildcards: systemctl disable ceph-osd@\* -- Adam On 8/16/24 2:24 PM, Dan O'Brien wrote: I am 100% using cephadm and
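For concreteness, a minimal sketch (the instance id is hypothetical, and the wildcard form is only as reliable as the hedge above):

    systemctl disable ceph-osd@3      # disable one instance
    systemctl disable ceph-osd@\*     # escape the * so the shell doesn't expand it
    systemctl stop ceph-osd@3         # disable alone does not stop a running unit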

[ceph-users] Re: Inconsistent Space Usage reporting

2020-11-03 Thread Adam Tygart
I'm not sure exactly what you're doing with your volumes. It looks like fcp might be size 3. nfs is size 1, possibly with a 200TB rbd volume inside, nbd-mounted into another box. If so, it is likely you can reclaim space from deleted files with fstrim, if your filesystem supports it. -- Adam On
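A sketch of that reclaim path, assuming the volume really is mapped via rbd-nbd as described (pool/image name and mountpoint are hypothetical):

    rbd-nbd map nfs/bigvol            # map the rbd image on the client box
    mount /dev/nbd0 /mnt/bigvol
    fstrim -v /mnt/bigvol             # discard unused blocks; -v reports bytes trimmed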

[ceph-users] Re: CentOS

2020-12-08 Thread Adam Tygart
Marc, That video may be out of date. https://centos.org/distro-faq/#q6-will-there-be-separateparallelsimultaneous-streams-for-8-9-10-etc -- Adam On Tue, Dec 8, 2020 at 3:50 PM wrote: > > Marc; > > I'm not happy about this, but RedHat is suggesting that those of us running > CentOS for product

[ceph-users] Re: CephFS max_file_size

2020-12-11 Thread Adam Tygart
I've had this set to 16TiB for several years now. I've not seen any ill effects. -- Adam On Fri, Dec 11, 2020 at 12:56 PM Patrick Donnelly wrote: > > Hi Mark, > > On Fri, Dec 11, 2020 at 4:21 AM Mark Schouten wrote: > > There is a default limit of 1TiB for the max_file_size in CephFS. I altere
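For reference, a minimal sketch of raising the limit to 16TiB as described (filesystem name is hypothetical; 16 TiB = 16 * 2^40 = 17592186044416 bytes):

    ceph fs set cephfs max_file_size 17592186044416
    ceph fs get cephfs | grep max_file_size    # confirm the new value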

[ceph-users] Re: Identifying files residing in a cephfs data pool

2022-07-21 Thread Adam Tygart
You can list the objects in the pool and get their parent xattr; from there, decode that attribute and see its location in the tree. Only the objects with an all-zero suffix after the '.' should have a parent attribute. This came from the mailing list some time ago: rados --pool $pool_name getxattr
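A sketch of the decode step the truncated command leads into (pool and object names are hypothetical; ceph-dencoder ships with Ceph):

    rados --pool cephfs_data getxattr 10000000000.00000000 parent > /tmp/parent
    ceph-dencoder type inode_backtrace_t import /tmp/parent decode dump_json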

[ceph-users] Re: Upgrading to Octopus

2020-04-23 Thread Adam Tygart
The release notes [1] specify only partial support for CentOS 7. "Note that the dashboard, prometheus, and restful manager modules will not work on the CentOS 7 build due to Python 3 module dependencies that are missing in CentOS 7." You will need to move to CentOS 8, or potentially containerize

[ceph-users] Re: mount issues with rbd running xfs - Structure needs cleaning

2020-05-03 Thread Adam Tygart
I'm pretty sure that to XFS, "read-only" is not quite "read-only." My understanding is that XFS replays the journal on mount unless it is also mounted with norecovery. -- Adam On Sun, May 3, 2020, 22:14 Void Star Nill wrote: > Hello All, > > One of the use cases (e.g. machine learning workloads) fo
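A minimal sketch of the mount this implies (device and mountpoint are hypothetical):

    mount -o ro,norecovery /dev/rbd0 /mnt    # skip journal replay; data may be stale if the log is dirty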

[ceph-users] Re: Move on cephfs not O(1)?

2020-03-26 Thread Adam Tygart
Is there a possibility that there was a quota involved? I've seen moves between quota zones cause a copy then delete. -- Adam On Thu, Mar 26, 2020 at 9:14 AM Gregory Farnum wrote: > > On Thu, Mar 26, 2020 at 5:49 AM Frank Schilder wrote: > > > > Some time ago I made a surprising observati
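To check whether a quota boundary sits between source and destination, something like this could work (paths are hypothetical; cephfs exposes quotas as virtual xattrs):

    getfattr -n ceph.quota.max_bytes /cephfs/src/dir
    getfattr -n ceph.quota.max_bytes /cephfs/dst/dir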

[ceph-users] Re: Possible to "move" an OSD?

2020-04-11 Thread Adam Tygart
As far as Ceph is concerned, as long as there are no separate journal/blockdb/wal devices, you can absolutely transfer OSDs between hosts. If there are separate journal/blockdb/wal devices, you can still do it, provided they move with the OSDs. With Nautilus and up, make sure the osd bootstrap key
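On the destination host, a minimal sketch of bringing the moved disks back up, assuming ceph-volume/LVM OSDs:

    ceph-volume lvm activate --all    # detect and start all OSDs found on attached devices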