Are you saying that the el9 rpms work on el8 hosts?
The complaint so far is that a minor release in Ceph has silently and
unexpectedly seemed to remove support for an OS version that is still
widely deployed. If this was intentional, there is a question as to why
it didn't even warrant a note.
I would expect it to be:
systemctl disable ceph-osd@${instance}
If you're wanting to disable them all I believe you can even use wildcards:
systemctl disable ceph-osd@\*
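If the wildcard form doesn't behave on your systemd version, a rough
equivalent (just a sketch) is to loop over the ceph-osd instances loaded
on the host:

# list the ceph-osd@ instances systemd knows about and disable each one
for unit in $(systemctl list-units --plain --no-legend 'ceph-osd@*' | awk '{print $1}'); do
    systemctl disable "$unit"
done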
--
Adam
On 8/16/24 2:24 PM, Dan O'Brien wrote:
> I am 100% using cephadm and
I'm not sure exactly what you're doing with your volumes.
It looks like fcp might be size 3. nfs is size 1, possibly with a
200TB RBD volume inside it, NBD-mounted into another box. If so, you
can likely reclaim space from deleted files with fstrim, if your
filesystem supports it.
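As a rough sketch (the image and mount point names here are made up),
the reclaim would look something like:

rbd-nbd map nfs/some-image          # exposes the image as e.g. /dev/nbd0
mount /dev/nbd0 /mnt/data           # mount it on the client box as usual
fstrim -v /mnt/data                 # discards freed extents back to the pool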
--
Adam
On
Marc,
That video may be out of date.
https://centos.org/distro-faq/#q6-will-there-be-separateparallelsimultaneous-streams-for-8-9-10-etc
--
Adam
On Tue, Dec 8, 2020 at 3:50 PM wrote:
>
> Marc;
>
> I'm not happy about this, but RedHat is suggesting that those of us running
> CentOS for product
I've had this set to 16TiB for several years now.
I've not seen any ill effects.
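For reference, this is the knob I mean; the filesystem name "cephfs"
here is just an example:

ceph fs set cephfs max_file_size 17592186044416   # 16 TiB, in bytes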
--
Adam
On Fri, Dec 11, 2020 at 12:56 PM Patrick Donnelly wrote:
>
> Hi Mark,
>
> On Fri, Dec 11, 2020 at 4:21 AM Mark Schouten wrote:
> > There is a default limit of 1TiB for the max_file_size in CephFS. I altere
You can list the objects in the pool and get their parent xattr; from there,
decode that attribute and see the file's location in the tree. Only the
objects whose suffix after the "." is all zeros should have a parent attribute.
This came from the mailing list some time ago:
rados --pool $pool_name getxattr $object_id parent
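A sketch of the full sequence (pool and object names are made-up
examples; ceph-dencoder does the decoding):

rados --pool cephfs_data getxattr 10000000abc.00000000 parent > /tmp/parent
ceph-dencoder type inode_backtrace_t import /tmp/parent decode dump_json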
The release notes [1] specify only partial support for CentOS 7.
"Note that the dashboard, prometheus, and restful manager modules will
not work on the CentOS 7 build due to Python 3 module dependencies
that are missing in CentOS 7."
You will need to move to CentOS 8, or potentially containerize.
I'm pretty sure that, to XFS, "read-only" is not quite "read-only." My
understanding is that XFS replays the journal on mount unless it is also
mounted with norecovery.
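Something like this (device and mount point are hypothetical) should
avoid touching the log at all:

mount -t xfs -o ro,norecovery /dev/sdb1 /mnt/recovery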
--
Adam
On Sun, May 3, 2020, 22:14 Void Star Nill wrote:
> Hello All,
>
> One of the use cases (e.g. machine learning workloads) fo
Is there a possibility that a quota was involved? I've seen moves
between quota zones cause a copy followed by a delete.
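A quick way to check (paths are hypothetical) is to read the quota
xattrs on the directories involved:

getfattr -n ceph.quota.max_bytes /mnt/cephfs/source_dir
getfattr -n ceph.quota.max_bytes /mnt/cephfs/dest_dir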
--
Adam
On Thu, Mar 26, 2020 at 9:14 AM Gregory Farnum wrote:
>
> On Thu, Mar 26, 2020 at 5:49 AM Frank Schilder wrote:
> >
> > Some time ago I made a surprising observati
As far as Ceph is concerned, as long as there are no separate
journal/blockdb/wal devices, you absolutely can transfer OSDs between
hosts. If there are separate journal/blockdb/wal devices, you can still
do it, provided they move with the OSDs.
With Nautilus and up, make sure the osd bootstrap key
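For non-cephadm, LVM-based OSDs, a rough sketch of bringing them up on
the new host (assuming the keys are already in place) is simply:

ceph-volume lvm activate --all   # scans LVM tags and starts any OSDs found on the attached disks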