Hi Daniel,
On Wed, Dec 28, 2022 at 3:17 AM Daniel Kovacs wrote:
>
> Hello!
>
> I'd like to create a CephFS subvolume with this command: ceph fs
> subvolume create cephfs_ssd subvol_1
> I got this error: Error EINVAL: invalid value specified for
> ceph.dir.subvolume
> If I use another cephfs volume
Hi Jonas,
On Mon, Jan 2, 2023 at 10:52 PM Jonas Schwab wrote:
>
> Thank you very much! Works like a charm, except for one thing: I gave my
> clients the MDS caps 'allow rws path=' to also be able
> to create snapshots from the client, but `mkdir .snap/test` still returns
> mkdir: cannot create directory ‘.snap/test’: Operation not permitted
Look closely at your output. The PGs with 0 objects are only “every other”
because of how the command happened to order the output.
Note that the empty PGs all have IDs matching “3.*”. The numeric prefix of a PG
ID is the ID of the pool to which it belongs. I strongly suspect that
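For reference, the pool behind a PG-ID prefix can be cross-checked from the
CLI; a sketch only, pool name assumed:

    # List pools with their numeric IDs; PGs prefixed "3." belong to pool ID 3
    ceph osd pool ls detail

    # Confirm by listing the PGs of that pool
    ceph pg ls-by-pool <poolname>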
Thanks for the reply. I’ll give that a try; I wasn’t using the balancer.
> On Jan 2, 2023, at 1:55 AM, Pavin Joseph wrote:
>
> Hi Jeff,
>
> Might be worth checking the balancer status [0]; you probably also want to
> use upmap mode [1] if possible.
>
> [0]: https://docs.ceph.com/en/latest/ra
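For reference, checking the balancer and switching it to upmap mode looks
roughly like this (a sketch; upmap requires all clients to be luminous or
newer):

    # Show whether the balancer is enabled and which mode it uses
    ceph balancer status

    # upmap needs luminous+ clients
    ceph osd set-require-min-compat-client luminous

    # Switch to upmap mode and enable the balancer
    ceph balancer mode upmap
    ceph balancer on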
One side effect of using subvolumes is that you can then only take a snapshot
at the subvolume level, nothing further down the tree.
I find you can use the same path in the auth caps without the subvolume, unless
I’m missing something in this thread.
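For reference, the 's' flag in the MDS caps is what permits snapshot
creation/deletion on a path, and it can be granted with `ceph fs authorize`; a
sketch only (client name, filesystem name, and path are assumptions):

    # 'rws' = read, write, and snapshot create/delete on the given path
    ceph fs authorize cephfs client.foo /volumes/mygroup/subvol_1 rws

    # On a mounted client, snapshots are then created as directories under .snap
    mkdir /mnt/cephfs/volumes/mygroup/subvol_1/.snap/test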
On Mon, Jan 2, 2023 at 10:21 AM Jonas Schwab <jonas.
Thank you very much! Works like a charm, except for one thing: I gave my
clients the MDS caps 'allow rws path=' to also be able
to create snapshots from the client, but `mkdir .snap/test` still returns
mkdir: cannot create directory ‘.snap/test’: Operation not permitted
Do you have an idea
Sent prematurely.
I meant to add that after ~3 years of service, the 1 DWPD drives in the
clusters I mentioned mostly reported <10% of endurance burned.
Required endurance is in part a function of how long you expect the drives to
last.
>> Having said that, for a storage cluster where write p
> Having said that, for a storage cluster where write performance is expected
> to be the main bottleneck, I would be hesitant to use drives that only have
> 1 DWPD endurance since Ceph has fairly high write amplification factors. If
> you use 3-fold replication, this cluster might only be able
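As a rough back-of-the-envelope sketch (the extra write-amplification factor
below is an assumed illustrative number, not a measurement from this thread):
with 3-fold replication every client byte is written three times, and BlueStore
WAL/metadata overhead adds on top of that, so the endurance left for client
writes is roughly

    client DWPD ≈ drive DWPD / (replication factor × additional write amplification)
                ≈ 1 / (3 × ~1.5) ≈ 0.2

i.e. a 1 DWPD drive supports well under a third of a drive-write per day of
actual client writes.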
Hi all,
I have a similar question regarding a cluster configuration consisting
of HDDs, SSDs, and NVMes. Let's say I would set up an OSD configuration in
a YAML file like this:
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
spec:
  data_devices:
    model: HDD-Model-XY
  db_devices:
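For what it's worth, cephadm can preview which disks a spec like this would
claim before anything is created; a sketch, assuming the spec above is saved as
osd_spec.yaml:

    # Show the OSDs this spec would create, without applying it
    ceph orch apply -i osd_spec.yaml --dry-run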
Depends.
In theory, each OSD will have access to 1/4 of the separate WAL/DB device, so
to get better performance you need to find an NVMe device that delivers
significantly more than 4x the IOPS rate of the pm1643 drives, which is not
common.
That assumes the pm1643 devices are connected to a
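One way to sanity-check that is to measure 4 KiB random-write IOPS on a sample
of each device type with fio; a sketch only, device path assumed, and note this
destroys data on the raw device:

    # 4k random writes, direct I/O; run against an unused device only
    fio --name=randwrite --filename=/dev/nvme0n1 --rw=randwrite --bs=4k \
        --iodepth=32 --numjobs=4 --direct=1 --runtime=60 --time_based \
        --group_reporting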
Hi Kotresh,
The issue is fixed for now; I followed the steps below.
I unmounted the kernel client and restarted the MDS service, which brought
the MDS back to normal. But even after this, the "1 MDSs behind on trimming"
issue didn't resolve. I waited for about 20-30 minutes, which automatically
fixed the trimming
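For anyone hitting the same warning, the usual first checks look something like
this (a sketch; the MDS name is assumed):

    # Show the exact health warning and which MDS is affected
    ceph health detail

    # Check MDS state and which clients hold sessions
    ceph fs status
    ceph tell mds.<name> session ls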
Hi Chris,
The actual limits are not software. Usually, Ceph teams at cloud providers or
universities run out of physical resources first: racks, rack power, or
network (ports, EOL switches that can't be upgraded), or hardware lifetime
(there is no point in buying old hardware, and the n
Hi Experts, I am trying to find out whether significant write performance
improvements are achievable by separating the WAL/DB in a Ceph cluster with
all-SSD OSDs. I have a cluster with 40 SSDs (Samsung PM1643 1.8 TB enterprise
SSDs), 10 storage nodes with 4 OSDs each. I want to know whether I can get