[ceph-users] Re: RadosGW S3 range on a 0 byte object gives 416 Range Not Satisfiable

2022-03-22 Thread Kai Stian Olstad
On 21.03.2022 15:35, Ulrich Klein wrote: RFC 7233, Section 4.4: 416 Range Not Satisfiable. The 416 (Range Not Satisfiable) status code indicates that none of the ranges in the request's Range header field (Section 3.1
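For reference, the behaviour is easy to reproduce with curl; a minimal sketch, assuming an empty object readable without extra auth (endpoint and bucket names below are placeholders, not from the thread):

  touch empty.bin
  s3cmd put empty.bin s3://testbucket/empty.bin     # upload a 0-byte object (s3cmd is one option)
  curl -s -o /dev/null -w '%{http_code}\n' -H 'Range: bytes=0-99' 'https://rgw.example.com/testbucket/empty.bin'
  # prints 416: a 0-byte object has no satisfiable byte range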

[ceph-users] Re: octopus (15.2.16) OSDs crash or don't answer heartbeats (and get marked as down)

2022-03-22 Thread Boris Behrens
Good morning K, the "freshly done" host, where it happened last, has: * 21x 8TB TOSHIBA MG06ACA800E (spinning) * no block.db devices (just removed the 2 cache SSDs by syncing the disks out, wiping them, and adding them back without block.db) * 1x Intel(R) Xeon(R) Gold 5115 CPU @ 2.40GHz * 256GB ECC
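As a rough sketch of that kind of redeploy (device name is a placeholder, and this assumes a ceph-volume based deployment, which the thread doesn't confirm):

  # after draining and destroying the old OSD:
  ceph-volume lvm zap /dev/sdX --destroy
  # recreate it without --block.db, so the DB lives on the data device:
  ceph-volume lvm create --data /dev/sdX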

[ceph-users] Re: RadosGW S3 range on a 0 byte object gives 416 Range Not Satisfiable

2022-03-22 Thread Ulrich Klein
Yup, completely agree. I find the 416 also a bit surprising, whether in Ceph/RGW or plain HTTP. But I guess it’s just consistent with specifying a range of 1-100 on a one-byte object, or any range that can’t be satisfied. After all, the range is part of the request, and 4xx means “something wrong with the request”
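The one-byte analogy is easy to verify against any plain HTTP server; a sketch with placeholder URLs:

  # valid byte positions on a 1-byte object are 0..0
  curl -s -o /dev/null -w '%{http_code}\n' -H 'Range: bytes=1-100' https://example.com/one-byte-object   # 416: no overlap with the representation
  curl -s -o /dev/null -w '%{http_code}\n' -H 'Range: bytes=0-100' https://example.com/one-byte-object   # 206: range is clipped to byte 0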

[ceph-users] Re: RadosGW S3 range on a 0 byte object gives 416 Range Not Satisfiable

2022-03-22 Thread Kai Stian Olstad
On 22.03.2022 09:40, Ulrich Klein wrote: Yup, completely agree. I find the 416 also a bit surprising, whether in Ceph/RGW or plain HTTP. Consistency with other widely used software would be nice. Just to make sure: I am not at all involved in Ceph development, so don’t send a feature request

[ceph-users] Pacific: ceph -s Data: Volumes: 1/1 healthy

2022-03-22 Thread Rafael Diaz Maurin
Hi cephers, under Pacific I just noticed new info when running 'ceph -s': [...]   data:     volumes: 1/1 healthy [...] I can't find this in the Ceph docs. Does anyone know what "volumes" refers to? It seems to be CephFS, but what does it really mean? Thank you, Rafael -- Rafael Diaz Maurin

[ceph-users] Re: Pacific: ceph -s Data: Volumes: 1/1 healthy

2022-03-22 Thread Eugen Block
How about this one? https://docs.ceph.com/en/latest/cephfs/fs-volumes/ Quoting Rafael Diaz Maurin: Hi cephers, under Pacific I just noticed new info when running 'ceph -s': [...]   data:     volumes: 1/1 healthy [...] I can't find this in the Ceph docs. Does anyone know what "v
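For anyone else wondering, the volumes in question can also be inspected from the CLI; these should work on Pacific:

  ceph fs volume ls   # list CephFS volumes (file systems managed by the volumes module)
  ceph fs status      # per-filesystem MDS and pool overview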

[ceph-users] Re: Pacific: ceph -s Data: Volumes: 1/1 healthy

2022-03-22 Thread Rafael Diaz Maurin
On 22/03/2022 at 11:26, Eugen Block wrote: How about this one? https://docs.ceph.com/en/latest/cephfs/fs-volumes/ Great :) It's exactly the information I need. Thank you, Eugen! Rafael Quoting Rafael Diaz Maurin: Hi cephers, under Pacific I just noticed new info when running

[ceph-users] What is "register_cache_with_pcm not using rocksdb"?

2022-03-22 Thread Jan Kasprzak
Hello, Ceph users, what does the following message mean? Mar 22 11:59:07 mon2.host.name ceph-mon[1148]: 2022-03-22T11:59:07.286+0100 7f32d2b07700 -1 mon.mon2@1(peon).osd e2619840 register_cache_with_pcm not using rocksdb It appears in the journalctl -u ceph-mon@ output on all three mons of my
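One thing that might be worth checking (an assumption on my part, not something from the thread) is which key-value backend the mon store was created with, since the message is about the cache not being registered with a rocksdb store:

  cat /var/lib/ceph/mon/ceph-mon2/kv_backend   # path assumes the default mon data dir naming; prints e.g. rocksdb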

[ceph-users] Re: octopus (15.2.16) OSDs crash or don't answer heartbeats (and get marked as down)

2022-03-22 Thread Boris Behrens
Norf, I missed half of the answers... * the 8TB disks hold around 80-90 PGs (the 16TB disks around 160-180) * per PG we have around 40k objects * 170M objects in 1.2PiB of storage On Tue, 22 Mar 2022 at 09:29, Boris Behrens wrote: > Good morning K, > > the "freshly done" host, where it happened last g
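As a quick sanity check of those numbers: with the data pool at 4096 PGs (the figure mentioned later in the thread), 170,000,000 objects / 4096 PGs ≈ 41,500 objects per PG, which matches the ~40k per-PG figure above.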

[ceph-users] Re: Ceph OSDs take 10+ minutes to start on reboot

2022-03-22 Thread Chris Page
db_statistics { "rocksdb_compaction_statistics": "", "": "", "": "** Compaction Stats [default] **", "": "Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop",
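For context, a dump in exactly this shape can be pulled from a running OSD over the admin socket; presumably (the thread doesn't say) it was collected with something like:

  ceph daemon osd.0 dump_objectstore_kv_stats   # OSD id is a placeholder; dumps rocksdb statistics incl. compaction stats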

[ceph-users] Re: Ceph OSDs take 10+ minutes to start on reboot

2022-03-22 Thread Igor Fedotov
Hi Chris, unfortunately "bluefs stats" is of little help so far; it's not that verbose when a single disk per OSD is in use. :( Instead it would be nice to get the output of the 'ceph-bluestore-tool --path /var/lib/ceph/osd/ceph- --command bluefs-log-dump' command, to be executed against an offline OSD.
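For anyone following along, running that against an offline OSD would look roughly like this (OSD id 12 is a placeholder):

  systemctl stop ceph-osd@12
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-12 --command bluefs-log-dump
  systemctl start ceph-osd@12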

[ceph-users] Re: Ceph OSDs take 10+ minutes to start on reboot

2022-03-22 Thread Igor Fedotov
Chris, yeah, this apparently reveals the root cause to a major degree: WAL files aren't recycled properly, and RocksDB replays them on startup. At this point I'm pretty sure your RocksDB settings are the culprit. So please remove these custom settings and revert back to the defaults. Then restart
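A sketch of what reverting could look like; bluestore_rocksdb_options is my guess at the option involved, the thread doesn't name it:

  # if the custom settings live in the monitor config DB:
  ceph config rm osd bluestore_rocksdb_options
  # if they live in ceph.conf, delete the bluestore_rocksdb_options line there instead, then restart the OSDs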

[ceph-users] Re: Ceph OSDs take 10+ minutes to start on reboot

2022-03-22 Thread Igor Fedotov
Yes, that's apparently true if you set them through the config file rather than through the monitor config DB (i.e. via the ceph config command). Just in case, I would still recommend restarting the OSDs one by one and making sure each OSD starts properly before proceeding to the next one. Who knows what bug/issue might
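A minimal sketch of such a rolling restart, assuming systemd-managed OSDs (the up-check via 'ceph tell' is simplistic, but it only succeeds once the OSD answers again):

  for id in 0 1 2; do                      # OSD ids on the host in question (placeholders)
    systemctl restart ceph-osd@$id
    until ceph tell osd.$id version >/dev/null 2>&1; do sleep 10; done   # wait until the OSD is back up
  done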

[ceph-users] Ceph multitenancy

2022-03-22 Thread Budai Laszlo
Dear all, is it possible to use standalone Ceph for provisioning storage resources for multiple tenants in a "self service" way? (Users log in to the dashboard and manage their own resources.) Any documentation link or other reference is highly appreciated. Thank you, Laszlo

[ceph-users] Re: octopus (15.2.16) OSDs crash or don't answer heartbeats (and get marked as down)

2022-03-22 Thread Konstantin Shalygin
180 PGs per OSD is usually overhead; also, 40k objects per PG is not much, but I don't think this will work without a block.db on NVMe. I think your "wrong out marks" show up at the time of rocksdb compaction. With the default log settings you can try to grep for 'latency' strings. Also, https://tracker.ceph.com/issue
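For the grep, something along these lines (log path assumes the default location):

  grep -i latency /var/log/ceph/ceph-osd.*.log
  # look for slow-operation/high-latency entries around the times OSDs were marked down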

[ceph-users] Path to a cephfs subvolume

2022-03-22 Thread Robert Vasek
Hello, I have a question about cephfs subvolume paths. The path to a subvol seems to be in the format /volumes/<group>/<subvol>/<uuid>, e.g.: /volumes/csi/csi-vol-59c3cb5a-a9ee-11ec-b412-0242ac110004/b2b5a0b3-e02b-4f93-a3f5-fdcef80ebbea I'm wondering about the <uuid> segment. Where is it coming from, why is there this indirection

[ceph-users] Re: Path to a cephfs subvolume

2022-03-22 Thread Burkhard Linke
Hi, On 22.03.22 16:23, Robert Vasek wrote: Hello, I have a question about cephfs subvolume paths. The path to a subvol seems to be in the format /volumes/<group>/<subvol>/<uuid>, e.g.: /volumes/csi/csi-vol-59c3cb5a-a9ee-11ec-b412-0242ac110004/b2b5a0b3-e02b-4f93-a3f5-fdcef80ebbea I'm wondering about the <uuid> segment. Where

[ceph-users] Re: Path to a cephfs subvolume

2022-03-22 Thread Robert Sander
On 22.03.22 at 16:40, Burkhard Linke wrote: Or can you give me hints where to look for this in the code? These kinds of path elements are used by the cephfs CSI plugin for Kubernetes. They are not related to cephfs itself. They are also created when a subvolume is created manually with the
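The same layout can be seen when creating a subvolume by hand; a quick sketch with placeholder names:

  ceph fs subvolume create cephfs mysubvol
  ceph fs subvolume getpath cephfs mysubvol
  # prints something like /volumes/_nogroup/mysubvol/<uuid>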

[ceph-users] Re: octopus (15.2.16) OSDs crash or don't answer heartbeats (and get marked as down)

2022-03-22 Thread Boris Behrens
The number of 180 PGs is because of the 16TB disks. 3/4 of our OSDs had cache SSDs (not NVMe though, and most of them share one SSD among 10 OSDs), but this problem only came in with Octopus. We also thought this might be the db compaction, but it doesn't match up. It might happen when the compaction runs, b
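One way to test the compaction theory (a suggestion, not something from the thread) is to trigger a compaction by hand on a single OSD and watch whether the heartbeat problems follow:

  ceph tell osd.0 compact   # OSD id is a placeholder; forces a rocksdb compaction on that OSD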

[ceph-users] Re: Ceph multitenancy

2022-03-22 Thread Ernesto Puerta
Hi Laszlo, following on from the conversation in #ceph-dashboard IRC: > My question was about how can I assign a certain cephx identity to a > dashboard user? (if that is possible at all ...) Or what is the Ceph > Dashboard solution for multitenancy? I would like to let the users be > able to creat

[ceph-users] Re: Ceph multitenancy

2022-03-22 Thread Budai Laszlo
Hi Ernesto, I was disconnected from the chat and missed the messages. I don't know if there is any way to see past messages, so I turned to the mailing list ... :) Thank you for your response. So the Dashboard doesn't support my use case. Then what could be a solution for self service

[ceph-users] Re: Path to a cephfs subvolume

2022-03-22 Thread Ramana Venkatesh Raja
On Tue, Mar 22, 2022 at 11:24 AM Robert Vasek wrote: > > Hello, > > I have a question about cephfs subvolume paths. The path to a subvol seems > to be in the format /volumes/<group>/<subvol>/<uuid>, e.g.: > > /volumes/csi/csi-vol-59c3cb5a-a9ee-11ec-b412-0242ac110004/b2b5a0b3-e02b-4f93-a3f5-fdcef80ebbea > > I'm wondering about

[ceph-users] ceph namespace access control

2022-03-22 Thread Budai Laszlo
Hello all, what capabilities should a ceph user have in order to be able to create rbd images in one namespace only? I have tried the following: [root@ceph1 ~]# rbd namespace ls --format=json [{"name":"user1"},{"name":"user2"}] [root@ceph1 ~]# ceph auth get-or-create client.user2 mon 'profile
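For comparison, the rbd cap profile can be scoped to a pool plus namespace; a sketch of what a namespace-restricted user could look like (pool name 'rbd' is a placeholder):

  ceph auth get-or-create client.user2 mon 'profile rbd' osd 'profile rbd pool=rbd namespace=user2'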

[ceph-users] Re: octopus (15.2.16) OSDs crash or don't answer heartbeats (and get marked as down)

2022-03-22 Thread Boris Behrens
Good morning Istvan, those are rotating disks and we don't use EC. Splitting the 16TB disks into two 8TB partitions and running two OSDs per disk also sounds interesting, but would it solve the problem? I also thought about adjusting the PGs for the data pool from 4096 to 8192, but I am not sure if t
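For reference, the PG bump being considered would be a one-liner (pool name is a placeholder; on Octopus the pg_autoscaler may override it unless disabled for the pool):

  ceph osd pool set data-pool pg_num 8192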