[ceph-users] Re: Pool full but the user cleaned it up already

2020-05-22 Thread Eugen Block
Maybe remove the quota to get rid of the warning and then re-enable it? Quoting "Szabo, Istvan (Agoda)": Hello, here it is, I usually set just a space quota, not an object quota. NAME ID QUOTA OBJECTS QUOTA BYTES USED %USED MAX AVAIL OBJECT
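[For reference, clearing and re-applying a byte quota with the ceph CLI might look like this; the pool name is hypothetical, and setting the quota to 0 disables it:

    # remove the byte quota to clear the pool-full warning
    ceph osd pool set-quota mypool max_bytes 0
    # re-apply the desired quota, e.g. 100 GiB
    ceph osd pool set-quota mypool max_bytes 107374182400
]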

[ceph-users] Re: Luminous, OSDs down: "osd init failed" and "failed to load OSD map for epoch ... got 0 bytes"

2020-05-22 Thread Fulvio Galeazzi
Hello Dan, thanks for your reply! Very good to know about compression... will not try to use it before upgrading to Nautilus. Problem is, I did not activate it on this cluster (see below). Moreover, that would only account for the issue on disks dedicated to object storage, if I understand it c
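[One quick way to verify compression really is off for a pool is to read the per-pool compression options; pool name hypothetical, and if compression was never configured the get may simply report the option as unset:

    ceph osd pool get mypool compression_mode
    ceph osd pool get mypool compression_algorithm
]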

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-22 Thread Gencer W . Genç
Hi Ashley, Thank you for the warning. I will not update to 15.2.2 at the moment. And yes, I did not get any email from Sebastian, but it's there in the ceph list. I replied by email but I cannot see Sebastian's email address, so I'm not sure if he has seen my previous reply or not. I've sent mgr logs but I hope

[ceph-users] Re: Luminous, OSDs down: "osd init failed" and "failed to load OSD map for epoch ... got 0 bytes"

2020-05-22 Thread Dan van der Ster
The procedure to overwrite a corrupted osdmap on a given osd is described at http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-August/036592.html I wouldn't do that type of low-level manipulation just yet -- better to understand the root cause of the corruptions first before potentially maki
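[For the archive, that procedure boils down to fetching a good copy of the map from the monitors and injecting it with ceph-objectstore-tool while the OSD is stopped; a rough sketch, with epoch and OSD id hypothetical -- follow the linked post for specifics:

    # fetch a known-good osdmap for the bad epoch from the mons
    ceph osd getmap 123456 -o /tmp/osdmap.123456
    # with the OSD stopped, overwrite the corrupted on-disk copy
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
        --op set-osdmap --file /tmp/osdmap.123456
]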

[ceph-users] S3 key prefixes and performance impact on Ceph?

2020-05-22 Thread malinsk
I've just set up a Ceph cluster and I'm accessing it via object gateway with S3 API. One thing I don't see documented anywhere is: how does Ceph performance scale with S3 key prefixes? In AWS S3, performance scales linearly with key prefix (see: https://docs.aws.amazon.com/AmazonS3/latest/dev

[ceph-users] Re: Nautilus: (Minority of) OSDs with huge buffer_anon usage - triggering OOMkiller in worst cases.

2020-05-22 Thread Mark Nelson
Hi Sam and All, Adam did some digging and we've got a preliminary theory. Last summer we changed the way the bluestore cache does trimming. Previously we used the mempool thread in bluestore to periodically trim the bluestore caches every 50ms or so. At the time we would also figure out how
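[For anyone trying to confirm they're affected, the buffer_anon growth shows up in the per-OSD mempool stats via the admin socket; OSD id hypothetical:

    # run on the OSD host; check the buffer_anon bytes
    ceph daemon osd.0 dump_mempools
]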

[ceph-users] Re: S3 key prefixes and performance impact on Ceph?

2020-05-22 Thread Matt Benjamin
Hi, The current behavior is effectively that of a flat namespace. As the number of objects in a bucket becomes large, RGW partitions the index, and a hash of the key name is used to place it. Reads on the partitions are done in parallel (unless unordered listing is requested, an RGW extension).
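[The index partitioning Matt describes can be inspected per bucket; a rough sketch, bucket name hypothetical:

    # show num_shards and index usage for a bucket
    radosgw-admin bucket stats --bucket=mybucket
    # check buckets against the per-shard object limits
    radosgw-admin bucket limit check
]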

[ceph-users] Bluestore config recommendations

2020-05-22 Thread Adrian Nicolae
Hi, I'm planning to install a new Ceph cluster (Nautilus) using 8+3 EC, SATA-only storage. We want to store only big files here (from 40-50MB to 200-300GB each). The write load will be higher than the read load. I was thinking of the following Bluestore config to reduce the load on the
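[As a purely illustrative sketch of the kind of ceph.conf tuning such a setup might look at for large-object, write-heavy EC pools -- the values below are assumptions for discussion, not recommendations:

    [osd]
    # Nautilus HDD default is already 64K; larger objects make bigger
    # allocation units cheap in wasted space terms
    bluestore_min_alloc_size_hdd = 65536
    # cap per-OSD memory on dense SATA nodes
    osd_memory_target = 3221225472
]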

[ceph-users] Re: 15.2.2 Upgrade - Corruption: error in middle of record

2020-05-22 Thread Igor Fedotov
Status update: Finally we have the first patch to fix the issue in master: https://github.com/ceph/ceph/pull/35201 And the ticket has been updated with a root cause analysis: https://tracker.ceph.com/issues/45613 On 5/21/2020 2:07 PM, Igor Fedotov wrote: @Chris - unfortunately it looks like the co

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-22 Thread Gencer W . Genç
Hi Sebastian, I cannot see my replies in here, so I put the attachment in the body here: 2020-05-21T18:52:36.813+0000 7faf19f20040 0 set uid:gid to 167:167 (ceph:ceph) 2020-05-21T18:52:36.813+0000 7faf19f20040 0 ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable), process
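[When a cephadm upgrade stalls like this, the orchestrator's own status and log commands (Octopus) are usually the first stop:

    ceph orch upgrade status
    # follow cephadm progress and errors live
    ceph -W cephadm
    # if needed, abort and retry the upgrade
    ceph orch upgrade stop
]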

[ceph-users] Re: S3 key prefixes and performance impact on Ceph?

2020-05-22 Thread Alisa Malinskaya
Awesome, thanks for confirming Matt! On Fri, May 22, 2020 at 9:46 AM Matt Benjamin wrote: > Hi, > > The current behavior is effectively that of a flat namespace. As the > number of objects in a bucket becomes large, RGW partitions the index, > and a hash of the key name is used to place it. Re

[ceph-users] Re: Luminous, OSDs down: "osd init failed" and "failed to load OSD map for epoch ... got 0 bytes"

2020-05-22 Thread Fulvio Galeazzi
Hello Dan, thanks for your patience! On 5/22/2020 1:57 PM, Dan van der Ster wrote: The procedure to overwrite a corrupted osdmap on a given osd is described at http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-August/036592.html I wouldn't do that type of low level manipulation just

[ceph-users] question on ceph node count

2020-05-22 Thread tim taler
Hi all, stumbling over a new ceph cluster setup, I got a basic question regarding the behaviour of ceph. The cluster I found runs 4 hardware nodes as hyperconverged instances: 3 nodes running a MON and several OSD instances, while one node runs only several OSDs. At the same time, all nodes serve a
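[As a quick sanity check on such a layout, the monitor map and quorum state show how many MON failures the cluster tolerates; with 3 mons, quorum survives the loss of one:

    ceph mon stat
    ceph quorum_status --format json-pretty
]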

[ceph-users] Re: 15.2.2 Upgrade - Corruption: error in middle of record

2020-05-22 Thread Ashley Merrick
Thanks Igor, Do you have any idea of an ETA or plan for people that are running 15.2.2 to be able to patch / fix the issue? I had a read of the ticket and it seems the corruption is happening but the WAL is not read until OSD restart, so I imagine we will need some form of fix / patch we can
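[For those stuck on 15.2.2 in the meantime, the mitigation discussed on the tracker is to disable WAL preextension before the corrupted record gets replayed; this is my reading of https://tracker.ceph.com/issues/45613, not official guidance, so verify against the ticket before applying:

    ceph config set osd bluefs_preextend_wal_files false
]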