[ceph-users] RGW DNS bucket names with multi-tenancy

2019-11-01 Thread Florian Engelmann
 Hi, is it possible to access buckets like: https://../? Some SDKs use DNS bucket names only. Using such an SDK the endpoint would look like ".". All the best, Florian
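
A minimal sketch of how DNS-style (virtual-hosted) bucket names are enabled on the gateway side; the RGW section name and domain are assumptions, and how tenants fit into the hostname is exactly the open question in the thread:

    # assumed gateway instance name and domain
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client.rgw.gateway1]
    rgw dns name = s3.example.com
    EOF
    # plus a wildcard DNS record: *.s3.example.com -> gateway address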

Re: [ceph-users] is rgw crypt default encryption key long term supported ?

2019-06-06 Thread Florian Engelmann
On 5/28/19 at 5:37 PM, Casey Bodley wrote: On 5/28/19 11:17 AM, Scheurer François wrote: Hi Casey, I greatly appreciate your quick and helpful answer :-) It's unlikely that we'll do that, but if we do it would be announced with a long deprecation period and migration strategy. Fine, just
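
For reference, a hedged sketch of the option the thread is about; the gateway section name is an assumption and the key is only a generated placeholder (upstream documents this option as suitable for testing only):

    KEY=$(openssl rand -base64 32)        # 256-bit key, base64-encoded placeholder
    cat >> /etc/ceph/ceph.conf <<EOF
    [client.rgw.gateway1]
    rgw crypt default encryption key = $KEY
    EOF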

[ceph-users] sync rados objects to other cluster

2019-05-02 Thread Florian Engelmann
Hi, we need to migrate a ceph pool used for gnocchi to another cluster in another datacenter. Gnocchi uses the python rados or cradox module to access the Ceph cluster. The pool is dedicated to gnocchi only. The source pool is based on HDD OSDs while the target pool is SSD only. As there are
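
One possible way to move such a pool between clusters, sketched with the rados CLI rather than the python bindings; conf file paths and the pool name are assumptions, and gnocchi should be stopped while the copy runs:

    # serialize the source pool to a file, then load it into the target cluster
    rados -c /etc/ceph/source.conf -p gnocchi export gnocchi.dump
    rados -c /etc/ceph/target.conf -p gnocchi import gnocchi.dump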

Re: [ceph-users] rbd cache limiting IOPS

2019-03-07 Thread Florian Engelmann
behaviour to write-through as far as I understood. I expected the latency to increase to at least 0.6 ms, which was the case, but I also expected the IOPS to increase to up to 60.000, which was not the case. IOPS stayed constant at ~14.000 (4 jobs, QD=64). On 3/7/19 at 11:41 AM, F
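
A hedged sketch of an fio run matching the quoted parameters (4 jobs, QD=64); block size, access pattern and the device path are assumptions, not taken from the thread:

    fio --name=rbd-cache-test --filename=/dev/vdb \
        --rw=randwrite --bs=4k --numjobs=4 --iodepth=64 \
        --ioengine=libaio --direct=1 --runtime=60 --time_based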

[ceph-users] rbd cache limiting IOPS

2019-03-07 Thread Florian Engelmann
Hi, we are running an Openstack environment with Ceph block storage. There are six nodes in the current Ceph cluster (12.2.10) with NVMe SSDs and a P4800X Optane for rocksdb and WAL. The decision was made to use rbd writeback cache with KVM/QEMU. The write latency is incredibly good (~85 µs) a
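
A minimal sketch of the client-side librbd cache settings involved; the values are illustrative, not numbers from the thread:

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    rbd cache size = 67108864          # 64 MiB
    rbd cache max dirty = 50331648     # 48 MiB
    EOF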

[ceph-users] Openstack RBD EC pool

2019-02-15 Thread Florian Engelmann
Hi, I tried to add an "archive" storage class to our Openstack environment by introducing a second storage backend offering RBD volumes having their data in an erasure coded pool. As I will have to specify a data-pool I tried it as follows: ### keyring files: ceph.client.cind
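
A sketch of the usual way to back RBD with an erasure coded data pool; pool names, PG counts, the EC profile and the cinder client name are assumptions:

    ceph osd erasure-code-profile set ec32 k=3 m=2
    ceph osd pool create volumes-ec-data 128 128 erasure ec32
    ceph osd pool set volumes-ec-data allow_ec_overwrites true   # needs BlueStore
    ceph osd pool create volumes-metadata 64 64 replicated
    ceph osd pool application enable volumes-ec-data rbd
    ceph osd pool application enable volumes-metadata rbd
    # client side: image headers live in the replicated pool, data goes to the EC pool
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client.cinder-archive]
    rbd default data pool = volumes-ec-data
    EOF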

[ceph-users] Optane still valid

2019-02-04 Thread Florian Engelmann
Hi, we have built a 6-node NVMe-only Ceph cluster with 4x Intel DC P4510 8TB and one Intel DC P4800X 375GB Optane per node. Up to 10x P4510 can be installed in each node. WAL and RocksDBs for all P4510 should be stored on the Optane (approx. 30GB per RocksDB incl. WAL). Internally, discussion
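
A hedged sketch of provisioning one such OSD with its DB/WAL on an Optane LV; device, VG and LV names are assumptions:

    lvcreate -n db-osd0 -L 30G vg-optane          # ~30 GB per OSD on the P4800X
    ceph-volume lvm create --bluestore \
        --data /dev/nvme1n1 \
        --block.db vg-optane/db-osd0              # WAL is colocated with the DB if not given separately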

[ceph-users] HDD spindown problem

2018-12-03 Thread Florian Engelmann
Hello, we have been fighting an HDD spin-down problem on our production ceph cluster for two weeks now. The problem is not ceph related but I guess this topic is interesting to the list and, to be honest, I hope to find a solution here. We do use 6 OSD Nodes like: OS: Suse 12 SP3 Ceph: SES 5.5 (12.
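
For reference, a hedged sketch of one common countermeasure (disabling the drives' own standby timer); device names are assumptions, and whether this applies depends on the disk model:

    for dev in /dev/sd{a..f}; do
        hdparm -B 254 "$dev"    # APM: maximum performance without spin-down
        hdparm -S 0   "$dev"    # standby timeout 0 = never spin down
    done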

Re: [ceph-users] RocksDB and WAL migration to new block device

2018-11-21 Thread Florian Engelmann
cally though. Thanks, Igor On 11/21/2018 11:11 AM, Florian Engelmann wrote: Great support Igor! Both thumbs up! We will try to build the tool today and expand those bluefs devices once again. On 11/20/18 at 6:54 PM, Igor Fedotov wrote: FYI: https://github.com/ceph/ceph/pull/25187 On 11/20/2
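
A minimal sketch of the expansion step being discussed, assuming a build that already contains the command from the PR above; OSD id and path are placeholders:

    systemctl stop ceph-osd@0
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
    systemctl start ceph-osd@0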

Re: [ceph-users] RocksDB and WAL migration to new block device

2018-11-21 Thread Florian Engelmann
Engelmann wrote: On 11/20/18 at 4:59 PM, Igor Fedotov wrote: On 11/20/2018 6:42 PM, Florian Engelmann wrote: Hi Igor, what's your Ceph version? 12.2.8 (SES 5.5 - patched to the latest version) Can you also check the output for ceph-bluestore-tool show-label -p ceph-bluestore-tool

Re: [ceph-users] RocksDB and WAL migration to new block device

2018-11-20 Thread Florian Engelmann
On 11/20/18 at 4:59 PM, Igor Fedotov wrote: On 11/20/2018 6:42 PM, Florian Engelmann wrote: Hi Igor, what's your Ceph version? 12.2.8 (SES 5.5 - patched to the latest version) Can you also check the output for ceph-bluestore-tool show-label -p ceph-bluestore-tool show-

Re: [ceph-users] RocksDB and WAL migration to new block device

2018-11-20 Thread Florian Engelmann
'.bluefs'" did recognize the new sizes. But we are 100% sure the new devices are used as we already deleted the old ones... We tried to delete the "key" "size" to add one with the new value but: ceph-bluestore-tool rm-label-key --dev /var/lib/ceph/osd/ceph-0/block.db -k
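
A hedged sketch of inspecting and rewriting that label; paths and the size value are placeholders only:

    ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block.db
    ceph-bluestore-tool set-label-key --dev /var/lib/ceph/osd/ceph-0/block.db \
        -k size -v 64424509440        # 60 GiB, matching the new LV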

[ceph-users] RocksDB and WAL migration to new block device

2018-11-20 Thread Florian Engelmann
Hi, today we migrated all of our rocksdb and wal devices to new ones. The new ones are much bigger (500MB for wal/db -> 60GB db and 2G WAL) and LVM based. We migrated like: export OSD=x systemctl stop ceph-osd@$OSD lvcreate -n db-osd$OSD -L60g data || exit 1 lvcreate -n wal
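
A hedged sketch of how such a copy-and-relink migration typically continues (not necessarily the exact steps from this thread); VG/LV names follow the snippet above, the rest are assumptions:

    dd if=/var/lib/ceph/osd/ceph-$OSD/block.db  of=/dev/data/db-osd$OSD  bs=1M
    dd if=/var/lib/ceph/osd/ceph-$OSD/block.wal of=/dev/data/wal-osd$OSD bs=1M
    ln -sf /dev/data/db-osd$OSD  /var/lib/ceph/osd/ceph-$OSD/block.db
    ln -sf /dev/data/wal-osd$OSD /var/lib/ceph/osd/ceph-$OSD/block.wal
    chown -h ceph:ceph /var/lib/ceph/osd/ceph-$OSD/block.db /var/lib/ceph/osd/ceph-$OSD/block.wal
    systemctl start ceph-osd@$OSD
    # note: BlueFS will not use the extra space until the device is expanded
    # (see the bluefs-bdev-expand discussion in the replies above)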

Re: [ceph-users] Large omap objects - how to fix ?

2018-10-26 Thread Florian Engelmann

Re: [ceph-users] RGW how to delete orphans

2018-10-26 Thread Florian Engelmann
Hi, we've got the same problem here. Our 12.2.5 RadosGWs crashed (unnoticed by us) about 30.000 times with ongoing multipart uploads. After a couple of days we ended up with: xx-1.rgw.buckets.data 6 N/A N/A 116TiB 87.22 17.1TiB 36264870 36.26M
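
A hedged sketch of the luminous-era orphan scan being referred to; the job name is a placeholder, and the scan only reports candidates rather than deleting anything:

    radosgw-admin orphans find --pool=xx-1.rgw.buckets.data --job-id=orphans-scan-1
    radosgw-admin orphans list-jobs
    radosgw-admin orphans finish --job-id=orphans-scan-1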

[ceph-users] understanding % used in ceph df

2018-10-19 Thread Florian Engelmann
Hi, our Ceph cluster is a 6-node cluster, each node having 8 disks. The cluster is used for object storage only (right now). We do use EC 3+2 on the buckets.data pool. We had a problem with RadosGW segfaulting (12.2.5) until we upgraded to 12.2.8. We had almost 30.000 radosgw crashes leading
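
A small worked example of the EC 3+2 overhead, which helps when reading the pool numbers in 'ceph df': every 3 units of user data occupy 5 units of raw space, i.e. a factor of (k+m)/k = 5/3 (the 3 TiB figure is only illustrative):

    stored_tib=3
    echo "raw footprint: $(( stored_tib * (3 + 2) / 3 )) TiB"   # 5 TiB raw for 3 TiB stored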