Re: [ceph-users] Can a cephfs be recreated with old data?

2018-05-20 Thread Yan, Zheng
On Mon, May 21, 2018 at 3:22 AM, Philip Poten wrote:
> Hi,
>
> I managed to mess up the cache pool on an erasure-coded cephfs:
>
> - I split pgs on the cache pool, and somehow got a stray/unknown pg
> - added a second cache pool in the hopes that I'd be allowed to remove
>   the first, broken one
> …
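For reference, detaching a writeback cache tier from its base pool normally follows the sequence sketched below. The pool names (ecpool, cachepool) are placeholders, and on a cache pool with a stray/unknown pg the flush step may well not complete cleanly:

  # switch the cache to proxy mode so no new objects land in it
  ceph osd tier cache-mode cachepool readproxy
  # flush and evict everything still held in the cache pool
  rados -p cachepool cache-flush-evict-all
  # detach the cache from the base pool and drop the tier relationship
  ceph osd tier remove-overlay ecpool
  ceph osd tier remove ecpool cachepool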

Re: [ceph-users] Too many objects per pg than average: deadlock situation

2018-05-20 Thread Sage Weil
On Sun, 20 May 2018, Mike A wrote:
> Hello!
>
> In our cluster, we see a deadlock situation.
> This is a standard cluster for OpenStack without a RadosGW; we have the
> standard block-access pools and one for metrics from Gnocchi.
> The amount of data in the gnocchi pool is small, but there are a great
> many objects. …
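The health warning behind this thread ("… have many more objects per pg than average") is driven by the monitor option mon_pg_warn_max_object_skew (default 10). A minimal sketch of the two usual ways out, assuming the object-heavy pool is named gnocchi (a placeholder) and that the cluster can afford more pgs:

  # option 1: give the object-heavy pool more pgs, so its per-pg object
  # count drops back toward the cluster average
  ceph osd pool set gnocchi pg_num 256
  ceph osd pool set gnocchi pgp_num 256

  # option 2: relax the skew threshold that triggers the warning
  # (0 disables it); depending on the release the value is consumed by
  # the mon or the mgr, so it may need a restart to take effect
  ceph tell mon.* injectargs '--mon_pg_warn_max_object_skew=50'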

[ceph-users] Too many objects per pg than average: deadlock situation

2018-05-20 Thread Mike A
Hello! In our cluster, we see a deadlock situation. This is a standard cluster for OpenStack without a RadosGW; we have the standard block-access pools and one for metrics from Gnocchi. The amount of data in the gnocchi pool is small, but there are a great many objects. When planning a distribution of …
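To see the imbalance the warning is based on, it helps to compare per-pool object counts against pg counts; these are standard status commands, with nothing cluster-specific assumed:

  # per-pool object counts vs. stored data
  ceph df detail
  # pg_num of every pool, to relate object counts to pgs
  ceph osd pool ls detail
  # the health detail output names the offending pool explicitly
  ceph health detail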

[ceph-users] Can a cephfs be recreated with old data?

2018-05-20 Thread Philip Poten
Hi,

I managed to mess up the cache pool on an erasure-coded cephfs:

- I split pgs on the cache pool, and somehow got a stray/unknown pg
- added a second cache pool in the hopes that I'd be allowed to remove the first, broken one
- and now have two broken/misconfigured cache pools and no working cephfs …
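As for rebuilding the filesystem around the surviving data pool: CephFS ships disaster-recovery tooling that can regenerate metadata from an existing data pool. The sketch below shows the general shape of that flow, assuming a fresh metadata pool named cephfs_meta_new and the old erasure-coded data pool cephfs_data (both names are placeholders); exact flags vary between releases, so treat this as an outline rather than a recipe:

  # create a new filesystem around the old data pool
  ceph fs new cephfs cephfs_meta_new cephfs_data --force
  # rebuild metadata by scanning the objects in the data pool
  cephfs-data-scan init
  cephfs-data-scan scan_extents cephfs_data
  cephfs-data-scan scan_inodes cephfs_data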

[ceph-users] Ceph - Xen accessing RBDs through libvirt

2018-05-20 Thread thg
Hi all@list, my background: I've been doing Xen for 10+ years, many of them with DRBD for high availability; for some time now I've preferred GlusterFS with FUSE as replicated storage, where I place the image files for the VMs. In my current project we started (successfully) with Xen/GlusterFS too …
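For the libvirt side, an RBD-backed disk is normally described as a network disk in the domain XML. A minimal sketch, where the pool/image name, monitor host, and secret UUID are all placeholders that must match the cluster and a pre-defined libvirt secret holding the cephx key:

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='rbd' name='rbdpool/vm-disk1'>
      <host name='mon1.example.com' port='6789'/>
    </source>
    <auth username='libvirt'>
      <!-- placeholder: UUID of the libvirt secret holding the cephx key -->
      <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
    <target dev='xvda' bus='xen'/>
  </disk>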