On Mon, May 21, 2018 at 3:22 AM, Philip Poten wrote:
> Hi,
>
> I managed to mess up the cache pool on an erasure-coded CephFS:
>
> - I split PGs on the cache pool, and got a stray/unknown PG somehow
> - added a second cache pool in the hopes that I'll be allowed to remove the
> first, broken one
On Sun, 20 May 2018, Mike A wrote:
> Hello!
>
> In our cluster we are seeing a deadlock situation.
> This is a standard cluster for OpenStack without a RadosGW: we have the
> standard block-access pools and one pool for metrics from Gnocchi.
> The amount of data in the Gnocchi pool is small, but the number of objects
> is very large.
Hello!
In our cluster we are seeing a deadlock situation.
This is a standard cluster for OpenStack without a RadosGW: we have the
standard block-access pools and one pool for metrics from Gnocchi.
The amount of data in the Gnocchi pool is small, but the number of objects is
very large.
When planning a distribution o
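A minimal sketch of how one could quantify that imbalance, assuming the python3-rados bindings, a readable /etc/ceph/ceph.conf and a client keyring that may read pool statistics; it simply prints object count next to data volume for every pool:

import rados

# Connect with the default client identity; the conffile path is an assumption.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    for pool in cluster.list_pools():
        ioctx = cluster.open_ioctx(pool)
        try:
            stats = ioctx.get_stats()
            # A large num_objects next to a small num_bytes marks pools made of
            # many tiny objects, e.g. a Gnocchi metrics pool.
            print("%-30s objects=%-12d bytes=%d"
                  % (pool, stats['num_objects'], stats['num_bytes']))
        finally:
            ioctx.close()
finally:
    cluster.shutdown()

Pools where the object count dwarfs the data volume are usually the ones whose PG count deserves the most attention when planning the distribution.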
Hi,
I managed to mess up the cache pool on an erasure-coded CephFS:
- I split PGs on the cache pool, and got a stray/unknown PG somehow
- added a second cache pool in the hopes that I'll be allowed to remove
the first, broken one
- and now have two broken/misconfigured cache pools and no worki
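Before touching either cache pool further, a minimal sketch of how one could see what the cluster currently thinks the tiering setup is, assuming the python3-rados bindings and a readable /etc/ceph/ceph.conf; it asks the monitors for the equivalent of "ceph osd pool ls detail -f json" and prints the tier-related fields of each pool (field names as dumped by Luminous-era releases, so they may differ elsewhere):

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Same data as "ceph osd pool ls detail -f json", fetched from the mons.
    cmd = json.dumps({"prefix": "osd pool ls", "detail": "detail",
                      "format": "json"})
    ret, out, errs = cluster.mon_command(cmd, b'')
    if ret != 0:
        raise RuntimeError("mon_command failed: %s" % errs)
    for pool in json.loads(out):
        # tier_of / read_tier / write_tier / cache_mode show which pools are
        # still wired together as cache tiers.
        print(pool.get("pool_name"),
              "tier_of=%s" % pool.get("tier_of"),
              "read_tier=%s" % pool.get("read_tier"),
              "write_tier=%s" % pool.get("write_tier"),
              "cache_mode=%s" % pool.get("cache_mode"))
finally:
    cluster.shutdown()

The usual detach path (changing the cache-mode, removing the overlay with ceph osd tier remove-overlay, then ceph osd tier remove) only makes sense once the stray/unknown PG is accounted for, so the sketch above is meant purely for inspection.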
Hi all@list,
my background: I've been doing Xen for 10+ years, many of them with DRBD for
high availability; for some time now I have preferred GlusterFS with FUSE
as replicated storage, where I place the image files for the VMs.
In my current project we started (successfully) with Xen/GlusterFS too