Hi,
My cluster is reporting one stuck PG which seems to have been backfilling for days now.
Any suggestions on how to solve it?
HEALTH_WARN 1 pgs backfilling; 1 pgs stuck unclean; recovery 32/6000626
degraded (0.001%)
pg 206.3f is stuck unclean for 557655.601540, current state
active+remapped+backfilling
Hi,
My cluster is reporting one stuck PG which seems to have been backfilling for days now.
Any suggestions on how to solve it?
HEALTH_WARN 1 pgs backfilling; 1 pgs stuck unclean; recovery 32/5989217
degraded (0.001%)
pg 206.3f is stuck unclean for 294420.424122, current state
active+remapped+backfilling
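A few generic first steps that might show why pg 206.3f stays in active+remapped+backfilling (only a sketch; the pg id is taken from the output above, and common culprits are a near-full backfill target or a down/out OSD):

# per-PG detail behind the HEALTH_WARN summary
ceph health detail
# full recovery state and up/acting sets for the stuck PG
ceph pg 206.3f query
# list every PG stuck unclean together with its acting set
ceph pg dump_stuck unclean
# check for down, out or near-full OSDs that could block backfill
ceph osd tree
ceph df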
Hi,
I'm running a small Ceph cluster (Emperor), with 3 servers, each running a
monitor and two 280 GB OSDs (plus an SSD for the journals). Servers have 16 GB
memory and an 8-core Xeon processor, and are connected with 3x 1 Gbps (LACP
trunk).
As soon as I give the cluster some load from a client
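Assuming the symptom under load is slow or blocked requests (the message is cut off here), a generic way to watch what the cluster and the disks are doing while the client load is running:

# follow cluster status and any slow request warnings in real time
ceph -w
# per-disk utilisation and latency on each OSD host (sysstat package)
iostat -x 2
# internal OSD counters via the admin socket (osd.0 used as an example id)
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump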
Hi,
We're using a 24 server / 48 OSD (3 replicas) Ceph cluster (version 0.67.3) for
RBD storage only and it is working great, but if a failed disk is replaced by a
brand new one and the system starts to backfill, it gives a lot of slow request
messages for 5 to 10 minutes. Then it does become s
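Assuming the slow requests really come from backfill competing with client I/O, the knob people usually lower is the backfill/recovery concurrency; a sketch for a 0.67.x cluster:

# lower backfill and recovery concurrency on all OSDs at runtime
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
# optionally give client ops more weight relative to recovery ops
ceph tell osd.* injectargs '--osd-recovery-op-priority 1'
# to make it permanent, set the same options in ceph.conf under [osd]:
#   osd max backfills = 1
#   osd recovery max active = 1

This trades a longer backfill window for less impact on client traffic.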
Hi,
I'm trying to copy a sparsely provisioned RBD image from pool A to pool B (both
are replicated three times). The image has a disk size of 8 GB and contains
around 1.4 GB of data. I use:
rbd cp PoolA/Image PoolB/Image
After copying, "ceph -s" tells me that 24 GB of extra disk space is in use. Th
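One possible reading of the numbers, assuming "ceph -s" reports raw usage across all replicas and that rbd cp does not preserve sparseness: a fully written 8 GB image times 3 replicas is 24 GB of raw space. The usual way to check how much data the copied image really references is the rbd diff trick:

# sum the extents the copied image actually uses (result in MB)
rbd diff PoolB/Image | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'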
Hi,
In the /var/lib/ceph/mon/ceph-l16-s01/store.db/ directory there are two very
large files, LOG and LOG.OLD (multiple GBs), and my disk space is running low.
Can I safely delete those files?
Regards,
Erwin
>
>
> Hello,
>
> On Thu, 8 Oct 2015 09:38:02 +0200 Erwin Lubbers wrote:
>
>> Hi,
>>
>> In the /var/lib/ceph/mon/ceph-l16-s01/store.db/ directory there are two
>> very large files, LOG and LOG.OLD (multiple GBs), and my disk space is
>> running low. Can I safely delete those files?
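For what it's worth, LOG and LOG.old in a LevelDB directory are LevelDB's plain-text info logs, not the key-value data itself (the data lives in the *.sst, MANIFEST and CURRENT files), so truncating them with the monitor stopped is generally considered safe. A hedged first step, with the mon name l16-s01 taken from the path above:

# see which files inside the mon store are actually big
du -sh /var/lib/ceph/mon/ceph-l16-s01/store.db/*
# if the sst data (rather than the text logs) is what is growing, trigger a store compaction
ceph tell mon.l16-s01 compact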