ty and failed too
> delete the main folder.
>
> On Thu, Apr 20, 2017, 2:46 AM Daniel Marks <daniel.ma...@codecentric.de> wrote:
> Hi all,
>
> I am wondering when the PGs for a deleted pool get removed from their OSDs.
> http://docs.ceph.com/
I deleted the pool with id 15 two days ago, but I am still seeing the PG
directories on the OSD:
/var/lib/ceph/osd/ceph-118/current # ls -1 | grep "^15"
15.8f_head
15.8f_TEMP
15.99_head
15.99_TEMP
15.f4_head
15.f4_TEMP
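
For completeness, the check itself is nothing more than the following (OSD path
and pool id as above; the lspools call is only to confirm the pool is really gone):

$ ceph osd lspools                                     # pool 15 should no longer be listed
$ ls -1d /var/lib/ceph/osd/ceph-118/current/15.*_head  # PG directories that are still on disk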
Best regards,
Daniel Marks
the complete pool to get rid
of PG 4.33 and recreate it. This is clearly 'cracking a nut with a
sledgehammer'. However, after PG 4.33 was gone the cluster was able to fully
recover and has remained stable since then. If the pool had contained volume
or object storage data objects th
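
For anyone who ends up in the same situation: the delete/recreate step itself
is just the standard pool commands, roughly like this (pool name and PG counts
are placeholders, not our real values):

$ ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
$ ceph osd pool create mypool 256 256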
procedure to properly configure a
cluster network for a running Ceph cluster (maybe via "injectargs")? In which
order should OSDs, MONs and MDSs be configured?
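
To make the question more concrete, what I have in mind is roughly the
following in ceph.conf (the subnets are made up). As far as I understand,
injectargs alone will not make already-running daemons re-bind to the new
network, which is exactly why I am asking about the restart order:

[global]
    public network  = 192.168.1.0/24
    cluster network = 10.10.10.0/24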
Best regards,
Daniel Marks
Cluster software version is 0.94.1.
Trying to list locks yields the following error message:
$ rbd lock list codecentric_rbd_bench/benchmark_image
rbd: error opening pool codecentric_rbd_bench: (2) No such file or directory
Any idea how I can get rid of the mapped device?
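
My current idea, unless someone knows a better way (the device name below is
just an example taken from rbd showmapped, not necessarily the right one here):

$ rbd showmapped                          # find the id/device of the stale mapping
$ sudo rbd unmap /dev/rbd0                # will probably fail since the pool is gone
$ echo 0 | sudo tee /sys/bus/rbd/remove   # last resort: drop the mapping via the krbd sysfs interface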
Br,
Daniel