Re: [ceph-users] Deleted a pool - when will a PG be removed from the OSD?

2017-04-20 Thread Daniel Marks
… and failed to delete the main folder.

On Thu, Apr 20, 2017, 2:46 AM Daniel Marks <daniel.ma...@codecentric.de> wrote:
> Hi all,
>
> I am wondering when the PGs for a deleted pool get removed from their OSDs.
> http://docs.ceph.com/…
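
A minimal sketch, assuming pool ID 15 and the OSD from the original post below, of checking from the cluster side that the pool and its PG mappings are really gone; the on-disk PG data is only removed asynchronously by the OSDs afterwards:

$ ceph osd lspools            # the deleted pool should no longer be listed
$ ceph pg dump | grep '^15\.' # should return nothing once the PG mappings are gone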

[ceph-users] Deleted a pool - when will a PG be removed from the OSD?

2017-04-19 Thread Daniel Marks
…deleted the pool with ID 15 two days ago, but I am still seeing the PG directories on the OSD:

/var/lib/ceph/osd/ceph-118/current # ls -1 | grep "^15"
15.8f_head
15.8f_TEMP
15.99_head
15.99_TEMP
15.f4_head
15.f4_TEMP

Best regards,
Daniel Marks
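
PG removal after a pool deletion happens in the background and is throttled, so on a busy OSD it can take a while. A small sketch (paths taken from the listing above) that simply waits until the leftover directories are gone:

while ls -d /var/lib/ceph/osd/ceph-118/current/15.* >/dev/null 2>&1; do
    echo "PG directories of pool 15 still present, waiting ..."
    sleep 60
done
echo "all PG directories of pool 15 have been removed"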

[ceph-users] 'defect PG' caused heartbeat_map is_healthy timeout and recurring OSD breakdowns

2017-03-02 Thread Daniel Marks
…the complete pool to get rid of PG 4.33 and recreate it. This is clearly 'cracking a nut with a sledgehammer'. However, after PG 4.33 was gone the cluster was able to fully recover and has remained stable since then. If the pool had contained volume or object storage data objects …
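
As a less drastic alternative to deleting the whole pool, a single broken PG copy can usually be exported and then removed from the affected OSD with ceph-objectstore-tool while that OSD is stopped. A rough sketch, with the OSD ID, paths, and the FileStore journal location as placeholder assumptions (depending on the release, --op remove may additionally require --force):

systemctl stop ceph-osd@118

# keep a backup of the PG before removing it from this OSD
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-118 \
    --journal-path /var/lib/ceph/osd/ceph-118/journal \
    --pgid 4.33 --op export --file /root/pg-4.33.export
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-118 \
    --journal-path /var/lib/ceph/osd/ceph-118/journal \
    --pgid 4.33 --op remove

systemctl start ceph-osd@118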

[ceph-users] Is there a way to configure a cluster_network for a running cluster?

2015-08-10 Thread Daniel Marks
…procedure to properly configure a cluster_network for a running cluster (maybe via "injectargs")? In which order should OSDs, MONs, and MDSs be configured?

Best regards,
Daniel Marks
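
The usual approach, as far as I know (not taken from the thread; the subnet below is a placeholder), is to add the cluster network to ceph.conf on every node and then restart the OSDs one at a time, because a running OSD only binds to the cluster network at start-up; injectargs changes the value in memory but does not make running OSDs rebind. Only the OSDs use the cluster network for replication and heartbeats, so MONs and MDSs stay on the public network.

# /etc/ceph/ceph.conf on every node
[global]
    cluster network = 10.0.1.0/24

# restart OSDs one by one, waiting for HEALTH_OK in between
# (hammer-era sysvinit shown; with systemd: systemctl restart ceph-osd@0)
service ceph restart osd.0
ceph -s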

[ceph-users] Mapped rbd device still present after pool was deleted

2015-08-04 Thread Daniel Marks
…cluster software version is 0.94.1. Trying to list locks yields the following error message:

$ rbd lock list codecentric_rbd_bench/benchmark_image
rbd: error opening pool codecentric_rbd_bench: (2) No such file or directory

Any idea how I can get rid of the mapped device?

Br,
Daniel
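
A short sketch of cleaning up the stale mapping (the device name is a placeholder; check it with rbd showmapped first). Unmapping goes through the kernel's sysfs interface, so it should work even though the pool behind the mapping no longer exists:

$ rbd showmapped            # identify the stale device, e.g. /dev/rbd0
$ sudo rbd unmap /dev/rbd0
# if that fails, the raw sysfs interface can be used with the device id:
# echo 0 | sudo tee /sys/bus/rbd/remove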