Re: [ceph-users] osd backfills and recovery limit issue

2017-08-09 Thread Hyun Ha
Thank you for the comment. I understand what you mean. When one OSD goes down, that OSD's PGs are spread across the whole cluster, so each node can run one backfill/recovery per OSD and the cluster shows many backfills/recoveries at once. On the other hand, when one OSD comes back up, that OSD needs to copy PG
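
(Editorial aside: the settings that usually govern this behaviour are osd_max_backfills and osd_recovery_max_active. A minimal sketch of throttling them at runtime; the values here are illustrative, not taken from the thread:

  # limit each OSD to one concurrent backfill and one active recovery op
  ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
  # watch how many PGs are backfilling/recovering across the cluster
  ceph -s
)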

[ceph-users] Slow request on node reboot

2017-08-10 Thread Hyun Ha
Hi, Ramirez. I have exactly the same problem as yours. Did you solve that issue? Do you have any experience or a solution? Thank you.
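
(Editorial aside: a common way to avoid slow requests during a planned reboot is to tell the cluster not to mark the node's OSDs out while it is down. A minimal sketch, assuming a planned maintenance window:

  # before rebooting the node: prevent OSDs from being marked out
  ceph osd set noout
  # ... reboot the node ...
  # after the node and its OSDs are back up
  ceph osd unset noout
)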

Re: [ceph-users] Slow request on node reboot

2017-08-10 Thread Hyun Ha
> node were to reboot, but not mark the OSDs down, then all requests to those > OSDs would block until they got marked down. > > On Thu, Aug 10, 2017, 5:46 AM Hyun Ha wrote: > >> Hi, Ramirez >> >> I have exactly the same problem as yours. >> Did you solve
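
(Editorial aside: if the OSDs on the rebooting node cannot report themselves down quickly enough, they can be marked down by hand so client I/O does not block waiting for failure detection. A rough sketch; osd.3 and osd.4 are placeholder IDs, not from the thread:

  # stop the daemons cleanly, then mark them down explicitly
  systemctl stop ceph-osd@3 ceph-osd@4
  ceph osd down 3 4
)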

[ceph-users] ceph pgs state forever stale+active+clean

2017-08-17 Thread Hyun Ha
Hi, Cephers! I'm currently testing a double-failure scenario for a Ceph cluster, but I found that some PGs stay in the stale state forever. reproduce steps) 0. ceph version : jewel 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b) 1. Pool create : exp-volumes (size = 2, min_size = 1) 2. rbd create : t
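
(Editorial aside: for anyone reproducing this, the steps above roughly correspond to commands like the following; the pool name is from the thread, while the PG count and image name are placeholders chosen for illustration:

  # 1. create the test pool with size 2 / min_size 1
  ceph osd pool create exp-volumes 128 128
  ceph osd pool set exp-volumes size 2
  ceph osd pool set exp-volumes min_size 1
  # 2. create a test RBD image in that pool (10 GB here, arbitrary)
  rbd create exp-volumes/test-vol --size 10240
)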

Re: [ceph-users] ceph pgs state forever stale+active+clean

2017-08-20 Thread Hyun Ha
, and complaining that the other mirror didn't pick up the data... > Don't delete all copies of your data. If your replica size is 2, you > cannot lose 2 disks at the same time. > > On Fri, Aug 18, 2017, 1:28 AM Hyun Ha wrote: > >> Hi, Cephers! >> >> I
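
(Editorial aside: to see exactly which PGs are affected and which OSDs they were last mapped to, something like the following should work; the PG id is a placeholder:

  ceph health detail
  ceph pg dump_stuck stale
  # show the up/acting OSD set for a specific stale PG
  ceph pg map 1.2f
)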

Re: [ceph-users] ceph pgs state forever stale+active+clean

2017-08-21 Thread Hyun Ha
ty of talk about why in the ML archives. > > So if you're hoping to not lose data, then your only option is to try and > read the data off of the removed osds. If your goal is health_ok regardless > of data integrity, then your option is to delete the PGs. > > On Mon,
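
(Editorial aside: for the "read the data off of the removed osds" option, the usual tool is ceph-objectstore-tool run against the stopped OSD's data directory. This is only a rough sketch, assuming filestore OSDs as in Jewel, with placeholder OSD numbers, paths, and PG id:

  # on the host holding the removed OSD's disk, with the OSD daemon stopped
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
      --journal-path /var/lib/ceph/osd/ceph-3/journal \
      --pgid 1.2f --op export --file /tmp/pg.1.2f.export
  # then import the PG into a surviving OSD (also stopped) and restart it
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-5 \
      --journal-path /var/lib/ceph/osd/ceph-5/journal \
      --op import --file /tmp/pg.1.2f.export
)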

Re: [ceph-users] ceph pgs state forever stale+active+clean

2017-09-04 Thread Hyun Ha
Hi, I'm still having trouble with the above issue. Is there anybody who has the same issue or has resolved it? Thanks. 2017-08-21 22:51 GMT+09:00 Hyun Ha : > Thanks for the response. > > I can understand why size of 2 and min_size of 1 is not acceptable in > production, but I j
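
(Editorial aside: given the advice that size 2 / min_size 1 is not acceptable for production, the corresponding change would look like this; pool name from the thread:

  ceph osd pool set exp-volumes size 3
  ceph osd pool set exp-volumes min_size 2
  # note: raising size triggers backfill to create the extra replicas
)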

Re: [ceph-users] ceph pgs state forever stale+active+clean

2017-09-04 Thread Hyun Ha
be assessing how many RBDs have lost data, > how you're going to try and recover what data you can/need to, and figure > out where to go from there... downtime is likely the least of your worries > at this point. Being up is useless because the data can't be trusted or >
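
(Editorial aside: to assess which RBD images may be affected, one approach is to list the images in the pool and check each image's object prefix against the stale PGs; the image name and object id below are placeholders:

  rbd ls exp-volumes
  # block_name_prefix in the output identifies this image's RADOS objects
  rbd info exp-volumes/test-vol
  # map one of the image's objects to its PG and OSD set
  ceph osd map exp-volumes rbd_data.<image-id>.0000000000000000
)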