On 01/13/2015 07:33 PM, Chris Murray wrote:
> Hi all,
> 
> I think I know the answer to this already after reading similar queries,
> but I'll ask in case times have changed.
> 
> After an error on my part, I have a very small number of pgs in
> remapped+peering. They don't look like they'll get out of that state.
> Some IO is blocked too, as you might imagine. Entirely my fault; I
> removed two osds when the cluster wasn't healthy.
> 
> I gather the pool is now fundamentally broken because of these 2
> placement groups and I'll need to recreate another. Some VMs are
> throwaway, some I'll restore from backup. Not a great loss since I'm
> just testing. 
> 
> What has got me wondering is:  one VM had a ZFS filesystem across a
> mirror of two rbd images. The VM hangs indefinitely, which is a shame,
> because I figure it's unlikely that the same bits of data are missing
> from each half of the mirror.
> 
> Is it possible to make an IO fail rather than hang? This would be
> helpful in the recovery process, but I'll cut my losses now if it's
> simply not possible.
> 

No, I/O to those PGs will keep blocking for as long as you don't mark the
missing OSDs as lost.

Isn't there any way to get those OSDs back? If you can, you can restore
the PGs.
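For reference, the usual sequence is to first see which PGs are stuck and
which OSDs they are waiting on, and only then give up on the data by
marking the removed OSDs as lost. A rough sketch (the PG id 2.3f and OSD
ids 12 and 13 below are placeholders; substitute your own from the health
output):

```shell
# Inspect which PGs are stuck and which OSDs they are waiting for
ceph health detail
ceph pg 2.3f query

# Tell the cluster the removed OSDs are gone for good
# (only do this if you really cannot bring them back)
ceph osd lost 12 --yes-i-really-mean-it
ceph osd lost 13 --yes-i-really-mean-it

# If objects remain unfound afterwards, give up on them so client
# I/O can proceed (or fail) instead of blocking forever
ceph pg 2.3f mark_unfound_lost revert
```

Be aware that this is destructive: any data that only lived on those OSDs
is gone, and `mark_unfound_lost revert` rolls unfound objects back to an
older version where one exists.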

> 
> I'm on Ceph 0.80.7, Proxmox 3.3, which I understand to be on an 'old'
> Debian kernel.
> 
> Thank you,
> 
> Chris
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
