Bryan,
  If you can read the disk that was osd.102, you may wish to attempt this
process to recover your data:
https://ceph.com/community/incomplete-pgs-oh-my/
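
The gist of that process, if memory serves: rather than trying to
re-add osd.102 to the cluster, you mount the old disk somewhere
readable and use ceph-objectstore-tool to export the incomplete PGs
from it, then import them into a healthy (stopped) OSD. The paths and
PG id below are placeholders for your environment, so treat this as a
sketch of the shape of it rather than exact steps:

  # Export one incomplete PG from the old osd.102 data directory
  # (with the disk mounted read-only):
  ceph-objectstore-tool --op export --pgid 0.2a \
      --data-path /mnt/old-osd-102 \
      --journal-path /mnt/old-osd-102/journal \
      --file /tmp/0.2a.export

  # Stop a healthy OSD, import the PG into it, then start it again:
  ceph-objectstore-tool --op import \
      --data-path /var/lib/ceph/osd/ceph-0 \
      --journal-path /var/lib/ceph/osd/ceph-0/journal \
      --file /tmp/0.2a.export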
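
And on Greg's min_size point: once the cluster is healthy again, it's
probably worth raising it back so that a single OSD failure can't
leave PGs in this state:

  ceph osd pool set data min_size 2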

Good luck!

Michael J. Kidd
Sr. Software Maintenance Engineer
Red Hat Ceph Storage

On Mon, Jan 4, 2016 at 8:32 AM, Bryan Wright <bk...@virginia.edu> wrote:

> Gregory Farnum <gfarnum@...> writes:
>
> > I can't parse all of that output, but the most important and
> > easiest-to-understand bit is:
> >             "blocked_by": [
> >                 102
> >             ],
> >
> > And indeed in the past_intervals section there are a bunch where it's
> > just 102. You really want min_size >= 2 for exactly this reason. :/ But
> > if you get 102 up, stuff should recover; if you can't, you can mark it
> > as "lost" and RADOS ought to resume processing, with potential
> > data/metadata loss...
> > -Greg
> >
>
>
> Ack!  I thought min_size was 2, but I see:
>
> ceph osd pool get data min_size
> min_size: 1
>
> Well, that's a fine kettle of fish.
>
> The OSD in question (102) has actually already been marked as lost, via
> "ceph osd lost 102 --yes-i-really-mean-it", and it shows up in "ceph osd
> tree" as "DNE".  If I can manage to read the disk, how should I try to add
> it back in?
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
