I would like to thank all of you very much for your assistance, support and time.
I have to say that I totally agree with you regarding the number of
replicas, and this is probably the best time to switch to 3 replicas,
since all services have been stopped due to this emergency.
After I ...
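For reference, and purely as a sketch (the pool name "rbd" below is only a placeholder, not something taken from this cluster), moving a pool from 2 to 3 replicas is normally done along these lines:

    # raise the replica count on each pool that should keep 3 copies
    ceph osd pool set rbd size 3
    # optionally require at least 2 copies before the pool accepts I/O
    ceph osd pool set rbd min_size 2

Keep in mind that raising the size itself triggers a round of backfilling, so it is usually done once the cluster is healthy again.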
The first step is to make sure that the failed OSD is out of the cluster. Does
`ceph osd stat` still show the full number of OSDs as "in" (it is the same
line you see in `ceph status`)? It should show 1 less for "up", but if it is
still registering the OSD as "in" then the backfilling won't start. `ceph osd
out 0` should mark it out so that backfilling can begin.
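In practice the check and the fix look roughly like this (osd.0 is the failed OSD from this thread; the counts will obviously differ per cluster):

    # totals for up vs. in; "in" should drop by one once the OSD is marked out
    ceph osd stat
    # per-OSD view: shows whether osd.0 is down and whether it still has a weight
    ceph osd tree
    # mark the failed OSD out so CRUSH remaps its PGs and backfilling can start
    ceph osd out 0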
Thank you all for your time and support.
I don't see any backfilling in the logs, and the numbers of
"active+degraded", "active+remapped" and "active+clean" PGs have
stayed the same for some time now. The only thing I see is
"scrubbing".
Wido, I cannot do anything with the data in osd.0 since the disk has failed.
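For reference, a generic way to watch whether recovery is actually moving (nothing below is specific to this cluster):

    # live stream of cluster events; recovery/backfill progress shows up here
    ceph -w
    # details of the degraded/unclean PGs behind the HEALTH_WARN
    ceph health detail
    # list PGs stuck unclean together with the OSDs they map to
    ceph pg dump_stuck unclean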
@Sean Redmond: No, I don't have any unfound objects. I only have "stuck
unclean" PGs with "active+degraded" status.
@Caspar Smit: The cluster is scrubbing ...
@All: My concern is that only one copy is left of the data that was on the
failed disk.
If I just remove osd.0 from the crush map, does that copy all its data to the other OSDs?
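For context, a sketch of the commonly used sequence for a dead OSD: marking it out (or removing it from CRUSH) is what makes the cluster re-replicate its PGs from the surviving copies, and the last three steps are clean-up that is best left until recovery has finished.

    # take the dead OSD out so its PGs are re-replicated from the remaining copies
    ceph osd out 0
    # after backfilling has finished: remove it from CRUSH, auth and the OSD map
    ceph osd crush remove osd.0
    ceph auth del osd.0
    ceph osd rm 0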
Dear cephers,
I have an emergency on a rather small ceph cluster.
My cluster consists of 2 OSD nodes with 10 x 4TB disks each and 3
monitor nodes.
The version of ceph running is Firefly v0.80.9
(b5a67f0e1d15385bc0d60a6da6e7fc810bde6047).
The cluster was originally built with "Replicated size" 2 ...
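As a generic aside, the per-pool replication settings can be checked with the commands below; the pool name "rbd" is only an example.

    # replica count and minimum replica count of a single pool
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size
    # or list every pool in one go; each pool line shows its size and min_size
    ceph osd dump | grep pool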