That's a normal process; deep scrubbing runs periodically in the background on every PG.

For more information, see:
http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing
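
If you want to confirm that the deep scrub is actually making progress, or temporarily pause deep scrubbing while you investigate, something along these lines should work (a rough sketch; "osd.3" below is just a placeholder for the primary OSD of pg 19.b3):

  # show the scrub state and last-scrub timestamps for the PG
  ceph pg 19.b3 query | grep -i scrub

  # inspect the scrub-related settings of the primary OSD (run on that OSD's host)
  ceph daemon osd.3 config show | grep scrub

  # temporarily disable deep scrubbing cluster-wide while you investigate,
  # then re-enable it afterwards
  ceph osd set nodeep-scrub
  ceph osd unset nodeep-scrub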
On Tue, Jun 2, 2015 at 9:55 AM, Никитенко Виталий <v1...@yandex.ru> wrote:

> Hi!
>
> I have ceph version 0.94.1.
>
> root@ceph-node1:~# ceph -s
>     cluster 3e0d58cd-d441-4d44-b49b-6cff08c20abf
>      health HEALTH_OK
>      monmap e2: 3 mons at {ceph-mon=10.10.100.3:6789/0,ceph-node1=10.10.100.1:6789/0,ceph-node2=10.10.100.2:6789/0}
>             election epoch 428, quorum 0,1,2 ceph-node1,ceph-node2,ceph-mon
>      osdmap e978: 16 osds: 16 up, 16 in
>       pgmap v6735569: 2012 pgs, 8 pools, 2801 GB data, 703 kobjects
>             5617 GB used, 33399 GB / 39016 GB avail
>                 2011 active+clean
>                    1 active+clean+scrubbing+deep
>   client io 174 kB/s rd, 30641 kB/s wr, 80 op/s
>
> root@ceph-node1:~# ceph pg dump  | grep -i deep | cut -f 1
>   dumped all in format plain
>   pg_stat
>   19.b3
>
> In the log file I see
> 2015-05-14 03:23:51.556876 7fc708a37700  0 log_channel(cluster) log [INF] : 19.b3 deep-scrub starts
> but there is no "19.b3 deep-scrub ok".
>
> When I run "ceph pg deep-scrub 19.b3", nothing happens and there are no
> records about it in the log file.
>
> What can I do so the PG returns to the "active+clean" state?
> Does it make sense to restart the OSD, or the entire server hosting that OSD?
>
> Thanks.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
