(I got no response on the current list, so I am forwarding this to ceph-us...@ceph.com. )

Sorry if this is a duplicate.


-------- Original Message --------
Subject:        scrub error with ceph
Date:   Mon, 7 Dec 2015 14:15:07 -0700
From:   Erming Pei <erm...@ualberta.ca>
To:     ceph-users@lists.ceph.com



Hi,

I found 128 scrub errors in my Ceph system. I checked with 'ceph health detail' and found many PGs stuck unclean. Should I repair all of them, or what else should I do?

[root@gcloudnet ~]# ceph -s
    cluster a4d0879f-abdc-4f9d-8a4b-53ce57d822f1
     health HEALTH_ERR 128 pgs inconsistent; 128 scrub errors; mds1: Client HTRC:cephfs_data failing to respond to cache pressure; mds0: Client physics-007:cephfs_data failing to respond to cache pressure; pool 'cephfs_data' is full
     monmap e3: 3 mons at {gcloudnet=xxx.xxx.xxx.xxx:6789/0,gcloudsrv1=xxx.xxx.xxx.xxx:6789/0,gcloudsrv2=xxx.xxx.xxx.xxx:6789/0}, election epoch 178, quorum 0,1,2 gcloudnet,gcloudsrv1,gcloudsrv2
     mdsmap e51000: 2/2/2 up {0=gcloudsrv1=up:active,1=gcloudnet=up:active}
     osdmap e2821: 18 osds: 18 up, 18 in
      pgmap v10457877: 3648 pgs, 23 pools, 10501 GB data, 38688 kobjects
            14097 GB used, 117 TB / 130 TB avail
                   6 active+clean+scrubbing+deep
                3513 active+clean
                 128 active+clean+inconsistent
                   1 active+clean+scrubbing
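
For reference, what I was thinking of running (assuming 'ceph pg repair' is the right approach for inconsistent PGs; please correct me if not) is to list the inconsistent PGs and repair them one by one, roughly:

    ceph health detail | grep inconsistent     # list the 128 inconsistent PG ids
    ceph pg deep-scrub <pgid>                  # optionally re-scrub a PG first to confirm
    ceph pg repair <pgid>                      # then repair, one PG at a time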


P.S. I am increasing the pg and pgp numbers for the cephfs_data pool.
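
(Roughly like this, with the actual target value omitted here:

    ceph osd pool set cephfs_data pg_num <new_value>
    ceph osd pool set cephfs_data pgp_num <new_value>
)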


Thanks,

Erming



--

----------------------------------------------------
Erming Pei, Ph.D, Senior System Analyst
HPC Grid/Cloud Specialist, ComputeCanada/WestGrid

Research Computing Group, IST
University of Alberta, Canada T6G 2H1
Email: erm...@ualberta.ca    erming....@cern.ch
Tel.: +1 7804929914    Fax: +1 7804921729



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
