The settings are per OSD; the messages you are seeing are aggregated across the cluster, with multiple OSDs doing backfill at the same time (i.e., working on multiple PGs in parallel).
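
For example (hypothetical numbers, not taken from your cluster): with osd_max_backfills = 1 and 15 different OSDs each acting as primary for one backfilling PG, ceph -s can legitimately report around 15 PGs in active+remapped+backfilling, even though no single OSD exceeds its own limit. If you want to double-check the per-OSD value on a running daemon, something like this should work:

    ceph daemon osd.0 config get osd_max_backfills

(run on the host where osd.0 lives; repeat for other OSD ids as needed).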

Thanks & Regards
Somnath

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jimmy 
Goffaux
Sent: Tuesday, July 19, 2016 5:19 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Too much pgs backfilling


Hello,



This is my configuration:

-> "osd_max_backfills": "1"
-> "osd_recovery_threads": "1"
-> "osd_recovery_max_active": "1"
-> "osd_recovery_op_priority": "3"
-> "osd_client_op_priority": "63"



I ran the following command after upgrading from Hammer to Jewel:

ceph osd crush tunables optimal



My cluster is now overloaded with backfilling: 15 PGs are in active+remapped+backfilling.



Why 15? Is my configuration wrong? Normally I should have at most 1.



Thanks

