Hi
I am assuming that you do not have any near-full OSDs (either before or during 
the PG splitting process) and that your cluster is healthy. 

To minimize the impact on clients during recovery or operations like PG 
splitting, it is good to set the following configs. Obviously the whole 
operation will take longer, but the impact on clients will be minimized.

#  ceph daemon mon.rccephmon1 config show | egrep "(osd_max_backfills|osd_recovery_threads|osd_recovery_op_priority|osd_client_op_priority|osd_recovery_max_active)"
    "osd_max_backfills": "1",
    "osd_recovery_threads": "1",
    "osd_recovery_max_active": "1",
    "osd_client_op_priority": "63",
    "osd_recovery_op_priority": "1"

Cheers
G.
________________________________________
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Matteo 
Dacrema [mdacr...@enter.eu]
Sent: 18 September 2016 03:42
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Increase PG number

Hi All,

I need to expand my Ceph cluster and I also need to increase the number of PGs.
In a test environment I see that during PG creation all read and write 
operations are stopped.

Is that normal behavior?

Thanks
Matteo
